Posts tagged ‘Types Of Assessments’

Keeping In Touch with The Assessment Maturity Model

I have been busy today finalizing links, logos and systems to help people keep in touch with what we’re doing with the Assessment Maturity Model.  I added links for email subscriptions (using Constant Contact), Twitter and an RSS feed to the blog.  I also made the blog a sub-domain for easier access.

If you want to keep in touch, click here to discover your options.


August 23, 2009 at 12:34 am Leave a comment

Assessment Maturity Model

Over the last few years I have been working with customers and friends on an Assessment Maturity Model. It started with brainstorming, was developed using a wiki, and has been tested by presenting the idea to a number of groups around the world.  Now I feel that it is ready to take to the next step.

Over the last few weeks I have been formalizing the model to make it easy to understand and building a web site.

The premise behind the Assessment Maturity Model is that if you can’t measure it you can’t manage it. But to measure it you need to know what "it" is. 



The Assessment Maturity Model proposes three key Areas of Assessment, each tracked by six key performance indicators known as Measures. The Areas are:

  • Assessment Development
    This area includes all aspects of authoring (creating and maintaining) the items and assessments.
  • Assessment Delivery
    This area includes all aspects of administering the assessment to candidates, respondents, participants, etc.
  • Presenting Results
    This area deals with presenting results in a trustworthy way to stakeholders in a meaningful context.

Six key performance indicators, known as Measures, are tracked to provide an indication of maturity:

  • Stakeholder Satisfaction
  • Security
  • Strategic Goals
  • Processes
  • Data Management
  • Communications with Stakeholders

By tracking these Measures an organization can determine where it is and plan for where it wants to be. The Measures can be tracked for a single Area or with the three Areas combined for an overview; the graphic shows how an organization can track by Area based on "Quality" and "Efficiency".
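To make the tracking concrete, here is a minimal sketch in Python. The 1–5 scores below are entirely hypothetical, and the model itself does not prescribe this scoring scheme; the sketch only illustrates averaging the six Measures within a single Area and combining the three Areas for an overview:

```python
# Hypothetical maturity scores (a 1-5 scale per Measure is assumed
# here purely for illustration).
scores = {
    "Assessment Development": {
        "Stakeholder Satisfaction": 3, "Security": 2, "Strategic Goals": 4,
        "Processes": 3, "Data Management": 2, "Communications with Stakeholders": 3,
    },
    "Assessment Delivery": {
        "Stakeholder Satisfaction": 4, "Security": 3, "Strategic Goals": 3,
        "Processes": 2, "Data Management": 3, "Communications with Stakeholders": 4,
    },
    "Presenting Results": {
        "Stakeholder Satisfaction": 2, "Security": 3, "Strategic Goals": 2,
        "Processes": 3, "Data Management": 4, "Communications with Stakeholders": 2,
    },
}

def area_maturity(area):
    """Average the six Measures for a single Area."""
    measures = scores[area]
    return sum(measures.values()) / len(measures)

def overall_maturity():
    """Combine the three Areas for an overview."""
    return sum(area_maturity(a) for a in scores) / len(scores)

for area in scores:
    print(f"{area}: {area_maturity(area):.2f}")
print(f"Overall: {overall_maturity():.2f}")
```

Tracking the same snapshot over time would then show whether an organization is moving toward the maturity Phase it is aiming for.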

If you manage any type of assessment program I’d encourage you to take time to learn more about the Areas, Measures and Phases of the Assessment Maturity Model.

Please feel free to link to the Assessment Maturity Model web site, cross-link from other web sites and blogs, tweet about it, email me about it, comment here about it and/or even tell your friends about it!

Watch out for more – this isn’t finished yet!

August 14, 2009 at 12:14 am Leave a comment

Types of Assessments – Learn, Qualify, Perform

I wanted to share some work with you on how to logically organize assessments. I came up with this simple model, which seems to resonate with the people I present it to.  You might also be interested in my previous article on Types Of Assessments (Formative, Diagnostic, Summative, and Surveys).

There are assessments for learning and assessments to qualify people for certain activities or job roles; then they perform tasks. Throughout the process of learning, qualifying and performing, we accumulate evidence and feedback that help us make decisions. 

The definitions and distinctions of the various assessment types that I refer to in this article are documented within the Assessment Maturity Model.


Assessments Through The Learning Process can be distinguished and linked together by thinking of:

  • Pre-Learning Assessments
  • In-Learning Assessments
  • Of-Learning Assessments

Pre-Learning Assessments

The following assessments are used prior to learning events:

  • Job Task Analysis
  • Needs Analysis Surveys
  • Diagnostic assessments
  • Pre Tests
  • Placement tests
  • Self-diagnostic tools

In-Learning Assessments

The following assessments are used within learning events to assist the learner:

  • Formative assessments
  • Quizzes during Learning
  • 360 Learner Peer Review
  • Practice Tests


Of-Learning Assessments

The following assessments are used to measure what a learner has learned:

  • Level 1 Surveys
  • Course evaluations
  • 360 (Level 3) Surveys
  • Post Course Tests
  • Graduation Exams
  • Internal exams
  • Open exams
  • Licensing exams


I struggled for a long time with whether Qualify should be its own category, but I finally concluded that it should: it provides sufficient distinctions on its own and can be separated from learning experiences. 

There are three basic categories of Assessments To Qualify, and these have various styles of assessments associated with them:

  • Academic: Graduation Exams
  • Pre-Employment: Pre-employment Screening, Pre-employment Skills Tests, Personality assessments, Psychological assessment
  • Certification & Licensing: Graduation exams, Internal exams, Open exams, Licensing exams


After learning and qualifying we end up performing manual and intellectual tasks. And as we do, evidence surfaces that can be referenced to help us make decisions about more learning, more qualifications and our performance.

This evidence can be used and assessed within an organization’s Performance and Talent Management processes to help people become more successful in their work.

Performance and Talent Management

Performance Management is a method used to influence and manage behaviors and results with the goal of bringing out the best in people.  Talent Management refers to the phases of finding, developing and retaining people to perform activities. 

Within these phases people engage in learning and qualifications, and their activities tend to accumulate evidence that helps them and others assess their performance.

Talent Management assessments can be categorized by the three phases of Talent Management:

  • Talent Acquisition – finding and employing the right people
    • Job Task Analysis
    • As per Pre-Employment assessments i.e. Pre-employment Screening, Pre-employment Skills Tests, Personality assessments, Psychological assessment
  • On-Boarding – orienting new people to the workplace
    • As per Assessments Through The Learning Process
  • Performance, Talent and Team Management – to help people be successful
    • As per Assessments Through The Learning Process
    • Appraisal (360s)
    • Employee Attitudes
    • Opinion Surveys

I hope you have found this posting useful in showing how the types of assessments relate to Learning, Qualifying and Performing as well as getting things done!

August 3, 2009 at 12:09 pm Leave a comment

Types Of Assessments (Formative, Diagnostic, Summative, and Surveys)

I was working with Dr. Will Thalheimer of Work-Learning Research a few days ago on an interview style video around learning, e-learning, assessments and the macro economy’s impact on learning and assessment.  It was fun and I learned a bunch about the challenges that this type of production entailed. I’ll provide some links out to this when I get them from Will.

My discussions with Will, an expert in effective learning interventions, learning environments and the challenges of the forgetting curve, prompted me to think more deeply about how we distinguish different styles of assessments. And so I thought it might be useful to share some distinctions with you on the types of assessments that we use during the process of learning.

Formative Assessments

My definition: Formative Assessments (quizzes and practice tests) are used to strengthen memory recall through practice, to correct misconceptions, and to promote confidence in one’s knowledge.


In the learning process we are trying to transfer knowledge and skills to a person’s memory so that they become competent to perform a task. During that process people might fail to pay attention, fail to grasp everything taught, or simply forget things they once knew. Most learning environments use simple Formative questions as they can:

  1. Create intrigue in order to create a learning moment that motivates the learner to pay attention.
  2. Focus the learner’s attention on the importance of key topics.
  3. Reduce the forgetting curve; by recalling previous knowledge and skills we strengthen the ability to recall that knowledge or skill.
  4. Correct misconceptions where someone has formed invalid connections, although that borders on the purpose of a Diagnostic assessment.

Typically a Formative Assessment does not need to store results: the job of the assessment is completed by providing the stimulus which causes the memory ‘muscle’ to be strengthened, just as lifting a weight in a gym strengthens other muscles. Sometimes results are stored in order to track how instruction might be improved.

Diagnostic Assessments

When we visit our doctor we’d become concerned if the doctor prescribed pills without asking us any questions. Doctors typically ask where the pain is located, when the pain happens, and whether the pain is associated with certain activities. The doctor might run tests on our blood or other bodily fluids (ugh)! And so it is with Diagnostic Assessments.

First we seek to understand the current knowledge, skill, and/or ability of the Participant so that we can diagnose the gap and thereby provide a prescription for learning if required.

Diagnostic questions might be self-assessment style questions such as “Please rate your ability to ….” or test questions such as “Which savings plan would you suggest to a married man of 42 with 3 kids, a dog and a large mortgage?”.  Either way the goal is to match the responses to the required benchmark in order to diagnose the gaps and prescribe something useful.

Diagnostic Assessments can be used to direct people to the right learning experience such as a class, conversation with a Subject Matter Expert (SME), a web search, a book, an elearning course, etc.
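The benchmark-matching idea above can be sketched in a few lines of Python. Everything here (the skill names, the 1–5 rating scale and the benchmark levels) is a hypothetical illustration, not part of any particular diagnostic instrument:

```python
# Hypothetical benchmark: the level required for each skill (1-5 scale assumed).
benchmark = {"risk analysis": 4, "savings products": 3, "client interviewing": 4}

# A Participant's self-ratings or scored test responses, also hypothetical.
responses = {"risk analysis": 2, "savings products": 3, "client interviewing": 4}

def diagnose(responses, benchmark):
    """Return the skills where the Participant falls below the benchmark."""
    return {skill: required - responses.get(skill, 0)
            for skill, required in benchmark.items()
            if responses.get(skill, 0) < required}

# Prescribe a learning experience for each diagnosed gap.
for skill, shortfall in diagnose(responses, benchmark).items():
    print(f"Gap of {shortfall} in {skill}: recommend a course or SME conversation")
```

The prescription step is where the assessment hands off to a class, an SME conversation, a book or an elearning course.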

Diagnostic Assessments are not designed to strengthen memory recall; however, by their very nature they do provide some of those characteristics.

Summative Assessments

Tests and exams designed to measure knowledge, skills, and abilities are known as Summative Assessments.  These are typically used to certify that people have a certain level of knowledge, skills, and/or ability. Often these certifications grant people access to something previously not permitted, such as a license to drive, a promotion within an organization, or physical access to dangerous materials.  Because of this “Grant of Access”, Summative Assessments are typically Higher Stakes assessments.  Summative Assessments typically have “Pass” and “Fail” associated with them, which distinguishes them from Formative and Diagnostic Assessments, which don’t.  There are two basic types of Summative Assessments:

  1. Norm Referenced

    Where a Participant’s pass is determined by their position within a group of test takers.  The Participant’s results are compared to the others in the group after everyone has completed the assessment. This is often used in environments where the number of places in the next course or job role is limited, so only a certain number of people can pass.  A Norm Referenced test will tease out the best people within the group that took it, but the quality of competence required to pass will vary from one sitting of the test to the next.

  2. Criterion Referenced

    When the criterion for passing a test has been predetermined it is known as a Criterion Referenced Assessment.  The most used Criterion Referenced Assessment on the planet is the driving test. The criteria for passing have been determined prior to the test, and you normally know whether you have passed or failed immediately; well, you certainly could, although sometimes administrative processes slow things down.

Criterion Referenced Assessments are often used for regulatory compliance tests, HIPAA compliance, pre-employment tests, and IT certification exams.
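The distinction between the two scoring approaches can be sketched in a few lines of Python. The raw scores, the cut score of 75 and the three available places below are hypothetical numbers chosen purely for illustration:

```python
# Hypothetical raw scores from one sitting of a test.
scores = [55, 62, 70, 71, 74, 80, 85, 88, 91, 97]

def criterion_referenced_pass(score, cut_score=75):
    """Pass/fail against a predetermined cut score; the Participant can
    know the outcome immediately, before anyone else has finished."""
    return score >= cut_score

def norm_referenced_passes(scores, places=3):
    """Pass the top `places` Participants; only decidable after the
    whole group has completed the assessment."""
    cut = sorted(scores, reverse=True)[places - 1]
    return [s >= cut for s in scores]

# Criterion referenced: 5 of the 10 clear the fixed cut score of 75.
# Norm referenced: exactly 3 pass, however strong or weak the group was.
```

Note how the norm-referenced cut moves with the group from one sitting to the next, while the criterion-referenced cut stays fixed.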


Surveys

We’ve all completed surveys, and we can all recognize that results have to be stored and aggregated to help with the analytics. A common question type used within surveys is the Likert scale item, developed by Rensis Likert, an American educator and organizational psychologist. Likert scales prompt a Participant with a statement; the Participant responds by specifying their level of agreement, which is then transposed to a number to ease the measurement and analytics process.
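As a sketch of that transposition, here is a minimal Python example. The five labels and 1–5 weights are the conventional Likert mapping, and the responses are made up for illustration:

```python
# Conventional 5-point Likert mapping (assumed here; survey tools vary).
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

responses = ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"]

def mean_agreement(responses):
    """Transpose each response to its number, then average for analytics."""
    values = [LIKERT[r] for r in responses]
    return sum(values) / len(values)

print(f"Mean agreement: {mean_agreement(responses):.2f}")  # Mean agreement: 3.60
```

Once responses are numbers, the usual aggregation and comparison across questions, courses or time periods follows naturally.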

Course Evaluations are the most commonly used survey type within the Learning Process, but other survey types include Job Task Analysis, Needs Analysis, 360 and other forms of peer review assessments, Employee Attitude, Customer Satisfaction, Partner Satisfaction and Political Opinion surveys.

Each form of Assessment has its own set of challenges for developing and maintaining the instrument, for delivering questions, providing feedback to the participants and stakeholders, and then performing the analysis of results; but that will have to wait for another blog entry!

March 22, 2009 at 2:50 pm 4 comments

Distinguishing Low, Medium and High Stakes Assessments

Trying to classify an assessment into a Low, Medium or High stakes category has some pros and cons.  On the plus side we can quickly “range” the time to create, deliver and report on an assessment, range the impact on the person taking the assessment, and aid conversations about an assessment.  All conversations have to be considered in context.  If you were having the conversation in the context of Psychometrics you would probably be distinguishing between low stakes and high stakes exams; if you were having the conversation with instructors and trainers you might be distinguishing assessments prior to a course, during a learning experience, tests afterwards, and course evaluations.


This chart can help us think about Low, Medium and High Stakes assessments in both contexts but without specific measures. Essentially it aids the conversation by promoting some distinctions and a vocabulary rather than providing a measurable outcome. That is in itself amusing, because the world of assessments is all about the theory and technique of educational and psychological measurement.

In higher stakes assessments we tend to talk about candidates; in medium and lower stakes assessments we talk about students, employees and/or learners; and in low stakes assessments we might talk about respondents. And so in our vocabulary, in the context of low, medium and high stakes assessments, we’ll talk about Participants: the people who answer the questions in our assessments; they participate.

There are six terms that can help us provide some distinctions:

  1. Consequences to the Participant

    If the consequences to the Participant are low, that helps us classify it as a Low Stakes assessment, but if the consequences are great (affecting Lives, Limbs, and/or Livelihoods) then it would be a High Stakes assessment.

  2. Legal Liabilities

    When stakes are high, consequences are high, and so in come the lawyers. I don’t want to turn this into a political debate, but laws are written to protect our rights, and lawyers help us understand the laws so we do the right thing. All stakeholders in the assessment process have rights and responsibilities. Often this debate is taken from the side of the Participant, but other Stakeholders have rights too. Assessments must be fair, reliable and fit for purpose. And we can’t go around certifying people who aren’t qualified to fix gas leaks.

  3. Proctoring/Invigilation

    When the stakes are high people are more motivated to cheat, which requires that the assessment process is invigilated to prevent this and to promote trust in the assessment process.  As I fly around the world I would like to know that my pilot and the air traffic controllers didn’t cheat on their exams. With certain kinds of low stakes assessments, invigilation would provide an unwanted and undesirable level of supervision. Here are some examples of assessments that we should probably not proctor/invigilate:

    1. A novice student taking an assessment where the goal is to provoke intrigue before a learning experience.
    2. A course evaluation where the moderator might choose to influence the outcome.
  4. Validity and Reliability

    Ideally all assessments should be reliable and valid.  They should work consistently over time and they should align with the subject matter that you are assessing. However, if we applied the same standards to low, medium and high stakes assessments we might never justify the costs for, say, Formative assessments or Course Evaluations.  We must always work ethically, but we don’t need a $25,000 study on the validity and reliability of the assessment. When conducting a High Stakes test or exam we need to be sure that the assessment aligns with the topics it is assessing and is fair to all participants.

  5. Planning

    If you are administering a High Stakes test or exam to 5,000 Participants you’ll use a different plan than for 10 people evaluating a course. Planning will include considering how you’ll develop your assessment, how you’ll have expert(s) review it, how you’ll maintain confidentiality and security, how you’ll deliver the assessment, how you’ll provide accommodations for those with special needs, and how you’ll report the results to the Participant and Stakeholders.

  6. Psychometrician Involvement

    A Psychometrician is a Psychometrics professional familiar with the theory and techniques to measure knowledge, skills, abilities, attitudes, and personality traits.  Psychometricians are often involved with the development of High Stakes assessments such as tests and exams, and analyze the results to ensure that the assessments are valid and performing consistently and reliably.

It is worth reminding you that all assessments can be very valuable, within their context, regardless of how you categorize them.  Just because quizzes, designed to strengthen memory recall, and course evaluations, designed to measure in order to improve the learner’s environment, are low stakes does not mean that they provide low value. But those types of distinctions will follow in another blog entry!

March 22, 2009 at 1:26 pm 2 comments
