Posts tagged ‘Talent Management’

Types of Assessments – Learn, Qualify, Perform

I wanted to share some work with you on how to logically organize assessments. I came up with this simple model, which seems to resonate with the people I present it to.  You might also be interested in my previous article on Types Of Assessments (Formative, Diagnostic, Summative, and Surveys).

There are assessments for learning and assessments to qualify people for certain activities or job roles; once qualified, people go on to perform tasks. Throughout the process of learning, qualifying and performing, we accumulate evidence and feedback that help us make decisions.

The definitions and distinctions of the various assessment types that I refer to in this article are documented within the Assessment Maturity Model at http://assessmentmaturitymodel.wikispaces.com/Assessment+Solutions:

Learn

Assessments Through The Learning Process can be distinguished and linked together by thinking of:

  • Pre-Learning Assessments
  • In-Learning Assessments
  • Of-Learning Assessments

Pre-Learning Assessments

The following assessments are used prior to learning events:

  • Job Task Analysis
  • Needs Analysis Surveys
  • Diagnostic assessments
  • Pre Tests
  • Placement tests
  • Self-diagnostic tools

In-Learning Assessments

The following assessments are used within learning events to help the learner learn:

  • Formative assessments
  • Quizzes during Learning
  • 360 Learner Peer Review
  • Practice Tests

Of-Learning Assessments

The following assessments are used to detect what a learner has learned:

  • Level 1 Surveys
  • Course evaluations
  • 360 (Level 3) Surveys
  • Post Course Tests
  • Graduation Exams
  • Internal exams
  • Open exams
  • Licensing exams

Qualify

I struggled for a long time over whether Qualify should be its own category, but I finally concluded that it should: it provides sufficient distinctions on its own and can be separated from learning experiences.

There are three basic categories of Assessments To Qualify, and each has various styles of assessments associated with it:

  • Academic: Graduation Exams
  • Pre-Employment: Pre-employment Screening, Pre-employment Skills Tests, Personality assessments, Psychological assessment
  • Certification & Licensing: Graduation exams, Internal exams, Open exams, Licensing exams

Perform

After learning and qualifying we end up performing manual and intellectual tasks. As we do things, evidence surfaces that can be referenced to help us make decisions about more learning, more qualifications and our performance.

This evidence can be used and assessed within an organization’s Performance and Talent Management processes to help people become more successful in their work.

Performance and Talent Management

Performance Management is a method used to influence and manage behaviors and results with the goal of bringing out the best in people.  Talent Management refers to the phases of finding, developing and retaining people to perform activities.

Within these phases people engage in learning and qualifications, and their activities tend to accumulate evidence that helps them and others assess their performance.

Talent Management assessments can be categorized by the three phases of Talent Management (a small sketch follows the list below):

  • Talent Acquisition – finding and employing the right people
    • Job Task Analysis
    • As per Pre-Employment assessments i.e. Pre-employment Screening, Pre-employment Skills Tests, Personality assessments, Psychological assessment
  • On-Boarding – orienting new people to the workplace
    • As per Assessments Through The Learning Process
  • Performance, Talent and Team Management – to help people be successful
    • As per Assessments Through The Learning Process
    • Appraisal (360s)
    • Employee Attitudes
    • Opinion Surveys
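For those who like to see the model in a machine-readable form, here is a minimal sketch that records which assessment types are typically used in each Talent Management phase and lets you look them up. The structure and names are illustrative only, not taken from any particular product.

```python
# Minimal, illustrative sketch: the phase-to-assessment mapping described above,
# expressed as a simple dictionary. All names are hypothetical examples.

TALENT_MANAGEMENT_ASSESSMENTS = {
    "Talent Acquisition": [
        "Job Task Analysis",
        "Pre-employment Screening",
        "Pre-employment Skills Tests",
        "Personality Assessments",
        "Psychological Assessments",
    ],
    "On-Boarding": [
        "Pre-Learning Assessments",
        "In-Learning Assessments",
        "Of-Learning Assessments",
    ],
    "Performance, Talent and Team Management": [
        "Pre-Learning Assessments",
        "In-Learning Assessments",
        "Of-Learning Assessments",
        "Appraisal (360s)",
        "Employee Attitude Surveys",
        "Opinion Surveys",
    ],
}


def assessments_for_phase(phase):
    """Return the assessment types typically used in a given phase."""
    return TALENT_MANAGEMENT_ASSESSMENTS.get(phase, [])


if __name__ == "__main__":
    for phase, assessments in TALENT_MANAGEMENT_ASSESSMENTS.items():
        print(phase, "->", ", ".join(assessments))
```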

I hope you have found this posting useful in showing how the types of assessments relate to Learning, Qualifying and Performing as well as getting things done!

August 3, 2009 at 12:09 pm Leave a comment

Learning Environments

This year I have been witnessing a change in thinking and common practices around learning environments. I’d like to share my point of view with you because I’m convinced that a significant shift is starting to occur. Let me start by giving a contextual overview in the form of a few tables:

Learning Delivery Systems

[Image: table of Learning Delivery Systems]

Assessment Systems

[Image: table of Assessment Systems]

Tracking Systems 

[Image: table of Tracking Systems]

Marketing Hype vs. IT’s Point of View

The stock market’s exuberance in the late ’90s generated venture capital funding of learning environment and management system start-ups. This resulted in marketing hype that led to unrealistic expectations. Although vendors tried hard to meet these unrealistic expectations, customers were frustrated and the market was altogether unhappy and unhealthy.  This became obvious to me when customers would explain that they were on their third LMS. Clearly things had to change.

One strong benefit that came from the late ’90s and early ’00s era was the clear separation that was drawn between “management systems” and “content”.  SCORM and AICC helped provide us with these clear distinctions; however, we didn’t manage to achieve sufficient distinctions between the key modules of a learning environment because web services had not matured quickly enough.  Consequently, large monolithic systems tended to capture buyers’ imaginations as being the big pill to swallow to solve the learning problem.

From my point of view, IT departments initially pandered to users’ requirements and assisted with the deployment of customized and dedicated learning systems. However, as the costs of tightly integrated systems increased and the number of dissatisfied users grew, IT departments started to look at alternative systems and architectures that could both meet users’ requirements and align with the organization’s overall IT infrastructure.  IT started to look at learning from a user and performance perspective and began to apply IT methodologies to the issue.

Users’ requirements ranged from structured learning (course based), mostly used in schools, colleges and for on-boarding new employees, through to self-service, where a motivated learner seeks out the information they need in order to perform their tasks or gain qualifications.  Deploying only one learning methodology (course based vs. self-service) does not fit all requirements. Knowing the context of the user is key to providing a great user experience!

So what are we seeing now?

Authentication and Single Sign On (“SSO”) Portals

Learning materials, documents and content live in many systems. The issue is giving the right person access to these systems at the right time. This motivates a need for federated searches of multiple content repositories and for mashed-up user interfaces that allow users to view multiple systems.
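As a rough illustration of a federated search, the sketch below fans one query out to several content repositories and merges the results. The three repository functions are hypothetical stand-ins; they do not represent any real product’s API.

```python
# Minimal federated-search sketch. The three "repository" functions are
# hypothetical stand-ins for real content systems (LMS, document store, wiki).

def search_lms(query):
    return [{"title": "Course: " + query, "source": "LMS", "url": "https://lms.example/course"}]

def search_documents(query):
    return [{"title": "Doc about " + query, "source": "Docs", "url": "https://docs.example/1"}]

def search_wiki(query):
    return [{"title": query + " (wiki)", "source": "Wiki", "url": "https://wiki.example/page"}]

REPOSITORIES = [search_lms, search_documents, search_wiki]

def federated_search(query):
    """Send the same query to every repository and merge the hits, de-duplicating by URL."""
    seen, merged = set(), []
    for repo in REPOSITORIES:
        for hit in repo(query):
            if hit["url"] not in seen:
                seen.add(hit["url"])
                merged.append(hit)
    return merged

if __name__ == "__main__":
    for hit in federated_search("assessment design"):
        print(hit["source"], "-", hit["title"])
```

A mashed-up user interface would then render these merged results in a single portlet, regardless of which system produced them.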

As I have illustrated to the left, you can now provide access to multiple applications with a common look and feel using a portal such as Microsoft’s SharePoint.  Each application is “skinned” by the portal’s look and feel and is presented in a “portlet”.  Depending on the user’s privileges, they can add, change, delete, and move around portlets to suit their working style and job role.

Behind the portal sits a number of systems that a user will use to perform their tasks. 

The portal looks after two important functions:

  • Authentication of the User
    This is usually performed by way of user names and passwords but could also be achieved via biometrics.
  • Access Control (Privileges)
    To limit access to the portlets that a user can actually see and use.

We have to be careful that we don’t return to dispersed SSO portals with one portal for learning, one for accounting, etc.; that would give us Multiple Sign On Portals which would be a retrograde step. 

True Single Sign On Portals provide IT departments with a single, centralized access control system and allow the user to define their portal to accommodate their style of working.
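To make the authentication and access-control functions described above concrete, here is a minimal sketch of a portal that checks credentials and then exposes only the portlets a user’s privileges allow. The user records, privilege names and portlets are hypothetical examples, not any particular product’s API.

```python
# Minimal portal sketch: authentication plus privilege-based portlet filtering.
# All names (users, privileges, portlets) are hypothetical examples.

USERS = {
    # username: (password, set of privileges) -- plain text only for illustration
    "maria": ("s3cret", {"learning", "assessments"}),
    "raj":   ("pa55",   {"learning", "accounting"}),
}

PORTLET_REGISTRY = {
    "Course Catalogue":   "learning",
    "Take an Assessment": "assessments",
    "Expense Reports":    "accounting",
}

def authenticate(username, password):
    """Return the user's privilege set if the credentials match, otherwise None."""
    record = USERS.get(username)
    if record and record[0] == password:
        return record[1]
    return None

def visible_portlets(privileges):
    """Access control: only show portlets the user is privileged to use."""
    return [name for name, needed in PORTLET_REGISTRY.items() if needed in privileges]

if __name__ == "__main__":
    privs = authenticate("maria", "s3cret")
    if privs is not None:
        print("Maria sees:", visible_portlets(privs))
```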

Authorization

Each application has its own unique set of privileges that would become difficult for a centralized IT team to control. For instance, in the context of testing, we might only allow users to take a test between certain hours and maybe require a proctor/invigilator. Whilst it would be possible for all of these user privileges to be stored more centrally and associated with the portal, this becomes impractical and slows the upgrade process.  However, it is common for application systems to receive data from the portal and then derive privileges based on the user’s associations.
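Here is a minimal sketch of that pattern: the application receives attributes from the portal (for example, as single sign-on claims) and derives its own privileges, such as whether a given test may be started right now and whether a proctor is required. The claim, group and rule names are hypothetical.

```python
# Minimal sketch: an assessment application deriving its own privileges from
# attributes handed over by the portal (e.g. SSO claims). All names are hypothetical.

from datetime import datetime, time

# Attributes the portal might pass along after single sign-on.
portal_claims = {
    "username": "maria",
    "groups": ["trainee-engineers"],
}

# Application-side rules, keyed by group.
TEST_RULES = {
    "trainee-engineers": {
        "allowed_tests": {"gas-safety-101"},
        "window": (time(8, 0), time(18, 0)),  # tests may only start in this window
        "requires_proctor": True,
    },
}

def can_start_test(claims, test_id, now):
    """Return (allowed, proctor_required) derived from the user's group rules."""
    for group in claims.get("groups", []):
        rules = TEST_RULES.get(group)
        if rules and test_id in rules["allowed_tests"]:
            start, end = rules["window"]
            if start <= now.time() <= end:
                return True, rules["requires_proctor"]
    return False, False

if __name__ == "__main__":
    allowed, proctored = can_start_test(portal_claims, "gas-safety-101", datetime.now())
    print("Allowed:", allowed, "Proctor required:", proctored)
```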

Islands of Data

The challenge with this architecture is that each application maintains its own databases for its operational needs, such as storing course evaluation data and test results.

We are seeing leading-edge employers look at employee life-cycle data in order to improve their talent management systems and processes.  From recruiting to on-boarding, appraisals, formal/structured learning, informal learning, career progression, and exit interviews, we see present and future requirements for viewing and correlating data from multiple sources to help people understand the dynamics of their talent.

So learning environments are now being built in such a way that their data is consumable by web-services and a data warehouse.

Data Warehouses

Organizations maintain Data Warehouses (DWs) for several reasons (a small sketch follows this list):

  • DWs can be structured so that reporting applications can access data quickly, rather than being optimized for efficiently collecting and maintaining data.
  • DWs allow data to be connected and correlated even though they are generated by different systems, potentially using different types of databases.
  • DWs insulate reporting systems from application upgrades that would otherwise require the reporting systems to be updated at the same time as the application.
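As a toy illustration of the second point above, here is a minimal sketch using an in-memory SQLite database as a stand-in warehouse, correlating test results from an assessment system with evaluation ratings from an LMS. The table and column names are hypothetical.

```python
# Toy data-warehouse sketch: correlate data from two source systems per course.
# SQLite stands in for the warehouse; all table/column names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_results (course_id TEXT, learner TEXT, score REAL);   -- from assessment system
    CREATE TABLE course_evals (course_id TEXT, learner TEXT, rating REAL);  -- from LMS
""")
conn.executemany("INSERT INTO test_results VALUES (?, ?, ?)",
                 [("SAFETY-01", "maria", 86.0), ("SAFETY-01", "raj", 74.0)])
conn.executemany("INSERT INTO course_evals VALUES (?, ?, ?)",
                 [("SAFETY-01", "maria", 4.5), ("SAFETY-01", "raj", 3.0)])

# One query can now view data that originated in two different applications.
rows = conn.execute("""
    SELECT t.course_id, AVG(t.score) AS avg_score, AVG(e.rating) AS avg_rating
    FROM test_results t
    JOIN course_evals e ON e.course_id = t.course_id AND e.learner = t.learner
    GROUP BY t.course_id
""").fetchall()

for course_id, avg_score, avg_rating in rows:
    print(course_id, "avg test score:", avg_score, "avg evaluation rating:", avg_rating)
```

In practice the warehouse would be fed by scheduled extracts or web services from each application, but the correlation idea is the same.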

Wikis and Blogs

Wikis and blogs have become very important and popular tools for harvesting knowledge from subject matter experts (SMEs). With succession planning being important for many organizations with aging workforces, harvesting knowledge from SMEs is a key initiative.

Conclusion

When we combine IT departments’ support for standardization on portals, wikis, blogs, data warehouses and reporting systems, we can see that the time is right for a revolution within our learning environments.

The data and anecdotal evidence that I have access to makes me believe that learning environments will change and become more aligned with standard and supported IT systems.

We are now in an era where resonating with IT, their requirements, and their systems will assist us in rapidly deploying powerful, integrated, and scalable environments for our learners and performers.

June 16, 2009 at 8:33 am 1 comment

7 Talent Management Strategies for Transformational Change – Bersin & Associates

I learned this week about the 7 Talent Management Strategies for Transformational Change as presented by Josh Bersin, which I would paraphrase as follows:

  1. Talent Engagement – a business strategy
  2. Build Learning Environments – enable prosumer mentalities
  3. Deep specialization – have networked experts
  4. Focus on First Line Management 
  5. Design organization for Talent Mobility – ability to learn is a key skill
  6. “We” learning – easily harvesting wisdom from crowds in a YouTube/Wiki world
  7. Do less with less – focus on what’s important and build self-learning organizations

You’ll find more on this and other Talent Management ideas at www.bersin.com, and Josh Bersin has indicated that he’ll be publishing papers on this in 2009.

April 17, 2009 at 7:37 am Leave a comment

Distinguishing Low, Medium and High Stakes Assessments

Trying to classify an assessment into a Low, Medium or High stakes category has some pros and cons.  On the plus side, we can quickly “range” the time to create, deliver and report on an assessment, and range the impact on the person taking the assessment, and it can help conversations about an assessment.  All conversations have to be considered in context.  If you were having the conversation in the context of Psychometrics, you would probably be distinguishing between low stakes and high stakes exams; if you were having the conversation with instructors and trainers, you might be distinguishing between assessments prior to a course, assessments during a learning experience, tests afterwards, and course evaluations.

[Image: chart comparing Low, Medium and High Stakes assessments]

This chart can help us think about Low, Medium and High Stakes assessments in both contexts, but without a specific measure. Essentially it aids the conversation by promoting some distinctions and a vocabulary rather than providing a measurable outcome. That is in itself amusing, because the world of assessments is all about the theory and technique of educational and psychological measurement.

In higher stakes assessments we tend to talk about candidates; in medium and lower stakes assessments we talk about students, employees and/or learners; and in low stakes assessments we might talk about respondents. So in our vocabulary, in the context of low, medium and high stakes assessments, we’ll talk about Participants: the people who answer the questions in our assessments; they participate.

There are six terms that can help us provide some distinctions (a small code sketch follows at the end of this list):

  1. Consequences to the Participant

    If the consequences to the participant are low, that helps us classify it as a Low Stakes assessment, but if the consequences are great (affecting Lives, Limbs, and/or Livelihoods), then it would be a High Stakes assessment.

  2. Legal Liabilities

    When stakes are high, consequences are high, and so in come the lawyers. I don’t want to turn this into a political debate, but laws are written to protect our rights and lawyers help us understand the laws so that we do the right thing. All stakeholders in the assessment process have rights and responsibilities. Often this debate is approached from the side of the Participant, but other Stakeholders have rights too. Assessments must be fair, reliable and fit for purpose. And we can’t go around certifying people who aren’t qualified to fix gas leaks.

  3. Proctoring/Invigilation

    When the stakes are high, people are more motivated to cheat, which requires that the assessment process is invigilated to prevent this and to promote trust in the assessment process.  As I fly around the world I would like to know that my pilot and the air traffic controllers didn’t cheat on their exams. With certain kinds of low stakes assessments, invigilation would provide an unwanted and undesirable level of supervision. Here are some examples of assessments that we should probably not proctor/invigilate:

    1. A novice student taking an assessment whose goal is to provoke intrigue before a learning experience.
    2. A course evaluation, where an invigilating moderator might choose to influence the outcome.
  4. Validity and Reliability

    Ideally, all assessments should be reliable and valid: they should work consistently over time and they should align with the subject matter that you are assessing. However, if we applied the same standards to low, medium and high stakes assessments, we might never justify the costs for, say, Formative assessments or Course Evaluations.  We must always work ethically, but we don’t need a $25,000 study on the validity and reliability of the assessment. When conducting a High Stakes test or exam we need to be sure that the assessment aligns with the topics that it is assessing and that it is fair to all participants.

  5. Planning

    If you are administering a High Stakes test or exam to 5,000 Participants, you’ll use a different plan than for 10 people evaluating a course. Planning will include considering how you’ll develop your assessment, how you’ll have expert(s) review it, how you’ll maintain confidentiality and security, how you’ll deliver the assessment, how you’ll provide accommodations for those with special needs, and how you’ll report the results to the Participant and Stakeholders.

  6. Psychometrician Involvement

    A Psychometrician is a Psychometrics professional familiar with the theory and techniques to measure knowledge, skills, abilities, attitudes, and personality traits.  Psychometricians are often involved with the development of High Stakes assessments such as tests and exams, and analyze the results to ensure that the assessments are valid and performing consistently and reliably.
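To pull these six factors together, here is a minimal sketch of how one might roughly band an assessment into Low, Medium or High stakes. The 0–2 ratings and thresholds are arbitrary; like the chart above, this aids the conversation rather than providing a measurable outcome.

```python
# Illustrative-only stakes classifier based on the six factors above.
# The ratings and thresholds are arbitrary; this aids conversation, not measurement.

FACTORS = [
    "consequences_to_participant",   # 0 = minimal .. 2 = lives/limbs/livelihoods
    "legal_liability",               # 0 = none .. 2 = significant
    "proctoring_needed",             # 0 = no .. 2 = strictly invigilated
    "validity_reliability_rigour",   # 0 = informal .. 2 = formal studies
    "planning_effort",               # 0 = ad hoc .. 2 = large-scale programme
    "psychometrician_involvement",   # 0 = none .. 2 = essential
]

def classify_stakes(ratings):
    """Sum the 0-2 ratings for each factor and map the total to a stakes band."""
    total = sum(ratings.get(factor, 0) for factor in FACTORS)
    if total >= 9:
        return "High"
    if total >= 4:
        return "Medium"
    return "Low"

if __name__ == "__main__":
    course_evaluation = {factor: 0 for factor in FACTORS}
    licensing_exam = {factor: 2 for factor in FACTORS}
    print("Course evaluation:", classify_stakes(course_evaluation))  # Low
    print("Licensing exam:", classify_stakes(licensing_exam))        # High
```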

It is worth reminding you that all assessments can be very valuable, within their context, regardless of how you categorize them.  Just because quizzes, designed to strengthen memory recall, and course evaluations, designed to take measurements in order to improve the learner’s environment, are low stakes does not mean that they provide low value. But those types of distinctions will follow in another blog entry!

March 22, 2009 at 1:26 pm 2 comments

Questionmark Wins Best Assessment Tool Award at Learnx

I was pleased and excited to read that Questionmark has again won the Best Assessment Tool Award at Learnx, the Australian E-Learning & Training Solutions International Conference:
http://learnx.net/learnx/2009awardwinnersBestTechnologies.html

Thank you to everyone that made this possible!

March 10, 2009 at 3:29 pm Leave a comment

