
Learning Environments

This year I have been witnessing a change in thinking and common practices around learning environments. I’d like to share my point of view with you because I’m convinced that a significant shift is starting to occur. Let me start by giving a contextual overview of three areas:

  • Learning Delivery Systems
  • Assessment Systems
  • Tracking Systems
Marketing Hype vs. IT’s Point of View

The stock market’s exuberance in the late ‘90s generated venture capital funding for start-ups building learning environments and management systems. This resulted in marketing hype that led to unrealistic expectations. Although vendors tried hard to meet these expectations, customers were frustrated and the market was altogether unhappy and unhealthy. This became obvious to me when customers would explain that they were on their third LMS. Clearly, things had to change.

One strong benefit that came from the late-‘90s/early-‘00s era was the clear separation drawn between “management systems” and “content”. SCORM and AICC helped provide us with these clear distinctions; however, we didn’t manage to achieve sufficient distinctions between the key modules of a learning environment because web services had not matured quickly enough. Consequently, large monolithic systems tended to capture buyers’ imaginations as the big pill to swallow to solve the learning problem.

From my point of view, IT departments initially pandered to users’ requirements and assisted with the deployment of customized, dedicated learning systems. However, as the costs of tightly integrated systems rose and the number of dissatisfied users grew, IT departments started to look at alternative systems and architectures that could both meet users’ requirements and align with the organization’s overall IT infrastructure. IT started to look at learning from a user and performance perspective and to apply IT methodologies to the issue.

Users’ requirements varied from structured learning (course-based), mostly used in schools, colleges, and for on-boarding new employees, through to self-service, where a motivated learner seeks out the information needed to perform tasks or gain qualifications. Deploying only one learning methodology (course-based vs. self-service) does not fit all requirements. Knowing the context of the user is key to providing a great user experience!

So what are we seeing now?

Authentication and Single Sign On (“SSO”) Portals

Learning materials, documents and content live in many systems. The issue is controlling access to these systems for the right person at the right time. This motivates the need for federated searches across multiple content repositories and for mashed-up user interfaces that let users view multiple systems.
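A federated search can be sketched as fanning a query out to each repository and merging the hits. The repository search functions below are hypothetical placeholders, not real product APIs:

```python
# A minimal sketch of a federated search. The repository names and
# search functions are illustrative assumptions, not a specific product.
from concurrent.futures import ThreadPoolExecutor

def search_lms(query):
    # Placeholder: a real version would call the LMS search API
    return [{"source": "LMS", "title": f"Course on {query}"}]

def search_wiki(query):
    # Placeholder: a real version would call the wiki search API
    return [{"source": "Wiki", "title": f"Article on {query}"}]

def federated_search(query, repositories):
    """Query every repository in parallel and merge the results."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda search: search(query), repositories)
    return [hit for hits in result_lists for hit in hits]

results = federated_search("compliance training", [search_lms, search_wiki])
```

The merged list could then be ranked and rendered inside a single portlet, so the user never needs to know which repository answered.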

As I have illustrated to the left, you can now provide access to multiple applications with a common look and feel using a portal such as Microsoft’s SharePoint. Each application is “skinned” with the portal’s look and feel and is presented in a “portlet”. Depending on their privileges, users can add, change, delete, and rearrange portlets to suit their working style and job role.

Behind the portal sits a number of systems that a user will use to perform their tasks. 

The portal looks after two important functions:

  • Authentication of the User
    This is usually performed by way of user names and passwords but could also be achieved via biometrics.
  • Access Control (Privileges)
    To limit access to the portlets that a user can actually see and use.

We have to be careful that we don’t return to dispersed SSO portals, with one portal for learning, one for accounting, etc.; that would give us Multiple Sign On portals, which would be a retrograde step.

True Single Sign On portals provide IT departments with a single, centralized access control system and allow users to define their portal to accommodate their style of working.


Each application has its own unique set of privileges, which would be difficult for a centralized IT team to control. For instance, in the context of testing, we might only allow users to take a test between certain hours and perhaps require a proctor/invigilator. While it would be possible for all of these privileges to be stored centrally and associated with the portal, this becomes impractical and slows the upgrade process. It is common, however, for application systems to receive data from the portal and then derive privileges based on the user’s associations.
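That last pattern can be sketched as follows; the group names, exam window, and privilege names are illustrative assumptions, not any particular portal’s scheme:

```python
# Sketch: an application derives its own privileges from user attributes
# passed by the portal. Group names and rules are hypothetical examples.
from datetime import datetime, time

PORTAL_CLAIMS = {"user": "jsmith", "groups": ["sales", "exam-candidates"]}

def derive_privileges(claims):
    """Map portal-supplied group memberships to app-specific privileges."""
    privileges = set()
    if "exam-candidates" in claims["groups"]:
        privileges.add("take_test")
    if "proctors" in claims["groups"]:
        privileges.add("supervise_test")
    return privileges

def may_take_test(claims, now=None):
    """Allow test taking only during a fixed exam window (09:00-17:00)."""
    now = now or datetime.now()
    in_window = time(9, 0) <= now.time() <= time(17, 0)
    return "take_test" in derive_privileges(claims) and in_window
```

The portal authenticates once and passes the claims; each application keeps its own fine-grained rules, so upgrading one application never forces a change to the central access control system.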

Islands of Data

The challenge with this architecture is that each application maintains its own database for its operational needs, such as storing course evaluation data and test results.

We are seeing leading-edge employers look at employee life-cycle data in order to improve their talent management systems and processes. From recruiting to on-boarding, appraisals, formal/structured learning, informal learning, career progression, and exit interviews, we see present and future requirements for viewing and correlating data from multiple sources to help people understand the dynamics of their talent.

So learning environments are now being built in such a way that their data is consumable by web services and a data warehouse.

Data Warehouses

Organizations maintain Data Warehouses (DWs) for several reasons:

  • DWs can be structured for fast data access by reporting applications, rather than for efficient data collection and maintenance.
  • DWs allow data to be connected and correlated even though it is generated by different systems, potentially using different types of databases.
  • DWs insulate reporting systems from application upgrades, which would otherwise require reporting systems to be updated at the same time as each application upgrade.
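As a minimal sketch of the kind of correlation a DW enables, the snippet below joins records from two hypothetical application “islands” on a shared employee key (all field names are assumptions):

```python
# Sketch of correlating data from two application "islands" into a
# warehouse-style table. Field names (employee_id, score, rating) are
# illustrative assumptions, not a real schema.
test_results = [
    {"employee_id": 1, "score": 85},
    {"employee_id": 2, "score": 72},
]
appraisals = [
    {"employee_id": 1, "rating": "exceeds"},
    {"employee_id": 2, "rating": "meets"},
]

def load_into_warehouse(tests, reviews):
    """Join records from two source systems on a shared employee key."""
    by_employee = {r["employee_id"]: dict(r) for r in tests}
    for review in reviews:
        by_employee.setdefault(review["employee_id"], {}).update(review)
    return list(by_employee.values())

warehouse_rows = load_into_warehouse(test_results, appraisals)
```

A real pipeline would extract these records over web services on a schedule, but the essential step is the same: key the data consistently so different systems’ records line up.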

Wikis and Blogs

Wikis and blogs have become very important and popular tools for harvesting knowledge from subject matter experts (SMEs). With succession planning important for many organizations that have aging workforces, harvesting knowledge from SMEs is a key initiative.


When we combine IT-department support for standardization on portals, wikis, blogs, data warehouses, and reporting systems, we can see that the time is right for a revolution within our learning environments.

The data and anecdotal evidence that I have access to make me believe that learning environments will change and become more aligned with standard, supported IT systems.

We are now in an era where resonating with IT, their requirements, and their systems will assist us in rapidly deploying powerful, integrated, and scalable environments for our learners and performers.


June 16, 2009 at 8:33 am 1 comment

Bloom’s Taxonomy

Creating valid and reliable assessments requires us to distinguish between knowledge, comprehension, and higher levels of thinking. We need to measure what we want to measure!

Bloom’s Taxonomy is a first-class example of how we can think through these distinctions. For a detailed understanding of Bloom’s Taxonomy I’d recommend the book “A Taxonomy for Learning, Teaching, and Assessing” by Lorin W. Anderson, David R. Krathwohl, Peter W. Airasian, and Kathleen A. Cruikshank. In this article I’d like to provide you with a general set of distinctions among the six levels and perhaps motivate you to learn more.

Knowledge (memory recall)

At the knowledge level we would expect people to remember things such as facts, words, colors, terminology, sequences, methods, etc. Knowledge checks memory recall and nothing more. 

Key words used within a question stem at this level might be: define, describe, identify, label, list, match, name, outline, recall, reproduce, select, state.

In this article I’ll use examples related to driving as this is readily understood by people from diverse cultures. A driving test question at this level could be:


What color traffic light requires you to stop?

    • Yellow
    • Orange
    • Red
    • Green


Comprehension (understanding)

At the comprehension level we would expect people to understand the facts rather than just know them and have the ability to translate abstract ideas into concrete terms.

Words used within a question stem at this level might be: compare, convert, describe, defend, distinguish, estimate, explain, extrapolate, infer, interpret, organize, paraphrase, rewrite, state main idea, summarize, translate.

A driving test question at this level could be:



Please check all that apply to stop signs and traffic lights:

    • You must always stop at a stop sign
    • You must always stop at a traffic light
    • Traffic lights change color
    • Traffic lights have three colored lights
    • Stop signs change color



Application (applying)

At the Application level we would expect people to solve problems in new situations by applying knowledge, facts, techniques, and rules in different ways.

Words used within a question stem at this level might be: apply, change, compute, construct, demonstrate, discover, execute, manipulate, modify, operate, prepare, produce, relate, show, solve, and use.

High fidelity environments and observations might be used to accumulate evidence at this level but a driving test question could be:



You are driving at 25 miles/hour (40 km/hour) when a traffic light changes from green to yellow/orange and you are 7 feet (2 meters) away. In this situation, what would you do?

    • Brake hard and ensure you stop before the light
    • Continue through the light if it is safe to do so
    • Accelerate to avoid crossing a red light
    • Refer to the rules of the road manual



Analysis (examining)

At the Analysis level we would expect people to examine the facts, break information into parts by identifying motives or causes, compare and contrast, distinguish between fact and inference, and make inferences and find evidence to support theories.

Words used within a question stem at this level might be: analyze, break down, compare, contrast, diagram, deconstruct, differentiate, distinguish, identify, investigate, infer, and solve.

At this level we could think of a policeman visiting the site of an accident, seeking and collecting evidence, analyzing possibilities, and organizing evidence to support various theories.  High fidelity environments and observations might be used to accumulate evidence at this level but a test question could be:


As a policeman you have been called to the scene of an accident. Please look at the pictures and read the transcripts of the interviews to help you understand and analyze the scene.

Please write your report to record the evidence and your analysis.




Synthesis (combining)

At the Synthesis level we would expect people to compile information in a different way by combining diverse elements in new patterns and/or proposing alternative solutions.

Words used within a question stem at this level might be: categorize, combine, compile, compose, create, develop, devise, design, explain, formulate, generate, modify, organize, plan, rearrange, reconstruct, reorganize, revise, rewrite, summarize

At this level we could think of a lawyer developing his arguments in a case to defend his client.  At this level information should be abundant and a test question could be:


Please reference the attached police reports and witness depositions and develop a case to defend your client against a charge of going through a red light at speed.




Evaluation (judging)

At the Evaluation level we would expect people to present and defend opinions by making judgments about information, the validity of ideas, or the quality of work based on a set of criteria/rules.

Words used within a question stem at this level might be:  appraise, conclude, criticize, critique, defend, evaluate, interpret, justify, measure, summarize, support, and test.

At this level we could think of a judge who has considered the evidence and listened to the lawyers’ arguments, and again information is abundant. At this level a test question could be:



Please reference the attached police reports, witness depositions and arguments presented by either side and judge the defendant to be guilty or not guilty and explain how you reached your conclusions.



By understanding the distinctions presented by Bloom’s Taxonomy, we are able to more accurately assess the knowledge, skills, and abilities required to yield performance.

With the use of appropriate scenario-style questions, potentially supplemented with documents, pictures, sounds, and videos, it is possible to test at the higher levels of Bloom’s Taxonomy.
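As a side note, the keyword lists above can even drive a rough screening tool that guesses which Bloom’s level a question stem targets. This is purely heuristic (many verbs appear at several levels) and the keyword subsets below are abbreviated:

```python
# A rough screening tool built from the keyword lists above: guess which
# Bloom's level a question stem targets. Heuristic only; keyword subsets
# are abbreviated and many verbs legitimately appear at several levels.
BLOOM_KEYWORDS = {
    "knowledge": {"define", "describe", "identify", "label", "list", "name"},
    "comprehension": {"compare", "explain", "summarize", "paraphrase"},
    "application": {"apply", "demonstrate", "solve", "use", "compute"},
    "analysis": {"analyze", "contrast", "differentiate", "investigate"},
    "synthesis": {"design", "create", "formulate", "compose", "devise"},
    "evaluation": {"appraise", "critique", "justify", "judge", "evaluate"},
}

def guess_bloom_level(stem):
    """Return the first Bloom level whose keywords appear in the stem."""
    words = set(stem.lower().replace("?", "").split())
    for level, keywords in BLOOM_KEYWORDS.items():
        if words & keywords:
            return level
    return "unclassified"
```

For example, `guess_bloom_level("List the colors of a traffic light")` lands on the knowledge level, while a stem asking the candidate to critique a verdict lands on evaluation. A human item writer still has to confirm the classification.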

May 18, 2009 at 1:54 pm 1 comment

Delivering Assessments Safely and Securely

We have just updated a Questionmark White Paper called “Delivering Assessments Safely and Securely” and I wanted to share a little about what it took to get to this point. These documents aren’t as simple to create as they might seem!

We produced the first edition of this a few years ago when there was confusion around the security requirements for the various types of assessments.  It was painful, but also amusing, to hear people engaged in vague conversations about “Assessment Security” when they did not clearly understand the distinctions among low, medium and high stakes assessments (see previous post on Distinguishing Low Medium and High Stakes Assessments) and/or the different types of assessments (see previous post on Types of Assessments Formative Diagnostic Summative and Surveys).

When you understand these distinctions it is easy to see that the security demands for each type of assessment are different.  So the first edition of this paper dealt with the issues at a high level, clarifying the different types of assessment and their differing security requirements.
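To make the point concrete, here is a hypothetical sketch of how stakes-based security baselines might be encoded; the control names are my own illustrations, not taken from the white paper:

```python
# Illustrative sketch only: a mapping from assessment stakes to typical
# security controls. Control names are assumptions for illustration,
# not requirements from the Questionmark white paper.
SECURITY_BY_STAKES = {
    "low": {"authentication": "optional", "proctoring": "none"},
    "medium": {"authentication": "required", "proctoring": "none"},
    "high": {"authentication": "required", "proctoring": "in-person",
             "lockdown_browser": True},
}

def controls_for(stakes):
    """Look up the baseline controls for a given stakes level."""
    return SECURITY_BY_STAKES[stakes.lower()]
```

The value of such a table is exactly the point made above: it stops a survey from being burdened with proctoring, and stops a certification exam from shipping without it.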

Since then, through our participation with industry groups, trade associations and our Questionmark Perception users, we have developed a deeper understanding that we wanted to share. We have also seen many changes that we wanted to address, such as the changes that we’ve seen with certification exams and testing center paradigms, the wide availability of hand-held devices and the increased use of bandwidth-intensive content such as video.  It’s taken about 2 years of learning and about 3 months of writing, re-writing and gaining consensus. But finally we can share our work publicly.

This updated “Delivering Assessments Safely and Securely” white paper will not only help readers with these key distinctions but will also help organizations prevent the costly, time-wasting over-engineering of low-stakes assessments and the under-engineering of high-stakes assessments, which would expose stakeholders to unnecessary risks.

Enjoy your learning!

March 29, 2009 at 8:12 pm 1 comment
