A key way to check whether your learning program is effective is to test learners’ understanding through a summative quiz or assessment.
Sometimes you might find yourself commissioning an elearning supplier to build an assessment for you, or using a separate question authoring tool. The good news is that with Totara, you don’t need to do either. You can use the in-built quiz engine to create an assessment quiz, with a variety of question types to choose from - your old pal Multiple Choice, of course, but also more interesting types such as matching questions and calculated questions, all the way through to freeform essay questions for manual grading. You can see the whole list right here: http://help.totaralms.com/totara_v1.1_help/Trainer/Quiz_Question.htm
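Because Totara’s quiz engine is built on Moodle, questions can also be authored offline and imported in Moodle XML format. Here’s a minimal sketch of a single multiple choice question - the question name, text and answer options are purely illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<quiz>
  <!-- One illustrative multiple choice question in Moodle XML format -->
  <question type="multichoice">
    <name>
      <text>Interview techniques</text>
    </name>
    <questiontext format="html">
      <text><![CDATA[<p>Which question style best helps an interviewee
      expand on their areas of experience?</p>]]></text>
    </questiontext>
    <!-- single = one correct answer; shuffleanswers randomises option order -->
    <single>true</single>
    <shuffleanswers>true</shuffleanswers>
    <answer fraction="100">
      <text>Open questions</text>
    </answer>
    <answer fraction="0">
      <text>Closed questions</text>
    </answer>
    <answer fraction="0">
      <text>Leading questions</text>
    </answer>
  </question>
</quiz>
```

The `fraction` attribute sets the credit awarded for each option (100 for the correct answer, 0 for distractors), which also supports partial-credit designs.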
The mechanics of putting an assessment together in Totara are simple. The real skill, as any learning designer will tell you, is designing an effective assessment.
Here are ten top tips on how you can develop an effective assessment.
- Let learners do it whenever they want to: This tip is more about your overall learning approach. We’re big believers in adult learning principles. Too often an assessment is locked down until a range of other activities are completed. If learners may have prior learning in the topic, let them prove it and ‘test out’ - if they’re competent, and if your test is tough enough. Which leads to…
- Make it tough enough: The acid test for this is simple - it should not be possible to guess answers and pass. Look out for the giveaway mistakes that make this easy for the guessers. As one learner famously said of a compliance course, “I tried to guess the answers but I kept failing. In the end it was just easier to learn the stuff and pass the test.” That should be your standard.
- Play fair: Provide clear guidelines for the assessment, including the estimated time to complete it and any additional resources your learners may need. Think about whether it’s an ‘open book’ assessment - really, that’s the only sensible type these days. Nobody cares about testing short-term memory recall; it’s about whether you can find and apply the information that would normally be available to you on the job. Which, let’s face it, is everything.
- Align the questions with the course objectives: Reinforce the core concepts through questions to help your learners retain key material and make your course effective. Ensure all questions relate to the material in the learning, and concentrate on the most important content. One good tip is to write the assessment from your learning objectives before you develop any course content. Then your learning design stays true to the assessment.
- Test competency, not memory: Assessment shouldn’t be about what people know; it’s about what they can do. In other words, phrase questions in terms of ‘what would you do if...’, not ‘do you remember what...’. Narrative helps here - take a case study approach. Write a short two-to-three paragraph case study to set up a situation, then ask three or four follow-on questions that check behaviour, e.g. ‘What would you have done in Ken’s position?’. These are more engaging and searching than the ‘which of these 7 items are not permissible as evidence’ type of question.
- Don’t do dumb distractors: Multiple choice questions are efficient for assessment, but they’re very guessable. Raise the stakes by making sure the wrong answers are plausible. A simple test: would a sane person trying to do the right thing plausibly choose the option? If not - if it’s just there to fill a gap, or is an ‘all of the above’ choice - then it’s guessable. Eliminate it and come up with better options.
- Which of the following is also good practice for an MCQ design?
- Answers should be of close to equal length, because everyone will guess the longest one
- Answers should not be cumulative, i.e. option A: do X, option B: do X and also Y, because people assume more is better
- C should not always be the right answer because C is the most common correct answer in any MCQ (and hence the most guessed)
- Always omit universal qualifier words; never under any circumstances include them, because people always eliminate extremes
(Answer: all of the above - which is itself a terrible option.)
- Build in feedback: This is critical to making the quiz part of the learning and reinforcement process. Decide whether you’re going to run under test conditions - no feedback during the test, which may be important for a formal, timed exam (you can still include a review mode on completion) - or provide feedback after each question, which reinforces the learning as you go. If you’re doing the latter, keep the feedback concise so it doesn’t slow down your quiz.
Good feedback explains what the correct answer is and why. If the learner answered incorrectly, your feedback might say ‘That’s not quite right’ and then explain the correct answer. Even if your learner answered correctly, your feedback can still reinforce why the answer is correct, e.g. ‘That’s right - using open questions allows the interviewee to expand on their areas of experience.’
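Per-answer feedback of this kind can be set in the question editor, or included alongside each answer if you author questions in Moodle XML format (which Totara’s Moodle-based quiz engine can import). A sketch of the answer elements only, with illustrative wording:

```xml
<!-- Illustrative fragment: per-answer feedback inside a multichoice question -->
<answer fraction="100">
  <text>Open questions</text>
  <feedback format="html">
    <text><![CDATA[<p>That's right - using open questions allows the
    interviewee to expand on their areas of experience.</p>]]></text>
  </feedback>
</answer>
<answer fraction="0">
  <text>Closed questions</text>
  <feedback format="html">
    <text><![CDATA[<p>That's not quite right - closed questions tend to
    produce short yes/no answers. Open questions work better here.</p>]]></text>
  </feedback>
</answer>
```

Note how the incorrect option’s feedback doesn’t just say ‘wrong’ - it explains the correct answer, in line with the tip above.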
- Have your quiz script checked and verified by a subject matter expert: it’s on you if a question is wrong. Also test your quiz with learners before rolling it out - this is a good way to surface misunderstandings; what seems clear to you may be less clear to others.
- Take the assessment further: Remember, whilst important, the quiz is only part of evaluating effectiveness. For example, you may work with line managers to check that learners are applying their learning and to see the difference it is making to performance. The true test of effectiveness is what changed in the business. You could set up a questionnaire or survey in Totara and send it to managers a set time after assessment completion to check what’s changed - but that’s a topic for another blog…