Welcome to the second instalment of our new Disruption Debate series, where we speak to leading industry experts to discover more about disruption in the L&D industry. In this post, Lars Hyland speaks to Julie Dirksen. Julie Dirksen is an independent consultant and instructional designer who focuses on the science of sustainable behaviour change. She’s the author of Design For How People Learn and she’s happiest whenever she gets to learn something new. You can find her online at usablelearning.com.
“I’ve been doing e-learning since 1994, so I can now measure my experience in decades,” said Julie. “I’ve been trying to figure out how to leverage technology for learning since the early days of the internet. We’re moving forward into a greater presence of technology, but I don’t see our field doing things dramatically differently from when we started out. Good e-learning programmes are often not materially different to the early days - they’re just graphically nicer.”
With over 20 years of experience, Julie has seen many technologies come and go in the L&D space. Some have been adopted into learning programmes successfully, while others have disappeared after being touted as the next big thing. “Technology in so many other ways is unrecognisable, but by and large, we in L&D are still in the same framework we’ve always been in. We’re often not doing better stuff; we’re doing the same stuff but on different technologies.”
Of course, for many learning professionals, this is an alarming thought. If new technologies aren’t dramatically improving our learning programmes, then what are we missing?
The feedback issue
One of Julie’s main issues with L&D today is the lack of feedback we get about the efficacy of what we’re doing. “We focus on production - building things faster with rapid authoring tools. We think about how we can deliver information faster and more effectively. But in the absence of feedback, it’s hard to get the data to prove if your course has had an impact on your business goals.”
So why are we getting it so wrong when it comes to feedback? Julie told us about the streetlight effect: the tendency to focus our attention where it’s easiest to look - i.e. where the streetlight shines. We make our improvements where the light is better, meaning we concentrate on the production process - creating and delivering learning faster - rather than attempting better, more interesting, but potentially more challenging things.
How to collect better feedback
Julie believes that many L&D professionals could be missing a trick by not drawing on their own previous experiences. “Most L&D people have also been involved with classroom training along the way. How many people have taught a classroom course and not learned something important the first time round that they needed to tweak the next time - how to explain an activity, getting the timings right, where people get confused? But that’s not the case with e-learning. It’s often created, thrown on the LMS and that’s the end of it. Why is that acceptable for e-learning when we expect to make improvements to our classroom training?”
The problem, said Julie, is that many people believe that user testing is too expensive and complex. Jakob Nielsen’s research suggests otherwise: testing software with groups of various sizes, Nielsen found that around 80% of issues were flagged by groups of just five users. This was a big revelation in the software testing community - small sample sizes are good enough to yield valuable data. According to Julie, this insight could be greatly beneficial to the L&D community.
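Nielsen’s finding rests on a simple probability model: if each tester independently surfaces a given problem with probability L (Nielsen and Landauer’s published estimate is roughly 0.31), then n testers between them find a share of 1 - (1 - L)^n of the problems. A minimal sketch of that model (the function name and the 0.31 default are illustrative, not from this article):

```python
def problems_found(n_users, l=0.31):
    """Expected share of usability problems surfaced by n_users,
    assuming each user independently finds a given problem with
    probability l (Nielsen & Landauer's estimate is roughly 0.31)."""
    return 1 - (1 - l) ** n_users

# With the ~0.31 estimate, five users already surface ~84% of problems,
# which is why small test groups are "good enough" to act on.
for n in (1, 3, 5, 15):
    print(n, round(problems_found(n), 2))
```

Note how steeply the curve flattens: the marginal value of each additional tester drops fast, which is the practical argument for testing early with five people rather than waiting to recruit fifty.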
“We frequently have no access to our whole audience. But it’s so easy now to carry out user testing with just a handful of people - you can even use GoToMeeting or WebEx and have the users share their screen while they pull up the course. You can learn an enormous amount by watching somebody go through your course. We can also use the Brinkerhoff success case method here - survey a recent training audience and carry out targeted interviews with the users who have applied the learning the most and the least. This gives us structured qualitative data that we can feed back into the system to help us assess the efficacy of what we’re doing.”
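The success case method Julie describes needs very little tooling: survey respondents rate how much they have applied the training, then the extremes at both ends are pulled out for interviews. A minimal sketch, using hypothetical survey data (the names and the 1-5 application score are illustrative assumptions):

```python
# Hypothetical survey data: (respondent, self-reported application score, 1-5)
responses = [
    ("Asha", 5), ("Ben", 2), ("Carla", 4),
    ("Dev", 1), ("Elena", 5), ("Farid", 3),
]

def pick_interviewees(responses, k=2):
    """Return the k highest and k lowest scorers. The Brinkerhoff success
    case method interviews both extremes, not the average, to learn what
    made application succeed or fail."""
    ranked = sorted(responses, key=lambda r: r[1], reverse=True)
    return ranked[:k], ranked[-k:]

top, bottom = pick_interviewees(responses)
print("most applied:", [name for name, _ in top])
print("least applied:", [name for name, _ in bottom])
```

The design point is that the middle of the distribution is deliberately ignored: the richest qualitative signal comes from asking the strongest appliers what enabled them and the weakest what blocked them.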
Making the most of social learning
Our next major topic of discussion was how social learning can be used to gather feedback from a learning community. “A big benefit of social learning is that it’s very adaptable and responsive to the need right now,” said Julie. “I regularly throw questions out to the hive mind, whether that’s on Facebook groups or Twitter. This is a way to get better, more effective answers - for instance when there’s no single right answer, when there are many variables, or when the question is complex.”
Julie mentioned the value of xAPI for collecting this type of social feedback. xAPI (also known as Tin Can) reports back to a learning record store (LRS) about activities which we wouldn’t previously have been able to record, ensuring we get a holistic overview of the entire learning process. “Historically we’ve had very little ability to see what learners are doing,” said Julie. “We see that they’ve completed something, or how much time they spent on a course, or their test score. But we could be doing more with technology to see how learners are interacting with what we build.” Of course, social learning has the potential to play an important role in this process, whether this is recording a forum post on a social platform, or a conversation in a chat room, or a response to a poll posted by another learner. These social interactions all contribute towards learning, and our ability to track these actions helps us build a true picture of learning in an organisation.
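To make the xAPI idea concrete: a statement is a small JSON object with an actor, a verb and an object, which is then POSTed to the LRS’s statements endpoint. A minimal sketch of the kind of social-learning event described above - a learner responding in a forum thread - where the learner’s name, email and the activity URL are hypothetical (the verb ID is one of the standard ADL-published verbs):

```python
import json

# Hypothetical social-learning event: a learner replies in a discussion forum.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",                # hypothetical learner
        "mbox": "mailto:learner@example.com",     # hypothetical address
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/responded",  # standard ADL verb
        "display": {"en-US": "responded"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.com/forum/threads/42",      # hypothetical activity
        "definition": {"name": {"en-US": "Onboarding discussion thread"}},
    },
}

# In practice this body is POSTed (with auth headers) to <LRS base URL>/statements;
# here we just serialise it to show the shape the LRS receives.
print(json.dumps(statement, indent=2))
```

Because every statement follows this same actor-verb-object shape, an LRS can aggregate forum replies, poll responses and course completions into the single holistic view of learning activity that Julie describes.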
Talk to the end users
It is clear that Julie believes that now is the time for L&D leaders to start collecting more, and more meaningful, feedback on our learning efforts. So what is her number one piece of advice to facilitate this?
“Connect with the people who are actually using your systems. I’ve seen whole projects where nobody talks to the end users. We in L&D all get tunnel vision - we get immersed in the topic and forget about the learners. We need to be conducting user testing and getting feedback from our audiences throughout the project. We need to be connecting with our audience to see what’s useful for them.”
Ultimately, it’s time for L&D professionals to take action and get creative about gathering feedback. Better understanding our learners is key to designing more engaging content, and we should consider this a constant process, not just a one-time event.
If you enjoyed this piece, you can find Julie Dirksen on Twitter @usablelearning, and follow us at @totaralearning to be the first to hear about our next instalment in the Disruption Debate series. You can also join the debate using #DisruptionDebate on Twitter.