Big questions from learning analytics
Big data and analytics have changed the web, marketing and medicine, but can they have a positive impact on learning? For the next twelve weeks, I'm going to be asking that question as part of the University of Edinburgh's Master's programme in Digital Education.
I'm grateful that Training Journal have been kind enough to offer me some blog space for the duration, and I'd love to hear your thoughts throughout this period.
So what do we mean by 'learning analytics'? Well, it depends who you ask. The first International Conference on Learning Analytics agreed on this definition: "the measurement, collection, analysis, and reporting of data about learners and their contexts, for the purposes of understanding and optimising learning and the environments in which it occurs."
Lofty goals, incorporating a wide spread of activities that can be extrapolated to every level of learning. For the student, or workplace learner, how can data about their behaviour help them improve their performance? How can learner data help instructional designers improve course design?
How does it help L&D managers increase their impact across the organisation? How does a vendor gain insight from multiple organisations to develop solutions? How can data about education change public policy?
Before we even get started on these questions, there are some major challenges that need to be flagged. For one, people rarely learn within a single digital environment. Imagine you read a passage in an elearning course.
You look up one of the words on Google, flit to Wikipedia for some background, send a tweet about what you've just learned and get an immediate response from an expert on the other side of the world. Learning keeps taking place, but the tracking is left far behind.
Standards like xAPI (or Tin Can API) are an attempt to address this, but it's hard to imagine it being implemented universally across the web (and with that one sweeping statement I've thrown xAPI under the bus - I'm willing to be convinced otherwise).
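To give a flavour of what xAPI tracking looks like in practice, here is a minimal sketch of an xAPI statement, expressed as a Python dictionary. Every statement records who did what to what: an actor, a verb and an object. The learner's name, email and course URL below are illustrative; the verb URI is one of the verbs published by ADL, the standard's custodians. In a real implementation, a tool would send this as JSON to a Learning Record Store.

```python
# A minimal xAPI statement: "Example Learner experienced passage 3".
# Names, email and object URL are illustrative, not from a real system.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        # An ADL-published verb URI; "experienced" suits passive activity
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://example.com/courses/analytics-101/passage-3",
        "definition": {
            "name": {"en-US": "Passage 3 of an elearning course"},
        },
    },
}
```

The appeal of the actor-verb-object shape is that, in principle, the Google search, the Wikipedia visit and the tweet could each be recorded as statements too - if every tool along the way chose to emit them, which is exactly the sticking point.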
Then there's the subject of control. Why do we want to track all of this data? Is it to help the learner, or help the analyst? If we have to start altering learning experiences to allow for better tracking, is there a danger that this is detrimental to the learner? Will learners treasure their privacy over convenience? Or will constant tracking be embraced?
One hypothesis suggested by Dr George Siemens, a founding member of the Society for Learning Analytics Research, is that learners might be willing to use their personal data as a kind of currency, whereby they would allow themselves to be tracked 24 hours a day in exchange for more personalised support.
I already let my health insurance track me in exchange for a free coffee once a week, so there definitely seems to be some merit in this idea.
A third challenge is the question: what problem does learning analytics solve? At an eLearning Network event in Glasgow last week, I started a conversation about learning analytics, and no one who joined in could name a specific challenge that was troubling them.
We can track, and in tracking we might be able to improve. But is there a demand for it? Or will those of us interested in learning analytics create that demand? Finally, are we ready for the results of big data analysis in education? What if we find out that what we are doing is a waste of time? Or, worse, detrimental to good learning?
This week, week one of my course, it's all questions I'm afraid.
About the author
Ross Garner is instructional design manager at GoodPractice