Learning analytics dashboards and automated nudges
Over the past few weeks of the University of Edinburgh’s MSc Digital Education programme, I’ve looked at big questions in learning analytics, getting started with analytics, and social network analysis. The pressure of assignment deadlines stopped me from blogging for the past couple of weeks, but now I want to return with some thoughts on learning analytics outputs.
After all, most of us in our daily lives do not get particularly ‘hands on’ with the collection and processing of data. Instead, we encounter pre-packaged analytics, ready to use in the form of Fitbit charts, Twitter graphs and weather maps.
We give little thought to how this data was generated because we find it useful enough to inform our decision making. But in the context of learning, we need to be a little more thoughtful.
Learning analytics is, at its core, a form of giving feedback. Good feedback highlights our strengths and motivates us to work on our weaknesses. Bad feedback can be crippling.
So what are some of the options?
Over the past few weeks, we’ve been looking at a number of different tools that provide feedback in different formats. At one end of the spectrum are dashboard visualisations, while at the other end are descriptive statements.
Dashboards will be familiar to anyone with a health app on their phone or who is involved in monitoring web traffic. Automatically generated graphs show trends over time, for topics like ‘logins’, ‘page views’, ‘unique visitors’, and so on. Additional visualisations on a learning management system (LMS) dashboard might show a breakdown of class performance on tests, or participation in forum discussions.
Such dashboards are useful to the teacher or admin for analysing trends over time, particularly when they are viewed within the context of the learners’ experience. For example, is there a spike in LMS traffic in the days prior to a major assessment? Or, what percentage of users have failed to complete their mandatory compliance training?
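Questions like these usually boil down to simple aggregations over an activity log. As a minimal sketch (the event log, user names and dates here are entirely made up, and real LMS data would be far messier):

```python
from collections import Counter
from datetime import date

# Hypothetical LMS event log: (user, event_type, date) tuples.
events = [
    ("ana", "login", date(2018, 3, 1)),
    ("ben", "login", date(2018, 3, 5)),
    ("ana", "login", date(2018, 3, 5)),
    ("cara", "login", date(2018, 3, 5)),
    ("ana", "completed_training", date(2018, 3, 2)),
]
all_users = {"ana", "ben", "cara", "dev"}

# Logins per day: a spike just before a major assessment would stand out.
logins_per_day = Counter(d for user, event, d in events if event == "login")

# Percentage of users yet to complete their mandatory training.
completed = {user for user, event, d in events if event == "completed_training"}
pct_incomplete = 100 * len(all_users - completed) / len(all_users)

print(logins_per_day[date(2018, 3, 5)])  # 3
print(pct_incomplete)  # 75.0
```

The counts themselves are trivial; the hard part, as the next paragraphs suggest, is deciding what they actually mean.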
What these dashboards often don’t reflect is the learning that takes place outside of the system. For example, a learner who eschews the learning management system because they prefer to develop their skills on YouTube or Lynda.com may be flagged as an individual with no interest in their own development.
An additional concern is that any visualisation then shown to the learner might inflate the importance of what is being measured. A learner who logs in and sees a graph of their logins over time might suspect that the act of logging in is more important to the authority figure (manager, teacher, etc) than the quality of those individual sessions.
These concerns can largely be overcome when the context of the measurement and analysis is understood but, in many systems, this is not the case. My Apple Watch rewards a 30-minute jog more generously than a 30-minute weights session, even though I find the latter much harder.
At the other end of the outputs spectrum are descriptive statements. Again, health apps can be found at the forefront of this technique, sending statements like: ‘You often don’t move after 7pm. Try going for a short walk!’
In the learning context, equivalent statements might include: ‘It’s been a week since you last logged into the LMS’ or ‘You are in the least active third of LMS users’. Both of these statements might encourage me to get learning, but they might also annoy me if it’s been a particularly busy week or if I think the content on the LMS is...a bit pants.
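Under the hood, nudges like these are typically just threshold rules evaluated over activity data. A minimal sketch of how the two statements above might be generated, with illustrative thresholds (seven days of inactivity, bottom third of users) that are assumptions rather than drawn from any real system:

```python
from datetime import date
from typing import Optional

def nudge(last_login: date, today: date, activity_rank: float) -> Optional[str]:
    """Return a nudge message, or None if no rule fires.

    activity_rank: the user's percentile of LMS activity (0.0 = least active).
    Both thresholds below are illustrative, not from any real product.
    """
    if (today - last_login).days >= 7:
        return "It's been a week since you last logged into the LMS."
    if activity_rank < 1 / 3:
        return "You are in the least active third of LMS users."
    return None

print(nudge(date(2018, 3, 1), date(2018, 3, 9), 0.9))
# It's been a week since you last logged into the LMS.
```

Note that nothing in rules like these knows whether the learner has had a busy week, or is learning elsewhere, which is exactly why their impact is worth studying.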
The impact of these nudges is something I plan to look at in the coming weeks, so I may well blog about them again. In the meantime, I’d love to hear from anyone who is already using automated nudges in a learning context. Find me on Twitter @RossGarnerGP if you’d like to share your thoughts!