Getting started with learning analytics

Written by Ross Garner on 4 October 2017

Last week in this blog series, I asked some big questions of learning analytics. This week, I want to take a more practical approach by exploring the steps needed to gain useful insights from students' digital activity.

First of all, we need to decide what we want to measure (Slater et al., 2017). That might be time spent on a particular resource, the number of visits, hyperlinks clicked, and so on. That data then needs to be collected. If errors or test data are present, the data will need to be cleaned.
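To make that concrete, here is a minimal sketch of a cleaning step in Python, assuming the activity data arrives as a CSV export of events. The file name, column names and the 'test_' account convention are all hypothetical, not a description of any real system.

import pandas as pd

# Load a hypothetical export of learner activity events
events = pd.read_csv("activity_export.csv", parse_dates=["timestamp"])

# Remove obvious errors: rows with no learner ID or no timestamp
events = events.dropna(subset=["learner_id", "timestamp"])

# Strip out test data, here assumed to be flagged with a 'test_' prefix on the ID
events = events[~events["learner_id"].astype(str).str.startswith("test_")]

# Drop exact duplicates created by double-clicks or repeated page loads
events = events.drop_duplicates()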

If data comes from multiple sources, or in different formats, it will need to be transformed so that it can be combined. Once transformed, it needs to be analysed and validated. Finally, if this process is to have a positive impact on learner outcomes, the data should be presented in a useful format so that we - or our clients - can draw conclusions. 
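Again as a rough sketch only, combining two hypothetical sources - toolkit events and LMS records - might look like this, with a simple validation check at the end. The file names, column names and the join are assumptions made for illustration.

import pandas as pd

# Two hypothetical sources that store the same learner under different column names
toolkit = pd.read_csv("toolkit_events.csv").rename(columns={"userId": "learner_id"})
lms = pd.read_csv("lms_records.csv").rename(columns={"user_id": "learner_id"})

# Combine the sources so toolkit activity can be read alongside LMS data
combined = toolkit.merge(lms, on="learner_id", how="left")

# A basic validation step: count toolkit events with no matching LMS record
unmatched = combined["course_id"].isna().sum()
print(f"{unmatched} toolkit events have no matching LMS record")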

If this process sounds long, complicated, and vulnerable to human error, then you're right to be cautious. We have a tendency to assume that numbers are objective, but learning analytics requires a great deal of human input: either from a researcher completing the process manually, or from a software developer creating an algorithm. That human input makes it difficult to strip analytics of bias.


A classic example from outside of education is ProPublica's investigation into the risk assessment software, COMPAS, used in the US criminal justice system to predict the likelihood of reoffending. ProPublica found that the software was unfair to black people. The company that created the software, NorthPointe, disputed the results.

A third analysis, by Krishna Gummadi, head of the Networked Systems Research Group at the Max Planck Institute for Software Systems, argued that the two positions didn't contradict each other. It depended on how you understood 'fairness'.

The questions here are: what was measured? How was data collected? How was it cleaned? And so on. At each point in this process, someone made a decision, so any insights gained should be treated with caution.

Returning to learning analytics, we work in an industry where commercial organisations are unlikely to tell us how insights were gathered, while a more transparent process could lead to learners ‘gaming the system’ or changing their behaviour because of the tracking.

To give a basic example from my context: GoodPractice sells an online performance support toolkit. What does a successful toolkit session look like? Five minutes, because the learner found their answer quickly? Or five hours, because the learner was working on a proposal and kept referring to different resources?
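Even to measure session length, we would first have to decide when one session ends and another begins. Here is a rough sketch, assuming the same hypothetical event export as above and an arbitrary 30-minute inactivity threshold:

import pandas as pd

events = pd.read_csv("toolkit_events.csv", parse_dates=["timestamp"])
events = events.sort_values(["learner_id", "timestamp"])

# Start a new session whenever a learner has been inactive for 30 minutes;
# the threshold itself is an arbitrary, human decision
new_session = events.groupby("learner_id")["timestamp"].diff() > pd.Timedelta(minutes=30)
events["session_id"] = new_session.groupby(events["learner_id"]).cumsum()

# Session duration: last event minus first event within each session
durations = (events
             .groupby(["learner_id", "session_id"])["timestamp"]
             .agg(lambda ts: ts.max() - ts.min()))
print(durations.describe())

Even here, the 30-minute cut-off is a judgement call, and the resulting durations say nothing about whether the time was well spent.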

When getting started with learning analytics, we should think carefully about the assumptions we bring to the process, and prompt the end users of our insights to think carefully about the conclusions they draw.

References

Slater, S., Joksimović, S., Kovanovic, V., Baker, R. S., & Gasevic, D. (2017). Tools for educational data mining: A review. Journal of Educational and Behavioral Statistics, 42(1), 85-106.


About the author

Ross Garner is instructional design manager at GoodPractice.
