Online course creation top tips: measures of success


Ginette Tessier reminds us that it’s not just the video equivalent of bums on seats we should be looking at

One of the most quoted measures of success for online courses is the completion rate. But is this really the best way to tell if an online course is any good?

You don’t have to look very far to find statistics about ‘poor’ completion rates for online courses. A simple search will bring up numbers ranging anywhere from 5% to 35%. These are most often used in one of three ways:

a) to dismiss online courses as an effective training tool

b) to create clickbait content about course creation that sounds authoritative, or

c) to sell a ‘magic bullet’ – a course that bucks the trend or a remedy to poor completion rates

So what’s the problem?

The surface logic of using completion rates as a quality measurement is very attractive: if a course is any good, then obviously learners are going to want to watch all of it, aren’t they? It sounds plausible, reasonable, vaguely scientific and it’s easy to remember as a concept. It doesn’t work our brains too hard, and lots of people are saying it, so it must be true!

The problem is that anyone who’s ever taken an online course will be able to tell you that even courses they’ve found helpful, useful and well-presented often don’t get completed. There are lots of potential reasons for this:

  • The course covers a broad range of topics within an area, which might not all be relevant to learners at the first viewing
  • The course doesn’t need a learner to complete every video to be successful
  • Something happens halfway through a course that prevents a learner from finishing
  • The learner isn’t mandated to complete everything – the course might only be a ‘hobby’ purchase for example, where initial interest quickly wanes
  • The course really isn’t any good

I’m sure there are more, but the important point here is that a course not being any good is only one possible reason for a learner not to finish a course. So straight off the bat, we’re pouring cold water on completion rates as a suitable measure of quality.

Flawed thinking

The list above highlights another issue to consider: what ‘completion’ really means in the context of online courses. The traditional measure of ‘number of videos watched to 90% versus total number of videos in the course’ is fundamentally flawed for courses that don’t require every video to be watched. My own course is an example: I offer a ‘fast track’ and a ‘deep dive’ option in every lesson, and I’d be surprised if anyone went through every single video!
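To make that concrete, here’s a minimal sketch in Python. The course structure, learner and figures are entirely made up for illustration – they aren’t taken from my course or from any real platform – but they show how the traditional calculation compares with one that only counts the videos a learner is actually expected to watch.

```python
# Hypothetical illustration: the 'traditional' completion rate counts every
# video in the course, even ones a learner was never expected to watch.

# A made-up course where each lesson has a required 'fast track' video and
# an optional 'deep dive' video.
course = [
    {"title": "Lesson 1 fast track", "required": True},
    {"title": "Lesson 1 deep dive",  "required": False},
    {"title": "Lesson 2 fast track", "required": True},
    {"title": "Lesson 2 deep dive",  "required": False},
    {"title": "Lesson 3 fast track", "required": True},
    {"title": "Lesson 3 deep dive",  "required": False},
]

# Videos a fictional learner watched to at least 90%.
watched = {"Lesson 1 fast track", "Lesson 2 fast track", "Lesson 3 fast track"}

# Traditional measure: videos watched versus all videos in the course.
traditional_rate = len(watched) / len(course)

# Alternative: only count the videos the course actually requires.
required = {v["title"] for v in course if v["required"]}
required_rate = len(watched & required) / len(required)

print(f"Traditional completion rate: {traditional_rate:.0%}")  # 50%
print(f"Required-only completion:    {required_rate:.0%}")     # 100%
```

The same viewing behaviour reads as 50% complete under the traditional measure and 100% complete once the optional videos are set aside – which is exactly why the headline number can’t be trusted on its own.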

For some, the reasons above might be enough, but I think there’s another angle to busting this myth: the statistics themselves. Tracing the source of a quoted statistic is always a fun way to spend an idle five minutes. In this case, the numbers often lead back to a very specific study about a very specific type of course – and usually not one relevant to the audience reading the statistic in the first place.

My argument is that there are simply too many unknowns to provide a meaningful measure of completion rates for ‘online courses’ as a whole category. To get such a measure, we would need to know the total number of courses and learners in the world. For those numbers we’d need to know the number of platforms being used and then factor in how to deal with courses no longer in public circulation. We’d also need to know about courses that require all videos to be completed for ‘success’ versus those that don’t. The list goes on!

It’s probably possible to produce a meaningful completion-rate statistic for a specific subset of online courses, but if completion rates aren’t a great measure of quality anyway, why bother?

Purposeful measures

A well-designed online course will have the same level of rigour around learning objectives as any other training intervention. These might be expressed in terms of knowledge or skills gained and should be demonstrable. It should be possible for a learner to do the thing the course promises they will be able to do. And if they can do what’s been promised, to the required standard, who cares if they watched every single video or not?

This leads me nicely into next month’s article, which will be all about how quizzes are commonly (and shockingly) misused as an indicator of learning transfer!


Ginette Tessier
