Large-scale training

Written by Kevin Lovell on 1 January 2013 in Features

Kevin Lovell has some advice on how to keep large programmes focused and effective

Large-scale training programmes are commonplace in many businesses. They provide an opportunity to train a large group of people in the same way and at the same time. This ensures a consistent approach and has the added benefit of saving on costs - training, venue and staff costs are only incurred once.

However, according to a study recently conducted by KnowledgePool, large-scale training programmes consistently fail to deliver from a performance improvement point of view.

Since July 2006, I have been involved in running a series of learning outcomes questionnaires, surveying learners and their line managers for feedback three months after the learner completed training.

The questionnaires comprised a series of scored questions, which explore:

  • the extent to which the learner has had the opportunity to use what he learned
  • the extent to which the line manager has helped the learner use what he learned
  • perceptions of performance improvement as a result of the learning.

We gathered more than 25,000 responses covering 2,034 different course types, giving a reliable indication of the impact that learning has had on learners' performance.
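To make the scoring concrete, here is a minimal sketch, in Python, of how per-course performance improvement scores might be derived from questionnaire responses. The course names, the 0-100 scale and the simple averaging are illustrative assumptions, not KnowledgePool's actual methodology.

```python
from collections import defaultdict
from statistics import mean

# Each response pairs a course identifier with a perceived performance
# improvement score (0-100). Both are illustrative, not real survey fields.
responses = [
    ("negotiation-skills", 70),
    ("negotiation-skills", 55),
    ("induction", 30),
    ("induction", 45),
]

# Group responses by course type, then average to get a course-level score
by_course = defaultdict(list)
for course_id, pi in responses:
    by_course[course_id].append(pi)

course_pi = {course: mean(scores) for course, scores in by_course.items()}
print(course_pi)  # {'negotiation-skills': 62.5, 'induction': 37.5}
```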

Defining large-scale training programmes as courses with more than 100 learners, the data shows that 51 per cent of all learners are trained as part of a large-scale programme, yet these programmes underperform in comparison with the remaining 49 per cent of training activity.

This prompts some questions. Why do large (usually bespoke) training programmes underperform? And for those large programmes that buck the trend and perform well - what marks them out from the crowd?

That the average performance improvement (PI) scores for learners on large training programmes consistently fall below the average for all learners raises important concerns about how we conceive and deliver such programmes - programmes that are expensive to run in both money and manpower.

Figure 1 below shows the extent of this phenomenon. Where the number of learners completing a particular training course is ten or fewer, almost 60 per cent of those courses will achieve an above-average PI score. However, where the number of learners completing a course exceeds 100, the proportion drops to 40 per cent. Furthermore, the courses with 100+ learners account for 51 per cent of all learners.
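For readers who want to reproduce this kind of analysis on their own evaluation data, the Python sketch below shows the Figure 1 calculation in outline: the share of courses beating the overall average PI score, split by cohort size. The sample data is invented; only the ten-learner and 100-learner thresholds come from the analysis above.

```python
from statistics import mean

# (learner_count, average_pi_score) per course type - illustrative values only
courses = [(8, 55), (9, 62), (140, 38), (250, 41), (12, 47), (300, 52)]

overall_avg = mean(pi for _, pi in courses)

def share_above_average(selected):
    """Fraction of the selected courses scoring above the overall average."""
    above = [c for c in selected if c[1] > overall_avg]
    return len(above) / len(selected)

small = [c for c in courses if c[0] <= 10]    # ten or fewer learners
large = [c for c in courses if c[0] > 100]    # 100+ learners
print(f"small courses above average: {share_above_average(small):.0%}")
print(f"large courses above average: {share_above_average(large):.0%}")
```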

Top four reasons for lack of performance improvement

Using my experience of managing large volumes of training, from very large programmes down to one-off bookings of public-schedule courses, I would suggest there are four main reasons why larger training programmes are more prone to lower PI scores:

  • the 'one size fits all' approach: a familiar situation, in which one training intervention is applied to a large number of people. The larger the group, the greater the chance that the course content does not match each individual's training need
  • lack of opportunity to apply the learning: for whatever reason, the learning cannot be applied, or can only be partially applied. On a small programme the impact is minimal but, if application of learning fails on a large programme, the implications are considerable
  • copycat training: this occurs when a training course is found to be very successful. Word gets around, with the result that many more people enrol on the course on the assumption that it will benefit a wider audience to the same extent. However, the wider audience may not have the same training needs or context as the initial one
  • poor timing of training: this occurs when the training delivery is mis-timed relative to the opportunity to use it - either the training is delivered too late to be of use, or too early, so the learning is forgotten by the time it is needed. Larger programmes may suffer from this problem because of the challenges in co-ordinating large amounts of delivery.

L&D teams aiming to improve PI scores should look at each of these four areas in turn.

One size fits all

When training is needed for large audiences, typically hundreds of people, providing a tailored solution for each learner's need is impractical. A single one-size-fits-all solution is highly cost-effective, but it inevitably leaves some learners bored and others needing more. This approach is often taken when the training is for legal or regulatory reasons, or when it supports a widespread change such as a technology rollout - but how does it affect performance improvement?

Figure 2 below shows the distribution of the 2,034 course types by their average PI score. Unsurprisingly, most courses achieve a PI score of around 50 per cent (the overall average PI score for all learners is 44 per cent). However, what is significant is that no large programmes achieve higher than 50 per cent and some score as low as 30 per cent, which is in the 14th percentile of courses.

The best-performing large programmes are the result of a well-researched and clearly-identified skills gap (usually in management or leadership, and therefore carrying a high price tag). On such programmes, some individuals achieve very high PI scores while others score very low, bringing the average back down. This prompts the question of whether the selection of delegates for such programmes could be improved, so that only those individuals thought likely to benefit from the learning are chosen.

The worst performing large programmes tend to be those in which large groups are systematically 'sheep-dipped' through a single intervention. If the overriding driver is regulatory compliance, the desire for the least expensive intervention that meets the requirement, regardless of performance improvement, is understandable.

But there are other low-performing examples with more profound behavioural objectives, such as diversity training or dealing with workplace stress. In such cases, it is perhaps understandable that a low PI score is registered three months after the training event, since the need to use what was learned may not have arisen and, even if it has, the individual may not attribute a performance improvement. Nevertheless it is of some concern that programmes considered worthy of significant investment appear to under-perform.

Lack of application

The learning outcomes evaluation data provides clear evidence of a common experience: performance improvement only occurs if you can use what you learn. In general, the more you use it, the greater the performance improvement. This relationship is shown in Figure 3 below: the opportunity to use what was learned has a major impact on performance improvement.

An ever-present risk to any training is the possibility that learners will not, or cannot, apply what they have learned. In the context of large programmes the stakes are higher, particularly if there are organisational barriers to applying a large, high-profile (and expensive) learning programme. Occasionally we find a programme that appears well-executed yet registers a disappointingly low performance improvement score.

Copycat training

When a training course is particularly successful at addressing a need, those involved will naturally publicise that success. This encourages others to enrol on the course, to share in the success, and the original course sponsors will be understandably happy with this outcome. However, there is a risk that the follow-on learners may not have the same development needs as the original cohort.

Table 1 (left) shows the PI scores for the first four deliveries of just such a course. It was designed specifically with the first delivery cohort in mind but, by the fourth delivery, its PI impact had fallen by 20 per cent. This is still a high-impact course, but it must not be assumed that everyone will benefit equally.

On a larger programme, courses that were initially successful may not continue with such high levels of success. Figure 4 (below) shows an example of a course whose early results were well above average but have since slipped to noticeably below average. The PI scores indicate a possible change either in the skills and experience of the learners, or in the development needs of the organisation.

The lesson here is: don't assume that courses continue to perform at a constant level. Learner needs may not always match the needs that the course was designed to meet. Unfortunately, our statistics indicate that it is much more likely for performance improvement scores to tail off rather than improve over time, so ongoing vigilance is necessary to ensure frequently-used content is up to date.
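One simple way to exercise that vigilance is to flag automatically any course whose PI scores are tailing off across successive deliveries. The Python sketch below shows one hypothetical rule for doing so; the ten-point drop threshold is an illustrative assumption, not drawn from our data.

```python
def is_slipping(delivery_pi_scores, drop_threshold=10):
    """Return True if PI has fallen materially since the first delivery."""
    if len(delivery_pi_scores) < 2:
        return False  # nothing to compare yet
    return delivery_pi_scores[0] - delivery_pi_scores[-1] >= drop_threshold

# Successive deliveries of one course, as in Table 1: strong start, then decay
print(is_slipping([72, 68, 63, 52]))  # True - worth a content review
print(is_slipping([58, 60, 57]))      # False - holding steady
```

A rule like this is only a trigger for a conversation, of course: a flagged course needs a human review of the learners' skills, the development need and the content before any redesign.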

Poor timing of training

Training being delivered at inappropriate times happens more often than you might think: selection interview training takes place just before an unexpected recruitment ban is imposed, for example, or appraisal interview training is delivered too early or too late for the annual appraisals. All of these factors drag the performance improvement scores down significantly (see Figure 3).

Induction training - perhaps most organisations' largest programme - is consistently difficult to time well. Of all the large training programmes in our dataset, induction training delivered the most volatile performance improvement scores. From the comments received, it is clear that timing is the key issue, closely followed by relevance of content. When a 'new starter' gets his induction training four months late, there's not much he hasn't already found out.

Interestingly, if the timing issues were solved for induction training, it could become the highest performing large training programme. As it stands, however, induction training ranks in the 30th percentile of all 2,034 courses.

Five ways to ensure more successful large programmes

  • Keep a close eye on the quality of matching between the skills and experience of the learners, the development need and the course content. At the design phase, consider carefully the trade-off between a simplified training solution and the likely impact on performance improvement. That trade-off is hard to estimate, but too often the decision to simplify is driven by cost, with insufficient consideration of the impact on performance improvement. Ultimately, you need to be clear where the business benefit of the programme lies
  • When selecting learners for large programmes, consider which ones can be expected to gain the least from the training - and consider not nominating them
  • Make sure that learners can apply the learning after they complete the training. In particular, do not overlook organisational barriers to the application of learning: lack of management support; lack of support for new ways of working or new processes; learning that requires whole teams to be trained before it can be applied; and incentives that bind learners to established behaviours
  • Don't forget that, although a programme may register success on week one, when all eyes are upon it, there is no guarantee that it will continue that way. Regularly review your major programmes: the learners' skills and experience, the development need and the course content. Two of those three things will change over time, and all the signs are that they will diminish, not enhance, the impact of the training
  • The timing of training can be critical to its success: just-in-time training is easier said than done. Consider using technology: e-learning and social media open up many possibilities for more flexible/informal learning, without the constraints of traditional delivery (you may even achieve some delivery before the start date). Can a brief face-to-face component be delivered informally by HR or a senior manager?

Large training programmes have been extremely popular and command a substantial portion of total training spend, yet our statistics show that they consistently perform below average, while smaller, more focused interventions see higher success rates.

The key lesson L&D can learn from these findings is that large programmes, whether one-off projects or part of an ongoing training provision, require regular attention, to ensure they remain focused on the development needs of your organisation.

About the author

Kevin Lovell is learning strategy director at KnowledgePool. He can be contacted via www.knowledge-pool.com
