The future of training evaluation

Written by Richard Griffin on 4 September 2013

I am writing a book at the moment about training evaluation. One of the nice things about writing it is that I am learning new things. Something that has really struck me is that I am more convinced than ever that we are at a real turning point as far as evaluation techniques are concerned. This boils down, in my view, to the development of more approaches that are based on solid research and embrace the full, messy complexity of workplace learning.

More specifically, the following trends appear to be emerging. Many, of course, reflect the changing focus of training itself. In no particular order:

1. There will be an increasing focus on the evaluation of open skills (like empowerment), attitudes, values and informal learning more generally.

2. There will be a growing recognition that in many cases the changes resulting from training are not immediate or straightforward. In the past (and predominantly still), it was assumed that learning results in linear, immediate and enduring change. Wrong. Change can take up to a year to stick, is shaped and reshaped by what happens back at the workplace (for example in teams) as much as by the training itself, and can be forgotten over time. An evaluation sheet handed out at the end of a programme is unlikely to capture this. As a result, more evaluations will take place once training has finished.

3. There will be a greater recognition and understanding of how the many variables that affect learning relate to each other and which ones matter the most. Most evaluation activity gathers trainee reactions to learning. Research is showing that there are different types of reactions, which in turn are affected by a plethora of personal traits, training design, content and delivery issues, and organisational factors. Narrowing down the factors that really matter will make life a lot easier for evaluators.

4. Trend four is a short-term consequence of Trend three. Research is revealing what seems like an ever-growing list of factors affecting training, but it has yet to identify which of them are primary; this means evaluation designs are likely to get more complex. One example is the growing use of mixed methods (for example, combining surveys and interviews) in evaluation designs.

5. There will be a growing use of qualitative methods like focus groups to gather data about the impact of training, although I suspect that senior stakeholders will still need some convincing that words are as powerful evidence of impact as numbers.

6. There will be a growing use of 'alternative' evaluation methods like observations, the use of pictures, storytelling and discourse analysis.

7. In academic circles, economists will start talking to sociologists, occupational psychologists and learning theorists about their insights into learning, and present their findings in ways that practitioners can use (...well, you can hope!)

As an aside, and thinking about Trend two, I am often asked when the best time to evaluate is. Frequently this is determined by pragmatic factors, particularly if evaluation was not planned before the training started. There is, however, some research suggesting that measuring the amount of learning retained and applied a month after training has finished is a good predictor of how much will still be applied a year on.

The above adds up to a potential revolution, which frankly is what I think we need. A major reason is one trend that will not change: businesses will increasingly want to know what has changed as a result of investing in training.

About the author
Richard Griffin is director of the Institute of Vocational Learning and Workplace Research at Buckinghamshire New University. He can be contacted at: Richard.Griffin@bucks.ac.uk