This article builds on “L&D is not about training courses, it’s about improving workplace performance”
In the L&D business, evaluation is the step in the process that gets done least well.
It is the poor relation, the neglected tail-end Charlie of the cycle, and it feels more like a box-ticking obligation than a critical cog in the machine.
I think this is dangerous.
If we are unable to provide a professional set of results to justify the investment made in our services, we are doomed to be stuck on the periphery.
This leads to what Charles Jennings calls the “conspiracy of convenience”, where everyone is happy that the training happened and a ragtag of MI measures and happy-sheet smiley faces confirms that the box was ticked properly.
As a socially awkward INTP, I am never quite sure when I am being super clever and when I am being hyperbolic, so please tell me to calm down if this is over the top. But I believe that showing senior leaders a jumble of unimportant graphs and expecting a pat on the head infantilises the profession, reinforcing the idea that we are not central to the organisation’s success.
This is not a groundbreaking observation.
Kirkpatrick has been telling us since 1959 that we need to look beyond happy sheets and knowledge tests to the implementation of learning and its impact, yet many L&D interventions remain stubbornly focused on training and learning, not the transfer of that learning and its impact on performance.
This is because it is not easy to do well.
There are two big barriers that get in our way:
- First, most organisations don’t currently measure employee performance in any meaningful sense, yet suddenly expect us to do so the moment a few quid is invested in a learning programme
- Second, we scope L&D interventions around the training cycle – including some learning transfer activities – but rarely is the full journey from learning to performance improvement (what I call L2P) integrated into the project
There is no simple way to solve these issues; if measuring human performance were easy, we’d be doing it already.
My argument is that we are trying to crack this nut by using the wrong tools.
In short, the poor state of L&D evaluation will not be solved through the application of a different evaluation model.
This is not to criticise Kirkpatrick or any other model out there. They are all valuable, and all contain useful insights, tools and ideas.
I am not suggesting we chuck them out; I am only suggesting that we don’t start there.
Instead, I am arguing two things.
First, to address my first bullet above, we should adopt project management techniques.
This means we need to treat every L&D project like any other project, and therefore apply project management methodology to it.
As any project management professional knows, there is a huge bulk of work involved in the project initiation phase during which the problem (or opportunity) is diagnosed and the business case developed.
Only once the organisation is convinced that the deliverables outlined in the business case are worth the time and money is the project given the green light.
Adopting this approach allows us to do the same, starting not with a needs analysis, but with a clear business case, based on valuable performance outcomes that the organisation can buy into.
Immediately we have a basis for evaluation: did we deliver the business case to the right quality standards, on time, and on budget?
This doesn’t completely get us around the problem – human performance is a nightmare to measure meaningfully whatever process we adopt – but it gives us a solid foundation to work from, based on measures that the organisation values and is willing to pay for.
I will develop this approach more in future blog posts.
The other obstacle relates to the scope of L&D projects.
One problem in adopting the L2P journey as the project boundary, rather than the training cycle, is that we are taking on responsibility for things we have no control over.
This is why I argue that it is necessary for us to adopt change management methodologies.
Change management takes a broader view of how performance is developed, encompassing anything and everything that helps or hinders the change sticking and succeeding within the organisation.
I believe that change management is the plural of L&D.
If L&D is about improving individual performance through learning, then change management is about improving organisational performance by helping individuals change how they deliver.
Change management includes training, but it also attempts to address other gaps: creating awareness and desire for change (to borrow the A and D from Prosci’s ADKAR model), or “unfreezing” (to slightly misquote Kurt Lewin) using, for example, force-field analysis, and then ensuring the change sticks by “refreezing”. Because it covers the whole process, it also engages senior stakeholders and uses tools such as communications and the proactive handling of resistance.
So, this is my argument: we can only provide solid measures the organisation values, based on workplace performance, if we expand our scope to cover the L2P journey, and borrow techniques from project and change management.
I will expand on these ideas in future posts, but first I need to know something: am I right?