Here we go again. To “Kirkpatrick” or not to “Kirkpatrick”, that is the question.
To be, or not to be, that is the question—
Whether ’tis Nobler in the mind to suffer
The Slings and Arrows of outrageous Fortune,
Or to take Arms against a Sea of troubles,
And by opposing, end them? To die, to sleep—
William Shakespeare, Hamlet, Act III, Scene i
Many a person has debated the Kirkpatrick evaluation taxonomy. To name a few:
- Dan Pontefract: Dear Kirkpatrick’s: You Still Don’t Get It (a personal favorite)
- Jane Bozarth: Alternatives to Kirkpatrick
- Roger Chevalier, CPT: Evaluation, The Link Between Learning and Performance
- Donald Clark: Using Kirkpatrick’s Four Levels to Create and Evaluate Informal and Social Learning Process
- And right now, a hot and heavy debate between Clark Quinn and Will Thalheimer: Kirkpatrick Model Good or Bad? The Epic Mega Battle!
So here I am, poised to have the discussion again. But with a twist – ah, you knew a twist was afoot, didn’t you? Anecdotally, I find the Kirkpatrick “model” puts people in one of three camps.
- I use the words “Kirkpatrick” but don’t use the taxonomy per se.
- I have heard of Kirkpatrick, agree with it in theory, and want to use the different levels but have no idea as to how/where to begin beyond “Smile Sheets”.
- No idea what you are talking about. Let me Google it and come back to you. (Fine, we’ll wait)
But there is a quiet fourth camp and it’s this group of people I wish to address. In this camp, whether or not you use the “Kirkpatrick” levels or believe in its link to “Learning Performance” doesn’t matter.
Camp #4) My organization doesn’t require anything beyond “Smile” sheets so that is all we measure.
A portion of the debate happening between Clark and Will (and one they both can agree upon) is that the Learning Industry, as a whole, has an accountability problem. We cannot point the finger at the leadership of an organization and say, “They don’t ask me for it – so I don’t have to provide it,” or “All they are asking for are smile sheets and butts-in-seats numbers – so that’s all I give them,” and then act all shocked and surprised when leadership says training doesn’t add value.
There has been plenty of research stating there is no causal link between “Level One” and “Level Two” learning; meaning there doesn’t have to be a positive reaction to learning for learning to have taken place. Therefore providing leadership with just smile sheets is the equivalent of having them (Spoiler Alert) watch only Star Wars I: The Phantom Menace, and expecting them to understand that Anakin Skywalker turns out to be the bad guy. They don’t know the whole story.
Sorry people, our jobs just don’t work that way.
If we want a seat at the table, we have to take accountability and responsibility for our position and the end results it subsequently produces. Back in the day when I was a corporate L&D person, I wanted my performance review to be based on the same criteria as other operational leadership. You have to take the good with the bad – the credit with the blame. I know, the counterargument is always that organizations are quick to say “training” fails and assign blame, but are unwilling to give credit when “training” is successful. To this I have good news and bad news.
The bad news: That will never change, and if you think L&D is the only department to experience this type of blame assessment, you are mistaken. When profits fall, it’s usually sales that takes the hit (forget that it could be the company has a lousy product), or when workers’ comp claims go up, it’s usually Risk Management or HR who have a target on their back (not that operations takes unnecessary, risky shortcuts) – see where I’m going here? Finger pointing and the blame game are as old as time. And regarding credit…well, no – L&D is never fully responsible for improved performance and never will be. Performance support and success require a village. We cannot ever take full credit for performance improvement, behavior change, or whatever the flavor of the day is for organizations.
The good news: You can do something about it – if you want to. This takes the form of measuring success. Regardless of the taxonomy, tool, or method you use – some measurement process is required. I also want to go on record stating it’s a cop-out to say that your organization doesn’t require deeper measuring so you don’t do it. That may seem harsh, and I don’t mean to harsh your buzz, but it’s true. It is our responsibility to show organizations a better way to measure performance improvement.
Let’s not get all L&D geeky on people. Speak the language of your organization. How about writing “Performance Objectives” rather than “Learning Objectives”, such as: “Within one month of completing this course the participant will put their project plan into action, with project participants evaluating project success.” Those are course results that directly impact organizational success – that is, if you have done your due diligence to ensure the learning aligns with business goals. It doesn’t matter what participants can regurgitate; it matters what they can actually do with said information. This is why solid measurement matters.
This is where fear gets in the way. We may not want to show those results. What if the student project fails? Is the failure the fault of training? It’s comfy here in the dark, where we know that the participants loved the class but the concepts didn’t stand a chance in hell of seeing success. No one wants to have that conversation with the boss: “Sorry boss, our ‘training idea’ didn’t work.” How do you prove what works (or not) without a supporting metric? You don’t; you can’t. It is therefore our responsibility (and obligation) to overthrow the current “only smile sheets required” mentality.
More good news: You don’t have to get knee-deep in Excel or your LMS to measure success. You just need to be able to answer some key questions, and in order to measure the success of any initiative, be it learning or improved kangaroo hopping, one must begin at the end.
Start here and please ditch the L&D vocabulary.
- What has happened in the business that now requires a course on Kangaroo Hopping? (Our competitors are using Super Hoppers to improve speed of delivery, we need to keep up.)
- How will we know our Kangaroo Hopping course will be successful for the business? (An improvement of 20% hopping length, within 60 days, will improve kangaroo package delivery speed.)
- How will the kangaroo elders know that 20% improvement has occurred? (The elders will be measuring hops via surprise hopping audits.)
- After all this, are we sure we need a course on Kangaroo Hopping? (Perhaps we can teach stronger kangaroos to coach those that need help?)
From here we can create an assessment report or evaluation process that really tells a story. In this case it’s not about whether or not to use Kirkpatrick – the point is to use something that will measure performance results. Don’t settle for smile sheets just because that’s all the organization asks from you. I’m willing to bet that everyone reading goes above and beyond in other aspects of the job.
I know measuring performance can be hard and perhaps scary, but we need to do it. Why? Because saying it’s good enough, isn’t.
Related Post: My post supporting Dan: To be or not to be: The Kirkpatrick Question: http://learningrebels.com/2014/02/06/to-be-or-not-to-bethe-kirkpatrick-question/
JD Dillon says
Measurement always comes down to a few key considerations for me.
(1) A model will never work for everyone or everything you do. What are you trying to accomplish and therefore what will you need to measure? Use the approach that gets you what you need.
(2) Do we need to measure? You can never account for everything you do and its resulting impact. This is ESPECIALLY true for non-course support. We can’t measure the impact of every social exchange, question answered, job aid provided, etc. We have to start the discussion about value before we start measuring stuff.
(3) We shortcut measurement using simplified models. How many L&D pros actually understand and can dig into raw data – or (more importantly) design for data from the beginning? We need to build a stronger foundational understanding of the role data plays in our work, especially operational data (not just “learning data”).
PS – LOVE the ongoing Thalheimer v. Quinn debate … I’m currently Team Thalheimer (sorry, Clark).
Shannon Tipton says
JD – As usual, you target a very important point – do we need to measure? Of course the point of this post is not to build up your anxiety levels by trying to determine when and where the end results fit on the Kirkpatrick scale, but just to do something that will determine if your course has value to the organization. At times we don’t have to, or simply cannot, formally measure results, just as you stated in your second point. This doesn’t mean performance improvement didn’t happen. It’s measuring something differently. Which brings up your third point – begin with the end in mind. I agree. Not enough L&D pros begin with that thought process. Not only do we need to ask why a certain learning is needed, but as with our kangaroos above, how will we know we have been successful?
I am also enjoying the square off between Clark and Will – but for the readership I’m going to remain neutral. I hope everyone has gone to the debate and formed their own opinions. Although, wouldn’t it be interesting to take a survey to see where people land?
JD Dillon says
Or we could ask people to fill out a smile sheet, take a quiz, report on observed behavior changes, and determine ROI for the debate … Maybe? No? 🙂
Ryan Tracey says
Performance Objectives (rather than Learning Objectives) is a great place to start from, as that’s what really matters to the business at the end of the day.
As for the Kirkpatrick Model, I think it makes sense to use it as a guide. In other words, I think it’s important to check the reaction of your target audience, confirm their understanding, observe the desired behaviour change, and measure the impact on the business.
If you choose a different evaluation model, power to you, so long as you evaluate!
Shannon Tipton says
Ryan, thank you for the comment and absolutely! We don’t care what model you use, just use something – and measure deeper than “Level One” or smile sheets.
Nick Leffler says
Always important to figure out the exact (or close to) effect your dept. has on the organization.
My department has 2 methods of determining value and effect and I’m just not sure if they are of true value. I’d personally like to have a greater effect on the business, but it gets a bit confusing exactly what the business is in a healthcare setting. I know it’s not hard to, but it seems like it because you’re talking about people and their health rather than money (I hope).
Anyway, the two methods we use are as follows:
1. Reduced tech support calls. If we are creating thorough enough training for nurses etc. then they won’t need to call tech support, and we work directly with them to pull numbers and figure out reductions.
I think this could go further though, and things are starting to move this way, giving them the option for self-help. I’d like to see it go even further and give them the tools for user-generated content. Right now it’s a closed-off knowledge base that’s very highly controlled by a few people, not good.
2. They have to go through 5 hours of training now, we’re going to build a course that only takes 30 minutes. While it does save time from the current situation and ultimately saves money from that, it’s not good enough.
Why do they have to take that 5 hours of training? Why do they need to take any of it? It could go beyond the 30 minutes even, because some people don’t even need that; they’re confident with self-help through job aids if needed and asking others to show them. There’s nothing wrong with that, so requiring them to do 5 hours of training or 30 minutes is still a waste of time.
Anyway, I don’t think our methods are that great for proving business necessity, but it’s where we are at. In good times that’s great, but in bad times I don’t think it’s enough to keep a whole department around. That’s where creating true value should come in.
So, Kirkpatrick. Who needs it? Not me! I just need real business value and I don’t care what name you put on it.
Shannon Tipton says
Hi Nick, thank you for taking the time to comment! I love that you’re asking the “why” question, very critical. So often we, as L&D pros, neglect to do so. When we get the answer to why, then we are also getting the answer as to what to measure. It really doesn’t matter how it fits within the taxonomy, just as long as, at the end of the day, we can say we have given the business and the end-user the value they were looking for. That’s really all that matters.
Phil Weber says
OK, here’s my question: I am a technical trainer in a Customer Success organization; I teach customers how to use my company’s products. I have no influence over or visibility into the trainee’s workplace; how can I measure anything beyond “smile sheets” (or possibly Level 2, if I administer an assessment)?
Theoretically, I add value by reducing tech support calls, or helping customers get more value from our products, so they renew or buy more. But, as in your “blame game” examples, if those things happen, there’s no way to prove that they’re a result of training.
Shannon Tipton says
Phil – Great question, and one I think you answered yourself. There is a rub with technical training that is external facing: you don’t know what happens as a result of any training the customer participates in. So what can you measure? Just as you indicated, turn your focus inward – is there a decrease in a certain type of call? Can names be cross-referenced? Those that call in, matched against names that may have participated in training? What were the statistics in relation to customer support before and after a certain learning series was implemented? For the future, is it possible to get those types of stats so you can measure a decrease in call volume?
And you’re correct – just as I said, there is no definitive way to say that training was responsible for success, but if you come to the party armed with as much information as you can, you may be able to say training influenced the outcome. This means planning your courses not only to address external goals but internal goals. By participating in XYZ class, what will your organization see as success stories? Maybe it’s testimonials? Maybe it will be actual numbers in call decline – all good things and being able to say training influenced the result is even better. I hope this helps!
I’m curious – anyone else out there in Phil’s shoes? What advice would you offer?
Alan Montague, CPLP says
To my way of thinking, it is a question of what business metric is being addressed by the training.
As someone with more years in the software training game than I care to remember, I’ve almost never had anything beyond in-class (or sometimes pre-class) contact with my learners.
So the metrics I look at are based on what the company believes the purpose of the training is.
Dare to ask the question, “Why do we offer product training to our customers?”
The responses will tend to be in two main areas.
Properly trained users make more use of more functionality of the software than those who pick it up as they go along.
Properly trained users have less impact on our support team.
In the modern SaaS world it is fair to say that training impacts product adoption and customer satisfaction, and these are the two biggest drivers of retention.
Aim for those metrics and then try asking another question.
If we did no product training, what would business leaders expect the impact to be on those two metrics?
Then take credit for that impact as long as you are doing your job well.
Shannon Tipton says
Alan – Exactly! Find out the organizational pain point. How is this course going to solve that pain point? So it becomes less about determining the formal measurement tool, but really looking at the overall results – just as you said. I think your question, “Why do we offer product training to our customers?” is spot on. Once you really drill down you’ll find what is really important to the organization and know what to measure. Thank you Alan!
Shannon Tipton says
Hi Phil – I hope you are finding the excellent responses from a few readers to your question above, “How can I measure anything beyond ‘smile sheets’ (or possibly Level 2, if I administer an assessment)?”, helpful. Nothing like sharing real-life experiences!