In the past couple of years, much has been written, for and against, about the efficacy of so-called “Smiley Sheet” evaluations. These evaluations exist to capture reaction-level feedback from the learner population. I don’t get it. If we are truly walking the talk of ISD, Continuous Process Improvement, Social Learning, and Working Out Loud, then some feedback is better than no feedback. Am I right? And when did gauging reaction and receiving immediate feedback become a bad thing?
If it is true there are no bad tools, then it is unproductive to malign this type of evaluative feedback. It is a viable alternative in a variety of circumstances. If the feedback you’re getting isn’t working, then there is some work to do to improve the evaluative process. The level of effectiveness depends largely on the effort spent crafting the evaluation.
“Crafting the evaluation?” you ask. Yes. There is a lot to consider when crafting a well-written evaluation, no matter what “level” it is addressing. (my opinion) Too often this is a minimal, “drive-by” effort tacked on at the end of curriculum development.
To consider: the format, the layout, how the questions are written, the types of questions used, how those question types are blended, the number of questions, the timing of when the learner population receives the invitation to evaluate the curriculum, whether the learner population has been told they can provide honest feedback and how that feedback will be used, and so on.
So where does it all go so wrong? (again, my opinion) Too often, the evaluations themselves contain only standard or general questions that never get to the heart of what the curriculum was designed to address. (Why, oh why, do we ask about the temperature of the room? The food?)
Therefore, the key to getting more effective feedback that matters lies in what you learned during the needs assessment. What specific needs were articulated that the curriculum was developed to address? What specific outcomes was completion of the curriculum meant to produce? Were there specific timeframes for completion, or other conditions of the learner population, that were to be taken into consideration when crafting the feedback questions? Was this learning experience new for the learner population in any way? For example, was new technology used? See? There is much to consider when crafting the questions.
Take the time. Review the background information carefully. Test the questions. Rework, if needed. Don’t use it until it is the best that it can be.
Do “reaction” evaluations administered upon completion of the curriculum work when addressing changes in performance and behavior? The answer is obvious: a resounding “No.” But I maintain it IS a valid method for gathering immediate feedback on things like ease of use, technology, and interface. It is also valid as a method for checking in on whether the learner population grasped key aspects of the curriculum and knows what is to come next. And isn’t using social media, especially Twitter, essentially the same thing? Right? Right.
One more thing. Effectiveness also depends upon doing something with the feedback. Don’t make the mistake of conditioning your learner population to ignore the feedback step because nothing is ever done with the information they provide. Even better, acknowledge the learner population and credit them for the updates and changes.
So… if we hate reaction-level surveys and the lack of quality results they produce, we have no one to blame but ourselves. Hate the player. Not the game.