I’ve been reminded recently of Kirkpatrick’s evaluation model.

Donald L. Kirkpatrick (1959) developed a four-level model used primarily for evaluating training. This model is still used extensively in the training field and is espoused by ASTD, the American Society for Training and Development.

It also occurred to me that Extension conducts a lot of training, from pesticide handling to logic model use, and that Kirkpatrick’s model isn’t talked about much in Extension; at least, I don’t use it as a reference. That may not be a good thing, given that Extension professionals conduct training so much of the time.

Kirkpatrick’s four levels are these:

  1. Reaction: the degree to which participants react favorably to the training
  2. Learning: the degree to which participants acquire the intended knowledge, skills, and attitudes based on their participation in the learning event
  3. Application: the degree to which participants apply what they learned during training once they are on the job
  4. Impact: the degree to which targeted outcomes occur as a result of the learning event(s) and subsequent reinforcement
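To make the four levels concrete, here is a minimal sketch (in Python, with hypothetical item wording and names; this is not a validated instrument, just an illustration) of how an evaluator might tag questionnaire items by Kirkpatrick level so that responses can be summarized level by level:

```python
# Hypothetical sketch: tagging evaluation items by Kirkpatrick level
# so that responses can be summarized level by level.
from collections import defaultdict

KIRKPATRICK_LEVELS = ("reaction", "learning", "application", "impact")

# Illustrative items only -- not a validated instrument.
items = [
    {"id": "q1", "level": "reaction", "text": "The session was a good use of my time."},
    {"id": "q2", "level": "learning", "text": "I can list the steps for safe pesticide handling."},
    {"id": "q3", "level": "application", "text": "I have used those steps on the job this month."},
    {"id": "q4", "level": "impact", "text": "Handling incidents at my site have decreased."},
]
assert all(item["level"] in KIRKPATRICK_LEVELS for item in items)

def summarize_by_level(responses):
    """Average 1-5 ratings, grouped by the Kirkpatrick level of each item."""
    by_level = defaultdict(list)
    for item_id, rating in responses:
        level = next(i["level"] for i in items if i["id"] == item_id)
        by_level[level].append(rating)
    return {level: sum(r) / len(r) for level, r in by_level.items()}

# Example: summarize_by_level([("q1", 5), ("q1", 4), ("q2", 3)])
# returns {"reaction": 4.5, "learning": 3.0}
```

The point of the sketch is simply that each item maps to exactly one level, so a report can say something separately about reaction, learning, application, and impact.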

Sometimes it is important to know what affective reaction our participants are having during and at the end of the training. I would call this formative evaluation, and formative evaluation is often used for program improvement. Reactions are a way that participants can tell the Extension professional how things are going, i.e., what their reaction is, through a continuous feedback mechanism. Extension professionals can use that feedback to change the program, revise their approach, adjust the pace, etc. The feedback mechanism doesn’t have to be constant, which is often the interpretation of “continuous”; soliciting feedback at natural breaks, using a show of hands, is often enough for on-the-spot adjustments. It is a form of formative evaluation because it is an “in-process” evaluation.

Kirkpatrick’s level one (reaction) doesn’t provide a measure of outcomes or impacts. I might call it a “happiness” evaluation or a satisfaction evaluation; it tells me only what the participants’ reaction is. Outcome evaluation, which provides a measure of effectiveness, happens at a later level and is another approach to evaluation, one I would call summative. Michael Patton might call it developmental in a training situation where the outcome is always moving, changing, developing.
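As a toy illustration of that kind of on-the-spot check, here is a small sketch (Python; the function name and the 60% threshold are my own assumptions, not part of Kirkpatrick’s model):

```python
# Hypothetical sketch: reading a show-of-hands check at a natural break.
# The 60% threshold is an assumption, not a standard from the model.
def pace_check(hands_up: int, participants: int, threshold: float = 0.6) -> str:
    """Turn 'raise your hand if the pace is working' into an on-the-spot signal."""
    if participants == 0:
        return "no participants to poll"
    if hands_up / participants >= threshold:
        return "keep going: most participants are with you"
    return "adjust: slow the pace, revisit the last point, or change approach"

# Example: pace_check(9, 20)
# returns "adjust: slow the pace, revisit the last point, or change approach"
```

Nothing about this is rigorous; it just shows how even a show of hands can become an explicit, repeatable decision rule for mid-session adjustments.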

Kirkpatrick, D. L. (1998). Evaluating Training Programs: The Four Levels (2nd ed.). San Francisco, CA: Berrett-Koehler.

Kirkpatrick, D. L. (Comp.). (1998). Another Look at Evaluating Training Programs. Alexandria, VA: ASTD.

For more information about the Kirkpatrick model, see their site, Kirkpatrick Partners.

After experiencing summer in St. Petersburg, FL, and then peak color in Bar Harbor and Acadia National Park, ME, I am once again reminded of how awesome these United States truly are. Oregon holds its own special brand of beauty, and it is nice to be back home. Evaluation was everywhere on this trip.

A recent AEA365 post talks about systems thinking and evaluating educational programs. Bells went off for me because Extension DOES educational programs, and does them in existing systems. Often, Extension professionals neglect the systems aspect of their programming and attempt to implement the program in isolation. In today’s complex world, isolation isn’t possible. David Bella, an emeritus professor at OSU, uses the term “complex messy systems”; I think that clearly characterizes what Extension faces in developing programs. The AEA365 post has some valuable points for Extension professionals to remember (see the link for more details):

1. Build relationships with experts from across disciplines.

2. Ensure participation from stakeholders across the entire evaluated entity.

3. Create rules of order to guide the actions of the evaluation team.

These are points for Extension professionals to keep in mind as they develop their programs. By keeping them in mind and using them, Extension professionals can strengthen their programs. More and more, Extension programs are multi-site as well as multi-discipline. Ask yourself: What part of the program is missing because of a failure to consult across disciplines? What part of the program won’t be recognized because of a failure to include as many stakeholders as possible in designing the evaluation? Who will know better what makes an effective program than the individuals in the target audience? Helping everyone know what the expectations are helps systems work, change, and grow.

It is also important to consider the many contextual factors. When working in community-based programs, Extension professionals need to develop partnerships, and those partnerships need to work in agreement. This is another example of how Extension work, and the evaluation of that work, occurs within an existing system.