I’ve mentioned language use before.

I’ll talk about it today and probably again.

What the word–any word–means is the key to a successful evaluation.

Do you know what it means? Or do you think you know what it means? 

How do you find out whether what you think it means is what your key funder (a stakeholder) thinks it means?  Or what the participants (the target audience) think it means?  Or what any other stakeholder (partners, for example) thinks it means?

You ask them.

You ask them BEFORE the evaluation begins.  You ask them BEFORE you have implemented the program.  You ask them when you plan the program.

During program planning, I bring to the table relevant stakeholders–folks similar to and different from those who will be the recipients of the program.  I ask them this evaluative question: “If you participated in this program, how would you know that the program was successful?  What has to happen or change to show that a difference has been made?”

Try it–the answers are often revealing, informative, and enlightening.  They are often not the answers you expected.  Listen to those stakeholders.  They have valuable insights.  They actually know something.

Once you have those answers, clarify any and all terminology so that everyone is on the same page.  What something means to you may mean something completely different to someone else.

Impact is one of those words–it is both a noun and a verb.  Be careful how you use it and how it is used.  Go to a less loaded word–like results or effects.  Talk about measurable results that occur within a certain time frame–immediately after the program; several months after the program; several years after the program–depending on your program.  (If you are a forester, you may not see results for 40 years…)

Mar 18

Historically, April 15 is tax day (although in 2011, it is April 18)–the day taxes are due to the revenue departments.

State legislatures are dealing with budgets and Congress is trying to balance a Federal budget.

Everywhere one looks, money is the issue–this is especially true in these recession-ridden times.  How does all this relate to evaluation, you ask?  That is the topic for today’s blog: how does money figure into evaluation?

Let’s start with the simple and move to the complex.  Everything costs something–and although I’m talking about money here, time, personnel, and resources (like paper, staples, electricity, etc.) must also be taken into consideration.

When we talk about evaluation, four terms typically come to mind:  efficacy, effectiveness, efficiency, and fidelity.

Efficiency is the term that addresses money or costs.  Was the program efficient in its use of resources?  That is the question efficiency asks.

To answer that question, there are at least three approaches:

  1. Cost or cost analysis;
  2. Cost effectiveness analysis; and
  3. Cost-benefit analysis.

Simply then:

  1. Cost analysis is the number of dollars it takes to deliver the program, including salary of the individual(s) planning the program.
  2. Cost effectiveness analysis is a computation of the target outcomes in an appropriate unit in ratio to the costs.
  3. Cost-benefit analysis is also a ratio: the costs of the program in ratio to its benefits, with both measured in the same units, usually money.

How are these computed?

  1. Cost can be measured by how much the consumer is willing to pay.  Costs can be the value of each resource that is consumed in the implementation of the program.  Or cost analysis can be “measuring costs so they can be related to procedures and outcomes” (Yates, 1996, p. 25).   So you list the money spent to implement the program, including salaries, and that is a cost analysis.  Simple.
  2. Cost effectiveness analysis says that there is some metric in which the outcomes are measured (the number of times hands are washed during the day, for example), and that metric is put in ratio to the total costs of the program.  So movement from washing hands only once a day (a bare minimum) to washing hands at least six times a day would have the costs of the program (including salaries) divided by the change in the number of daily hand washings (i.e., 5).  The resulting value is the cost-effectiveness ratio.  Complex.
  3. Cost-benefit analysis puts the outcomes in the same metric as the costs–in this case, dollars.  The costs (in dollars) of the program (including salaries) are put in ratio to the outcomes (usually benefits) measured in dollars.  The challenge here is assigning a dollar amount to the outcomes.  How much is frequent hand washing worth?  It is often measured in days saved from communicable, chronic, or acute illnesses.  Computations of healthy days (the reduction in days affected by chronic illness) are often difficult to value in dollars.  There is a whole body of literature in health economics on this topic, if you’re interested.  Complicated and complex.
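As a rough sketch, here is how those three computations might look for the hand-washing example.  Every number below is invented purely for illustration–real cost figures would come from your program budget, and valuing the benefits would rest on the health economics literature mentioned above.

```python
# Hypothetical program costs for the hand-washing example (invented figures).
costs = {"salaries": 12000.0, "materials": 1500.0, "facilities": 800.0}

# 1. Cost analysis: simply total the dollars spent to deliver the program,
#    including salaries.
total_cost = sum(costs.values())

# 2. Cost-effectiveness analysis: total cost divided by the change in the
#    outcome metric.  Daily hand washings rose from 1 to 6, a change of 5.
outcome_change = 6 - 1
cost_per_unit = total_cost / outcome_change  # dollars per added daily washing

# 3. Cost-benefit analysis: benefits valued in dollars, put in ratio to
#    costs.  Suppose the illness days avoided are valued at $20,000 (an
#    assumed figure--this valuation is the hard part).
benefits_in_dollars = 20000.0
benefit_cost_ratio = benefits_in_dollars / total_cost

print(total_cost, cost_per_unit, round(benefit_cost_ratio, 2))
```

A benefit-cost ratio greater than 1 would suggest the program returned more value than it cost; the defensibility of that conclusion rests entirely on the dollar value assigned to the outcomes.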

Yates, B. T. (1996).  Analyzing costs, procedures, processes, and outcomes in human services.  Thousand Oaks, CA: Sage.

Mar 11
Filed Under (program evaluation) by Molly on 11-03-2011

There has been a lot of buzz recently about the usefulness of the Kirkpatrick model.

I’ve been talking about it (in two previous posts) and so have others.  This model has been around a long time and has continued to be useful in the training field.  Extension does a lot of training.  Does that mean this model should be used exclusively when training is the focus?  I don’t think so.  Does this model have merits?  I think so.  Could it be improved upon?  That depends on the objective of your program and your evaluation, so probably.

If you want to know about whether your participants react favorably to the training, then this model is probably useful.

If you want to know about the change in knowledge, skills, and attitudes, then this model may be useful.  You would need to be careful, because knowledge is a slippery concept to measure.

If you want to know about the change in behavior, probably not.  The Kirkpatrick website says that application of learning is what is measured at the behavior level.  How do you measure behavior change at a training?  Observation is the obvious answer, yet one does not necessarily observe behavior change at a training.  Intention to change is not mentioned at this level.

If you want to know what difference you made in the social, economic, and/or environmental conditions in which your participants live, work, and practice, then the Kirkpatrick model won’t take you there.  The fourth level (which is where evaluation starts for this model, according to Kirkpatrick) asks: “To what degree targeted outcomes occur as a result of the training event and subsequent reinforcement.”  I do not see this as condition change, or what I call impact.

A faculty member asked me for specific help in assessing impact.  First, one needs to define what is meant by impact.  I use the word to mean change in social, environmental, and/or economic conditions over the long run.  This means changes in social institutions like family, school, and employment (social conditions).  It means changes in the environment, which may be clean water or clean air, OR it may mean removing the snack food vending machine from the school (environmental conditions).  It means changes in some economic indicator, up or down, like return on investment, change in employment status, or increased revenue (economic conditions).  This doesn’t necessarily mean the targeted outcomes of the training event.

I hope that any training event will move participants to a different place in their thinking and acting that will manifest in the LONG RUN in changes in one of the three conditions mentioned above.  To get there, one needs to be specific in what one is asking the participants.  Intention to change doesn’t necessarily get to impact.  You could anticipate impact if participants follow through with their intention.  The only way to know that for sure is to observe it.  We approximate that by asking good questions.

What questions are you asking about condition change to get at impacts of your training and educational programs?

Next week:  TIMELY TOPIC.  Any suggestions?

Mar 02
Filed Under (program evaluation) by Molly on 02-03-2011

You’ve developed your program.  You think you’ve met a need.  You conduct an evaluation.  Lo and behold!  Some of your respondents give you such negative feedback you wonder what program they attended.  Could it really have been your program?

This is the phenomenon I call “all of the people all of the time,” which occurs regularly in evaluating training programs.  And it has to do with use–what you do with the results of this evaluation.  And you can’t do it–please all of the people all of the time, that is.  There will always be some sour grapes.  In fact, you will probably have more negative comments than positive ones.  People who are upset want you to know; people who are happy are just happy.

Now, I’m sure you are really confused.  Good.  At least I’ve got your attention and maybe you’ll read to the end of today’s post.

You have seen this scenario:  You ask the participants for formative data so that you can begin planning the next event or program.  You ask about the venue, the time of year, the length of the conference, the concurrent offerings, the plenary speakers.  Although some of these data are satisfaction data (the first level, called Reaction, in Don Kirkpatrick’s training model, and the Reaction category in Claude Bennett’s TOPs Hierarchy [see diagram]), they are an important part of formative evaluation and an important part of program planning.  You are using the evaluation report.  That is important.  You are not asking if the participants learned something.  You are not asking if they intend to change their behavior.  You are not asking about what conditions have changed.  You only want to know about their experience in the program.

What do you do with the sour grapes?  You could make vinegar, but that won’t be very useful, and use is what you are after.  Instead, sort the data into those topics over which you have some control and those topics over which you have no control.  For example: you have control over who is invited to be a plenary speaker, whether there will be a plenary speaker, how many concurrent sessions there are, and who will teach those concurrent sessions; you have no control over the air handling at the venue, the chairs at the venue, and, probably, the temperature of the venue.
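If your feedback comes in as a long list of comments, the sorting step can be as simple as tagging each comment with a topic and checking it against the topics you control.  A minimal sketch–the topics and comments here are made up for illustration:

```python
# Topics the organizer controls; everything else gets acknowledged, not acted
# on.  (Hypothetical lists, for illustration only.)
controllable = {"plenary speaker", "concurrent sessions", "promotion"}

comments = [
    ("plenary speaker", "The keynote was terrible."),
    ("air handling", "The room was stuffy all afternoon."),
    ("concurrent sessions", "No options for classified staff."),
    ("chairs", "The seats were uncomfortable."),
]

# Split the feedback into what you can change and what you can only note.
actionable = [c for c in comments if c[0] in controllable]
noted_only = [c for c in comments if c[0] not in controllable]

print(len(actionable), len(noted_only))  # prints: 2 2
```

The actionable pile drives your changes; the noted-only pile still belongs in the report back to participants, so they know you heard them.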

You can CHANGE those topics over which you have control.  Comments say the plenary speaker was terrible?  Do not invite that person to speak again.  Feedback says that the concurrent sessions didn’t provide options for classified staff, only faculty?  Decide the focus of your program and state it explicitly in the promotional materials–advertise it to your target audience.  You get complaints about the venue–perhaps there is another venue; perhaps not.

You can also let your audience know what you decided based on your feedback.  One organization for which I volunteered sent out a white paper with all the concerns and how the organization was addressing them–or not.  It helped the grumblers see that the organization takes their feedback seriously.

And if none of this works…ask yourself: Is it a case of all of the people all of the time?