Terry Smutylo sings a song about impact during an outcome mapping seminar he conducted. Terry Smutylo is the Director of Evaluation at the International Development Research Centre, Ottawa, Canada. He ought to know a few things about evaluation terminology. He has two versions of this song, Impact Blues, on YouTube; his comments speak to this issue. Check it out.
Just a gentle reminder to use your words carefully. Make sure everyone knows what you mean and that everyone at the table agrees with the meaning you use.
This week the post is short. Terry says it best.
Next week I’ll be at the American Evaluation Association annual meeting in Anaheim, CA, so no post. No Disneyland visit either…sigh.
A colleague asked me what I considered an output in a statewide program we were discussing. This is a really good example of assumptions and how they can blindside an individual–in this case, me. Once I (figuratively) picked myself up, I explained how this terminology applied to the program under discussion. After the meeting concluded, I realized that perhaps a bit of a refresher was in order. Even the most seasoned evaluators can benefit from a reminder every so often.
So OK–inputs, outputs, outcomes.
As I’ve mentioned before, Ellen Taylor-Powell, former UWEX Evaluation specialist, has a marvelous tutorial on logic modeling. I recommend you go there for your own refresher. What I offer you here is a brief (very) overview of these terms.
Logic models, whether linear or circular, are composed of various focus points. Those focus points include (in addition to those mentioned in the title of this post) the situation, assumptions, and external factors. Simply put, the situation is what is going on–the priorities, the needs, the problems that led to the program you are conducting–that is, program with a small “p” (we can talk about sub and supra models later).
Inputs are those resources you need to conduct the program. Typically, they are lumped into personnel, time, money, venue, and equipment. Personnel covers staff, volunteers, partners, any stakeholder. Time is not just your time; it is also the time needed for implementation, evaluation, analysis, and reporting. Money (speaks for itself). Venue is where the program will be held. Equipment is what stuff you will need–technology, materials, gear, etc.
Outputs are often classified into two parts–first, participants (or target audience) and second, the activities that are conducted. Typically (although not always), those activities are counted and are called bean counts. In the example that started this post, we would be counting the number of students who graduated high school; the number of students who matriculated to college (either 2 or 4 year); the number of students who transferred from 2 year to 4 year colleges; the number of students who completed college in 2 or 4 years; etc. This bean count could also be the number of classes offered; the number of brochures distributed; the number of participants in the class; the number of (fill in the blank). Outputs are necessary but not sufficient to determine if a program is effective. The field of evaluation started with determining bean counts and satisfactions.
Outcomes can be categorized as short term, medium/intermediate term, or long term. Long term outcomes are often called impacts. (There are those in the field who would classify impacts as something separate from an outcome–a discussion for another day.) Whatever you choose to call the effects of your program, be consistent–don’t use the terms interchangeably; it confuses the reader. What you are looking for as an outcome is change–in learning; in behavior; in conditions. This change is measured in the target audience–individuals, groups, communities, etc.
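For readers who like to see the skeleton spelled out, the situation–inputs–outputs–outcomes structure described above can be sketched as a simple data structure. This is only an illustrative sketch in Python; the class, field names, and example values are hypothetical, not part of any official logic-model tool.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    # The need or problem that led to the program (the "small p" program).
    situation: str
    # Resources: personnel, time, money, venue, equipment.
    inputs: list[str] = field(default_factory=list)
    # Participants and activities counted ("bean counts").
    outputs: list[str] = field(default_factory=list)
    # Change in learning, behavior, conditions, keyed by term length.
    outcomes: dict[str, list[str]] = field(default_factory=dict)

# Hypothetical values loosely echoing the statewide example in the post.
model = LogicModel(
    situation="Low college completion rates in the state",
    inputs=["staff time", "grant money", "classroom venue"],
    outputs=[
        "number of students who graduated high school",
        "number of students who matriculated to college",
    ],
    outcomes={
        "short": ["change in learning"],
        "medium": ["change in behavior"],
        "long": ["change in conditions (often called impact)"],
    },
)

# Outputs count what was done; outcomes record change in the audience.
print(len(model.outputs))
```

The point of the sketch is the separation: outputs live in a list of counts of activities, while outcomes are organized by term (short, medium/intermediate, long) and describe change.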
A faculty member asked me to provide evaluation support for a grant application. Without hesitation, I agreed.
I went to the funder’s website to review what was expected for an evaluation plan. What was provided was a statement about why evaluation is important.
Although I agree with what is said in that discussion, I think we have a responsibility to go further. Here is what I know.
Extension professionals evaluate programs because there needs to be some evidence that the inputs for the program–time, money, personnel, materials, facilities, etc.–are being used advantageously and effectively. Yet, there is more to the question “Why evaluate?” than accountability. (Michael Patton talks about the various uses to which evaluation findings can be put–see his book Utilization-Focused Evaluation.) Programs are evaluated to determine if people are satisfied, if their expectations were met, and whether the program was effective in changing something.
This is what I think. None of what is stated above addresses the “so what” part of “why evaluate”. I think that answering this question (or attempting to) is a compelling reason to justify the effort of evaluating. It is all very well and good to change people’s knowledge of a topic; it is all very well and good to change people’s behavior related to that topic; and it is all very well and good to have people intend to change (after all, stated intention to change is the best predictor of actual change). Yet, it isn’t enough. Being able to answer the “so what” question gives you more information. And doing that–asking and answering the “so what” question–makes evaluation an everyday activity. And, who knows. It may even result in world peace.
My wishes to you: Blessed Solstice. Merry Christmas. Happy Kwanzaa. and the Very Best Wishes for the New Year!
A short post today.
When you download this PDF to save a copy, think of where your values come into the model; where others’ values can affect the program; and how you can modify the model to balance those values.
Ellen says: “I just wanted to let everyone know that the online logic model course, “Enhancing Program Performance with Logic Models” has been produced as a PDF in response to requests from folks without easy or affordable internet access or with different learning needs. The PDF version (216 pages, 3.35MB) is available at:
Please note that no revisions or updates have been made to the original 2003 online course.
There is an ongoing discussion about the difference between impact and outcome. I think this is an important discussion because Extension professionals are asked regularly to demonstrate the impact of their program.
There is no consensus on how these terms are defined, and they are often used interchangeably. Yet most in the field agree that they are not the same. When Extension professionals plan an evaluation, it is important to keep these terms separate. Their meanings are distinct and different.
What points do we need to keep in mind when considering whether the report we are making is a report of OUTCOMES or a report of IMPACTS? Making explicit the meaning of these words before beginning the program is important. If there is no difference in your mind, then that needs to be stated. If there is a difference from your perspective, that needs to be stated as well. It may all depend on who the audience is for the report. Have you asked your supervisor (Staff Chair, Department Head, Administrator) what they mean by these terms?
One way to look at this issue is to go to simpler language:
I would contend that impact is the effect (i.e., the result) and outcome is the affect (i.e., the influence).
Now to complicate this discussion a bit–where do OUTPUTS fit?
OUTPUTS are necessary but NOT sufficient to determine the influence (affect) or the results (effect) of an intervention. Outputs count things that were done–number of people trained; feet of stream bed reclaimed; number of curricula written; number of…(fill in the blank). Outputs tell you neither the affect nor the effect of the intervention.
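To make that distinction concrete, here is a small sketch in Python. Attendance is an output: a count of what was done. A change in pre/post scores speaks to an outcome: change in the target audience. All numbers here are invented for illustration only.

```python
# Output: a bean count of what was done (hypothetical figure).
attendance = 25  # people trained

# Outcome evidence: change in learning, measured pre and post
# (hypothetical scores for four participants).
pre_scores = [52, 60, 58, 49]
post_scores = [70, 75, 66, 64]

# The output by itself says nothing about effect or influence.
output_count = attendance

# The outcome is the change: average gain per participant.
mean_change = sum(post - pre for post, pre in zip(post_scores, pre_scores)) / len(pre_scores)

print(output_count)  # a count, not an effect
print(mean_change)   # evidence of change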
The difference I draw may be moot if you do not draw the distinction. If you don’t, that is OK. Just make sure that you are explicit about what you mean by these terms: OUTCOMES and IMPACT.