Sheila Robinson has an interesting post which she titled “Outputs are for programs. Outcomes are for people.”  Sounds like a logic model to me.

Evaluating something (a strategic plan, an administrative model, a range management program) can be problematic, especially if all you do is count. So: "Do you want to count?" or "Do you want to determine what difference you made?" I think it all relates to outputs and outcomes.

 

Logic model

 

The model below explains the difference between outputs and outcomes.

(I tried to find a link on the University of Wisconsin website and UNFORTUNATELY it is no longer there…go figure. Thanks to Sheila, I found this link, which talks about outputs and outcomes.) I think this model makes clear the difference between outputs (activities and participation) and outcomes/impact (learning, behavior, and conditions).

Probable? Maybe. Making a difference is always possible.

The Oxford English Dictionary defines possible as capable of being (may or can exist, be done, or happen). It defines probable as worthy of acceptance, believable.

Ray Bradbury: "I define science fiction as the art of the possible. Fantasy is the art of the impossible."

Somebody asked me what the difference was between science fiction and fantasy. Certainly the simple approach is that science fiction deals with the possible (if you can think it, it can happen). Fantasy deals with monsters, fairies, goblins, and other mythical creatures–that is, with magic and magical creatures.

(Disclaimer: I personally believe in majic; much of fantasy deals with magic.) I love the Arthurian legend (it could be fantasy; it has endured for so long it is believable). It is full of majic. I especially like the Marion Zimmer Bradley book, The Mists of Avalon. (I find the feminist perspective refreshing.)

Is fantasy always impossible, as Bradbury suggests, or is it just improbable? (Do the rules of physics apply?) After that minor digression, this takes me back to Bradbury's quote and to evaluation. Bradbury also says that "Science fiction, again, is the history of ideas, and they're always ideas that work themselves out and become real and happen in the world." Not unlike evaluation. Evaluation works itself out and becomes real and happens. Usually.

Evaluation and the possible.

Often, I am invited to be the evaluator of record after the program has started. I sigh. Then I have a lot of work to do. I must teach folks that evaluation is not an "add-on" activity. I must also teach folks how to identify the difference the program made. Then there is the issue of outputs (activities, participants) vs. outcomes (learning, behavior, conditions). Many principal investigators want to count differences pre-post.

Does the "how many" provide a picture of what difference the program made? If you start with no or few participants and you end with many participants, have you made a difference? Yes, it is possible to count. Counts often meet reporting requirements. They are possible. So is documenting the change in knowledge, behavior, and conditions. It takes more work and more money. It is possible. Will you get to world peace? Probably not. Even if you can think it. World peace may be possible; it may not be probable (at least in my lifetime).

my two cents.

molly.

 

"Fate is chance; destiny is choice."

I went looking for who said that originally so that I could give credit. The closest saying I found was: "Destiny is no matter of chance. It is a matter of choice: It is not a thing to be waited for, it is a thing to be achieved."

William Jennings Bryan

 

Evaluation is like destiny. There are many choices to make. How do you choose? What do you choose?

Would you listen to the dictates of the Principal Investigator even if you know there are other, perhaps better, ways to evaluate the program?

What about collecting data? Are you collecting it because it would be “nice”? OR are you collecting it because you will use the data to answer a question?

What tools do you use to make your choices? What resources do you use?

I'm really curious. It is summer, and although I have a (long, to be sure) reading list, I wonder what else is out there, specifically relating to making choices. (And yes, I could use my search engine; I'd rather hear from my readers!)

Let me know. PLEASE!

my two cents.

molly.

I've long suspected I wasn't alone in the recognition that the term impact is used inappropriately in most evaluations.

Terry Smutylo sang a song about impact during an outcome mapping seminar he conducted.  Terry Smutylo is the Director of Evaluation at the International Development Research Centre, Ottawa, Canada.  He ought to know a few things about evaluation terminology.  He has two versions of this song, "Impact Blues," on YouTube; his comments speak to this issue.  Check it out.

 

Just a gentle reminder to use your words carefully.  Make sure everyone knows what you mean and that everyone at the table agrees with the meaning you use.

 

This week the post is  short.  Terry says it best.

Next week I’ll be at the American Evaluation Association annual meeting in Anaheim, CA, so no post.  No Disneyland visit either…sigh

 

 

A colleague asked me what I considered an output in a statewide program we were discussing.  This is a really good example of assumptions and how they can blindside an individual–in this case, me.  Once I (figuratively) picked myself up, I proceeded to explain how this terminology applied to the program under discussion.  Once the meeting concluded, I realized that perhaps a bit of a refresher was in order.  Even the most seasoned evaluators can benefit from a reminder every so often.

 

So OK–inputs, outputs, outcomes.

As I've mentioned before, Ellen Taylor-Powell, former UWEX evaluation specialist, has a marvelous tutorial on logic modeling.  I recommend you go there for your own refresher.  What I offer you here is a brief (very) overview of these terms.

Logic models, whether linear or circular, are composed of various focus points.  Those focus points include (in addition to those mentioned in the title of this post) the situation, assumptions, and external factors.  Simply put, the situation is what is going on–the priorities, the needs, the problems that led to the program you are conducting–that is, program with a small "p" (we can talk about sub and supra models later).

Inputs are those resources you need to conduct the program.  Typically, they are lumped into personnel, time, money, venue, and equipment.  Personnel covers staff, volunteers, partners–any stakeholder.  Time is not just your time but also the time needed for implementation, evaluation, analysis, and reporting.  Money speaks for itself.  Venue is where the program will be held.  Equipment is the stuff you will need–technology, materials, gear, etc.
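For readers who think in code, the input categories above can be captured in a simple structure. This is only a minimal sketch–the field names and sample values are my own invention, not part of any logic model tutorial:

```python
from dataclasses import dataclass, field

@dataclass
class Inputs:
    """Resources needed to conduct a program (hypothetical field names)."""
    personnel: list = field(default_factory=list)  # staff, volunteers, partners, stakeholders
    time: list = field(default_factory=list)       # implementation, evaluation, analysis, reporting
    money: float = 0.0                             # budget
    venue: str = ""                                # where the program will be held
    equipment: list = field(default_factory=list)  # technology, materials, gear

# Invented example values for illustration only.
inputs = Inputs(
    personnel=["staff", "volunteers", "partners"],
    time=["implementation", "evaluation", "analysis", "reporting"],
    money=5000.0,
    venue="county extension office",
    equipment=["projector", "handouts"],
)
```

Listing inputs this explicitly–before the program starts–makes it harder to forget a resource like evaluation time.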

Outputs are often classified into two parts: first, participants (or target audience), and second, the activities that are conducted.  Typically (although not always), those activities are counted, and the counts are called bean counts.  In the example that started this post, we would be counting the number of students who graduated high school; the number of students who matriculated to college (either 2- or 4-year); the number of students who transferred from 2-year to 4-year colleges; the number of students who completed college in 2 or 4 years; etc.  This bean count could also be the number of classes offered; the number of brochures distributed; the number of participants in the class; the number of (fill in the blank).  Outputs are necessary but not sufficient to determine whether a program is effective.  The field of evaluation started with determining bean counts and satisfaction.
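Bean counting really is just tallying. A toy sketch (the participant records and milestone names are invented for illustration) makes the point that counts answer "how many" and nothing more:

```python
from collections import Counter

# Hypothetical participant records: (participant, milestone reached).
records = [
    ("A", "graduated_high_school"),
    ("B", "graduated_high_school"),
    ("B", "matriculated_college"),
    ("C", "graduated_high_school"),
    ("C", "matriculated_college"),
    ("C", "transferred_2yr_to_4yr"),
]

# Outputs are bean counts: how many of each milestone occurred.
bean_counts = Counter(milestone for _, milestone in records)
print(bean_counts["graduated_high_school"])  # 3
print(bean_counts["matriculated_college"])   # 2

# Necessary but not sufficient: these tallies say nothing about
# what changed in learning, behavior, or conditions.
```

The counts satisfy a reporting requirement; they do not tell you what difference the program made.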

Outcomes can be categorized as short term, medium/intermediate term, or long term.  Long term outcomes are often called impacts.  (There are those in the field who would classify impacts as something separate from an outcome–a discussion for another day.)  Whatever you choose to call the effects of your program, be consistent–don’t use the terms interchangeably; it confuses the reader.  What you are looking for as an outcome is change–in learning; in behavior; in conditions.  This change is measured in the target audience–individuals, groups, communities, etc.
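In contrast with outputs, an outcome is measured change in the target audience. A hedged sketch of the simplest version–pre/post scores on a knowledge measure, with all numbers invented:

```python
# Hypothetical pre/post knowledge scores for the same three participants.
pre  = {"A": 40, "B": 55, "C": 60}
post = {"A": 70, "B": 80, "C": 85}

# The outcome of interest is the change, not the head count.
change = {who: post[who] - pre[who] for who in pre}
avg_change = sum(change.values()) / len(change)

print(change)      # {'A': 30, 'B': 25, 'C': 25}
print(avg_change)  # about 26.7 points of change, on average
```

The same pre/post logic applies whether the change you measure is in learning, behavior, or conditions; only the measure differs.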

I'll talk about assumptions and external factors another day.  Have a wonderful holiday weekend…the last vestiges of summer–think tomatoes, corn on the cob, state fair, and a tall cool drink.

A faculty member asked me to provide evaluation support for a grant application.  Without hesitation, I agreed.

I went to the funder's web site to review what was expected for an evaluation plan.  What was provided instead was a statement about why evaluation is important.

Although I agree with what is said in that discussion, I think we have a responsibility to go further.  Here is what I know.

Extension professionals evaluate programs because there needs to be some evidence that the inputs for the program–time, money, personnel, materials, facilities, etc.–are being used advantageously and effectively.  Yet there is more to the question "Why evaluate?" than accountability.  (Michael Patton talks about the various uses to which evaluation findings can be put–see his book Utilization-Focused Evaluation.)  Programs are evaluated to determine if people are satisfied, if their expectations were met, and whether the program was effective in changing something.

This is what I think.  None of what is stated above addresses the "so what" part of "why evaluate."  I think that answering this question (or attempting to) is a compelling reason to justify the effort of evaluating.  It is all very well and good to change people's knowledge of a topic; it is all very well and good to change people's behavior related to that topic; and it is all very well and good to have people intend to change (after all, stated intention to change is the best predictor of actual change).  Yet, it isn't enough.  Being able to answer the "so what" question gives you more information.  And doing that–asking and answering the "so what" question–makes evaluation an everyday activity.  And, who knows?  It may even result in world peace.

My wishes to you:  Blessed Solstice.  Merry Christmas.  Happy Kwanzaa. and the Very Best Wishes for the New Year!

A short post today.

Ellen Taylor-Powell, my counterpart at University of Wisconsin Extension, has posted the following to the Extension Education Evaluation TIG listserv.  I think it is important enough to share here.

When you download this PDF to save a copy, think of where your values come into the model, where others' values can affect the program, and how you can modify the model to balance those values.

Ellen says:  "I just wanted to let everyone know that the online logic model course, "Enhancing Program Performance with Logic Models," has been produced as a PDF in response to requests from folks without easy or affordable internet access or with different learning needs.  The PDF version (216 pages, 3.35MB) is available at:

http://www.uwex.edu/ces/pdande/evaluation/pdf/lmcourseall.pdf

Please note that no revisions or updates have been made to the original 2003 online course.

Happy Holidays!

Ellen”

There is an ongoing discussion about the difference between impact and outcome.  I think this is an important discussion because Extension professionals are asked regularly to demonstrate  the impact of their program.

There is no consensus on the definitions of these terms, and they are often used interchangeably.  Yet there is consensus that they are not the same.  When Extension professionals plan an evaluation, it is important to keep these terms separate; their meanings are distinct and different.

So what exactly is IMPACT?

And what is an OUTCOME?

What points do we need to keep in mind when considering whether the report we are making is a report of OUTCOMES or a report of IMPACTS?  Making explicit the meaning of these words before beginning the program is important.  If there is no difference in your mind, then that needs to be stated.  If there is a difference from your perspective, that needs to be stated as well.  It may all depend on who the audience is for the report.  Have you asked your supervisor (Staff Chair, Department Head, Administrator) what they mean by these terms?

One way to look at this issue is to go to simpler language:

  • What is the result (effect) of the intervention (read ‘program’)–that is, SO WHAT?  This is impact.
  • What is the intervention's influence (affect) on the target audience–that is, WHAT HAPPENED?  This is outcome.

I would contend that impact is the effect (i.e., the result) and outcome is the affect (i.e., the influence).

Now to complicate this discussion a bit–where do OUTPUTS fit?

OUTPUTS are necessary and NOT sufficient to determine the influence (affect) or results (effect) of an intervention.  Outputs count things that were done–number of people trained; feet of stream bed reclaimed; number of curricula written; number of…(fill in the blank).  Outputs do not tell you either the affect or the effect of the intervention.

The difference I draw may be moot if you do not draw the distinction.  If you don't, that is OK.  Just make sure that you are explicit about what you mean by these terms:  OUTCOMES and IMPACT.