I’ve long suspected I wasn’t alone in recognizing that the term “impact” is used inappropriately in most evaluation.

Terry Smutylo sang a song about impact during an outcome mapping seminar he conducted.  Terry Smutylo is the Director of Evaluation at the International Development Research Centre, Ottawa, Canada.  He ought to know a few things about evaluation terminology.  He has two versions of this song, Impact Blues, on YouTube; his comments speak to this issue.  Check it out.


Just a gentle reminder to use your words carefully.  Make sure everyone knows what you mean and that everyone at the table agrees with the meaning you use.


This week’s post is short.  Terry says it best.

Next week I’ll be at the American Evaluation Association annual meeting in Anaheim, CA, so no post.  No Disneyland visit either…sigh.


I am reading the book Eaarth, by Bill McKibben (a NY Times review is here).  He writes about making a difference in the world in which we live.  He provides numerous examples, all from the 21st century, none of them positive or encouraging.  He makes the point that the place in which we live today is not, and never will be again, like the place in which we lived when most of us were born.  He talks not about saving the Earth for our grandchildren but rather about how our parents needed to have done things to save the earth for them–that it is too late for the grandchildren.  Although this book is very discouraging, it got me thinking.


Isn’t making a difference what we as Extension professionals strive to do?

Don’t we, like McKibben, need criteria to determine what that difference could be and what it would look like?

And if we have those criteria well established, won’t we be able to make a difference, hopefully a positive one (think hand washing here)?  And won’t that difference be worth the effort we have put into the attempt?  Especially if we thoughtfully plan how to determine what that difference is?


We might not be able to recover the Earth the way it was when most of us were born (according to McKibben, we won’t); I think we can still make a difference–a positive difference–in the lives of the people with whom we work.  That is an evaluative opportunity.


A colleague asks for advice on handling evaluation stories so that they don’t get brushed aside as mere anecdotes.  She goes on to say of the AEA365 blog she read, “I read the steps to take (hot tips), but don’t know enough about evaluation, perhaps, to understand how to apply them.”  Her question raises an interesting topic.  Much of what Extension does can be captured in stories (i.e., qualitative data) rather than in numbers (i.e., quantitative data).  Dick Krueger, former Professor and Evaluation Leader (read: specialist) at the University of Minnesota, has done a lot of work in the area of using stories as evaluation.  Today’s post summarizes his work.


At the outset, Dick asks the following question:  What is the value of stories?  He provides these three answers:

  1. Stories make information easier to remember.
  2. Stories make information more believable.
  3. Stories can tap into emotions.

There are all types of stories.  The type we are interested in for evaluation purposes is organizational stories.  Organizational stories can do the following things for an organization:

  1. Depict culture
  2. Promote core values
  3. Transmit and reinforce the culture
  4. Provide instruction to employees
  5. Motivate, inspire, and encourage

He suggests six common types of organizational stories:

  1. Hero stories (about someone in the organization who has done something beyond the normal range of achievement)
  2. Success stories (highlight organizational successes)
  3. Lessons learned stories (what major mistakes and triumphs teach the organization)
  4. “How it works around here” stories (highlight core organizational values reflected in actual practice)
  5. “Sacred bundle” stories (a collection of stories that together depict the culture of an organization; core philosophies)
  6. Training and orientation stories (assist new employees in understanding how the organization works)

To use stories as evaluation, the evaluator needs to consider how stories might be used.  That is, do they depict how people experience the program?  Do they illuminate program outcomes?  Do they offer insights into program processes?

You (as evaluator) need to think about how the story fits into the evaluation design (think logic model; program planning).  Ask yourself these questions:  Should you use stories alone?  Should you use stories that lead into other forms of inquiry?  Should you use stories that augment or illustrate results from other forms of inquiry?

You need to establish criteria for stories.  Rigor can be applied to stories even though the data are narrative.  Criteria include the following:  Is the story authentic–is it truthful?  Is the story verifiable–is there a trail of evidence back to the source of the story?  Is there a need to consider confidentiality?  What was the original intent–the purpose behind the original telling?  And finally, what does the story represent–other people or locations?

You will need a plan for capturing the stories.  Ask yourself these questions:  Do you need help capturing the stories?  What strategy will you use for collecting them?  How will you ensure documentation and record keeping?  (Sequence the questions; write them down; note the type of exchange–set up, conversational, etc.)  You will also need a plan for analyzing and reporting the stories, as you, the evaluator, are responsible for finding meaning.
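Dick’s criteria lend themselves to simple, structured record keeping.  What follows is a minimal sketch–my own illustration, not an instrument from his work–of one way an evaluator might document each captured story against the criteria above; all field and function names are hypothetical.

```python
# A minimal, illustrative record for one collected story.
# Field names are my own shorthand for Krueger's criteria.
from dataclasses import dataclass

@dataclass
class Story:
    text: str                   # the narrative as captured
    source: str                 # who told it (use a code if confidentiality applies)
    original_intent: str        # purpose behind the original telling
    authentic: bool = False     # judged truthful?
    verifiable: bool = False    # is there a trail of evidence back to the source?
    confidential: bool = False  # must the source be masked in reporting?
    represents: str = ""        # other people or locations the story may stand for

def usable_in_report(story: Story) -> bool:
    """Apply the rigor criteria: only authentic, verifiable stories go forward."""
    return story.authentic and story.verifiable

# Example: one captured story, screened before analysis and reporting.
s = Story(
    text="After the workshop I started testing my well water every spring.",
    source="Participant 14",
    original_intent="Told during a follow-up conversation",
    authentic=True,
    verifiable=True,
    confidential=True,
)
print(usable_in_report(s))  # True -- report it, with the source masked
```

Even this small amount of structure makes the record keeping auditable: each story carries its trail of evidence and its confidentiality status with it.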


Three weeks ago, I promised you a series of posts on related topics–program planning; evaluation implementation, monitoring, and delivery; and evaluation utilization.  This is the third one–using the findings of evaluation.

Michael Patton’s book, Utilization-Focused Evaluation, is my reference.

I’ll try to condense the 400+ page book down to 500+ words for today’s post.  Fortunately, I have the Reader’s Digest version as well (look for Chapter 23 [Utilization-Focused Evaluation] in the following citation: Stufflebeam, D. L., Madaus, G. F., & Kellaghan, T. (2000). Evaluation models: Viewpoints on educational and human services evaluation (2nd ed.). Boston, MA: Kluwer Academic Publishers).  Patton’s chapter is a good summary–still, it is 14 pages.

To start, it is important to understand exactly how the word “evaluation” is used in the context of utilization.  In the Stufflebeam, Madaus, & Kellaghan publication cited above, Patton (2000, p. 426) describes evaluation as “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness and/or inform decisions about future programming.  Utilization-focused evaluation (as opposed to program evaluation in general) is evaluation done for and with specific intended primary users for specific, intended uses” (emphasis added).

There are four different types of use–instrumental, conceptual, persuasive, and process.  The interests of stakeholders cannot be served well unless it is made explicit whose interests are being served.

To understand the types of use, I will quote from a document titled “Non-formal Educator Use of Evaluation Findings: Factors of Influence” by Sarah Baughman.

“Instrumental use occurs when decision makers use the findings to change or modify the program in some way (Fleisher & Christie, 2009; McCormick, 1997; Shulha & Cousins, 1997). The information gathered is used in a direct, concrete way or applied to a specific decision (McCormick, 1997).

Conceptual use occurs when the evaluation findings help the program staff or key stakeholders understand the program in a new way (Fleisher & Christie, 2009).

Persuasive use has also been called political use and is not always viewed as a positive type of use (McCormick, 1997). Examples of negative persuasive use include using evaluation results to justify or legitimize a decision that is already made or to prove to stakeholders or other administrative decision makers that the organization values accountability (Fleisher & Christie, 2009). It is sometimes considered a political use of findings with no intention to take the actual findings or the evaluation process seriously (Patton, 2008). Recently persuasive use has not been viewed as negatively as it once was.

Process use is ‘the cognitive, behavioral, program, and organizational changes resulting, either directly or indirectly, from engagement in the evaluation process and learning to think evaluatively’ (Patton, 2008, p. 109). Process use results not from the evaluation findings but from the evaluation activities or process.”
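These four categories are concrete enough to track over the life of an evaluation.  Below is a minimal sketch–my own illustration, not anything from Patton or Baughman–of how an evaluator might log observed instances of use by type; the example entries and names are hypothetical.

```python
# Illustrative tally of how one evaluation's findings and process were used.
from enum import Enum
from collections import Counter

class UseType(Enum):
    INSTRUMENTAL = "findings directly inform a program decision"
    CONCEPTUAL = "findings change how staff understand the program"
    PERSUASIVE = "findings justify a decision already made, or signal accountability"
    PROCESS = "change comes from doing the evaluation, not from the findings"

# Hypothetical log of observed uses for one evaluation.
log = [
    ("Dropped the low-attendance workshop site", UseType.INSTRUMENTAL),
    ("Staff now describe outcomes, not activities", UseType.CONCEPTUAL),
    ("Report cited in the budget request", UseType.PERSUASIVE),
    ("Staff wrote a logic model for the first time", UseType.PROCESS),
]

# Tally which types of use this evaluation actually produced.
print(Counter(use_type.name for _, use_type in log))
```

A tally like this makes it explicit which kinds of use an evaluation actually produced–exactly the question utilization-focused evaluation asks.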

Before beginning the evaluation, the question “Who is the primary intended user of the evaluation?” must not only be asked; it must also be answered.  Which stakeholders need to be at the table?  Those are the people who have a stake in the evaluation findings, and they may be different for each evaluation.  They are probably the primary intended users who will determine the evaluation’s use.

Citations mentioned in the Baughman quotation include:

  • Fleischer, D. N., & Christie, C. A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30(2), 158-175.
  • McCormick, E. R. (1997). Factors influencing the use of evaluation results. Dissertation Abstracts International: Section A: The Humanities and Social Sciences, 58, 4187 (UMI 9815051).
  • Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage Publications.
  • Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195-208.

My older daughter (I have two–Morgan, the older, and Mersedes, the younger) suggested I talk about the evaluative activities around the holidays…hmmm.

Since I’m experiencing serious writer’s block this week, I thought I’d revisit evaluation as an everyday activity, with a holiday twist.

Keep in mind that the root of evaluation–from the French, after the Latin–is value (the Oxford English Dictionary online gives the etymology as [a. Fr. évaluation, f. évaluer, f. é- = es- (: L. ex) out + value VALUE]).


Perhaps this is a good time to mention that the theme for Evaluation 2011 put forth by incoming AEA President, Jennifer Greene, is Values and Valuing in Evaluation.  I want to quote from her invitation letter, “…evaluation is inherently imbued with values.  Our work as evaluators intrinsically involves the process of valuing, as our charge is to make judgments (emphasis original) about the “goodness” or the quality, merit, or worth of a program.”

Let us consider the holidays “a program.”  The Winter Holiday season starts (at least in the US and the northern hemisphere) with the Thanksgiving holiday, followed shortly thereafter by the first Sunday in Advent.  Typically this period of time includes at least the following holidays:  St. Nicholas Day, Hanukkah, Winter Solstice, Christmas, Kwanzaa, Boxing Day, New Year’s, and Epiphany (I’m sure there are ones I didn’t list that are relevant).  This list typically takes us through January 6.  (I’m getting to the value part–stay with me…)

When I was a child, I remember the eager anticipation of Christmas–none of the other holidays were even on my radar screen.  (For those of you who know me, you know how long ago that was…)  Then with great expectation (thank you, Charles), I would go to bed and, as patiently as possible, await the moment when my father would turn on the tree lights, signaling that we children could descend to the living room.  Then poof!  That was Christmas.  In 10 minutes it was done.  The emotional bath I always took greatly diminished the value of this all-important holiday.  Vowing that my children would grow up without the emotional bath of great expectations and dashed hopes, I chose to Celebrate the Season.  In doing so, I found value in the waiting of Advent, the magic of Hanukkah, the sharing of Kwanzaa, the mystery of Christmas, and the traditions that come with all of these holidays.  There are other traditions that we revisit yearly, and we find delight in remembering what the Winter Holiday traditions are and mean, remembering the foods we eat and the times we’ve shared.  From all this we find value in our program.  Do I still experience the emotional bath of childhood during this Holiday Season?  Not any more–and my children tell me that they like spreading the holidays out over the six-week period.

I think this is the time of the year when we can take a second look at our programs (whether they are the holidays, youth development, watershed stewardship, nutrition education, or something else) and look for value in our programs–the part of the program that matters.  Evaluation is the work of capturing that value.  How we do that is what evaluation is all about.

I’ve been writing for almost a year–some 50 columns.  This week, before the Thanksgiving holiday, I want to share evaluation resources I’ve found useful and for which I am thankful.  There are probably others with which I am not familiar; these are the ones I know and value.

My colleagues Ellen Taylor-Powell, at UWEX (University of Wisconsin Extension), and Nancy Ellen Kiernan, at Penn State Extension, both have resources that are very useful, easily accessed, and clearly written.  Ellen’s can be found at the Quick Tips site and Nancy Ellen’s at her Tipsheets index.  Both Nancy Ellen and Ellen have other links that may be useful as well.  Access their sites through the links above.

Last week, I mentioned the American Evaluation Association.  One of the important structures in AEA is the Topical Interest Group (or TIG).  Extension has a TIG, called Extension Education Evaluation, which helps organize Extension professionals who are interested or involved in evaluation.  There is a wealth of information on the AEA web site: information about the evaluation profession, access to the AEA elibrary, and links to AEA on Facebook, Twitter, and LinkedIn.  You do NOT have to be a member to subscribe to the blog, AEA365, which, as the name suggests, is posted daily by different evaluators.  Susan Kistler, AEA’s executive director, posts every Saturday.  The November 20 post talks about the elibrary–check it out.

Many states and regions have local AEA affiliates.  For example, OPEN, the Oregon Program Evaluators Network, serves southern Washington and Oregon.  It has an all-volunteer staff who live mostly in Portland and Vancouver, WA.  The AEA site lists over 20 affiliates across the country, many with their own websites.  Those websites have information about connecting with local evaluators.

In addition to these valuable resources, National eXtension (say e-eXtension) has developed a community of practice devoted to evaluation, with Mike Lambur, eXtension Evaluation and Research Leader, as the contact (mike.lambur@extension.org).  According to the web site, National eXtension “…is an interactive learning environment delivering the best, most researched knowledge from the smartest land-grant university minds across America. eXtension connects knowledge consumers with knowledge providers—experts like you who know their subject matter inside out.”

Happy Thanksgiving.  Be safe.

Recently, I attended the American Evaluation Association (AEA) annual conference in San Antonio, TX.  And although this is a stock photo, the weather (until Sunday) was much as it appears here.  The Alamo was crowded–curious adults, tired children, friendly dogs, etc.  What I learned was that San Antonio is the only site in the US with five Spanish missions within 10 miles of each other.  Starting with the Alamo (the formal name is San Antonio de Valero) and going south out of San Antonio, the visitor will experience the Missions Concepcion, San Juan, San Jose, and Espada, all of which will, at some point in the future, be on the Mission River Walk (as opposed to the Museum River Walk).  The missions (except the Alamo) are National Historic Sites.  For those of you who have the National Park Service Passport, site stamps are available.

AEA is the professional home for evaluators.  AEA has approximately 6,000 members, and about 2,500 of them attended the conference, called Evaluation 2010.  This year’s president, Leslie Cooksy, identified “Evaluation Quality” as the theme for the conference.  Leslie says in her welcome letter, “Evaluation quality is an umbrella theme, with room underneath for all kinds of ideas–quality from the perspective of different evaluation approaches, the role of certification in quality assurance, metaevaluation and the standards used to judge quality…”  Listening to the plenary sessions, attending the concurrent sessions, and networking with longtime colleagues, I got to hear many different perspectives on quality.

In the closing plenary, Hallie Preskill, 2007 AEA president, was asked to comment on the themes she heard throughout the conference.  She used mind mapping (a systems tool) to quickly and (I think) effectively organize the value of AEA.  She listed seven main themes:

  1. Truth
  2. Perspectives
  3. Context
  4. Design and methods
  5. Representation
  6. Intersections
  7. Relationships

Although she lists context as a separate theme, I wonder if evaluation quality is contextual first, and only then about these other things.

Hallie listed subthemes under each of these topics:

  1. What is (truth)?  Whose (truth)?  How much data is enough?
  2. Whose (perspectives)?  Cultural (perspectives).
  3. Cultural (context). Location (context).  Systems (context).
  4. Multiple and mixed (methods).  Multiple case studies.  Stories.  Credible.
  5. Diverse (representation).  Stakeholder (representation).
  6. Linking (intersections).  Interdisciplinary (intersections).
  7. (Relationships) help make meaning.  (Relationships) facilitate quality.   (Relationships) support use.  (Relationships) keep evaluation alive.

Being a member of AEA is all this and more.  Membership is affordable ($80.00 regular; $60.00 for joint membership with the Canadian Evaluation Society; and $30.00 for full-time students).  The benefits are worth that and more.  The conference brings together evaluators from all over.  AEA is quality.

Welcome back.  It is Tuesday.

Some folks have asked me–now that I’ve pointed out that all of you are evaluators–where I will take this column.  That was food for thought…and although I’ve got a topic ready to go, I’m wondering if jumping off into working evaluation is the best place to go next.  One place I did go was to update the “About” tab on this page…

Maybe thinking some more about what you evaluated today; maybe thinking about a bigger picture of evaluation; maybe just being glad that the sun is shining is enough (although the subfreezing temperatures remind me of Minnesota without the snow).  The saying goes, “Minnesota has two seasons–green and white.”  Maybe Oregon has two seasons–dry and wet.  That is an evaluative question, by the way.  Hmmm…thinking about evaluation, having an evaluation question of the week sounds like a good idea.  What’s yours–small or large?  I may not have an answer, but I will have an idea.

Ok–so now that I’ve dealt with the evaluative question of the day–I think it is time to go to more substance, like “what exactly IS program evaluation?”  Good question–if we are going to have this conversation, then we need to be using the same language.

First, let me address why the link to Wikipedia is on the far right in my Blogroll list.  I’ve learned Wikipedia is a readily available, general reference that gets folks started understanding a subject.  It is NOT the definitive word on that subject.  Wikipedia (see link on the right) describes program evaluation as “…a systematic method for collecting, analyzing, and using information to answer basic questions about projects, policies, and programs.”  Well…yes…except that Wikipedia seems to be defining evaluation when it includes “projects and policies.”  Program evaluation deals with programs.  Wikipedia does have an entry for evaluation as well as an entry for program evaluation.  Read both.

Evaluation can be applied to projects, policies, personnel, processes, performances, proposals, products AND programs. I won’t talk much about personnel, performances, proposals, or products. Projects may be another word for program; policies usually result in programs; and processes are often part of programs, so they may be talked about sometimes.

Most of what this blog will address is program evaluation because most of you (including me) have programs that need evaluating.  When I talk about program evaluation I am talking about “…determining the merit, worth, or value of (a program).”  (Michael Scriven uses this definition in his book, Evaluation Thesaurus, 1991, Sage Publications.)

It is available at the following site (or through the publisher).

So, for me, what you need to know about program evaluation is this:

  • The root of evaluation is value (the OED lists the etymology as [a. Fr. évaluation, f. évaluer, f. é- = es- (: L. ex) out + value]).
  • Program evaluation IS systematic.
  • Program evaluation DOES collect, analyze, and utilize information.
  • Program evaluation ATTEMPTS to determine the merit, worth, or value of a program.
  • Program evaluation ANSWERS this question:

“What difference does this program make in the lives and well-being of (fill in the blank here–citizens of Oregon, my 4-H club, residents of the watershed, you get the idea)?”

NOTE: I talk about “lives and well-being” because most programs are delivered to individuals who will be CHANGED as a result of participating in the program, i.e., experience a difference.

For those of us in Extension, when we do evaluation we are trying to determine if we “improved something”; we are not trying to “prove” that what we did accomplished something.  Peter Bloom always said, “Extension is out to improve something (attribution), not prove something (causation).”  We are looking for attribution, not causation.

Many references exist that talk more about what program evaluation is.  My favorite reference, by Jody Fitzpatrick, Jim Sanders, and Blaine Worthen, is Program Evaluation: Alternative Approaches and Practical Guidelines (2004, Pearson Education).

It is available at the following site (or through the publisher).