I’ve long suspected I wasn’t alone in recognizing that the term “impact” is used inappropriately in most evaluation.

Terry Smutylo sings a song about impact during an outcome mapping seminar he conducted.  Terry Smutylo is the Director of Evaluation at the International Development Research Centre in Ottawa, Canada; he ought to know a few things about evaluation terminology.  He has two versions of this song, Impact Blues, on YouTube, and his comments speak to this issue.  Check it out.

Just a gentle reminder to use your words carefully.  Make sure everyone knows what you mean and that everyone at the table agrees with the meaning you use.


This week the post is short.  Terry says it best.

Next week I’ll be at the American Evaluation Association annual meeting in Anaheim, CA, so no post.  No Disneyland visit either…sigh.


I am reading the book, Eaarth, by Bill McKibben (a NY Times review is here).  He writes about making a difference in the world on which we live.  He provides numerous examples, all from the 21st century, none of them positive or encouraging.  He makes the point that the place in which we live today is not, and never will be again, like the place in which we lived when most of us were born.  He talks not about saving the Earth for our grandchildren but about how our parents needed to have done things to save it for them; it is too late for the grandchildren.  Although this book is very discouraging, it got me thinking.


Isn’t making a difference what we as Extension professionals strive to do?

Don’t we, like McKibben, need criteria to determine what that difference could be and what it would look like?

And if we have those criteria well established, won’t we be able to make a difference, hopefully a positive one (think hand washing here)?  Won’t that difference be worth the effort we put into the attempt, especially if we thoughtfully plan how to determine what that difference is?


We might not be able to recover the Earth as it was when most of us were born (according to McKibben, we won’t), but I think we can still make a difference, a positive difference, in the lives of the people with whom we work.  That is an evaluative opportunity.


A colleague asks for advice on handling evaluation stories so that they don’t get brushed aside as mere anecdotes.  She says of the AEA365 blog post she read, “I read the steps to take (hot tips), but don’t know enough about evaluation, perhaps, to understand how to apply them.”  Her question raises an interesting topic.  Much of what Extension does can be captured in stories (i.e., qualitative data) rather than in numbers (i.e., quantitative data).  Dick Krueger, former Professor and Evaluation Leader (read: specialist) at the University of Minnesota, has done a lot of work on using stories as evaluation.  Today’s post summarizes his work.


At the outset, Dick asks the following question:  What is the value of stories?  He provides these three answers:

  1. Stories make information easier to remember
  2. Stories make information more believable
  3. Stories can tap into emotions

There are all types of stories.  The type we are interested in for evaluation purposes are organizational stories.  Organizational stories can do the following things for an organization:

  1. Depict culture
  2. Promote core values
  3. Transmit and reinforce the culture
  4. Provide instruction to employees
  5. Motivate, inspire, and encourage

He suggests six common types of organizational stories:

  1. Hero stories  (someone in the organization who has done something beyond the normal range of achievement)
  2. Success stories (highlight organizational successes)
  3. Lessons learned stories (what major mistakes and triumphs teach the organization)
  4. “How it works around here” stories (highlight core organizational values reflected in actual practice)
  5. “Sacred bundle” stories (a collection of stories that together depict the culture of an organization; core philosophies)
  6. Training and orientation stories (assist new employees in understanding how the organization works)

To use stories as evaluation, the evaluator needs to consider how stories might be used.  That is, do they depict how people experience the program?  Do they help the audience understand program outcomes?  Do they offer insight into program processes?

You (as evaluator) need to think about how the story fits into the evaluation design (think logic model; program planning).  Ask yourself these questions:  Should you use stories alone?  Should you use stories that lead into other forms of inquiry?  Should you use stories that augment or illustrate results from other forms of inquiry?

You need to establish criteria for stories.  Rigor can be applied to stories even though the data are narrative.  Criteria include the following:  Is the story authentic, that is, truthful?  Is the story verifiable, with a trail of evidence back to its source?  Is there a need to consider confidentiality?  What was the original intent, the purpose behind the original telling?  And finally, what does the story represent: other people or locations?

You will need a plan for capturing the stories.  Ask yourself these questions:  Do you need help capturing the stories?  What strategy will you use for collecting the stories?  How will you ensure documentation and record keeping?  (Sequence the questions; write them down; decide on the type of interview: set up, conversational, etc.)  You will also need a plan for analyzing and reporting the stories, as you, the evaluator, are responsible for finding meaning.
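If it helps to make that documentation concrete, here is a minimal sketch in Python (my illustration, not Krueger’s; every field name is an assumption) of one way to record each story so the rigor criteria above travel with it:

  # Illustrative only: a simple record for documenting collected stories.
  # Field names are assumptions, not Krueger's terminology.
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class StoryRecord:
      text: str                  # the story as told
      storyteller: str           # source of the story
      collected_by: str          # who captured it
      setting: str               # "set up" (planned) or "conversational"
      original_intent: str       # purpose behind the original telling
      confidential: bool         # does the source need protection?
      evidence_trail: List[str] = field(default_factory=list)  # trail back to the source
      represents: str = ""       # other people or locations the story may stand for

      def is_verifiable(self) -> bool:
          # Verifiable means there is a trail of evidence back to the source.
          return len(self.evidence_trail) > 0

  # Example: one captured story, documented at collection time.
  story = StoryRecord(
      text="After the hand-washing lesson, sick days in our club dropped.",
      storyteller="4-H club leader",
      collected_by="county evaluator",
      setting="conversational",
      original_intent="shared to explain why the program mattered",
      confidential=True,
      evidence_trail=["attendance records", "follow-up interview"],
      represents="other clubs in the county",
  )
  print(story.is_verifiable())  # True

However you keep the records (a spreadsheet works just as well), the point is the same: capture the verification and confidentiality details at the moment the story is collected, not after.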


Last week, the National Outreach Scholarship Conference was held on the Michigan State University campus.  There was an impressive array of speakers and presentations.  I had the luxury of attending Michael Quinn Patton’s session on Utilization-Focused Evaluation.  Although the new edition of the book is 600+ pages, Michael distilled the essentials down.  He also announced a new book (only 400+ pages) called Essentials of Utilization-Focused Evaluation.  This volume is geared to practitioners as opposed to the classroom or the academic.


One take-away message for me was this:  “Context changes the focus of ‘use.’”  So if you have a context in which reports are only for accounting purposes, the report will look very different from one in a context in which reports detail the difference being made.  Now, this sounds very intuitive.  Like, DUH, Molly, tell me something I don’t know.  Yet it is so important, because you, as the evaluator, have the responsibility and the obligation to prepare stakeholders to use data in OTHER ways than as a reporting activity.  That responsibility and obligation is tied to the Program Evaluation Standards.  The Joint Committee revised the standards after soliciting feedback from multiple sources.  The 3rd edition addresses the now five standards with numerous examples and discussion.  These standards are:

  1. Utility
  2. Feasibility
  3. Propriety
  4. Accuracy
  5. Accountability

Apparently, there was considerable discussion as the volume was being compiled about whether Accountability needed to be first.  Think about it, folks.  If Accountability were first, then evaluations would build on “the responsible use of resources to produce value.”  Implementation, improvement, worth, and costs would drive evaluation.  By placing Utility first, evaluators have the responsibility and obligation to base judgments “…on the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs…to examine the variety of possible uses for evaluation processes, findings, and products.”

This certainly validates use as defined in Utilization-Focused Evaluation.  Take Michael’s workshop: the American Evaluation Association is offering it at its annual meeting in Anaheim, CA, on Wednesday, November 2.  Go to eval.org and click on Evaluation Conference.  If you can’t join the workshop, read the book (either one).  It is well worth it.