Happy Thanksgiving.  A simple evaluative statement if ever there was one.

Did you know that there are eight countries in the world that have a holiday dedicated to giving thanks?  That’s not very many.  (If you want to know which ones, go to this site–the site also has a nice image.)

Thanksgiving could be considered the evaluator’s holiday.  We take the time, hopefully, to recognize what is of value, what has merit, what has worth in our lives and to be grateful for those contributions, opportunities, friends, family members, and (of course, in the US) the food (although I know that this is not necessarily the case everywhere).

My daughters and I, living in a vegetarian household, have put a different twist on Thanksgiving–we serve foods for which we are thankful, foods we have especially enjoyed over the year.  Sometimes they are the same foods, like chocolate pecan pie; sometimes not.  One year, we had all green foods–we had a good laugh that year.  This year, my younger daughter is home from boarding school and has asked (!!!) for kale and white bean soup (I’ve modified it some).  A dear friend of mine would serve new foods that the year offered the opportunity to enjoy (like in this recipe).

Whatever you choose to have on your table, remember the folks who helped put that food there; remember the work it took to make the feast; and, most of all, remember that there is value in being grateful.

Last week, I mentioned that I would address contribution analysis–an approach to exploring cause and effect.  Although I had seen the topic appear several times over the last three or four years, I never pursued it.  Recently, though, the issue has come to the forefront of many conversations.  I hear Extension faculty saying that their program caused this outcome.  That claim is implied when they come to ask how to write “good” impact statements, without acknowledging that the likelihood of actually having an impact is slim–long-term outcomes, maybe.  Impact?  Probably not.  So finding a logical, defensible approach to discussing the lack of causality (the A-caused-B causality of randomized control trials) that is inherent in Extension programming is important.  John Mayne, an independent advisor on public sector performance, writes articulately on this topic (citations are listed below).

The article I read, and on which this blog entry is based, was written in 2008.  Mayne has been writing on this topic since 1999, when he was with the Canadian Office of the Auditor General.  For him, the question became critical when the use of randomized control trials (RCTs) was not appropriate, yet program performance still needed to be addressed.

In that article, referenced below, he details six iterative steps in contribution analysis:

  1. Set out the attribution problem to be addressed;
  2. Develop a theory of change and risks to that theory of change;
  3. Gather the existing evidence on the theory of change;
  4. Assemble and assess the contribution story, and challenges to that story;
  5. Seek out additional evidence; and
  6. Revise and strengthen the contribution story.

He loops step six back to step four (the iterative process).
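
For readers who like to see a process laid out as code, here is a minimal, purely illustrative Python sketch of that loop.  The function names mirror Mayne’s six steps, but the function bodies and the stopping rule (a story judged “credible” once enough evidence accumulates) are my own stand-ins, not his method.

```python
# Illustrative sketch only: the six steps of contribution analysis,
# with steps 4-6 iterating until the contribution story holds up.
# All function bodies are invented stand-ins, not Mayne's method.

def set_out_attribution_problem(program):           # step 1
    return f"What contribution did {program} make to the observed results?"

def develop_theory_of_change(problem):              # step 2
    theory = {"results_chain": ["activities", "outputs", "outcomes"]}
    risks = ["other factors could explain the outcomes"]
    return theory, risks

def gather_existing_evidence(theory):               # step 3
    return ["activity records", "participant surveys"]

def assemble_contribution_story(theory, evidence):  # step 4
    # Stand-in credibility test: enough corroborating evidence collected.
    return {"theory": theory, "evidence": evidence,
            "credible": len(evidence) >= 4}

def seek_additional_evidence(story):                # step 5
    return ["follow-up interviews"]

def revise_theory_of_change(theory, evidence):      # step 6
    return theory  # in practice, assumptions and risks are updated here

def contribution_analysis(program):
    problem = set_out_attribution_problem(program)
    theory, risks = develop_theory_of_change(problem)
    evidence = gather_existing_evidence(theory)
    story = assemble_contribution_story(theory, evidence)
    while not story["credible"]:       # steps 5 and 6 loop back to step 4
        evidence = evidence + seek_additional_evidence(story)
        theory = revise_theory_of_change(theory, evidence)
        story = assemble_contribution_story(theory, evidence)
    return story

print(contribution_analysis("an Extension nutrition program"))
```

The point of the sketch is simply the control flow: the story is reassembled and reassessed each time new evidence arrives, which is what makes the process iterative rather than linear.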

By exploring the contribution the program makes to the observed results, one can address the attribution of those results to the program.  He goes on to say (and since I’m quoting, I’m keeping the Canadian spellings), “Causality is inferred from the following evidence:

  1. The programme is based on a reasoned theory of change: the assumptions behind why the program is expected to work are sound, are plausible, and are agreed upon by at least some of the key players.
  2. The activities of the programme were implemented.
  3. The theory of change is verified by evidence: the chain of expected results occurred.
  4. Other factors influencing the programme were assessed and were either shown not to have made a significant contribution or, if they did, the relative contribution was recognised.”

He focuses on clearly defining the theory of change, modeling that theory of change, and revisiting it regularly across the life of the program.

REFERENCES:

Mayne, J. (1999).  Addressing attribution through contribution analysis: Using performance measures sensibly.  Available at: dsp-psd.pwgsc.gc.ca/Collection/FA3-31-1999E.pdf

Mayne, J. (2001).  Addressing attribution through contribution analysis: Using performance measures sensibly.  Canadian Journal of Program Evaluation, 16: 1-24.  Available at: http://www.evaluationcanada.ca/secure/16-1-001.pdf

Mayne, J. & Rist, R. (2006).  Studies are not enough: The necessary transformation of evaluation.  Canadian Journal of Program Evaluation, 21: 93-120.  Available at: http://www.evaluationcanada.ca/secure/21-3-093.pdf

Mayne, J. (2008).  Contribution analysis: An approach to exploring cause and effect.  Institutional Learning and Change (ILAC) Initiative, ILAC Brief 16.  Available at: http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief16_Contribution_Analysis.pdf

Ellen Taylor-Powell, UWEX Evaluation Specialist Emeritus, presented via webinar from Rome to the WECT (say “west”) cohorts today.  She talked about program planning and logic modeling.  The logic model format that Ellen developed was picked up by USDA (now NIFA) and disseminated across Extension.  That dissemination had an amazing effect: most Extension faculty now know the format and can use it for their programs.

Ellen went further today than the resources located through hyperlinks on the UWEX website.  She cited the work by Sue Funnell and Patricia J. Rogers, Purposeful program theory: Effective use of theories of change and logic models, published in March 2011.  Here is what the publisher (Jossey-Bass, an imprint of Wiley) says:

Between good intentions and great results lies a program theory—not just a list of tasks but a vision of what needs to happen, and how. Now widely used in government and not-for-profit organizations, program theory provides a coherent picture of how change occurs and how to improve performance. Purposeful Program Theory shows how to develop, represent, and use program theory thoughtfully and strategically to suit your particular situation, drawing on the fifty-year history of program theory and the authors’ experiences over more than twenty-five years.

Two reviewers whom I have mentioned before, Michael Quinn Patton and E. Jane Davidson, say the following:

“From needs assessment to intervention design, from implementation to outcomes evaluation, from policy formulation to policy execution and evaluation, program theory is paramount. But until now no book has examined these multiple uses of program theory in a comprehensive, understandable, and integrated way. This promises to be a breakthrough book, valuable to practitioners, program designers, evaluators, policy analysts, funders, and scholars who care about understanding why an intervention works or doesn’t work.” —Michael Quinn Patton, author, Utilization-Focused Evaluation

“Finally, the definitive guide to evaluation using program theory! Far from the narrow ‘one true way’ approaches to program theory, this book provides numerous practical options for applying program theory to fulfill different purposes and constraints, and guides the reader through the sound critical thinking required to select from among the options. The tour de force of the history and use of program theory is a truly global view, with examples from around the world and across the full range of content domains. A must-have for any serious evaluator.” —E. Jane Davidson, PhD, Real Evaluation Ltd.

Jane is the author of the book Evaluation Methodology Basics: The nuts and bolts of sound evaluation, published by Sage.  This book “…provides a step-by-step guide for doing a real evaluation.  It focuses on the main kinds of ‘big picture’ questions that evaluators usually need to answer, and how the nature of such questions is linked to evaluation methodology choices.”  And although Ellen didn’t specifically mention this book, it is a worthwhile resource for nascent evaluators.

Two other resources were mentioned today.  One was Jonny Morell’s book, Evaluation in the face of uncertainty: Anticipating surprise and responding to the inevitable, published by Guilford Press.  Ellen also mentioned John Mayne and his work in contribution analysis.  A quick web search provided this reference: Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. ILAC Brief No. 16. Rome, Italy: Institutional Learning and Change (ILAC) Initiative.  I’ll talk more about contribution analysis next week in TIMELY TOPICS.

If those of you who listened to Ellen remember other sources that she mentioned, let me know and I’ll put them here next week.