I’ll be gone next week, so this is the last post of 2011, and some reflection on 2011 is in order, I think.

For each of you, 2011 was an amazing year.  I know you are thinking, “Yeah, right.”  Truly, 2011 was amazing–and I invoke Dickens here–because 2011 “…was the best of times, it was the worst of times.”  Kiplinger’s magazine used the saying “We live in interesting times” as its masthead for many years.  So even if your joys were great; your sorrows, overwhelming; your adventures, amazing; the day-to-day, a grind, we live in interesting times, and because of that 2011 was an amazing year.  Think about it…you’ll probably agree.  (If not, that is an evaluative question–what criteria are you using; what biases have inadvertently appeared; what value was at stake?)

So let’s look forward to 2012.

Some folks believe that 2012 marks the end of the Mayan long count calendar, the advent of cataclysmic or transformative events, and, with the end of the long count calendar, the end of the world on December 21, 2012.  Possibly; probably not.  Everyone has some end-of-the-world scenario in mind.  For me, the end of the world as I know it happened when atmospheric carbon dioxide passed 350 parts per million (the November reading was 390.31 ppm).  Let’s think evaluation.

Jennifer Greene, the outgoing AEA president, looking forward and keeping in mind the many global catastrophes of 2011, asks, “…what does evaluation have to do with these contemporary global catastrophes and tribulations?”  She says:

  • “If you’re not part of the solution, then you’re part of the problem” (Eldridge Cleaver). Evaluation offers opportunities for inclusive engagement with the key social issues at hand. (Think 350.org, Heifer Foundation, Habitat for Humanity, and any other organization reflecting social issues.)
  • Most evaluators are committed to making our world a better place. Most evaluators wish to be of consequence in the world.

Are you going to be part of the problem or part of the solution?  How will you make the world a better place?  What difference will you make?  What new year’s resolutions will you make to answer these questions?  Think on it.

 

May 2012 bring you all another amazing year!

I came across this quote from Viktor Frankl today (thanks to a colleague):

“…everything can be taken from a man (sic) but one thing: the last of the human freedoms – to choose one’s attitude in any given set of circumstances, to choose one’s own way.” Viktor Frankl (Man’s Search for Meaning – p.104)

I realized that,  especially at this time of year, attitude is everything–good, bad, indifferent–the choice is always yours.

How we choose to approach anything depends upon our previous experiences–what I call personal and situational bias.  Sadler* has three classifications for these biases.  He calls them value inertias (unwanted distorting influences which reflect background experience), ethical compromises (actions for which one is personally culpable), and cognitive limitations (not knowing, for whatever reason).

When we approach an evaluation, our attitude leads the way.  Whether we are reluctant, resistant, excited, or uncertain, our approach reflects where we’ve been, what we’ve seen, what we have learned, and what we have done (or not).  We can make a choice about how to proceed.

The American Evaluation Association (AEA) has a long history of supporting difference.  That value is embedded in the guiding principles.  The two principles that address supporting differences are:

  • Respect for People:  Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
  • Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

AEA also has developed a Cultural Competence statement.  In it, AEA affirms that “A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation.”

Both of these documents provide a foundation for the work we do as evaluators, and both relate to our personal and situational biases.  Considering them as we enter into the choice we make about attitude will help minimize the biases we bring to our evaluation work.  The evaluative question from all this: When have your personal and situational biases interfered with your work in evaluation?

Attitude is always there–and it can change.  It is your choice.

Sadler, D. R. (1981). Intuitive data processing as a potential source of bias in naturalistic evaluations. Educational Evaluation and Policy Analysis, 3, 25-31.

I’m involved in evaluating a program that is developing as it evolves.  There is some urgency to get predetermined, clear, and measurable outcomes to report to the administration.  Typically, I wouldn’t resist (see resistance post) this mandate; only this program doesn’t lend itself to this approach.  Because this program is developing as it is implemented, it can’t easily be rolled out to all 36 counties in Oregon at once, as much as administration would love to see that happen.  So what can we do?

We can document the principles that drive the program and use them to stage the implementation across the state.

We can identify the factors that tell us that the area is ready to implement the program (i.e., the readiness factors).

We can share lessons learned with key stakeholders in potential implementation areas.

These are the approaches that Michael Patton’s Developmental Evaluation advocates.  Michael says, “Developmental evaluation is designed to be congruent with and nurture developmental, emergent, innovative, and transformative processes.” I had the good fortune to talk with Michael about this program in light of these processes.  He indicated that identifying principles, not a model, supports developmental evaluation and a program in development.  By using underlying principles, we inform expansion.  Can these principles be coded…yes.  Are they outcome indicators…possibly.  Are they outcome indicators in the summative sense of the word?  Nope.  Not even close.  These principles, however, can help the program people roll out the next phase/wave of the program.

As an evaluator employing developmental evaluation, do I ignore what is happening on the ground–at each phase of the program implementation?  Not a chance.  I need to encourage the program people at that level to identify clear and measurable outcomes–because from those clear and measurable outcomes will come the principles needed for the next phase.  (This is a good example of the complexity concepts that Michael talks about in DE and that are the foundation for systems thinking.)  The readiness factors will also become clear when looking at individual sites.  From this view, we can learn a lot–we can apply what we have learned and, hopefully, avoid similar mistakes.  Will mistakes still occur?  Yes.  Is it important that those lessons are heeded, shared with administrators, and used to identify readiness factors when the program is going to be implemented in a new site?  Yes.  Is this process filled with ambiguity?  You bet.  No one said it would be easy to make a difference.

We are learning as we go–that is the developmental aspect of this evaluation and this program.

Jennifer Greene, the current American Evaluation Association president, expanded on the theme of Thanksgiving and gratitude.  She posted her comments in the AEA Newsletter.  I liked them a lot.  I quote them below…

 

Thanksgiving is a ‘time out’ from the busyness of daily life, a time for quiet reflection, and a time to contemplate pathways taken and pathways that lie ahead.

In somewhat parallel fashion, evaluation can also offer a ‘time out’ from the busyness and routine demands of daily life, notably for evaluation stakeholders and especially for program developers, administrators, and staff. Our educative traditions in particular are oriented toward goals of learning, enlightenment, reflection, and redirection. These traditions, which are anchored in the evaluation ideals of Lee Cronbach and Carol Weiss, aspire to provide a data-based window into how well the logic of a program translates to particular experiences in particular contexts, into promising practices evident in some contexts even if they are not part of the program design, into who is being well served by the program and who remains overlooked. Our educative practices position evaluation as a lens for critical reflection (emphasis added) on the quality of a program’s design and implementation, for reconsideration of the urgency of the needs the program is intended to address, for contemplation of alternative pathways that could be taken, and thus broadly as a vehicle by which society learns about itself (from Cronbach’s 95 theses).

She concludes her comments with a statement that I have lived by and believed throughout my career as an evaluator.  “…I also believe that education remains the most powerful of all social change alternatives.”

 

Education is the great equalizer and evaluation works hand-in-hand with education.

Happy Thanksgiving.  A simple evaluative statement if ever there was one.

Did you know that there are eight countries in the world that have a holiday dedicated to giving thanks?  That’s not very many.  (If you want to know which ones, go to this site–the site also has a nice image.)

Thanksgiving could be considered the evaluator’s holiday.  We take the time, hopefully, to recognize what is of value, what has merit, what has worth in our lives and to be grateful for those contributions, opportunities, friends, family members, and (of course, in the US) the food (although I know that this is not necessarily the case everywhere).

My daughters and I, living in a vegetarian household, have put a different twist on Thanksgiving–we serve foods for which we are thankful–foods we have especially enjoyed over the year.  Sometimes they are the same foods–like chocolate pecan pie; sometimes not.  One year, we had all green foods–we had a good laugh that year.  This year, my younger daughter is home from boarding school and has asked (!!!) for kale and white bean soup (I’ve modified it some).  A dear friend of mine would serve new foods for which the opportunity to enjoy has presented itself (like in this recipe).

Whatever you choose to have on your table, remember the folks who helped to put that food there; remember the work that it took to make the feast; and, most of all, remember that there is value in being grateful.

Last week, I mentioned that I would address contribution analysis–an approach to exploring cause and effect.  Although I had seen the topic appear several times over the last 3-4 years, I never pursued it.  Recently, though, the issue has come to the forefront of many conversations.  I hear Extension faculty saying that their program caused this outcome.  This statement is implied when they come to ask how to write “good” impact statements, not acknowledging that the likelihood of actually having an impact is slim–long-term outcomes, maybe.  Impact?  Probably not.  So finding a logical, defensible approach to discussing the lack of causality (as in the A-caused-B causality of randomized control trials) that is inherent in Extension programming is important.  John Mayne, an independent advisor on public sector performance, writes articulately on this topic (citations are listed below).

The article I read, and on which this blog entry is based, was written in 2008.  Mayne has been writing on this topic since 1999, when he was with the Canadian Office of the Auditor General.  For him, the question became critical when the use of randomized control trials (RCTs) was not appropriate, yet program performance needed to be addressed.

In that article, referenced below, he details six iterative steps in contribution analysis:

  1. Set out the attribution problem to be addressed;
  2. Develop a theory of change and risks to that theory of change;
  3. Gather the existing evidence on the theory of change;
  4. Assemble and assess the contribution story, and challenges to that story;
  5. Seek out additional evidence; and
  6. Revise and strengthen the contribution story.

He loops step six back to step four (the iterative process).

By exploring the contribution the program is making to the observed results, one can address the attribution of the program to the desired results.  He goes on to say that (and since I’m quoting, I’m using the Canadian spellings), “Causality is inferred from the following evidence:

  1. The programme is based on a reasoned theory of change: the assumptions behind why the program is expected to work are sound, are plausible, and are agreed upon by at least some of the key players.
  2. The activities of the programme were implemented.
  3. The theory of change is verified by evidence: the chain of expected results occurred.
  4. Other factors influencing the programme were assessed and were either shown not to have made a significant contribution or, if they did, the relative contribution was recognised.”

He focuses on clearly defining the theory of change, modeling that theory of change, and revisiting it regularly across the life of the program.

 

REFERENCES:

Mayne, J. (1999). Addressing attribution through contribution analysis: Using performance measures sensibly. Available at: dsp-psd.pwgsc.gc.ca/Collection/FA3-31-1999E.pdf

Mayne, J. (2001).  Addressing attribution through contribution analysis: Using performance measures sensibly.  Canadian Journal of Program Evaluation, 16: 1 – 24.  Available at:  http://www.evaluationcanada.ca/secure/16-1-001.pdf

Mayne, J. & Rist, R. (2006). Studies are not enough:  The necessary transformation of evaluation.  Canadian Journal of Program Evaluation, 21: 93-120.  Available at: http://www.evaluationcanada.ca/secure/21-3-093.pdf

Mayne, J. (2008).  Contribution analysis:  An approach to exploring cause and effect. Institutional Learning and Change Initiative, Brief 16.  Available at:  http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief16_Contribution_Analysis.pdf

Ellen Taylor-Powell, UWEX Evaluation Specialist Emeritus, presented via webinar from Rome to the WECT (say west) cohorts today.  She talked about program planning and logic modeling.  The logic model format that Ellen developed was picked up by USDA, now NIFA, and disseminated across Extension.  That dissemination had an amazing effect on Extension, so much so that most Extension faculty know the format and can use it for their programs.

 

Ellen went further today than those resources located through hyperlinks on the UWEX website.  She cited the work by Sue Funnell and Patricia J. Rogers, Purposeful program theory: Effective use of theories of change and logic models.  It was published in March 2011.  Here is what the publisher (Jossey-Bass, an imprint of Wiley) says:

Between good intentions and great results lies a program theory—not just a list of tasks but a vision of what needs to happen, and how. Now widely used in government and not-for-profit organizations, program theory provides a coherent picture of how change occurs and how to improve performance. Purposeful Program Theory shows how to develop, represent, and use program theory thoughtfully and strategically to suit your particular situation, drawing on the fifty-year history of program theory and the authors’ experiences over more than twenty-five years.

Two reviewers whom I have mentioned before, Michael Quinn Patton and E. Jane Davidson, say the following:

“From needs assessment to intervention design, from implementation to outcomes evaluation, from policy formulation to policy execution and evaluation, program theory is paramount. But until now no book has examined these multiple uses of program theory in a comprehensive, understandable, and integrated way. This promises to be a breakthrough book, valuable to practitioners, program designers, evaluators, policy analysts, funders, and scholars who care about understanding why an intervention works or doesn’t work.” —Michael Quinn Patton, author, Utilization-Focused Evaluation

“Finally, the definitive guide to evaluation using program theory! Far from the narrow ‘one true way’ approaches to program theory, this book provides numerous practical options for applying program theory to fulfill different purposes and constraints, and guides the reader through the sound critical thinking required to select from among the options. The tour de force of the history and use of program theory is a truly global view, with examples from around the world and across the full range of content domains. A must-have for any serious evaluator.” —E. Jane Davidson, PhD, Real Evaluation Ltd.

Jane is the author of the book Evaluation Methodology Basics: The nuts and bolts of sound evaluation, published by Sage.  This book “…provides a step-by-step guide for doing a real evaluation.  It focuses on the main kinds of “big picture” questions that evaluators usually need to answer, and how the nature of such questions is linked to evaluation methodology choices.”  And although Ellen didn’t specifically mention this book, it is a worthwhile resource for nascent evaluators.

Two other resources were mentioned today.  One was Jonny Morell’s book, Evaluation in the face of uncertainty: Anticipating surprise and responding to the inevitable, published by Guilford Press.  Ellen also mentioned John Mayne and his work in contribution analysis.  A quick web search provided this reference: Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. ILAC Brief No. 16. Rome, Italy: Institutional Learning and Change (ILAC) Initiative.  I’ll talk more about contribution analysis next week in TIMELY TOPICS.

 

If those of you who listened to Ellen remember other sources that she mentioned, let me know and I’ll put them here next week.

 

I’ve long suspected I wasn’t alone in recognizing that the term impact is used inappropriately in most evaluation work.

Terry Smutlyo sings a song about impact during an outcome mapping seminar he conducted.  Terry Smutlyo is the Director of Evaluation at the International Development Research Centre, Ottawa, Canada.  He ought to know a few things about evaluation terminology.  He has two versions of this song, Impact Blues, on YouTube; his comments speak to this issue.  Check it out.

 

Just a gentle reminder to use your words carefully.  Make sure everyone knows what you mean and that everyone at the table agrees with the meaning you use.

 

This week the post is  short.  Terry says it best.

Next week I’ll be at the American Evaluation Association annual meeting in Anaheim, CA, so no post.  No Disneyland visit either…sigh

I am reading the book Eaarth, by Bill McKibben (a NY Times review is here).  He writes about making a difference in the world on which we live.  He provides numerous examples, all of which have happened in the 21st century, none of them positive or encouraging.  He makes the point that the place in which we live today is not, and never will be again, like the place in which we lived when most of us were born.  He talks about not saving the Earth for our grandchildren but rather about how our parents needed to have done things to save the Earth for them–that it is too late for the grandchildren.  Although this book is very discouraging, it got me thinking.

 

Isn’t making a difference what we as Extension professionals strive to do?

Don’t we, like McKibben, need criteria to determine what that difference could be and what it would look like?

And if we have those criteria well established, won’t we be able to make a difference, hopefully a positive one (think hand washing here)?  And, like this graphic, won’t that difference be worth the effort we have put into the attempt?  Especially if we thoughtfully plan how to determine what that difference is?

 

We might not be able to recover the Earth the way it was when most of us were born (according to McKibben, we won’t), but I think we can still make a difference–a positive difference–in the lives of the people with whom we work.  That is an evaluative opportunity.

A colleague asks for advice on handling evaluation stories so that they don’t get brushed aside as mere anecdotes.  She goes on to say of the AEA365 blog post she read, “I read the steps to take (hot tips), but don’t know enough about evaluation, perhaps, to understand how to apply them.”  Her question raises an interesting topic.  Much of what Extension does can be captured in stories (i.e., qualitative data) rather than in numbers (i.e., quantitative data).  Dick Krueger, former Professor and Evaluation Leader (read: specialist) at the University of Minnesota, has done a lot of work in the area of using stories as evaluation.  Today’s post summarizes his work.

 

At the outset, Dick asks the following question:  What is the value of stories?  He provides these three answers:

  1. Stories make information easier to remember.
  2. Stories make information more believable.
  3. Stories can tap into emotions.

There are all types of stories.  The type we are interested in for evaluation purposes are organizational stories.  Organizational stories can do the following things for an organization:

  1. Depict culture
  2. Promote core values
  3. Transmit and reinforce the culture
  4. Provide instruction to employees
  5. Motivate, inspire, and encourage

He suggests six common types of organizational stories:

  1. Hero stories  (someone in the organization who has done something beyond the normal range of achievement)
  2. Success stories (highlight organizational successes)
  3. Lessons learned stories (what major mistakes and triumphs teach the organization)
  4. “How it works around here” stories (highlight core organizational values reflected in actual practice)
  5. “Sacred bundle” stories (a collection of stories that together depict the culture of an organization; core philosophies)
  6. Training and orientation stories (assist new employees in understanding how the organization works)

To use stories as evaluation, the evaluator needs to consider how stories might be used.  That is, do they depict how people experience the program?  Do they illuminate program outcomes?  Do they provide insight into program processes?

You (as evaluator) need to think about how the story fits into the evaluation design (think logic model; program planning).  Ask yourself these questions:  Should you use stories alone?  Should you use stories that lead into other forms of inquiry?  Should you use stories that augment/illustrate results from other forms of inquiry?

You need to establish criteria for stories.  Rigor can be applied to stories even though the data are narrative.  Criteria include the following:  Is the story authentic–is it truthful?  Is the story verifiable–is there a trail of evidence back to the source of the story?  Is there a need to consider confidentiality?  What was the original intent–the purpose behind the original telling?  And finally, what does the story represent–other people or locations?

You will need a plan for capturing the stories.  Ask yourself these questions:  Do you need help capturing the stories?  What strategy will you use for collecting the stories?  How will you ensure documentation and record keeping?  (Sequence the questions; write them down; note the type–set up, conversational, etc.)  You will also need a plan for analyzing and reporting the stories, as you, the evaluator, are responsible for finding meaning.