CAVEAT: This may be too political for some readers.
Sometimes ideas appear in other blogs that may or may not be directly related to my work in evaluation. Because I read them, I see evaluative relations and think they are important enough to pass along. Today is one of those days. I’ll try to connect the dots between what I read, what I share here, and evaluation. (For those of you who are interested in Connect the Dots, a major event day on climate change and weather on May 5, 2012, go here.)
First, Valerie Williams (AEA365 blog, April 18, 2012) says, “…Many environmental education programs struggle with the question of whether environmental education is a means to an end (e.g. increased stewardship) or an end itself. This question has profound implications for how programs are evaluated, and specifically the measures used to determine program success.”
I think that many educational programs (whether environmentally focused or not) struggle with this question. Is the program a means to an end or the end itself? I am reminded of programs that are instituted for cost savings, whose designers then want those programs evaluated. Means or end?
Williams also offers comments about evaluability assessment, the evaluation task that helps evaluators decide whether to evaluate a new program, especially when that program’s readiness for evaluation is in question. (She provides resources if you are interested.) She offers reasons for conducting an evaluability assessment. Specifically:
- Surfacing disagreements among stakeholders about the program theory, design and/or structure;
- Highlighting the need for changes in program design; and
- Clarifying the type of evaluation most helpful to the program.
Evaluability assessment is a topic for future discussion.
Second, a colleague offered the following CDC reference and noted, “The purpose of this workbook is to help public health program managers, administrators, and evaluators develop an effective evaluation plan in the context of the planning process. It is intended to assist in developing an evaluation plan but is not intended to serve as a complete resource on how to implement program evaluation.” I offer it here because I know that evaluation plans are often added after the program has been implemented. Although its focus is public health programs, one source familiar with this work commented that there is enough in the workbook that can be applied to a variety of settings. Check it out; the link is below.
Next, Nigerian novelist Chimamanda Ngozi Adichie is quoted as saying, “The single story creates stereotypes, and the problem with stereotypes is not that they are untrue, but that they are incomplete. They make one story become the only story.”
Given that
- Extension uses story to evaluate a lot of programs; and
- Story is used to convince legislators of Extension’s value; and
- Story, if done right, is a powerful tool;
Then it behooves us all to remember this: are we using the story because it captures the effect or because it is the only story? If it is the only story, is it promoting a stereotype? Adichie, though a novelist, may be an evaluator at heart.
Finally, there is this quote, also from an AEA365 blog (Steve Mayer): “There are elements of Justice and Injustice everywhere – in society, in reform efforts, and in the evaluation of reform efforts. The choice of outcomes to be assessed is a political act. ‘Noticing progress’ probably takes us further than ‘measuring impact,’ always being mindful of who benefits.”
We are often stuck on “measuring impact”; after all, isn’t that what everyone wants to know? If world peace is the ultimate impact, then what is the likelihood of measuring that? I think that “noticing progress” (i.e., change) will take us much further because of the justice it can capture (or fail to capture, which is telling in itself). And by noticing progress, we can make explicit who benefits.
This runs long today.