Think about it. How does what is happening in the world affect your program? Your outcomes? Your goals?
When was the last time you applied that peripheral knowledge to what you are doing? Informational literacy is being aware of what is happening in the world. Knowing this information, even peripherally, adds to your evaluation capacity.
Now, this is not advocating that you need to read the NY Times daily (although I’m sure they would really like to increase their readership); rather, it is advocating that you recognize that none of your programs (whether little p or big P) occurs in isolation. What your participants know affects how the program is implemented. What you know affects how the program is planned. That knowledge also affects the data collection, data analysis, and reporting. This is especially true for programs developed and delivered in the community, as Extension programs are.
Let me give you a real-life example. I returned from Tucson, AZ, and the capstone event for an evaluation capacity program I was leading. The event was an outstanding success–not only did it identify what was learned and what still needed to be learned, it also demonstrated the value of peer learning. I was psyched. I was energized. Then, 24 hours after returning home, I was in an automobile accident. (The car was totaled–I no longer have a car; my youngest daughter and I experienced no serious injuries.) Notice of the accident was published in the local paper the following day. Several people saw the announcement, expressed their concern, and asked how they could help. Now this is a very small local event that had a serious effect on me and my work. (If I hadn’t had last week’s post already written, I don’t know if I could have written it.) Solving even simple problems now takes twice as long (at least). That informational literacy influenced those around me: their knowing changed their behavior toward me. Think of what September 11, 2001 did to people’s behavior; think about what the Pope’s resignation is doing to people’s behavior. Informational literacy. It is all evaluative. Think about it.
Graphic URL: http://www.otterbein.edu/resources/library/information_literacy/index.htm
The topic of complexity has appeared several times over the last few weeks. Brian Pittman wrote about it in an AEA365 post; Charles Gasper used it as the topic of his most recent blog post. Much food for thought, especially as it relates to the work evaluators do.
Simultaneously, Harold Jarche talks about connections. To me, connections and complexity are two sides of the same coin. Something which is complex typically has multiple parts; something which has multiple parts has parts that are connected to one another. Certainly the work done by evaluators has multiple parts; certainly those parts are connected to each other. The challenge we face is logically defending those connections and, in doing so, making the parts explicit. Sound easy? It’s not.
That’s why I stress modeling your project before you implement it. If the project is modeled, the model often leads you to discover that what you thought would happen as a result of what you do won’t happen. You then have time to fix the model, fix the program, and fix the evaluation protocol. Even if your model is defensible and logical, you may still find that the program doesn’t get you where you want to go. Jonny Morell writes about this in his book, Evaluation in the face of uncertainty. There are worse things than having to fix the program or the evaluation protocol before implementation. Keep in mind that connections are key; complexity is everywhere. Perhaps you’ll have an Aha! moment.
I’ll be on holiday and there will not be a post next week. Last week was an odd week–an example of complexity and connections leading to unanticipated outcomes.
Ellen Taylor-Powell, UWEX Evaluation Specialist Emeritus, presented via webinar from Rome to the WECT (say west) cohorts today. She talked about program planning and logic modeling. The logic model format that Ellen developed was picked up by USDA, now NIFA, and disseminated across Extension. That dissemination had an amazing effect on Extension, so much so that most Extension faculty know the format and can use it for their programs.
Today Ellen went beyond the resources available through hyperlinks on the UWEX website. She cited the work of Sue Funnell and Patricia J. Rogers, Purposeful program theory: Effective use of theories of change and logic models, published in March 2011. Here is what the publisher (Jossey-Bass, an imprint of Wiley) says:
Between good intentions and great results lies a program theory—not just a list of tasks but a vision of what needs to happen, and how. Now widely used in government and not-for-profit organizations, program theory provides a coherent picture of how change occurs and how to improve performance. Purposeful Program Theory shows how to develop, represent, and use program theory thoughtfully and strategically to suit your particular situation, drawing on the fifty-year history of program theory and the authors’ experiences over more than twenty-five years.
Two reviewers whom I have mentioned before, Michael Quinn Patton and E. Jane Davidson, say the following:
“From needs assessment to intervention design, from implementation to outcomes evaluation, from policy formulation to policy execution and evaluation, program theory is paramount. But until now no book has examined these multiple uses of program theory in a comprehensive, understandable, and integrated way. This promises to be a breakthrough book, valuable to practitioners, program designers, evaluators, policy analysts, funders, and scholars who care about understanding why an intervention works or doesn’t work.” —Michael Quinn Patton, author, Utilization-Focused Evaluation
“Finally, the definitive guide to evaluation using program theory! Far from the narrow ‘one true way’ approaches to program theory, this book provides numerous practical options for applying program theory to fulfill different purposes and constraints, and guides the reader through the sound critical thinking required to select from among the options. The tour de force of the history and use of program theory is a truly global view, with examples from around the world and across the full range of content domains. A must-have for any serious evaluator.” —E. Jane Davidson, PhD, Real Evaluation Ltd.
Jane is the author of the book Evaluation Methodology Basics: The nuts and bolts of sound evaluation, published by Sage. This book “…provides a step-by-step guide for doing a real evaluation. It focuses on the main kinds of ‘big picture’ questions that evaluators usually need to answer, and how the nature of such questions is linked to evaluation methodology choices.” Although Ellen didn’t specifically mention this book, it is a worthwhile resource for nascent evaluators.
Two other resources were mentioned today. One was Jonny Morell’s book, Evaluation in the face of uncertainty: Anticipating surprise and responding to the inevitable, published by Guilford Press. Ellen also mentioned John Mayne and his work on contribution analysis. A quick web search provided this reference: Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. ILAC Brief No. 16. Rome, Italy: Institutional Learning and Change (ILAC) Initiative. I’ll talk more about contribution analysis next week in TIMELY TOPICS.
If those of you who listened to Ellen remember other sources that she mentioned, let me know and I’ll put them here next week.
Hi everyone–it is the third week in April and time for a TIMELY TOPIC! (I was out of town last week.)
Recently, I was asked: Why should I plan my evaluation strategy in the program planning stage? Isn’t it good enough to just ask participants if they are satisfied with the program?
Good question. This is the usual scenario: You have something to say to your community. The topic has research support and is timely. You think it would make a really good new program (or a revision of a current program). So you plan the program.
Do you plan the evaluation at the same time? The keyed response is YES. The usual response is something like, “Are you kidding?” No, not kidding. When you plan your program is the time to plan your evaluation.
Unfortunately, my experience is that many (most) faculty, when planning or revising a program, fail to think about evaluating that program at the planning stage. Yet it is at the planning stage that you can clearly and effectively identify what you think will happen and what will indicate that your program has made a difference. Remember, the evaluative question isn’t, “Did the participants like the program?” The evaluative question is, “What difference did my program make in the lives of the participants–and, if possible, in the economic, environmental, and social conditions in which they live?” That is the question you need to ask yourself when you plan your program. It also happens to be the evaluative question for the long-term outcomes in a logic model.
If you ask this question before you implement your program, you may find that you cannot gather data to answer it. That gives you the chance to look at what change (or changes) you can measure. Can you measure changes in behavior? That answers the question, “What difference did this program make in the way the participants act in the context presented in the program?” Or perhaps, “What change occurred in what the participants know about the program topic?” These are the evaluative questions for the short-term and intermediate-term outcomes in a logic model. (As an aside, there are evaluative questions that can be asked at every stage of a logic model, as the sketch below illustrates.)
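For readers who like to see the idea laid out concretely, here is a minimal, purely illustrative sketch in Python (my choice of notation, not anything Ellen or the logic model literature prescribes) pairing common logic model stages with sample evaluative questions. The stage names and question wording are assumptions for illustration only, paraphrased from this post.

# Illustrative sketch only: common logic model stages mapped to sample
# evaluative questions paraphrased from this post. Stage names and wording
# are illustrative assumptions, not an official template.
evaluative_questions = {
    "inputs": "Were the planned resources (staff, time, funds) actually available?",
    "activities": "Was the program delivered as planned?",
    "outputs": "Who was reached, and how many participated?",
    "short-term outcomes": "What change occurred in what participants know about the topic?",
    "intermediate outcomes": "What difference did the program make in how participants act?",
    "long-term outcomes": ("What difference did the program make in the economic, "
                           "environmental, and social conditions in which participants live?"),
}

# Print the planning checklist, one stage per line.
for stage, question in evaluative_questions.items():
    print(f"{stage}: {question}")

The point is not the code; it is that every stage of the model has an evaluative question you can write down at planning time, before any data are collected.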
By thinking about and planning for evaluation at the PROGRAM PLANNING STAGE, you avoid an evaluation that gives you data that cannot be used to support your program. A program you can defend with good evaluation data is a program that has staying power. You also avoid having to retrofit your evaluation to your program. Retrofits, though often possible, may miss important data that could only be gathered by thinking of your outcomes ahead of implementation.
Years ago (back when we beat on hollow logs), evaluations typically asked questions that measured participant satisfaction. You probably still want to know if participants are satisfied with your program. Satisfaction questionnaires may be necessary; they are no longer sufficient. They do not answer the evaluative question, “What difference did this program make?”