Last weekend, I was in Florida visiting my daughter at Eckerd College. The College was offering an Environmental Film Festival, and I had the good fortune to see Green Fire, a film about Aldo Leopold and the land ethic. I had seen it at OSU and was impressed because it was not all doom and gloom; rather, it celebrated Aldo Leopold as one of the three leading early conservationists (the other two being John Muir and Henry David Thoreau). Dr. Curt Meine, a conservation biologist who narrates the film, was leading the discussion again; I had heard him at OSU. Arriving at the showing early, I was able to chat with him about the film and its effects. I asked him how he knew he was being effective. His response was to tell me about the new memberships in the Foundation, the number of showings, and the size of the audience seeing the film. Appropriate responses for my question. What I really wanted to know was how he knew he was making a difference. That is a different question; one which talks about change. Change is what programs like Green Fire are all about. It is what Aldo Leopold was all about (read A Sand County Almanac to understand Leopold’s position).

 

Change is what evaluation is all about. But did I ask the right question? How could I have phrased it differently to get at what change had occurred in the viewers of the film? Did new memberships in the Foundation demonstrate change? Knowing what question to ask is important for program planners as well as evaluators. There are often multiple levels of questions that could be asked–individual, programmatic, organizational, regional, national, global. Are they all equally important? Do they provide a means for gathering pertinent data? How are you going to use these data once you’ve gathered them? How carefully do you think about the questions you ask when you craft your logic model? When you draft a survey? When you construct questions for focus groups? Asking the right question will yield relevant answers. It will show you what difference you’ve made in the lives of your target audience.

 

Oh, and if you haven’t seen the film, Green Fire, or read the book, A Sand County Almanac–I highly recommend them.

I came across this quote from Viktor Frankl today (thanks to a colleague)

“…everything can be taken from a man (sic) but one thing: the last of the human freedoms – to choose one’s attitude in any given set of circumstances, to choose one’s own way.” Viktor Frankl (Man’s Search for Meaning – p.104)

I realized that,  especially at this time of year, attitude is everything–good, bad, indifferent–the choice is always yours.

How we choose to approach anything depends upon our previous experiences–what I call personal and situational bias. Sadler* has three classifications for these biases. He calls them value inertias (unwanted distorting influences which reflect background experience), ethical compromises (actions for which one is personally culpable), and cognitive limitations (not knowing, for whatever reason).

When we approach an evaluation, our attitude leads the way. If we are reluctant, if we are resistant, if we are excited, if we are uncertain–all these approaches reflect where we’ve been, what we’ve seen, what we have learned, what we have done (or not). We can make a choice about how to proceed.

The American Evaluation Association (AEA) has a long history of supporting difference. That value is embedded in the guiding principles. The two principles which address supporting differences are:

  • Respect for People:  Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
  • Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

AEA also has developed a Cultural Competence statement.  In it, AEA affirms that “A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation.”

Both of these documents provide a foundation for the work we do as evaluators, and both relate to our personal and situational biases. Considering them as we make the choice about attitude will help minimize the biases we bring to our evaluation work. The evaluative question from all this: When have your personal and situational biases interfered with your work in evaluation?

Attitude is always there–and it can change.  It is your choice.


Sadler, D. R. (1981). Intuitive data processing as a potential source of bias in naturalistic evaluations. Educational Evaluation and Policy Analysis, 3, 25-31.

I’m involved in evaluating a program that is still developing as it evolves. There is some urgency to get predetermined, clear, and measurable outcomes to report to the administration. Typically, I wouldn’t resist this mandate (see the resistance post); only this program doesn’t lend itself to that approach. Because this program is developing as it is implemented, it can’t easily be rolled out to all 36 counties in Oregon at once, as much as administration would love to see that happen. So what can we do?

We can document the principles that drive the program and use them to stage the implementation across the state.

We can identify the factors that tell us that the area is ready to implement the program (i.e., the readiness factors).

We can share lessons learned with key stakeholders in potential implementation areas.

These are the approaches that Michael Patton’s Developmental Evaluation advocates. Michael says, “Developmental evaluation is designed to be congruent with and nurture developmental, emergent, innovative, and transformative processes.” I had the good fortune to talk with Michael about this program in light of these processes. He indicated that identifying principles, not a model, supports developmental evaluation and a program in development. By using underlying principles, we inform expansion. Can these principles be coded? Yes. Are they outcome indicators? Possibly. Are they outcome indicators in the summative sense of the word? Nope. Not even close. These principles, however, can help the program people roll out the next phase/wave of the program.

As an evaluator employing developmental evaluation, do I ignore what is happening on the ground–at each phase of the program implementation? Not a chance. I need to encourage the program people at that level to identify clear and measurable outcomes–because from those clear and measurable outcomes will come the principles needed for the next phase. (This is a good example of the complexity concepts that Michael talks about in DE and that are the foundation for systems thinking.) The readiness factors will also become clear when looking at individual sites. From this view, we can learn a lot–we can apply what we have learned and, hopefully, avoid similar mistakes. Will mistakes still occur? Yes. Is it important that those lessons are heeded, shared with administrators, and used to identify readiness factors when the program is going to be implemented in a new site? Yes. Is this process filled with ambiguity? You bet. No one said it would be easy to make a difference.

We are learning as we go–that is the developmental aspect of this evaluation and this program.

Ellen Taylor-Powell, UWEX Evaluation Specialist Emeritus, presented via webinar from Rome to the WECT (say west) cohorts today.  She talked about program planning and logic modeling.  The logic model format that Ellen developed was picked up by USDA, now NIFA, and disseminated across Extension.  That dissemination had an amazing effect on Extension, so much so that most Extension faculty know the format and can use it for their programs.

 

Ellen went further today than the resources available through hyperlinks on the UWEX website. She cited the work by Sue Funnell and Patricia J. Rogers, Purposeful program theory: Effective use of theories of change and logic models, published in March 2011. Here is what the publisher (Jossey-Bass, an imprint of Wiley) says:

Between good intentions and great results lies a program theory—not just a list of tasks but a vision of what needs to happen, and how. Now widely used in government and not-for-profit organizations, program theory provides a coherent picture of how change occurs and how to improve performance. Purposeful Program Theory shows how to develop, represent, and use program theory thoughtfully and strategically to suit your particular situation, drawing on the fifty-year history of program theory and the authors’ experiences over more than twenty-five years.

Two reviewers whom I have mentioned before, Michael Quinn Patton and E. Jane Davidson, say the following:

“From needs assessment to intervention design, from implementation to outcomes evaluation, from policy formulation to policy execution and evaluation, program theory is paramount. But until now no book has examined these multiple uses of program theory in a comprehensive, understandable, and integrated way. This promises to be a breakthrough book, valuable to practitioners, program designers, evaluators, policy analysts, funders, and scholars who care about understanding why an intervention works or doesn’t work.” —Michael Quinn Patton, author, Utilization-Focused Evaluation

“Finally, the definitive guide to evaluation using program theory! Far from the narrow ‘one true way’ approaches to program theory, this book provides numerous practical options for applying program theory to fulfill different purposes and constraints, and guides the reader through the sound critical thinking required to select from among the options. The tour de force of the history and use of program theory is a truly global view, with examples from around the world and across the full range of content domains. A must-have for any serious evaluator.” —E. Jane Davidson, PhD, Real Evaluation Ltd.

Jane is the author of the book Evaluation Methodology Basics: The nuts and bolts of sound evaluation, published by Sage. This book “…provides a step-by-step guide for doing a real evaluation. It focuses on the main kinds of ‘big picture’ questions that evaluators usually need to answer, and how the nature of such questions is linked to evaluation methodology choices.” And although Ellen didn’t specifically mention this book, it is a worthwhile resource for nascent evaluators.

Two other resources were mentioned today. One was Jonny Morell’s book, Evaluation in the face of uncertainty: Anticipating surprise and responding to the inevitable, published by Guilford Press. Ellen also mentioned John Mayne and his work in contribution analysis. A quick web search provided this reference: Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. ILAC Brief No. 16. Rome, Italy: Institutional Learning and Change (ILAC) Initiative. I’ll talk more about contribution analysis next week in TIMELY TOPICS.

 

If those of you who listened to Ellen remember other sources that she mentioned, let me know and I’ll put them here next week.

 

Last week, a colleague and I led two 20-person cohorts in a two-day evaluation capacity building event. This activity was the launch (without the benefit of champagne) of a 17-month-long experience in which the participants will learn new evaluation skills and then be able to serve as resources for their colleagues in their states. This training is the brainchild of the Extension Western Region Program Leaders group. They believe that this approach will be economical and provide significant, substantive information about evaluation to the participants.

What Jim and I did last week was work to provide, hopefully, a common introduction to evaluation. The event was not meant to disseminate the vast array of evaluation information; we wanted everyone to have a similar starting place. It was not a train-the-trainer event, so common in Extension. The participants were at different places in their experience and understanding of program evaluation–some were seasoned, long-time Extension faculty, some were mid-career, and some were brand new to Extension and the use of evaluation. All were Extension faculty from western states. And although evaluation can involve programs, policies, personnel, products, performance, processes, etc., these two days focused on program evaluation.

 

It occurred to me that it would be useful to talk about what evaluation capacity building (ECB) is and what resources are available to build capacity. Perhaps the best place to start is with the Preskill and Russ-Eft book, Building Evaluation Capacity.

This volume is filled with summaries of evaluation points, and there are activities to reinforce those points. Although it is a comprehensive resource, it covers key points briefly, and there are other resources that are valuable for understanding the field of capacity building. For example, Don Compton and his colleagues, Michael Baizerman and Stacey Stockdill, edited a New Directions for Evaluation volume (No. 93) that addresses the art, craft, and science of ECB. ECB is often viewed as a context-dependent system of processes and practices that help instill quality evaluation skills in an organization and its members. The long-term outcome of any ECB is the ability to conduct a rigorous evaluation as part of routine practice. That is our long-term goal–conducting rigorous evaluations as a part of routine practice.

 

Although not exhaustive, below are some ECB resources and some general evaluation resources (some of my favorites, to be sure).

 

ECB resources:

Preskill, H., & Russ-Eft, D. (2005). Building evaluation capacity. Thousand Oaks, CA: Sage.

Compton, D. W., Baizerman, M., & Stockdill, S. H. (Eds.). (2002). The art, craft, and science of evaluation capacity building. New Directions for Evaluation, No. 93. San Francisco, CA: Jossey-Bass.

Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443-459.

General evaluation resources:

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (2012). Essentials of utilization-focused evaluation. Thousand Oaks, CA: Sage.

Hopefully, the technical difficulties with images are no longer a problem, and I will be able to post the answers to the history quiz as well as the entry I had hoped to share last week. So, as promised, here are the answers to the quiz I posted the week of July 5. The keyed responses are in BOLD.

1. Michael Quinn Patton, author of Utilization-Focused Evaluation, the new book Developmental Evaluation, and the classic Qualitative Evaluation and Research Methods.

2. Michael Scriven is best known for his concept of formative and summative evaluation. He has also advocated that evaluation is a transdiscipline. He is the author of the Evaluation Thesaurus.

3. Hallie Preskill is the co-author (with Darlene Russ-Eft) of Building Evaluation Capacity.

4. Robert E. Stake has advanced work in case study and is the author of the books Multiple Case Study Analysis and The Art of Case Study Research.

5. David M. Fetterman is best known for his advocacy of empowerment evaluation and the book of that name, Foundations of Empowerment Evaluation.

6. Daniel Stufflebeam developed the CIPP (context, input, process, product) model, which is discussed in the book Evaluation Models.

7. James W. Altschuld is the go-to person for needs assessment. He is the editor of the Needs Assessment Kit (or everything you wanted to know about needs assessment and didn’t know where to find the answer). He is also the co-author, with Belle Ruth Witkin, of two needs assessment books.

8. Jennifer C. Greene is the current President of the American Evaluation Association and the author of a book on mixed methods.

9. Ernest R. House is a leader in the work of evaluation policy and is the author of an evaluation novel, Regression to the Mean.

10. Lee J. Cronbach is a pioneer in education evaluation and the reform of that practice.  He co-authored with several associates the book, Toward Reform of Program Evaluation .

11. Ellen Taylor-Powell is the former Evaluation Specialist at the University of Wisconsin Extension Service and is credited with developing the logic model format later adopted by the USDA for use by the Extension Service. To go to the UWEX site, click on the words “logic model”.

12. Yvonna Lincoln, with her husband Egon Guba (see below), co-authored the book Naturalistic Inquiry. She is currently the co-editor (with Norman K. Denzin) of the Handbook of Qualitative Research.

13. Egon Guba, with his wife Yvonna Lincoln, is the co-author of Fourth Generation Evaluation.

14. Blaine Worthen has championed certification for evaluators. He, with Jody L. Fitzpatrick and James R. Sanders, co-authored Program Evaluation: Alternative Approaches and Practical Guidelines.

15.  Thomas A. Schwandt, a philosopher at heart who started as an auditor, has written extensively on evaluation ethics. He is also the co-author (with Edward S. Halpern) of Linking Auditing and Metaevaluation.

16. Peter H. Rossi, co-author with Howard E. Freeman and Mark W. Lipsey of Evaluation: A Systematic Approach, is a pioneer in evaluation research.

17. W. James Popham, a leader in educational evaluation, authored the volume Educational Evaluation.

18. Jason Millman was a pioneer of teacher evaluation and the author of the Handbook of Teacher Evaluation.

19. William R. Shadish co-authored (with Thomas Cook and Laura C. Leviton) Foundations of Program Evaluation: Theories of Practice. His work in theories of evaluation practice earned him the Paul F. Lazarsfeld Award for Evaluation Theory from the American Evaluation Association in 1994.

20. Laura C. Leviton (co-author with Will Shadish and Tom Cook–see above–of Foundations of Program Evaluation: Theories of Practice) has pioneered work in participatory evaluation.


Although I’ve only listed 20 leaders, movers and shakers, in the evaluation field, there are others who also deserve mention: John Owen, Deb Rog, Mark Lipsey, Mel Mark, Jonathan Morell, Midge Smith, Lois-Ellin Datta, Patricia Rogers, Sue Funnell, Jean King, Laurie Stevahn, John McLaughlin, Michael Morris, Nick Smith, Don Dillman, and Karen Kirkhart, among others.

If you want to meet the movers and shakers, I suggest you attend the American Evaluation Association annual meeting.  In 2011, it will be held in Anaheim CA, November 2 – 5; professional development sessions are being offered October 31, November 1 and 2, and also November 6.  More conference information can be found here.


Hi everybody–it is time for another TIMELY TOPIC.  This week’s topic is about using pretest/posttest evaluation or a post-then-pre evaluation.

There are many considerations for using these designs.  You have to look at the end result and decide what is most appropriate for your program.  Some of the key considerations include:

  • the length of your program;
  • the information you want to measure;
  • the factors influencing participants’ responses; and
  • available resources.

Before explaining the above four factors, let me urge you to read on this topic. There are a few resources (yes, print…) I want to pass your way.

  1. Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin. (The classic book on research and evaluation designs.)
  2. Rockwell, S. K., & Kohn, H. (1989). Post-then-pre evaluation. Journal of Extension [On-line], 27(2). Available at: http://www.joe.org/joe/1989summer/a5.htm (A seminal JoE paper explaining the post-then-pre test.)
  3. Nimon, K., Zigarmi, D., & Allen, J. (2011). Measures of program effectiveness based on retrospective pretest data: Are all created equal? American Journal of Evaluation, 32, 8-28. (A 2011 publication with an extensive bibliography.)

Let’s talk about considerations.

Length of program.

For a pre/post test, you want a program that is long–more than a day. Otherwise you risk introducing a desired response bias and the threats to internal validity that Campbell and Stanley identify–specifically the threats called history, maturation, testing, and instrumentation, along with a possible regression-to-the-mean threat, though that is only a possible source of concern. These threats to internal validity assume no randomization and a one-group design, typical for Extension programs and other educational programs. Post-then-pre works well for short programs, a day or less, and tends to control for response shift and desired response bias. There may still be threats to internal validity.

Information you want to measure.

If you want to know a participant’s specific knowledge, a post-then-pre cannot provide you with that information, because you cannot test something you cannot unknow. The traditional pre/post can focus on specific knowledge, e.g., which food is highest in vitamin C in a list that includes apricot, tomato, strawberry, and cantaloupe. (Answer: strawberry.) If you want agreement/disagreement with general knowledge (e.g., “I know what the key components of strategic planning are”), the post-then-pre works well. Confidence, behaviors, skills, and attitudes can all be easily measured with a post-then-pre.

Factors influencing participants response.

I mentioned threats to internal validity above. These factors all influence participants’ responses. If there is a long time between the pretest and the posttest, participants can be affected by history (a tornado prevents attendance at the program); maturation (especially true with programs for children–they grow up); testing (having taken the pretest, the posttest scores will be better); and instrumentation (the person administering the posttest administers it differently than the pretest was administered). Participants’ desire to please the program leader/evaluator, called desired response bias, also affects their responses.

Available resources.

Extension programs (as well as many other educational programs) are affected by the availability of resources (time, money, personnel, venue, etc.). If you only have a certain amount of time, a certain number of people who can administer the evaluation, or a set amount of money, you will need to consider which approach to evaluation you will use.

The idea is to get usable, meaningful data that accurately reflect the work that went into the program.
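If it helps to see what the numbers from either design might look like at analysis time, here is a minimal sketch in Python. It assumes hypothetical 5-point ratings from the same ten participants and that the scipy library is available for a paired t-test; the names and values are made up for illustration, not a prescribed analysis.

```python
# Minimal, hypothetical sketch: summarizing paired scores from either a
# pre/post or a post-then-pre (retrospective pretest) design.
from statistics import mean

from scipy import stats  # for the paired t-test (scipy.stats.ttest_rel)

# "Before" ratings: collected before the program (pre/post design) or
# recalled at the end of the program (post-then-pre design).
before = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
# "After" ratings: collected at the end of the program in both designs.
after = [4, 4, 3, 3, 5, 4, 3, 4, 3, 4]

# A simple, reportable summary: the average change per participant.
change = mean(a - b for a, b in zip(after, before))
print(f"Mean change: {change:.2f} points on a 5-point scale")

# A paired t-test asks whether that average change differs from zero.
# With one group and no randomization, the threats to internal validity
# discussed above still apply; this only describes the observed shift.
t_stat, p_value = stats.ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Whichever design you choose, the same small summary (mean change plus a simple test) is usually enough to report the shift to stakeholders; the harder work is ruling out the validity threats discussed above.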

I’ve talked about how each phase of a logic model has evaluative activities. I’ve probably even alluded to the fact that needs assessment is the evaluative activity for the phase called situation (see the turquoise area on the left end of the image below).

What I haven’t done is talk about the why, what, and how of needs assessment (NA). I also haven’t talked about the utilization of the findings of a needs assessment–which is what gives a needs assessment meaning.

OK.  So why is a NA conducted?  And what is a NA?

Jim Altschuld is my go-to person when it comes to questions about needs assessment.  He recently edited a series of books on the topic.

Although Jim is my go-to person, Belle Ruth Witkin (a colleague, friend, and collaborator of Jim Altschuld) says in the preface to their co-authored volume (Witkin & Altschuld, 1995) that the most effective way to decide how to divide the (often scarce) resources among the demands (read: programs) is to conduct a needs assessment when the planning for the use of those resources begins.

Book 1 of the kit provides an overview. In that volume, Jim defines what a needs assessment is: “Needs assessment is the process of identifying needs, prioritizing them, making needs-based decisions, allocating resources, and implementing actions in organizations to resolve problems underlying important needs” (p. 20). Altschuld states that there are many models for assessing needs and provides citations for those models. I think the most important aspect of this first volume is the presentation of the phased model developed by Belle Ruth Witkin in 1984 and revised by Altschuld and Witkin in their 1995 and 2000 volumes. Those phases are preassessment, assessment, and postassessment. They divide those three phases into three levels–primary, secondary, and tertiary–each level targeting a different group of stakeholders. This volume also discusses the why and the how. Subsequent volumes go into more detail: volume 2 discusses phase I (getting started); volume 3 discusses phase II (collecting data); volume 4 discusses analysis and priorities; and volume 5 discusses phase III (taking action).

Laurie Stevahn and Jean A. King are the authors of that final volume. In chapter 3, they discuss strategies for the action plan using facilitation procedures that promote positive relationships, develop shared understanding, prioritize decisions, and assess progress. They warn of interpersonal conflict and caution against roadblocks that impede change efforts. They also promote the development of evaluation activities at the onset of the NA, because that helps ensure the use of the findings.

Needs assessment is a political experience. Someone (or some group) will feel disenfranchised, lose resources, or have programs ended. These activities create hard feelings and resentments. These considerations need to be identified and discussed at the beginning of the process. It is like the elephant and the blind people–everyone has an image of what the creature is, and there may or may not be consensus; yet for the NA to be successful, consensus is important. Without it, the data will sit on someone’s shelf or in someone’s computer. Not useful.

I’ve mentioned language use before.

I’ll talk about it today and probably again.

What the word–any word–means is the key to a successful evaluation.

Do you know what it means? Or do you think you know what it means? 

How do you find out if what you think it means is what your key funder (a stakeholder) thinks it means? Or what the participants (target audience) think it means? Or what any other stakeholder (partners, for example) thinks it means…

You ask them.

You ask them BEFORE the evaluation begins.  You ask them BEFORE you have implemented the program.  You ask them when you plan the program.

During program planning, I bring to the table relevant stakeholders–folks similar to and different from those who will be the recipients of the program.  I ask them this evaluative question: “If you participated in this program, how will you know that the program is successful?  What has to happen/change to know that a difference has been made?”

Try it–the answers are often revealing, informative, and enlightening. They are often not the answers you expected. Listen to those stakeholders. They have valuable insights. They actually know something.

Once you have those answers, clarify any and all terminology so that everyone is on the same page. What something means to you may mean something completely different to someone else.

Impact is one of those words–it is both a noun and a verb.  Be careful how you use it and how it is used.  Go to a less loaded word–like results or effects.  Talk about measurable results that occur within a certain time frame–immediately after the program; several months after the program; several years after the program–depending on your program.  (If you are a forester, you may not see results for 40 years…)

What are standard evaluation tools?  What knowledge do you need to conduct an evaluation effectively and efficiently?  For this post and the next two, I’m going to talk about just that.

This post is about planning programs. 

The next one will be about implementing, monitoring, and delivering the evaluation of that program.

The third one will be about utilizing the findings of that program evaluation.

Today–program planning.  How does program planning relate to program evaluation?

A lot of hours go into planning a program. Questions that need to be answered include, among others:

  • What expertise is needed?
  • What is the content focus?
  • What venue will be utilized?
  • Who is the target audience?
  • How many can you accommodate?
  • What will you charge?
  • And the list of questions goes on…talk to any event planner–they will tell you, planning a program is difficult.

Although you might think that these are planning questions, they are also evaluation questions. They point the program planner to the outcome of the program in the context in which the program is planned. Yet evaluation is often left out of that planning. It is one detail that gets lost in all the rest–until the end. Unfortunately, retrofitting an evaluation after the program has already run often results in spurious data, leading to specious results, unusable findings, and a program that can’t be replicated. What’s an educator to do?

The tools that help in program planning are ones you have seen and probably used before:  logic models, theories of change, and evaluation proposals.

Logic models have already been the topic of this blog. Theories of change have been mentioned. Evaluation proposals are a new topic. More and more, funding agencies want an evaluation plan. Some provide a template–often a modified logic model; some ask specifically for a program-specific logic model. Detailing how your program will bring about change and what change is expected is all part of an evaluation proposal. A review of logic models, theories of change, and the program theory related to your proposed program will help you write an evaluation proposal.

Keep in mind that you may be writing for a naive audience, an audience that isn’t as knowledgeable as you in your subject matter OR in the evaluation process. A simple evaluation proposal will go a long way toward getting and keeping all stakeholders on the same page.