What are standard evaluation tools?  What knowledge do you need to conduct an evaluation effectively and efficiently?  For this post and the next two, I’m going to talk about just that.

This post is about planning programs. 

The next one will be about implementing, monitoring, and delivering the evaluation of that program.

The third one will be about utilizing the findings of that program evaluation.

Today–program planning.  How does program planning relate to program evaluation?

A lot of hours go into planning a program.  Questions that need to be answered include, among others:

  • What expertise is needed?
  • What is the content focus?
  • What venue will be utilized?
  • Who is the target audience?
  • How many can you accommodate?
  • What will you charge?
  • And the list of questions goes on…talk to any event planner and they will tell you that planning a program is difficult.

Although you might think that these are planning questions, they are also evaluation questions.  They point the program planner to the outcome of the program in the context in which the program is planned.  Yet evaluation is often left out of that planning.  It is one detail that gets lost in all the rest–until the end.  Unfortunately, retrofitting an evaluation after the program has already run often yields spurious data, leading to specious results, unusable findings, and a program that can’t be replicated.  What’s an educator to do?

The tools that help in program planning are ones you have seen and probably used before:  logic models, theories of change, and evaluation proposals.

Logic models have already been the topic of this blog.  Theories of change have been mentioned.  Evaluation proposals are a new topic.  More and more, funding agencies want an evaluation plan.  Some provide a template–often a modified logic model; some ask specifically for a program-specific logic model.  Detailing how your program will bring about change and what change is expected is all part of an evaluation proposal.  A review of logic models, theories of change, and the program theory related to your proposed program will help you write an evaluation proposal.

Keep in mind that you may be writing for a naive audience, an audience that isn’t as knowledgeable as you are in your subject matter OR in the evaluation process.  A simple evaluation proposal will go a long way toward getting and keeping all stakeholders on the same page.

Sure, you want to know the outcomes resulting from your program.  Sure, you want to know if your program is effective.  Perhaps you will even attempt to answer the question, “So What?” when your program is effective on some previously identified outcome.  All that is important.

My topic today is something that is often overlooked when developing an evaluation–the participant and program characteristics.

Do you know what your participants look like?

Do you know what your program looks like?

Knowing these characteristics may seem unimportant at the outset of the implementation.  As you get to the end, questions will arise–How many females?  How many Asians?  How many over 60?

Demographers typically ask demographic questions as part of the data collection.

Those questions often include the following categories:

  • Gender
  • Age
  • Race/ethnicity
  • Marital status
  • Household income
  • Educational level

Some of those may not be relevant to your program, and you may want to include other general characteristic questions instead.  For example, in a long-term evaluation of a forestry program whose target audience was individuals with wood lots, asking how many acres they owned was important, while marital status did not seem relevant.

Some questions may seem intrusive–household income or age, for example.  In all demographic cases, giving the participant an option not to respond is appropriate.  When these data are reported, report the number of participants who chose not to respond.
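
If it helps to see the bookkeeping, here is a minimal sketch (in Python) of tallying one demographic question while honoring the option not to respond and reporting how many chose it.  The response data, the category labels, and the exact “Prefer not to respond” wording are invented for illustration only.

    from collections import Counter

    PREFER_NOT = "Prefer not to respond"

    # Hypothetical answers to a single age-category question.
    responses = [
        "35-49", "50-64", PREFER_NOT, "65 or over", "35-49",
        PREFER_NOT, "18-34", "50-64", "65 or over", "35-49",
    ]

    counts = Counter(responses)
    non_response = counts.pop(PREFER_NOT, 0)

    print("Age category counts:")
    for category, n in sorted(counts.items()):
        print(f"  {category}: {n}")

    # Report non-response explicitly rather than silently dropping it.
    print(f"  {PREFER_NOT}: {non_response}")
    print(f"Total participants asked: {len(responses)}")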

When characterizing your program, it is sometimes important to know characteristics of the geographic area where the program is being implemented–is it rural, suburban, or urban?  This is especially true when the program is a multisite program.  Location introduces an unanticipated variable that is often not recognized or remembered.

Any variation in the implementation–the number of contact hours, for example, or the number of training modules–should be documented.  The type of intervention is important as well–was the program delivered as a group intervention or individually?  The time of year that the program is implemented may also be important to document.  The time of year may inadvertently introduce a history bias into the study–what is happening in September is different from what is happening in December.

Documenting these characteristics and then defining them when reporting the findings helps readers understand the circumstances surrounding the program implementation.  If the target audience is large, documenting these characteristics can also provide comparison groups–did males do something differently than females?  Did participants over 50 do something differently than participants 49 or under?
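
As a sketch of how documented characteristics become comparison groups, the short Python example below splits an invented set of participant records at age 50 and compares a made-up outcome score.  Every field name and number here is hypothetical; in practice you would substitute your own characteristic fields and outcome measures.

    from statistics import mean

    # Invented participant records: one documented characteristic plus an outcome score.
    participants = [
        {"age": 62, "outcome_score": 4.2},
        {"age": 45, "outcome_score": 3.1},
        {"age": 71, "outcome_score": 4.8},
        {"age": 38, "outcome_score": 3.6},
        {"age": 55, "outcome_score": 4.0},
    ]

    # Comparison groups defined by a documented characteristic (age).
    fifty_plus = [p["outcome_score"] for p in participants if p["age"] >= 50]
    under_fifty = [p["outcome_score"] for p in participants if p["age"] <= 49]

    print(f"Mean outcome, 50 and over: {mean(fifty_plus):.2f} (n={len(fifty_plus)})")
    print(f"Mean outcome, 49 or under: {mean(under_fifty):.2f} (n={len(under_fifty)})")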

Keep in mind, when collecting participant and program characteristic data, that these data help you and the audience to whom you disseminate the findings understand your outcomes and the effect of your program.

A faculty member asked me to provide evaluation support for a grant application.  Without hesitation, I agreed.

I went to the funder’s web site to review what was expected for an evaluation plan.  What was provided was their statement about why evaluation is important.

Although I agree with what is said in that discussion, I think we have a responsibility to go further.  Here is what I know.

Extension professionals evaluate programs because there needs to be some evidence that the inputs for the program–time, money, personnel, materials, facilities, etc.–are being used advantageously and effectively.  Yet there is more to the question, “Why evaluate?” than accountability.  (Michael Patton talks about the various uses to which evaluation findings can be put–see his book on Utilization-Focused Evaluation.)  Programs are evaluated to determine if people are satisfied, if their expectations were met, and whether the program was effective in changing something.

This is what I think.  None of what is stated above addresses the  “so what” part of “why evaluate”.  I think that answering this question (or attempting to) is a compelling reason to justify the effort of evaluating.  It is all very well and good to change people’s knowledge of a topic; it is all very well and good to change people’s behavior related to that topic; and it is all very well and good to have people intend to change (after all, stated intention to change is the best predictor of actual change).  Yet, it isn’t enough.  Being able to answer the “so what” question gives you more information.   And doing that–asking and answering the “so what” question–makes evaluation an everyday activity.   And, who knows.  It may even result in world peace.

Last week I suggested a few evaluation-related resolutions…one I didn’t mention which is easily accomplished is reading and/or contributing to AEA365.  AEA365 is a daily evaluation blog sponsored by the American Evaluation Association.  AEA’s Newsletter says: “The aea365 Tip-a-Day Alerts are dedicated to highlighting Hot Tips, Cool Tricks, Rad Resources, and Lessons Learned by and for evaluators (see the aea365 site here). Begun on January 1, 2010, we’re kicking off our second year and hoping to expand the diversity of voices, perspectives, and content shared during the coming year. We’re seeking colleagues to write one-time contributions of 250-400 words from their own experience. No online writing experience is necessary – you simply review examples on the aea365 Tip-a-Day Alerts site, craft your entry according to the contributions guidelines, and send it to Michelle Baron, our blog coordinator. She’ll do a final edit and upload. If you have questions, or want to learn more, please review the site and then contact Michelle at aea365@eval.org. (updated December 2011)”

AEA365 is a valuable site.  I commend it to you.

Now the topic for today: Data sources–the why and the why not (or the advantages and disadvantages of each source of information).

Ellen Taylor-Powell, Evaluation Specialist at UWEX, has a handout that identifies sources of evaluation data.  These sources are existing information, people, and pictorial records and observations.  Each source has advantages and disadvantages.

The source for the information below is the United Way publication, Measuring Program Outcomes (p. 86).

1.  Existing information, such as Program Records, is

  • Available
  • Accessible
  • Collected from known sources with known methods

Program records can also

  • Be corrupted by the data collection methods
  • Have missing data
  • Omit post-intervention impact data

2.  Another form of existing information is Other Agency Records, which

  • Offer a different perspective
  • May contain impact data

Other agency records may also

  • Be corrupted by the data collection methods
  • Have missing data
  • Be unavailable as a data source
  • Have inconsistent time frames
  • Have case identification difficulties

3.  People are often the main data source and include Individuals and the General Public.  People

  • Have a unique perspective on the experience
  • Are an original source of data
  • General public can provide information when individuals are not accessible
  • Can serve geographic areas or specific population segments

Individuals and the general public  may also

  • Introduce a self-report bias
  • Not be accessible
  • Have limited overall experience

4.  Observations and pictorial records include Trained Observers and Mechanical Measurements, which

  • Can provide information on behavioral skills and practices
  • Supplement self-reports
  • Can be easily quantified and standardized

These sources of data also

  • Are only relevant to physical observation
  • Need data collectors who must be reliably trained
  • Often result in inconsistent data with multiple observers
  • Are affected by the accuracy of testing devices
  • Have limited applicability to outcome measurement

The American Evaluation Association will celebrate the 25th anniversary of its founding in 2011.  Seems like a really good reason to declare 2011 the Year of Evaluation.  Consider it declared!

What can you do to celebrate the Year of Evaluation?  All kidding aside, here are a few evaluation-related suggestions that could be made into New Year’s resolutions:

  1. Post a comment to this blog–a comment can be a question, an idea, an experience, or a thought…the idea is to get a conversation going;
  2. Consult with an evaluation specialist about an evaluation question;
  3. Join the American Evaluation Association–it’s a great resource and it is inexpensive: $80.00 for full members; $30.00 for students;
  4. Use your evaluation findings for something other than reporting your accountability;
  5. Read an evaluation reference–a thread on the AEA LinkedIn site recently asked for the top five evaluation references (I’ve added the link).
  6. Explore a facet of evaluation that you have not used before…always go to a survey to gather your data?  Try focus groups…always use quantitative approaches?  Try qualitative approaches.

Just some ideas–my resolution, you ask?  I’ll continue to blog weekly–please join me!

Happy New Year!

My creative effort this past year (other than my blog) has been to create new and (hopefully) wonderful pie.  This pie is vegetarian, not vegan, and obviously not dairy-free…it contains milk products and coconut.

Today, a bonus post–my gift to you:

WHITE CHRISTMAS PIE

You will need a 9 inch pie crust, fully baked.  (Although I make mine, getting one premade and following the directions for prebaking the crust will also work.)

Enough crushed peppermint candy to cover the pie crust that has cooled to room temperature.  Save about 1 tsp for garnish.

Melt 12 ounces of white chocolate chips over a double boiler.  A double boiler helps keep the chocolate warm and pourable.

Whip to soft peaks, 1 1/2 Cups of whipping cream. Stir in 1/8 tsp of mint or peppermint extract.

Continue whipping until firm peaks form.  SLOWLY fold into the cream, the cool white chocolate.  There will be layers of cooled chocolate throughout the cream.  That is the way it is supposed to be.

Spoon the chocolate mixture into the prepared pie crust.

Freeze for at least one hour or overnight.

Prior to serving, remove pie from the freezer.

Whip 1 Cup of whipping cream until soft peaks form.

Add 1/4 tsp of mint or peppermint extract.

Sift into the cream, 2 Tbs of powdered sugar.

Continue whipping until stiff peaks form.

Spoon over the frozen pie, peaking the whipped cream to look like snow drifts.

Sprinkle with 1 Tbs unsweetened coconut and 1 tsp. crushed peppermint candy (that which you had left over above).

Cut into small slices.  Serves 12.  Happy Holidays!

My wishes to you:  Blessed Solstice.  Merry Christmas.  Happy Kwanzaa.  And the Very Best Wishes for the New Year!

A short post today.

Ellen Taylor-Powell, my counterpart at University of Wisconsin Extension, has posted the following to the Extension Education Evaluation TIG listserv.  I think it is important enough to share here.

When you download this PDF to save a copy, think of where your values come into the model, where others’ values can affect the program, and how you can modify the model to balance those values.

Ellen says:  “I just wanted to let everyone know that the online logic model course, ‘Enhancing Program Performance with Logic Models,’ has been produced as a PDF in response to requests from folks without easy or affordable internet access or with different learning needs.  The PDF version (216 pages, 3.35MB) is available at:

http://www.uwex.edu/ces/pdande/evaluation/pdf/lmcourseall.pdf

Please note that no revisions or updates have been made to the original 2003 online course.

Happy Holidays!

Ellen”

My older daughter (I have two–Morgan, the older, and Mersedes, the younger) suggested I talk about the evaluative activities around the holidays…hmmm.

Since I’m experiencing serious writer’s block this week, I thought I’d revisit evaluation as an everyday activity, with a holiday twist.

Keep in mind that the root of evaluation, which came into English from French and ultimately from Latin, is value (the Oxford English Dictionary online traces it through French évaluation, from évaluer: é- (Latin ex) out + value).


Perhaps this is a good time to mention that the theme for Evaluation 2011 put forth by incoming AEA President, Jennifer Greene, is Values and Valuing in Evaluation.  I want to quote from her invitation letter, “…evaluation is inherently imbued with values.  Our work as evaluators intrinsically involves the process of valuing, as our charge is to make judgments (emphasis original) about the “goodness” or the quality, merit, or worth of a program.”

Let us consider the holidays “a program”. The Winter Holiday season starts (at least in the US and the northern hemisphere) with the  Thanksgiving holiday followed shortly thereafter by the first Sunday in Advent.  Typically this period of time includes at least the  following holidays:  St. Nicholas Day, Hanukkah, Winter Solstice, Christmas, Kwanzaa, Boxing Day, New Year’s, and Epiphany (I’m sure there are ones I didn’t list that are relevant).  This list typically takes us through January 6.  (I’m getting to the value part–stay with me…)

When I was a child, I remember the eager expectation of anticipating Christmas–none of the other holidays were even on my radar screen.  (For those of you who know me, you know how long ago that was…)  Then with great expectation (thank you, Charles), I would go to bed and, as patiently as possible, await the moment when my father would turn on the tree lights, signaling that we children could descend to the living room.  Then poof!  That was Christmas.  In 10 minutes it was done.  The emotional bath I always took greatly diminished the value of this all-important holiday.

Vowing that my children would grow up without the emotional bath of great expectations and dashed hopes, I chose to Celebrate the Season.  In doing so, I found value in the waiting of Advent, the magic of Hanukkah, the sharing of Kwanzaa, the mystery of Christmas, and the traditions that come with all of these holidays.  There are other traditions that we revisit yearly, yet we find delight in remembering what the Winter Holiday traditions are and mean, remembering the foods we eat, and the times we’ve shared.  From all this we find value in our program.  Do I still experience the emotional bath of childhood during this Holiday Season?  Not any more–and my children tell me that they like spreading the holidays out over the six-week period.

I think this is the time of the year when we can take a second look at our programs (whether they are the holidays, youth development, watershed stewardship, nutrition education, or something else) and look for value in them–the part of the program that matters.  Evaluation is the work of capturing that value.  How we do that is what evaluation is all about.

According to the counter on this blog, I’ve published 49 times.  Since last week was the one-year anniversary of the inception of “Evaluation is an Everyday Activity”–which means 52 weeks–I missed a few weeks.  Not surprising with vacations, professional development, and writer’s block.  Today is a writer’s block day…I thought I’d do something about program theory.  I’m sure you are asking what program theory has to do with evaluating your program.  Let me explain…

An evaluation that is theory driven uses program theory as a tool to (according to Jody Fitzpatrick):

  1. understand the program to be evaluated
  2. guide the evaluation.

Pretty important contributions.  Faculty have often told me, “I know my program’s good; everyone likes it.”  But–

Can you describe the program theory that supports your program?

Huey Chen (1) defines program theory as “a specification of what must be done to achieve the desired goals, what other important impacts may be anticipated, and how these goals and impacts would be generated.”  There are two parts of program theory:  normative theory and causative theory.  Normative theory (quoting Fitzpatrick) “…describes the program as it should be, its goals and outcomes, its interventions and the rationale for these, from the perspectives of various stakeholders.”  Causative theory, according to Fitzpatrick, “…makes use of existing research to describe the potential outcomes of the program based on characteristics of the clients (read, target audience) and the program actions.”  Using both normative and causative theories, one can develop a “plausible program model” or logic model.

Keep in mind that a “plausible program model” is only one of the possible models and the model  developed before implementation may need to change before the final evaluation.  Although anticipated outcomes are the ones you think will happen as a result of the program, Jonny Morell (2) provides a long list of programs where unanticipated outcomes happen before, during, and after the program implementation.  It might be a good idea to think of all potential outcomes–not just the ones you think might happen.  This is why program theory is important…to help you focus on the potential outcomes.

1.  Chen, H. (1990). Theory-driven evaluations.  Newbury Park, CA: Sage.

2.  Morell, J. A. (2010). Evaluation in the Face of Uncertainty. NY: Guilford Press.

There is an ongoing discussion about the difference between impact and outcome.  I think this is an important discussion because Extension professionals are asked regularly to demonstrate  the impact of their program.

There is no consensus on how these terms are defined, and they are often used interchangeably.  Yet most agree that they are not the same.  When Extension professionals plan an evaluation, it is important to keep these terms separate.  Their meanings are distinct and different.

So what exactly is IMPACT?

And what is an OUTCOME?

What points do we need to keep in mind when considering whether the report we are making is a report of OUTCOMES or a report of IMPACTS?  Making explicit the meaning of these words before beginning the program is important.  If there is no difference in your mind, then that needs to be stated.  If there is a difference from your perspective, that needs to be stated as well.  It may all depend on who the audience is for the report.  Have you asked your supervisor (Staff Chair, Department Head, Administrator) what they mean by these terms?

One way to look at this issue is to go to simpler language:

  • What is the result (effect) of the intervention (read ‘program’)–that is, SO WHAT?  This is impact.
  • What is the intervention’s influence (affect) on the target audience–that is, WHAT HAPPENED?  This is outcome.

I would contend that impact is the effect (i.e., the result) and outcome is the affect (i.e., the influence).

Now to complicate this discussion a bit–where do OUTPUTS fit?

OUTPUTS are necessary but NOT sufficient to determine the influence (affect) or results (effect) of an intervention.  Outputs count things that were done–the number of people trained; feet of stream bed reclaimed; the number of curricula written; the number of…(fill in the blank).  Outputs do not tell you either the affect or the effect of the intervention.
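
One way to keep the three terms straight is to imagine how the same program would be recorded under each heading.  The Python sketch below is illustrative only; the program, the numbers, and the field names are invented for this example, not drawn from any real evaluation.

    # A hypothetical nutrition-education program, recorded three ways.
    program_report = {
        # OUTPUTS: counts of what was done -- necessary, but not sufficient.
        "outputs": {
            "workshops_delivered": 12,
            "people_trained": 240,
            "curricula_written": 1,
        },
        # OUTCOMES: what happened to the target audience (the affect/influence).
        "outcomes": {
            "participants_reporting_new_knowledge": 180,
            "participants_changing_meal_planning": 95,
        },
        # IMPACT: the "so what" -- the result/effect of the intervention.
        "impact": {
            "estimated_household_food_budget_savings": "documented qualitatively",
        },
    }

    for level, entries in program_report.items():
        print(level.upper())
        for name, value in entries.items():
            print(f"  {name}: {value}")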

The difference I draw may be moot if you do not draw the distinction.  If you don’t, that is OK.  Just make sure that you are explicit about what you mean by these terms:  OUTCOMES and IMPACT.