Filed Under (program evaluation) by Molly on 07-01-2013

On January 22, 23, and 24, a group of would-be evaluators will gather in Tucson, AZ, at the Westin La Paloma Resort.

Even though Oregon State is a co-sponsor of this program, Oregon in winter (i.e., now) is not the land of sunshine, and since vitamin D is critical for everyone’s well-being, I chose Tucson for our capstone event.  Our able support person, Gretchen, chose the La Paloma, a wonderful site on the north side of Tucson.  So even if it is not warm, it will be sunny.  Why, we might even get to go swimming; if not swimming, certainly hiking.  There are a lot of places to hike around Tucson: in Sabino Canyon; near/around A Mountain (first-year U of A students get to whitewash or paint the A); Saguaro National Park; or maybe in one of the five (yes, five) mountain ranges surrounding Tucson.  (If you are interested in other hikes, look here.)

We will be meeting Tuesday afternoon, all day Wednesday, and Thursday morning.  Participants have spent the past 17 months participating in and learning about evaluation.  They have identified a project/program (either big P or little p), and they have participated in a series of modules, webinars, and office hours on topics used every day in evaluating a project or program.  We anticipate over 20 attendees from the cohorts.  We have participants from five Extension program areas (Nutrition, Agriculture, Natural Resources, Family and Community Science, and 4-H), from ten western states (Oregon, Washington, California, Utah, Colorado, Idaho, New Mexico, Arizona, Wyoming, and Hawaii), and with all levels of familiarity with evaluation (beginner to expert).

I’m the evaluation specialist in charge of the program content (big P); Jim Lindstrom (formerly of Washington State, currently University of Idaho) has been the professional development and technical specialist; and Gretchen Cuevas (OSU) has been our wonderful support person.  I’m using Patton’s Developmental Evaluation model to evaluate this program.  Although some things were set at the beginning of the program (the topics for the modules and webinars, for example), other things changed in response to feedback (readings, office hours).  Although we expect that participants will grow their knowledge of evaluation, we do not know what specific and measurable outcomes will result (hence, developmental).  We hope to run the program (available to Extension faculty in the Western Region) again in September 2013.  Our goal is to build evaluation capacity in the Western Extension Region.  Did we?

Dec 09

I’m involved in evaluating a program that is developing as it is implemented.  There is some urgency to get predetermined, clear, and measurable outcomes to report to the administration.  Typically, I wouldn’t resist (see resistance post) this mandate; but this program doesn’t lend itself to that approach.  Because this program is developing as it is implemented, it can’t easily be rolled out to all 36 counties in Oregon at once, as much as administration would love to see that happen.  So what can we do?

We can document the principles that drive the program and use them to stage the implementation across the state.

We can identify the factors that tell us that the area is ready to implement the program (i.e., the readiness factors).

We can share lessons learned with key stakeholders in potential implementation areas.

These are the approaches that Michael Patton’s Developmental Evaluation advocates.  Michael says, “Developmental evaluation is designed to be congruent with and nurture developmental, emergent, innovative, and transformative processes.”  I had the good fortune to talk with Michael about this program in light of these processes.  He indicated that identifying principles, not a model, supports developmental evaluation and a program in development.  By using underlying principles, we inform expansion.  Can these principles be coded?  Yes.  Are they outcome indicators?  Possibly.  Are they outcome indicators in the summative sense of the word?  Nope.  Not even close.  These principles, however, can help the program people roll out the next phase/wave of the program.

As an evaluator employing developmental evaluation, do I ignore what is happening on the ground at each phase of the program implementation?  Not a chance.  I need to encourage the program people at that level to identify clear and measurable outcomes, because from those clear and measurable outcomes will come the principles needed for the next phase.  (This is a good example of the complexity concepts that Michael talks about in DE, which are the foundation for systems thinking.)  The readiness factors will also become clear when looking at individual sites.  From this view, we can learn a lot: we can apply what we have learned and, hopefully, avoid similar mistakes.  Will mistakes still occur?  Yes.  Is it important that those lessons are heeded, shared with administrators, and used to identify readiness factors when the program is implemented in a new site?  Yes.  Is this process filled with ambiguity?  You bet.  No one said it would be easy to make a difference.

We are learning as we go–that is the developmental aspect of this evaluation and this program.