Starting this week, aea365 is posting a series of posts authored by evaluators who blog.  Check it out!


The series will showcase a variety of approaches, starting with Susan Kistler, Executive Director of the American Evaluation Association, who has blogged every Saturday for aea365 for almost two years.


So even though I’m not blogging on a topic this week (see last week’s post), I wanted to share this with you. What a good way to start a new year: new resources for evaluators.


I’ll be gone next week, so this is the last post of 2011, and some reflection on 2011 is in order, I think.

For each of you, 2011 was an amazing year. I know you are thinking, “Yeah, right.” Truly, 2011 was amazing, and I invoke Dickens here, because 2011 “…was the best of times, it was the worst of times.” For many years, Kiplinger’s magazine used the saying “We live in interesting times” as its masthead. So even if your joys were great; your sorrows, overwhelming; your adventures, amazing; the day-to-day, a grind, we live in interesting times, and because of that 2011 was an amazing year. Think about it…you’ll probably agree. (If not, that is an evaluative question: what criteria are you using; what biases have inadvertently appeared; what value was at stake?)

So let’s look forward to 2012.

Some folks believe that 2012 marks the end of the Mayan Long Count calendar and the advent of cataclysmic or transformative events, culminating in the end of the world on December 21, 2012. Possibly; probably not. Everyone has some end-of-the-world scenario in mind. For me, the end of the world as I know it happened when atmospheric carbon dioxide passed 350 parts per million (the November reading was 390.31 ppm). Let’s think evaluation.

Jennifer Greene, the outgoing AEA president, looking forward while keeping in mind the 2011 global catastrophes and tribulations (of which there were many), asks, “…what does evaluation have to do with these contemporary global catastrophes and tribulations?” She says:

  • “If you’re not part of the solution, then you’re part of the problem” (Eldridge Cleaver). Evaluation offers opportunities for inclusive engagement with the key social issues at hand. (Think 350.org, Heifer Foundation, Habitat for Humanity, and any other organization reflecting social issues.)
  • Most evaluators are committed to making our world a better place. Most evaluators wish to be of consequence in the world.

Are you going to be part of the problem or part of the solution?  How will you make the world a better place?  What difference will you make?  What new year’s resolutions will you make to answer these questions?  Think on it.


May 2012 bring you all another amazing year!

I came across this quote from Viktor Frankl today (thanks to a colleague):

“…everything can be taken from a man (sic) but one thing: the last of the human freedoms – to choose one’s attitude in any given set of circumstances, to choose one’s own way.” Viktor Frankl (Man’s Search for Meaning – p.104)

I realized that, especially at this time of year, attitude is everything. Good, bad, or indifferent, the choice is always yours.

How we choose to approach anything depends upon our previous experiences, what I call personal and situational bias. Sadler* has three classifications for these biases. He calls them value inertias (unwanted distorting influences that reflect background experience), ethical compromises (actions for which one is personally culpable), and cognitive limitations (not knowing, for whatever reason).

When we approach an evaluation, our attitude leads the way. If we are reluctant, if we are resistant, if we are excited, if we are uncertain, all of these approaches reflect where we’ve been, what we’ve seen, what we have learned, and what we have done (or not). We can choose how to proceed.

The American Evaluation Association (AEA) has a long history of supporting difference. That value is embedded in the guiding principles. The two principles that address supporting difference are:

  • Respect for People:  Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
  • Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

AEA has also developed a Cultural Competence statement. In it, AEA affirms that “A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation.”

Both of these documents provide a foundation for the work we do as evaluators, and both relate to our personal and situational biases. Considering them as we make our choice about attitude will help minimize the biases we bring to our evaluation work. The evaluative question from all this: when have your personal and situational biases interfered with your work in evaluation?

Attitude is always there–and it can change.  It is your choice.


Sadler, D. R. (1981). Intuitive data processing as a potential source of bias in naturalistic evaluations. Educational Evaluation and Policy Analysis, 3, 25-31.

I’m involved in evaluating a program that is developing as it evolves. There is some urgency to get predetermined, clear, and measurable outcomes to report to the administration. Typically, I wouldn’t resist (see the resistance post) this mandate; this program, however, doesn’t lend itself to that approach. Because the program is developing as it is implemented, it can’t easily be rolled out to all 36 counties in Oregon at once, as much as administration would love to see that happen. So what can we do?

We can document the principles that drive the program and use them to stage the implementation across the state.

We can identify the factors that tell us that the area is ready to implement the program (i.e., the readiness factors).

We can share lessons learned with key stakeholders in potential implementation areas.

These are the approaches that Michael Patton’s Developmental Evaluation advocates. Michael says, “Developmental evaluation is designed to be congruent with and nurture developmental, emergent, innovative, and transformative processes.” I had the good fortune to talk with Michael about this program in light of these processes. He indicated that identifying principles, not a model, supports developmental evaluation and a program in development. By using underlying principles, we inform expansion. Can these principles be coded…yes. Are they outcome indicators…possibly. Are they outcome indicators in the summative sense of the word? Nope. Not even close. These principles, however, can help the program people roll out the next phase/wave of the program.

As an evaluator employing developmental evaluation, do I ignore what is happening on the ground at each phase of the program implementation? Not a chance. I need to encourage the program people at that level to identify clear and measurable outcomes, because from those clear and measurable outcomes will come the principles needed for the next phase. (This is a good example of the complexity concepts that Michael talks about in DE, concepts that are the foundation of systems thinking.) The readiness factors will also become clear when looking at individual sites. From this view, we can learn a lot; we can apply what we have learned and, hopefully, avoid similar mistakes. Will mistakes still occur? Yes. Is it important that those lessons are heeded, shared with administrators, and used to identify readiness factors when the program is implemented in a new site? Yes. Is this process filled with ambiguity? You bet. No one said it would be easy to make a difference.

We are learning as we go–that is the developmental aspect of this evaluation and this program.

Jennifer Greene, the current American Evaluation Association president, expanded on the theme of Thanksgiving and gratitude. She posted her comments in the AEA Newsletter. I liked them a lot. I quote them below…


Thanksgiving is a ‘time out’ from the busyness of daily life, a time for quiet reflection, and a time to contemplate pathways taken and pathways that lie ahead.

In somewhat parallel fashion, evaluation can also offer a ‘time out’ from the busyness and routine demands of daily life, notably for evaluation stakeholders and especially for program developers, administrators, and staff. Our educative traditions in particular are oriented toward goals of learning, enlightenment, reflection, and redirection. These traditions, which are anchored in the evaluation ideals of Lee Cronbach and Carol Weiss, aspire to provide a data-based window into how well the logic of a program translates to particular experiences in particular contexts, into promising practices evident in some contexts even if they are not part of the program design, into who is being well served by the program and who remains overlooked. Our educative practices position evaluation as a lens for critical reflection (emphasis added) on the quality of a program’s design and implementation, for reconsideration of the urgency of the needs the program is intended to address, for contemplation of alternative pathways that could be taken, and thus broadly as a vehicle by which society learns about itself (from Cronbach’s 95 theses).

She concludes her comments with a statement that I have lived by and believed throughout my career as an evaluator: “…I also believe that education remains the most powerful of all social change alternatives.”


Education is the great equalizer and evaluation works hand-in-hand with education.