Last Wednesday, I had the privilege to attend the OPEN (Oregon Program Evaluators Network) annual meeting.

Michael Quinn Patton, the keynote speaker, talked about developmental evaluation and utilization-focused evaluation. Utilization-focused evaluation makes sense: use by intended users.

Developmental Evaluation, on the other hand, needs some discussion.

The way Michael tells the story (he teaches a lot through story) is this:

“I had a standard 5-year contract with a community leadership program that specified 2 1/2 years of formative evaluation for program improvement, to be followed by 2 1/2 years of summative evaluation that would lead to an overall decision about whether the program was effective.” After 2 1/2 years, Michael called for the summative evaluation to begin. The director was adamant: “We can’t stand still for 2 years. Let’s keep doing formative evaluation. We want to keep improving the program… I never want to do a summative evaluation if it means standardizing the program. We want to keep developing and changing.” He looked at Michael sternly, challengingly. “Formative evaluation! Summative evaluation! Is that all you evaluators have to offer?” Michael hemmed and hawed and said, “I suppose we could do…ummm…we could do…ummm…well, we might do, you know…we could try developmental evaluation!” Not knowing what that was, the director asked, “What’s that?” Michael responded, “It’s where you, ummm, keep developing.” Developmental evaluation was born.

Until now, the evaluation field offered two global approaches to evaluation: formative, for program improvement, and summative, to make an overall judgment of merit and worth. Developmental evaluation (DE) offers another approach, one relevant to social innovators looking to bring about major social change. It takes into consideration systems theory, complexity concepts, uncertainty principles, nonlinearity, and emergence. DE acknowledges that resistance and pushback are likely when change happens. Developmental evaluation recognizes that change brings turbulence and suggests an approach that “adapts to the realities of complex nonlinear dynamics rather than trying to impose order and certainty on a disorderly and uncertain world” (Patton, 2011). Social innovators recognize that outcomes will emerge as the program moves forward and that predefining outcomes limits the vision.

Michael has used the art of Mark M. Rogers to illustrate the point.  The cartoon has two early humans, one with what I would call a wheel, albeit primitive, who is saying, “No go.  The evaluation committee said it doesn’t meet utility specs.  They want something linear, stable, controllable, and targeted to reach a pre-set destination.  They couldn’t see any use for this (the wheel).”

For Extension professionals who are delivering programs designed to lead to a specific change, DE may not be useful. For those Extension professionals who envision something different, DE may be the answer. I think DE is worth a look.

Look for my next post after October 14; I’ll be out of the office until then.

Patton, M. Q. (2011). Developmental Evaluation. New York: Guilford Press.

September 25 – October 2 is Banned Books Week, and the American Library Association has once again published a list of banned or challenged books. The September issue of the AARP Bulletin listed 50 banned books. The Merriam-Webster Dictionary was banned in a California elementary school in January 2010.

All of the books shown below have been or are banned.

Yes, you say, so what?  How does that relate to program evaluation?

Remember, the root of the word “evaluation” is value. Someplace in the United States, some group used some criteria to “value” (or not) a book: to lodge a protest, successfully (or not), to remove a book from a library, school, or other source. Establishing criteria means that evaluation was taking place. In this case, those criteria included being “too political,” having “too much sex,” being “irreligious,” being “socially offensive,” or some other criterion. Someone, someplace, somewhere has decided that the freedom to think for yourself, the freedom to read, the importance of the First Amendment, and the importance of free and open access to information are not important parts of our rights, and they used evaluation to make that decision.

I don’t agree with censorship, although I do agree with the right a person has to express her or his opinion as guaranteed by the First Amendment. Yet in expressing an opinion, especially an evaluative opinion, an individual has a responsibility to express that opinion without hurting other people or property: to evaluate responsibly.

To aid evaluators in evaluating responsibly, the American Evaluation Association has developed a set of five guiding principles for evaluators. Even if you do not consider yourself a professional evaluator, considering these principles when conducting your evaluations is important and responsible. The Guiding Principles are:

A. Systematic Inquiry: Evaluators conduct systematic, data-based inquiries;

B. Competence: Evaluators provide competent performance to stakeholders;

C. Integrity/Honesty: Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process;

D.  Respect for People:  Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders; and

E. Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

I think free and open access to information is covered by principles D and E. You may or may not agree with the people who challenged a book, but in doing so they used evaluation. Yet, as someone who conducts evaluation, you have a responsibility to consider these principles, making sure that your evaluations respect people and are responsive to general and public welfare (in addition to employing systematic inquiry, competence, and integrity/honesty). Now, go read a good (banned) book!

OK, Christmas is over and now is the time to reflect on what needs to be different, self-deception notwithstanding. I went looking for something salient to say today and found the following 10 reasons NOT to make new year’s resolutions, posted on the Happy Lists (a blog about personal development and positive change for those who love lists).

Do these make sense? Remember–you evaluate everyday.

1. They set you up to fail.

2. Everybody does it.

3. Losing weight should be a whole lifestyle change.

4. January is the wrong reason.

5. Just make one.

6. There are better ways.

7. Bad economy.

8. No need for reminders.

9. You’re already stressed out enough.

10. They’re probably the same as last year.

I want to suggest one resolution that I’d like you to consider in 2010…one that will succeed…one that makes sense.

Learn one thing you didn’t know about evaluation.  Practice it.  If you need suggestions about the one thing, let me know (comment on the blog; email me if you want to remain anonymous on the blog; or call me–they all work.)

So if you’ve made some new year’s resolutions, throw them away. Make the changes in your life because you want to.

My wish for you is a wonderful 2010–make some part of your life better than 2009.