I’m involved in evaluating a program that is developing as it is implemented. There is some urgency to report predetermined, clear, and measurable outcomes to the administration. Typically, I wouldn’t resist (see resistance post) this mandate; but this program doesn’t lend itself to this approach. Because it is developing as it is implemented, it can’t easily be rolled out to all 36 counties in Oregon at once, as much as the administration would love to see that happen. So what can we do?
We can document the principles that drive the program and use them to stage the implementation across the state.
We can identify the factors that tell us that the area is ready to implement the program (i.e., the readiness factors).
We can share lessons learned with key stakeholders in potential implementation areas.
These are the approaches that Michael Patton’s Developmental Evaluation advocates. Michael says, “Developmental evaluation is designed to be congruent with and nurture developmental, emergent, innovative, and transformative processes.” I had the good fortune to talk with Michael about this program in light of these processes. He indicated that identifying principles, not a model, supports developmental evaluation and a program in development. By using underlying principles, we inform expansion. Can these principles be coded? Yes. Are they outcome indicators? Possibly. Are they outcome indicators in the summative sense of the word? Nope. Not even close. These principles, however, can help the program people roll out the next phase/wave of the program.
As an evaluator employing developmental evaluation, do I ignore what is happening on the ground at each phase of the program implementation? Not a chance. I need to encourage the program people at that level to identify clear and measurable outcomes, because from those clear and measurable outcomes will come the principles needed for the next phase. (This is a good example of the complexity concepts that Michael talks about in DE, which are the foundation for systems thinking.) The readiness factors will also become clear when looking at individual sites. From this view, we can learn a lot; we can apply what we have learned and, hopefully, avoid similar mistakes. Will mistakes still occur? Yes. Is it important that those lessons are heeded, shared with administrators, and used to identify readiness factors when the program is implemented in a new site? Yes. Is this process filled with ambiguity? You bet. No one said it would be easy to make a difference.
We are learning as we go–that is the developmental aspect of this evaluation and this program.
Michael Quinn Patton, the keynote speaker, talked about developmental evaluation.
The way Michael tells the story (he teaches a lot through story) is this:
“I had a standard 5-year contract with a community leadership program that specified 2 1/2 years of formative evaluation for program improvement, to be followed by 2 1/2 years of summative evaluation that would lead to an overall decision about whether the program was effective.” After 2 1/2 years, Michael called for the summative evaluation to begin. The director was adamant: “We can’t stand still for 2 years. Let’s keep doing formative evaluation. We want to keep improving the program… (I) never (want to do a summative evaluation)…if it means standardizing the program. We want to keep developing and changing.” He looked at Michael sternly, challengingly. “Formative evaluation! Summative evaluation! Is that all you evaluators have to offer?” Michael hemmed and hawed and said, “I suppose we could do…ummm…we could do…ummm…well, we might do, you know…we could try developmental evaluation!” Not knowing what that was, the director asked, “What’s that?” Michael responded, “It’s where you, ummm, keep developing.” Developmental evaluation was born.
Until now, the evaluation field offered two global approaches: formative evaluation for program improvement and summative evaluation to make an overall judgment of merit and worth. Developmental evaluation (DE) offers a third approach, one relevant to social innovators looking to bring about major social change. It takes into consideration systems theory, complexity concepts, uncertainty principles, nonlinearity, and emergence. DE acknowledges that resistance and pushback are likely when change happens. It recognizes that change brings turbulence and suggests an approach that “adapts to the realities of complex nonlinear dynamics rather than trying to impose order and certainty on a disorderly and uncertain world” (Patton, 2011). Social innovators recognize that outcomes will emerge as the program moves forward and that predefining outcomes limits the vision.
Michael has used the art of Mark M. Rogers to illustrate the point. The cartoon has two early humans, one with what I would call a wheel, albeit primitive, who is saying, “No go. The evaluation committee said it doesn’t meet utility specs. They want something linear, stable, controllable, and targeted to reach a pre-set destination. They couldn’t see any use for this (the wheel).”
For Extension professionals who are delivering programs designed to lead to a specific change, DE may not be useful. For those Extension professionals who envision something different, DE may be the answer. I think DE is worth a look.
Look for my next post after October 14; I’ll be out of the office until then.
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.
There are three topics on which I want to touch today.
In reverse order:
Evaluation use: I neglected to mention Michael Quinn Patton’s book on evaluation use. Patton advocated for use before almost anyone else. The title of his book is Utilization-Focused Evaluation. The 4th edition is available from the publisher (Sage) or from Amazon (and if I knew how to insert links to those sites, I’d do it…another lesson…).
Systems diagrams: I had the opportunity last week to work with a group of Extension faculty all involved in Watershed Education (called the WE Team). This was an exciting experience for me. I helped them visualize their concept of the WE Team using a systems tool: drawing a systems diagram. In this exercise, individuals or small groups quickly draw a visualization of a system (in this case, the WE Team). This is not art; it is not realistic; it is only a representation from one perspective.
This is a useful tool for evaluators because it can help them see where there are opportunities for evaluation, where there are opportunities for leverage, and where there might be resistance to change (force fields). It also helps evaluators see relationships and feedback loops. I have done workshops with Andrea Hegedus for the American Evaluation Association on using systems tools (of which a systems diagram is one) in evaluating multi-site systems. Although this isn’t the diagram the WE Team created, it is an example of what a systems diagram could look like. I used the software called Inspiration to create the WE Team diagram. Inspiration has a free 30-day download, and it is inexpensive (the download for V. 9 is $69.00).
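For readers who prefer a free, text-based alternative to diagramming software, a systems diagram can also be written as plain text in Graphviz DOT notation and rendered with the free Graphviz `dot` tool. Here is a minimal sketch; the nodes and arrows below are entirely hypothetical, not the WE Team’s actual diagram:

```python
# A hedged sketch: represent a systems diagram as a list of arrows
# (source -> destination) and emit Graphviz DOT text. All labels here
# are made-up placeholders, not the real WE Team system.

edges = [
    ("Funders", "WE Team"),
    ("WE Team", "County programs"),
    ("County programs", "Participants"),
    ("Participants", "Evaluation data"),
    ("Evaluation data", "WE Team"),  # a feedback loop back to the team
]

def to_dot(edges):
    """Emit a DOT digraph; each (source, destination) pair becomes one arrow."""
    lines = ["digraph system {"]
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(edges))
```

Saving the printed text to a file and running `dot -Tpng system.dot -o system.png` would produce an image; the feedback loop from evaluation data back to the team is exactly the kind of relationship a systems diagram is meant to surface.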
Focus group participant composition.
The composition of focus groups is very important if you want to get data that you can use AND that answers your study question(s). Focus groups tend to be homogeneous, with variations to allow for differing opinions. Since the purpose of the focus group is to elicit in-depth opinions, it is important to compose the group with similar demographics (depending on your topic) in mind.
Comfort and use drive the composition. More on this later.