I’m involved in evaluating a program that is still developing as it is being implemented.  There is some urgency to report predetermined, clear, and measurable outcomes to the administration.  Typically, I wouldn’t resist (see resistance post) this mandate; this program, however, doesn’t lend itself to that approach.  Because the program is developing as it is implemented, it can’t easily be rolled out to all 36 counties in Oregon at once, as much as administration would love to see that happen.  So what can we do?

We can document the principles that drive the program and use them to stage the implementation across the state.

We can identify the factors that tell us an area is ready to implement the program (i.e., the readiness factors).

We can share lessons learned with key stakeholders in potential implementation areas.

These are the approaches that Michael Patton’s Developmental Evaluation advocates.  Michael says, “Developmental evaluation is designed to be congruent with and nurture developmental, emergent, innovative, and transformative processes.”  I had the good fortune to talk with Michael about this program in light of these processes.  He indicated that identifying principles, not a model, supports developmental evaluation and a program in development.  By using underlying principles, we inform expansion.  Can these principles be coded?  Yes.  Are they outcome indicators?  Possibly.  Are they outcome indicators in the summative sense of the word?  Nope.  Not even close.  These principles, however, can help the program people roll out the next phase/wave of the program.

As an evaluator employing developmental evaluation, do I ignore what is happening on the ground at each phase of program implementation?  Not a chance.  I need to encourage the program people at that level to identify clear and measurable outcomes, because from those outcomes will come the principles needed for the next phase.  (This is a good example of the complexity concepts that Michael talks about in DE, concepts that are also the foundation for systems thinking.)  The readiness factors will also become clear when looking at individual sites.  From this view, we can learn a lot; we can apply what we have learned and, hopefully, avoid similar mistakes.  Will mistakes still occur?  Yes.  Is it important that those lessons are heeded, shared with administrators, and used to identify readiness factors when the program is implemented in a new site?  Yes.  Is this process filled with ambiguity?  You bet.  No one said it would be easy to make a difference.

We are learning as we go–that is the developmental aspect of this evaluation and this program.
