Summative, formative, developmental

Filed Under (Methodology, program evaluation) by Molly on 30-09-2010

Last Wednesday, I had the privilege to attend the OPEN (Oregon Program Evaluators Network) annual meeting.

Michael Quinn Patton, the keynote speaker, talked about developmental evaluation and utilization-focused evaluation. Utilization-focused evaluation makes sense: use by intended users.

Developmental Evaluation, on the other hand, needs some discussion.

The way Michael tells the story (he teaches a lot through story) is this:

“I had a standard 5-year contract with a community leadership program that specified 2 1/2 years of formative evaluation for program improvement, to be followed by 2 1/2 years of summative evaluation that would lead to an overall decision about whether the program was effective.” After 2 1/2 years, Michael called for the summative evaluation to begin. The director was adamant: “We can’t stand still for 2 years. Let’s keep doing formative evaluation. We want to keep improving the program… I never want to do a summative evaluation if it means standardizing the program. We want to keep developing and changing.” He looked at Michael sternly, challengingly. “Formative evaluation! Summative evaluation! Is that all you evaluators have to offer?” Michael hemmed and hawed and said, “I suppose we could do…ummm…we could do…ummm…well, we might do, you know…we could try developmental evaluation!” Not knowing what that was, the director asked, “What’s that?” Michael responded, “It’s where you, ummm, keep developing.” Developmental evaluation was born.

The evaluation field offered, until now, two global approaches: formative evaluation for program improvement and summative evaluation for an overall judgment of merit and worth. Now, developmental evaluation (DE) offers another approach, one relevant to social innovators looking to bring about major social change. It takes into consideration systems theory, complexity concepts, uncertainty principles, nonlinearity, and emergence. DE acknowledges that resistance and pushback are likely when change happens. Developmental evaluation recognizes that change brings turbulence and suggests an approach that “adapts to the realities of complex nonlinear dynamics rather than trying to impose order and certainty on a disorderly and uncertain world” (Patton, 2011). Social innovators recognize that outcomes will emerge as the program moves forward and that predefining outcomes limits the vision.

Michael has used the art of Mark M. Rogers to illustrate the point.  The cartoon has two early humans, one with what I would call a wheel, albeit primitive, who is saying, “No go.  The evaluation committee said it doesn’t meet utility specs.  They want something linear, stable, controllable, and targeted to reach a pre-set destination.  They couldn’t see any use for this (the wheel).”

For Extension professionals who are delivering programs designed to lead to a specific change, DE may not be useful. For those Extension professionals who envision something different, DE may be the answer. I think DE is worth a look.

Look for my next post after October 14; I’ll be out of the office until then.

Patton, M. Q. (2011). Developmental evaluation. New York, NY: Guilford Press.



2 Comments

Samantha Grant on 8 October, 2010 at 12:24 pm #

In Minnesota Extension, we’ve been talking about Patton’s new approach. From an Extension point of view, I would be interested to see what you think of it and maybe where you can see it fitting in your program evaluations.


englem on 18 October, 2010 at 12:46 pm #

Hi Samantha,
Thanks for asking…Intuitively (as I’ve not finished reading the book), I’d say that this approach would be valuable to Extension professionals. Extension does a lot of developing, and changes as that developing goes along. An example that comes to mind is the OSU Watershed Education Team (WE Team), a collaborative of 12 or so different programs that we are attempting to evaluate as a system, establishing a measure of that system’s impact. Rather than aggregating the 12 different parts, we’ve developed an adaptable evaluation instrument which we think will work across those programs. We don’t know–we’ve just started. I’ll be more articulate after I read more. My two cents.

