Last Wednesday, I had the privilege to attend the OPEN (Oregon Program Evaluators Network) annual meeting.

Michael Quinn Patton, the keynote speaker, talked about developmental evaluation and utilization-focused evaluation. Utilization-focused evaluation makes sense: use by intended users.

Developmental Evaluation, on the other hand, needs some discussion.

The way Michael tells the story (he teaches a lot through story) is this:

“I had a standard 5-year contract with a community leadership program that specified 2 1/2 years of formative evaluation for program improvement, to be followed by 2 1/2 years of summative evaluation that would lead to an overall decision about whether the program was effective.” After 2 1/2 years, Michael called for the summative evaluation to begin. The director was adamant: “We can’t stand still for two years. Let’s keep doing formative evaluation. We want to keep improving the program… I never want to do a summative evaluation if it means standardizing the program. We want to keep developing and changing.” He looked at Michael sternly, challengingly. “Formative evaluation! Summative evaluation! Is that all you evaluators have to offer?” Michael hemmed and hawed and said, “I suppose we could do…ummm…we could do…ummm…well, we might do, you know…we could try developmental evaluation!” Not knowing what that was, the director asked, “What’s that?” Michael responded, “It’s where you, ummm, keep developing.” Developmental evaluation was born.

Until now, the evaluation field has offered two global approaches: formative evaluation for program improvement and summative evaluation to make an overall judgment of merit and worth. Developmental evaluation (DE) offers another approach, one relevant to social innovators looking to bring about major social change. It takes into consideration systems theory, complexity concepts, uncertainty principles, nonlinearity, and emergence. DE acknowledges that resistance and pushback are likely when change happens. It recognizes that change brings turbulence and suggests an approach that “adapts to the realities of complex nonlinear dynamics rather than trying to impose order and certainty on a disorderly and uncertain world” (Patton, 2011). Social innovators recognize that outcomes will emerge as the program moves forward and that predefining outcomes limits the vision.

Michael has used the art of Mark M. Rogers to illustrate the point. The cartoon shows two early humans, one holding what I would call a primitive wheel, who is saying, “No go. The evaluation committee said it doesn’t meet utility specs. They want something linear, stable, controllable, and targeted to reach a pre-set destination. They couldn’t see any use for this (the wheel).”

For Extension professionals who are delivering programs designed to lead to a specific change, DE may not be useful. For those Extension professionals who envision something different, DE may be the answer. I think DE is worth a look.

Look for my next post after October 14; I’ll be out of the office until then.

Patton, M. Q. (2011). Developmental Evaluation. New York: Guilford Press.

Ryan asks a good question: “Are youth serving programs required to have an IRB for applications, beginning and end-of-year surveys, and program evaluations?”  His question leads me to today’s topic.

The IRB is concerned with “research on human subjects.” So you ask: when is evaluation a form of research?

It all depends.

Although evaluation methods have evolved from social science research, there are important distinctions between the two.

Fitzpatrick, Sanders, and Worthen list five differences between the two, and it is in those differences that one must consider IRB assurances.

These five differences are:

  1. purpose,
  2. who sets the agenda,
  3. generalizability of results,
  4. criteria, and
  5. preparation.

Although these criteria differ for evaluation and research, there are times when the two overlap. If an evaluation study adds to knowledge in a discipline, or if research informs our judgments about a program, then the distinctions blur and a broader view of the inquiry is needed, possibly including IRB approval.

The IRB considers children a vulnerable population. Vulnerable populations require IRB protection, and evaluations with vulnerable populations may need IRB assurances. IF you have a program that involves children AND you plan to use the program activities as the basis of an effectiveness evaluation (as opposed to program improvement) AND you intend to use that evaluation as scholarship, you will need IRB approval.

Ryan also asks, “What does publish mean?” That question takes us to what scholarship is. One definition is that scholarship is creative work that is validated by peers and communicated. Published means communicating to peers in a peer-reviewed journal or at a professional meeting, not, for example, in a press release.

How do you decide if your evaluation needs IRB review? How do you decide if your evaluation is research or not? Start with the purpose of your inquiry. Do you want to add knowledge to the field? Do you want to see if what you are doing is applicable in other settings? Do you want others to know what you’ve done and why? Then you want to communicate this. In academics, that means publishing it in a peer-reviewed journal or presenting it at a professional meeting. And to do that and use the information provided to you by your participants, who are human subjects, you will need IRB assurance that they are protected.

Every IRB is different; check with your institution. Most work done by Extension professionals falls under the category of “exempt from full board review,” which is the shortest and least restrictive review. Work involving vulnerable populations, audio or video recording, or sensitive questions is typically categorized as expedited, a more stringent review than the “exempt” category that takes a little longer. IF you are working with vulnerable populations AND asking for sensitive information, doing an invasive procedure, or involving participants in something that could be viewed as coercive, then the inquiry will probably need full board review (which has the longest turnaround time).
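For readers who think in code, here is a minimal sketch, in Python, of the rules of thumb above. The function name and flags are hypothetical, and every IRB applies its own criteria, so treat this as a way to organize your thinking before you call your institution, not as a determination.

```python
def suggest_review_level(vulnerable_population: bool,
                         sensitive_questions: bool,
                         recording: bool,
                         invasive_or_coercive: bool) -> str:
    """Rough guess at the likely IRB review level, following the rules of
    thumb in this post. Hypothetical flags; your IRB's criteria will differ."""
    # Vulnerable population plus sensitive, invasive, or coercive elements:
    # probably full board review (longest turnaround).
    if vulnerable_population and (sensitive_questions or invasive_or_coercive):
        return "full board review"
    # Vulnerable population, recording, or sensitive questions alone:
    # typically expedited (more stringent than exempt, a little longer).
    if vulnerable_population or sensitive_questions or recording:
        return "expedited review"
    # Otherwise, most Extension work: exempt from full board review.
    return "exempt from full board review"


# Example: an end-of-year survey of youth (a vulnerable population), with no
# sensitive questions, no recording, and nothing invasive or coercive.
print(suggest_review_level(vulnerable_population=True,
                           sensitive_questions=False,
                           recording=False,
                           invasive_or_coercive=False))
# -> expedited review
```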

September 25 – October 2 is Banned Books Week.

All of the books shown below have been or are banned, and the American Library Association has once again published a list of banned or challenged books. The September issue of the AARP Bulletin listed 50 banned books. The Merriam-Webster Dictionary was banned in a California elementary school in January 2010.

Yes, you say, so what?  How does that relate to program evaluation?

Remember that the root of the word “evaluation” is value. Someplace in the United States, some group used some criteria to “value” (or not) a book; to lodge a protest, successfully (or not); to remove a book from a library, school, or other source. Establishing criteria means that evaluation was taking place. In this case, those criteria included being “too political,” having “too much sex,” being “irreligious,” being “socially offensive,” or something else. Someone, someplace, somewhere decided that the freedom to think for yourself, the freedom to read, the importance of the First Amendment, and the importance of free and open access to information are not important parts of our rights, and they used evaluation to make that decision.

I don’t agree with censorship; I do agree with a person’s right to express her or his opinion as guaranteed by the First Amendment. Yet in expressing an opinion, especially an evaluative opinion, an individual has a responsibility to express that opinion without hurting other people or property; that is, to evaluate responsibly.

To help evaluators evaluate responsibly, the American Evaluation Association has developed a set of five guiding principles for evaluators. Even though you may not consider yourself a professional evaluator, considering these principles when conducting your evaluations is important and responsible. The Guiding Principles are:

A. Systematic Inquiry: Evaluators conduct systematic, data-based inquiries;

B. Competence: Evaluators provide competent performance to stakeholders;

C. Integrity/Honesty: Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process;

D.  Respect for People:  Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders; and

E. Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

I think free and open access to information is covered by principles D and E. You may or may not agree with the people who challenged a book and, in doing so, used evaluation. Yet, as someone who conducts evaluation, you have a responsibility to consider these principles, making sure that your evaluations respect people and are responsible for the general and public welfare (in addition to employing systematic inquiry, competence, and integrity/honesty). Now, go read a good (banned) book!

A faculty member asked me how one determines impact from qualitative data. And in my mailbox today was a publication from Sage Publishers inviting me to “explore these new and best selling qualitative methods titles from Sage.”

Many Extension professionals are leery of gathering data using qualitative methods. “There is just too much data to make sense of it” is one complaint I often hear. Yes, one characteristic of qualitative data is the rich detail that usually results. (Of course, if you are only asking closed-ended questions that yield yes/no answers, the richness is missing.) Other complaints include “What do I do with the data?” “How do I draw conclusions?” “How do I report the findings?” As a result, many Extension professionals default to what is familiar: a survey. Surveys, as we have discussed previously, are easy to code, easy to report (frequencies and percentages), and difficult to write well.

The Sage brochure provides resources to answer some of these questions.

Michael Patton’s 3rd edition of Qualitative Research and Evaluation Methods “…contains hundreds of examples and stories illuminating all aspects of qualitative inquiry…it offers strategies for enhancing quality and credibility of qualitative findings…and providing detailed analytical guidelines.” Michael is the keynote speaker for the Oregon Program Evaluators Network (OPEN) fall conference, where he will be talking about his new book, Developmental Evaluation. If you are in Portland, I encourage you to attend. (For more information, see: http://www.oregoneval.org/program/)

Another reference I just purchased is Bernard and Ryan’s volume, Analyzing Qualitative Data. This book takes a systematic approach to making sense out of words. It, too, is available from Sage.

What does all this have to do with analyzing a conversation? A conversation is qualitative data. It is made up of words, and knowing what to do with those words will yield powerful evaluation data. My director is forever saying that the story is what legislators want to hear. Stories are qualitative data.

One of the most common forms of conversation that Extension professionals use is the focus group: a guided, structured, and focused conversation. It can yield a wealth of information if the questions are well crafted, if those questions have been pilot tested, and if the data are analyzed in a meaningful way. There are numerous ways to analyze qualitative data (cultural domain analysis, KWIC [keyword-in-context] analysis, discourse analysis, narrative analysis, grounded theory, content analysis, schema analysis, analytic induction and qualitative comparative analysis, and ethnographic decision models), all of which are discussed in the above-mentioned reference; a small sketch of what a first pass might look like follows below. Deciding which approach will work best with the gathered qualitative data is a decision only the principal investigator can make, and comfort and experience will enter into that decision. Keep in mind that qualitative data can be reduced to numbers; numbers cannot be exploded to capture the words from which they came.
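To make that concrete, here is a minimal sketch, in Python, of a first pass over a focus group transcript: a keyword-in-context (KWIC) listing and a crude word count. The transcript snippet and keyword are hypothetical, and this is only a starting point; the systematic approaches listed above go much further.

```python
import re
from collections import Counter


def kwic(text, keyword, window=5):
    """Return each occurrence of `keyword` with a few words of context
    on either side (a simple keyword-in-context listing)."""
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if re.sub(r"\W", "", w).lower() == keyword.lower():
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"...{left} [{w}] {right}...")
    return hits


def word_counts(text, stopwords=frozenset({"the", "a", "and", "to", "of", "we", "our"})):
    """Crude frequency count of words, ignoring a few common stopwords."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)


# Hypothetical snippet of a focus group transcript.
transcript = (
    "We learned a lot about soil testing. The workshop helped us test our soil, "
    "and the soil results changed how we fertilize. We want more workshops."
)

for line in kwic(transcript, "soil"):
    print(line)
print(word_counts(transcript).most_common(5))
```

Even a simple pass like this surfaces the contexts in which a key idea appears and which words recur; deciding what those patterns mean is still the evaluator’s job.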