Recently, I read a Washington Post article on innovation. The WP interviewed Calestous Juma (see below), author of the July 2016 book, “Innovation and Its Enemies: Why People Resist New Technologies.” The book was published by Oxford University Press (prestigious, to be sure). Priced at $29.95 plus an estimated $5.50 shipping and handling, it sounds like a good purchase. There is quite a bit of information about the book and the author on the Oxford University Press site. This prompted me to think about what has changed in evaluation (not just technology) over the last 30+ years. First, though, I want to talk about the article.

Article by Juma.

Calestous Juma (Courtesy of Harvard)

Juma says that “people don’t fear innovation simply because the technology is new, but because innovation often means losing a piece of their identity or lifestyle.” He goes on to say that “Innovation can also separate people from nature or their sense of purpose.” He argues that these two things are fundamental to the human experience. I have talked about sense of purpose previously. I wonder if nature is part of purpose, or if a sense of purpose comes from a person’s nature?

From Social Networks: What Maslow Misses | Psychology Today – via @mslogophiliac

“Humans are social animals for good reason. Without collaboration, there is no survival. It was not possible to defeat a woolly mammoth, build a secure structure, or care for children while hunting without a team effort. It’s more true now than then. Our reliance on each other grows as societies become more complex, interconnected, and specialized. Connection is a prerequisite for survival, physically and emotionally.”

This statement, which I found on Harold Jarche’s blog, applies to evaluation as much as it applies to the example provided by Psychology Today.

Evaluation is a collaborative effort: a team effort, a social effort. Without collaboration, evaluation lacks much. I’m not sure that survival depends on evaluation and collaborative effort; perhaps it does. The evaluator may know all about evaluation and yet not be able to solve the problem presented by the client, because the evaluator doesn’t know about the topic being evaluated. The evaluator may know about something similar to what the client needs and yet not know about the specific problem. Let me give you an example.

I’ve done a lot of evaluation in the natural resources area and, as a result, I’ve learned much about various natural resource topics, including horticulture, plant science, crop science, and marine science. I do not know much about potatoes. A while back, a colleague called me and asked if I could/would serve as the evaluator on a zebra chip project. Before I said “Sure,” I asked about zebra chip. Apparently, it is a potato disease, transmitted by bacteria-carrying psyllids, that is causing much economic devastation among growers. It shows up best when the potatoes are made into chips (hence the name). To me, an affected chip isn’t particularly striped like the animal that lends the disease its name, yet it doesn’t look like the potato chips I’m used to seeing. I’m told that there is an unpleasant flavor to the chips as well. I knew a lot about growing things and had worked with potato growers before, but I did not know about this disease.

So I said sure, only to discover that I had 11 lines in which to write a cogent evaluation section for the work that Extension will be doing (if the grant is funded). If it is funded, it will be a five-year effort--a continuation, actually, which brings me full circle: a collaboration of multiple universities, multiple disciplines, multiple investigators. So what could I say cogently in 11 lines? I suggested that looking at intention and confidence would be appropriate, because we (remember, I said “Sure”) would not be able to measure actual behavior change. And to overcome the psyllids and eradicate this problem (not unlike the spotted wing drosophila that is affecting the soft fruits of the NW, specifically blueberries), we would need to get as close to behavior change as possible once the teaching has occurred. How can social media be used here? Good question--something to explore. At what level of Maslow’s hierarchy is this collaboration? Survival, sure. Somehow I don’t think Maslow was focused on economic survival.

my two cents.






Many of you have numerous lists for summer reading (NY Times, NPR, Goodreads, Amazon, others…). My question is: what are you reading to further your knowledge about evaluation? Perhaps you are; perhaps you’re not. So I’m going to give you one more list 🙂 …yes, it is evaluative.

If you want something light: Regression to the Mean by Ernest R. House. It is a novel. It is about evaluation. It explains what evaluators do from a political perspective.

If you want something qualitative: Qualitative Data Analysis by Matthew B. Miles, A. Michael Huberman, and Johnny Saldaña. It is the new 3rd edition, which Sage (the publisher) commissioned. A good thing, too, as both Miles and Huberman are no longer able to do a revision. My new go-to book.

If you want something on needs assessment: Bridging the Gap Between Asset/Capacity Building and Needs Assessment by James W. Altschuld. Most needs assessments start with what is lacking (i.e., needed); this book proposes that an assessment start with what is present (assets) and build from there, meeting needs in the process.

If you want something on higher education: College (Un)bound by Jeffrey Selingo. The state of higher education and some viable alternatives, from a contributing editor at the Chronicle of Higher Education. Yes, it is evaluative.

Most of these I’ve mentioned before. I’ve read the above. I recommend them.


Last week, a colleague and I led two 20-person cohorts in a two-day evaluation capacity building event. This activity was the launch (without the benefit of champagne) of a 17-month experience during which the participants will learn new evaluation skills and then be able to serve as resources for their colleagues in their states. This training is the brainchild of the Extension Western Region Program Leaders group. They believe that this approach will be economical and provide significant, substantive information about evaluation to the participants.

What Jim and I did last week was work to provide, we hoped, a common introduction to evaluation. The event was not meant to disseminate the vast array of evaluation information; we wanted everyone to have a similar starting place. It was not a train-the-trainer event, so common in Extension. The participants were at different places in their experience and understanding of program evaluation--some were seasoned, long-time Extension faculty, some were mid-career, and some were brand new to Extension and the use of evaluation. All were Extension faculty from western states. And although evaluation can involve programs, policies, personnel, products, performance, processes, etc., these two days focused on program evaluation.


It occurred to me that it would be useful to talk about what evaluation capacity building (ECB) is and what resources are available to build capacity. Perhaps the best place to start is with the Preskill and Russ-Eft book, Building Evaluation Capacity.

This volume is filled with summaries of evaluation points and activities to reinforce those points. Although it is a comprehensive resource, it covers key points briefly, and there are other resources that are valuable for understanding the field of capacity building. For example, Don Compton and his colleagues, Michael Baizerman and Stacey Stockdill, edited a New Directions for Evaluation volume (No. 93) that addresses the art, craft, and science of ECB. ECB is often viewed as a context-dependent system of processes and practices that helps instill quality evaluation skills in an organization and its members. The long-term outcome of any ECB effort--and our long-term goal--is conducting rigorous evaluations as part of routine practice.


Although not exhaustive, below are some ECB resources and some general evaluation resources (some of my favorites, to be sure).


ECB resources:

Preskill, H., & Russ-Eft, D. (2005). Building evaluation capacity. Thousand Oaks, CA: Sage.

Compton, D. W., Baizerman, M., & Stockdill, S. H. (Eds.). (2002). The art, craft, and science of evaluation capacity building. New Directions for Evaluation, No. 93. San Francisco: Jossey-Bass.

Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443-459.

General evaluation resources:

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (2012). Essentials of utilization-focused evaluation. Thousand Oaks, CA: Sage.

Hi! It’s Tuesday, again.

I was thinking: if evaluation is an everyday activity, why does it FEEL so monumental--you know--overwhelming, daunting, even aversive?

I can think of several reasons for that feeling:

  • You don’t know how.
  • You don’t want to (do evaluation).
  • You have too much else to do.
  • You don’t like to (do evaluation).
  • Evaluation isn’t important.
  • Evaluation limits your passion for your program.

All those are good reasons. Yet, in today’s world, you have to show that your programs are making a difference. You have to provide evidence of impact. To do that (show impact, make a difference), you must evaluate your program.

How do you make your evaluation manageable? How do you make it an everyday activity? Here are several ways.

Utilization-Focused Evaluation

  • Set boundaries around what you evaluate.
  • Limit the questions to those you must answer. Michael Patton says to collect only data you are going to use, then use it. (To read more about evaluation and use, see Patton’s book, Utilization-Focused Evaluation.)
  • Evaluate key programs, not every program you conduct.
  • Identify where your passion lies and focus your evaluation efforts there.
  • Start small. You probably won’t be able to demonstrate that your program ensured world peace; you will, however, be able to show that your target audience has made an important change in the desired direction.

We can talk more about the how, later. Now it is enough to know that evaluation isn’t as monumental as you thought.