Last week I spoke about thinking like an evaluator by identifying the evaluative questions that you face daily.  They are endless…Yet doing this is hard, like any new behavior.  Remember when you first learned to ride a bicycle?  You had to practice before you got your balance.  You had to practice a lot.  The same is true for identifying the evaluative questions you face daily.

So you practice, maybe.  You try to think evaluatively.  Something happens along the way; or perhaps you don’t even get to thinking about those evaluative questions.  That something that interferes with thinking or doing is resistance.  Resistance is a Freudian concept: directly or indirectly, you refuse to change your behavior.  You don’t look for evaluative questions.  You don’t articulate the criteria for value.  Resistance usually accompanies anxiety about a new and strange situation.  A lot of folks are anxious about evaluation–they personalize the process.  Yet unless it is personnel evaluation, it is never about you.  It is all about the program and the participants in that program.

What is interesting (to me at least) is that there is resistance at many different levels–the evaluator, the participant, the stakeholder (a level which may include the other two as well).  Resistance may be active or passive.  Resistance may be overt or covert.  I’ve often viewed resistance as a 2×2 diagram: the rows are active or passive; the columns are overt or covert.  Combining the labels, resistance can be active overt, active covert, passive overt, or passive covert.  Now I know this is an artificial and socially constructed idea and may be totally erroneous.  Still, this approach helps me make sense of what I see when I go to meetings to help a content team develop their program and try to introduce (or not) evaluation in the process.  I imagine you have seen examples of these types of resistance–maybe you’ve even demonstrated them.  If so, then you are in good company–most people have demonstrated all of these types of resistance.

I bring up the topic of resistance now for two reasons.

1) Because I’ve just started a 17-month-long evaluation capacity building program with 38 participants.  Some of those participants were there because they were told to be there, and they let me know their feelings about participating–what kind of resistance could they demonstrate?  Some were there because they are curious and want to know–what kind of resistance could that be?  Some just sat there–what kind of resistance could that be?  Some did anything else while sitting in the program–what kind of resistance could that be? and

2) Because I will be delivering a paper on resistance and evaluation at the annual American Evaluation Association meeting in November.  This is helping me organize my thoughts.

I would welcome your thoughts on this complex topic.

I was talking with a colleague about evaluation capacity building (see last week’s post) and the question was raised about thinking like an evaluator.  That got me thinking about the socialization of professions and what has to happen to build a critical mass of like-minded people.

Certainly, preparatory programs in academia, conducted by experts–people who have worked in the field a long time, or at least longer than you have–start the process.  Professional development helps–you know, attending meetings where evaluators meet (like the upcoming AEA conference, the U.S. regional affiliates [there are many, and they have conferences and meetings, too], and international organizations [increasing in number, which also host conferences and professional development sessions]–let me know if you want to know more about these opportunities).  Reading new and timely literature on evaluation provides insights into the language.  AND looking at the evaluative questions in everyday activities.  Questions such as:  What criteria?  What standards?  Which values?  What worth?  Which decisions?

The socialization of evaluators happens because people who are interested in being evaluators look for the evaluative questions in everything they do.  Sometimes, looking for the evaluative question is easy and second nature–like choosing a can of corn at the grocery store; sometimes it is hard and demands collaboration–like deciding on the effectiveness of an educational program.

My recommendation is to start with easy things–corn, chocolate chip cookies, wine, tomatoes; then move to harder things with more variables–what to wear when and where, or whether to include one group or another.  The choices you make will all depend upon what criteria are set, what standards have been agreed upon, and what value you place on the outcome or what decision you make.

The socialization process is like a puzzle, something that takes a while to complete, something that is different for everyone, yet ultimately the same.  The socialization is not unlike evaluation…pieces fitting together–criteria, standards, values, decisions.  Asking the evaluative questions is an ongoing, fluid process…it will become second nature with practice.

Last week, a colleague and I led two 20-person cohorts in a two-day evaluation capacity building event.  This activity was the launch (without the benefit of champagne) of a 17-month-long experience in which the participants will learn new evaluation skills and then be able to serve as resources for their colleagues in their states.  This training is the brainchild of the Extension Western Region Program Leaders group.  They believe that this approach will be economical and provide significant substantive information about evaluation to the participants.

What Jim and I did last week was work to provide, we hope, a common introduction to evaluation.  The event was not meant to disseminate the vast array of evaluation information; we wanted everyone to have a similar starting place.  It was not a train-the-trainer event, so common in Extension.  The participants were at different places in their experience and understanding of program evaluation–some were seasoned, long-time Extension faculty, some were mid-career, some were brand new to Extension and the use of evaluation.  All were Extension faculty from western states.  And although evaluation can involve programs, policies, personnel, products, performance, processes, etc., these two days focused on program evaluation.


It occurred to me that it would be useful to talk about what evaluation capacity building (ECB) is and what resources are available to build capacity.  Perhaps the best place to start is with the Preskill and Russ-Eft book, Building Evaluation Capacity.

This volume is filled with summaries of evaluation points and activities to reinforce those points.  Although it is a comprehensive resource, it covers key points briefly, and there are other resources that are valuable for understanding the field of capacity building.  For example, Don Compton and his colleagues, Michael Baizerman and Stacey Stockdill, edited a New Directions for Evaluation volume (No. 93) that addresses the art, craft, and science of ECB.  ECB is often viewed as a context-dependent system of processes and practices that helps instill quality evaluation skills in an organization and its members.  The long-term outcome of any ECB effort–and our long-term goal–is conducting rigorous evaluations as a part of routine practice.


Although not exhaustive, below are some ECB resources and some general evaluation resources (some of my favorites, to be sure).


ECB resources:

Preskill, H., & Russ-Eft, D. (2005).  Building evaluation capacity. Thousand Oaks, CA: Sage.

Compton, D. W., Baizerman, M., & Stockdill, S. H. (Eds.).  (2002).  The art, craft, and science of evaluation capacity building. New Directions for Evaluation, No. 93.  San Francisco, CA: Jossey-Bass.

Preskill, H., & Boyle, S. (2008).  A multidisciplinary model of evaluation capacity building.  American Journal of Evaluation, 29(4), 443-459.

General evaluation resources:

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011).  Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

Scriven, M. (1991).  Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Patton, M. Q. (2008).  Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (2012).  Essentials of utilization-focused evaluation. Thousand Oaks, CA: Sage.

A colleague asked me what I considered an output in a statewide program we were discussing.  This is a really good example of assumptions and how they can blindside an individual–in this case, me.  Once I (figuratively) picked myself up, I proceeded to explain how this terminology applied to the program under discussion.  Once the meeting concluded, I realized that perhaps a bit of a refresher was in order.  Even the most seasoned evaluators can benefit from a reminder every so often.


So OK–inputs, outputs, outcomes.

As I’ve mentioned before, Ellen Taylor-Powell, former UWEX evaluation specialist, has a marvelous tutorial on logic modeling.  I recommend you go there for your own refresher.  What I offer you here is a brief (very) overview of these terms.

Logic models, whether linear or circular, are composed of various focus points.  Those focus points include (in addition to those mentioned in the title of this post) the situation, assumptions, and external factors.  Simply put, the situation is what is going on–the priorities, the needs, the problems that led to the program you are conducting–that is, program with a small “p” (we can talk about sub- and supra-models later).

Inputs are those resources you need to conduct the program.  Typically, they are lumped into personnel, time, money, venue, and equipment.  Personnel covers staff, volunteers, partners, and any stakeholder.  Time is not just your time–it is also the time needed for implementation, evaluation, analysis, and reporting.  Money speaks for itself.  Venue is where the program will be held.  Equipment is what stuff you will need–technology, materials, gear, etc.

Outputs are often classified into two parts–first, the participants (or target audience), and second, the activities that are conducted.  Typically (although not always), those activities are counted and are called bean counts.  In the example that started this post, we would be counting the number of students who graduated high school; the number of students who matriculated to college (either 2- or 4-year); the number of students who transferred from 2-year to 4-year colleges; the number of students who completed college in 2 or 4 years; etc.  This bean count could also be the number of classes offered; the number of brochures distributed; the number of participants in the class; the number of (fill in the blank).  Outputs are necessary but not sufficient to determine if a program is effective.  The field of evaluation started with determining bean counts and satisfaction.

Outcomes can be categorized as short term, medium/intermediate term, or long term.  Long-term outcomes are often called impacts.  (There are those in the field who would classify impacts as something separate from an outcome–a discussion for another day.)  Whatever you choose to call the effects of your program, be consistent–don’t use the terms interchangeably; it confuses the reader.  What you are looking for in an outcome is change–in learning, in behavior, in conditions.  This change is measured in the target audience–individuals, groups, communities, etc.
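For readers who think in code, the pieces above can be sketched as a simple data structure.  This is purely my illustration–every name and number here is hypothetical, not part of any standard logic-model toolkit–but it shows how situation, inputs, outputs, and outcomes fit together.

```python
from dataclasses import dataclass

# A minimal, hypothetical sketch of logic-model components.
# All class names, field names, and example values are illustrative.

@dataclass
class Inputs:
    personnel: list   # staff, volunteers, partners, stakeholders
    time: str         # time for implementation, evaluation, analysis, reporting
    money: float
    venue: str
    equipment: list   # technology, materials, gear

@dataclass
class Outputs:
    participants: int  # the target audience reached
    activities: dict   # "bean counts": classes offered, brochures distributed, etc.

@dataclass
class Outcomes:
    short_term: list   # changes in learning
    medium_term: list  # changes in behavior
    long_term: list    # changes in conditions (often called impacts)

@dataclass
class LogicModel:
    situation: str     # the needs and problems that led to the program
    inputs: Inputs
    outputs: Outputs
    outcomes: Outcomes

# A made-up college-readiness example, echoing the one in this post
model = LogicModel(
    situation="Few local high school graduates matriculate to college",
    inputs=Inputs(["staff", "volunteers"], "18 months", 50000.0,
                  "county office", ["laptops", "brochures"]),
    outputs=Outputs(participants=120,
                    activities={"classes_offered": 12,
                                "brochures_distributed": 400}),
    outcomes=Outcomes(short_term=["students learn application steps"],
                      medium_term=["students apply to 2- or 4-year colleges"],
                      long_term=["higher college completion in the county"]),
)

# Outputs are the counts; outcomes are the changes
print(model.outputs.activities["classes_offered"])  # prints 12
print(model.outcomes.long_term[0])
```

Notice that the outputs hold only counts (necessary but not sufficient), while the outcomes hold the changes in learning, behavior, and conditions–the same distinction the paragraphs above draw.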

I’ll talk about assumptions and external factors another day.  Have a wonderful holiday weekend…the last vestiges of summer–think tomatoes, corn-on-the-cob, state fair, and a tall cool drink.