As with a lot of folks who are posting to Eval Central, I got back Monday from the Twin Cities and AEA’s annual conference, Evaluation ’12.

I’ve been going to this conference since 1981, when Bob Ingle decided that the Evaluation Research Society and the Evaluation Network needed to pool their resources and hold one conference, Evaluation ’81.  I was a graduate student.  That conference changed my life.  This was my professional home.  I loved going and being there.  I was energized, excited, delighted by what I learned, saw, and did.

Reflecting back over the 30+ years and all that has happened has provided me with insights and new awareness.  This year was a bittersweet experience for me, for many reasons–not the least of them being Susan Kistler’s resignation from her role as AEA Executive Director. I remember meeting Susan and her daughter Emily in Chicago when Susan was in graduate school and Emily was three.  Susan has helped make AEA what it is today.  I will miss seeing her at the annual meeting.  Because she lives on the east coast, I will rarely see her in person now.  There are fewer and fewer long-time colleagues and friends at this meeting.  And even though a very wise woman said to me, “Make younger friends,” making younger friends isn’t easy when you are an old person (aka OWG) like me and see these new folks only once a year.

I will probably continue going until my youngest daughter, now a junior in high school, finishes college. What I bring home is less this year than last, and less last year than the year before.  It is the people, certainly. I also find that the content challenges me less and less.  Not that the sessions are not interesting or well presented–they are.  I’m just not excited, not energized, when I get back to the office. To me a conference is a “good” conference (ever the evaluator) if I met three new people with whom I want to maintain contact, spent time with three long-time friends/colleagues, and brought home three new ideas. This year: no new people, yes three long-time friends, and only one new idea.  4/9. I was delighted to hear that the younger folks were closer to 9/9. Maybe I’m jaded.
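
Ever the evaluator, I realize the rubric can even be written down. Here is a minimal sketch in Python; purely illustrative, and the function name and the cap of three per category are my own framing, not a standard instrument.

```python
# Illustrative sketch of my 3-3-3 conference rubric (names and caps are my own framing).
def score_conference(new_contacts: int, old_friends: int, new_ideas: int) -> str:
    """Score a conference out of 9: three new people, three long-time friends, three new ideas."""
    # Cap each component at 3 so one strong category can't mask a lean year.
    score = min(new_contacts, 3) + min(old_friends, 3) + min(new_ideas, 3)
    return f"{score}/9"

# Evaluation '12, by my count: no new people, three long-time friends, one new idea.
print(score_conference(new_contacts=0, old_friends=3, new_ideas=1))  # -> 4/9
```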

The professional development session I attended (From Metaphor to Model) provided me with a visual for conceptualizing a complex program I’ll be evaluating.  The plenary I attended with Oran Hesterman from the Fair Food Network in Detroit demonstrated how evaluative tools and good questions support food sustainability.  What I found interesting was that during the question/comment session following the plenary, all the questions and comments were about food sustainability, NOT evaluation, even though Ricardo Millett asked really targeted evaluative questions.  Food sustainability seems to be a really important topic–talk about a complex, messy system.  I also attended a couple of other sessions that really stood out, and some that didn’t.  Is attending this meeting important, even in my jaded view?  Yes.  It is how evaluators grow and change, even when change is not the goal.  The only constant is change.  AEA provides professional development in its pre- and post-conference sessions as well as its plenary and concurrent sessions.  Evaluators need that.

You can control four things: what you say, what you do, and how you act and react (both subsets of what you do).  So when is the best action a quick reaction, and when is it waiting (because waiting is an act of faith)?  And how is this an evaluation question?

The original post was in reference to an email response going astray (go see what his suggestions were); it is not likely that emails regarding an evaluation report will fall into that category.  Though not likely, it is possible.  Say you send the report to someone who doesn’t want, need, or care about the report and is really not a stakeholder, just a name on a distribution list you copied from a previous message.  And oops, you goofed.  Yet the report is important; some people who needed, wanted, or cared about it got it.  You need to correct for the others.  You can remedy the situation by following his suggestion: “Alert senders right away when you (send or) receive sensitive (or not so sensitive) emails not intended for you, so the sender can implement serious damage control.” (Parentheticals added.)

Email seems to be a topic of conversation this week.  A blog I follow regularly (Harold Jarche) cited two studies about the amount of time spent reading and dealing with email.  According to one of the studies he cites (in the Atlantic Monthly), the average worker spends 28% of the workday reading email; in a 40-hour week, that is more than 11 hours.  Think of all the unnecessary email you get THAT YOU READ.  How is that cluttering your life?  How is that decreasing your efficiency when it comes to the evaluation work you do?  Email is most of my work these days; it used to be that the phone and face-to-face meetings took up a lot of my time…not so much today.  I even use social media for capacity building; my browser is always open.  So between email and the web, a lot of time is spent intimate with technology.

The last thought I had for this week was about the use of words–not unrelated to email–especially as it relates to evaluation.  Evaluation is often described in terms of efficacy (producing the desired effect), effectiveness (producing the desired effect in specific conditions), efficiency (producing the desired effect in specific conditions with available resources), and fidelity (following the plan).  I wonder, if someone were to do an evaluation of what we do, would we be able to say we are effective and efficient, let alone faithful to the plan?
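
To keep those four words straight, it can help to lay them out side by side.  A minimal sketch in Python; the definitions follow the paragraph above, while the guiding questions are my own illustrative phrasing, not a standard taxonomy.

```python
# The four criteria from the paragraph above, each adding a constraint to the last.
# Definitions follow the post; the guiding questions are my own illustrative phrasing.
CRITERIA = {
    "efficacy": ("producing the desired effect",
                 "Did the program work at all?"),
    "effectiveness": ("producing the desired effect in specific conditions",
                      "Did it work here, with this audience?"),
    "efficiency": ("producing the desired effect in specific conditions "
                   "with available resources",
                   "Did it work here, within the resources we had?"),
    "fidelity": ("following the plan",
                 "Did we deliver the program as designed?"),
}

for name, (definition, question) in CRITERIA.items():
    print(f"{name}: {definition}\n  ask: {question}")
```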

Yesterday was the 236th anniversary of US independence from England (and George III, in his infinite wisdom, is said to have remarked that nothing important happened that day…right…oh, all right, how WOULD he have known anything had happened several thousand miles away?).  And yes, I saw fireworks.  More importantly, though, I thought a lot about what independence means.  And then, because I’m posting here, what does independence mean for evaluation and evaluators?

In thinking about independence, I am reminded of intercultural communication and the contrast between individualism and collectivism.  To make this distinction clear, think “I-centered” vs. “we-centered”.  Think western Europe and the US vs. Asia and Japan.  To me, individualism is reflective of independence, and collectivism is reflective of networks (systems, if you will).  When we talk about independence, the words “freedom” and “separate” and “unattached” are bandied about, and that certainly applies to the anniversary celebrated yesterday.  Yet when I contrast it with collectivism and think of the words that are often used in that context (“interdependence”, “group”, “collaboration”), I become aware of other concepts.

Like, what is missing when we are independent?  What have we lost by being independent?  What are we avoiding by being independent?  Think “The Little Red Hen”.  And conversely, what have we gained by being collective, by collaborating, by connecting?  Think Spock and “the good of the many”.

AEA has a topical interest group (TIG) for Independent Consulting.  This TIG is home to those evaluators who function outside of an institution and have formed their own organizations; who work independently, on contract.  In their mission statement, they purport to “Foster a community of independent evaluators…”  So by being separate, are they missing community and need to foster that aspect?  They insist that they are “…great at networking”, which doesn’t sound very independent; it sounds almost collective.  A small example, and probably not the best.

I think about the way the western world is today: other than your children and/or spouse/significant other, are you connected to a community?  A network?  A group?  Not just in membership (like at a church or club); really connected (like in an extended family, whether of the heart or of the blood)?  Although the Independent Consulting TIG members say they are great at networking, and some even work in groups, are they connected?  (Social media doesn’t count.)  Is the “I” identity a product of being independent?  It certainly is a characteristic of individualism.  Can you measure the value, merit, or worth of the work you do by the level of independence you possess?  Do internal evaluators garner all the benefits of being connected?  (As an internal evaluator, I’m pretty independent, even though there is a critical mass of evaluators where I work.)

Although being an independent evaluator has its benefits (less bias, a different perspective; do I dare say, more objective?), are the distance created, the competition for position, and the risk taking worth the loss of the relational harmony that can accompany relationships? Is the US better off as its own country?  I’d say probably.  My musings only…what do you think?

Once again, it is the whole ‘balance’ thing…(we) live in ordinary life and that ordinary life is really the only life we have…I’ll take it. It has some great moments…

These wise words come from the insights of Buddy Stallings, an Episcopal priest in charge of a large parish in a large city in the US.  True, I took them out of context; the important thing is that they resonated with me from an evaluation perspective.

Too often, faculty and colleagues come to me and wonder what the impact is of this or that program.  I wonder: What do they mean?  What do they want to know? Are they only using words they have heard–the buzzwords?  I ponder how this fits into their ordinary life. Or are they outside their ordinary life, pretending in a foreign country?

A faculty member at Oregon State University equated history to a foreign country.  I was put in mind that evaluation is a foreign country to many (most) people, even though everyone evaluates every day, whether they know it or not.  Individuals visit that country because they are required to: to gather information; to report what they discovered.  They do this without any special preparation.  Yet visiting a foreign country entails preparation (at least it does for me).  A study of customs, mores, foods, language, behavior, and tools (I’m sure I’m missing something important in this list) is needed; not just necessary, but mandatory.  Because although the foreign country may be exotic and unique and novel to you, it is ordinary life for everyone who lives there.  The same is true for evaluation.  There are customs; students are socialized to think and act in a certain way.  Mores are constantly being called into question; language, behaviors, and tools not known to you in your ordinary life present themselves. You are constantly presented with opportunities to be outside your ordinary life.  Yet I wonder: what are you missing by not seeing the ordinary, by pretending that it is extraordinary, by not doing the preparation to make evaluation part of your ordinary life, something you do without thinking?

So I ask you: What preparation have you done to visit this foreign country called EVALUATION?  What are you currently doing to increase your understanding of this country?  How does this visit change your ordinary life, or can you get those great moments by recognizing that this is truly the only life you have?  And what are you really asking when you ask, What are the impacts?

All of this has significant implications for capacity building.

A colleague made a point last week that I want to bring to your attention.  The comment made it clear that when planning a program it is important to think about how to determine what difference the program is making at the beginning of the program, not at the end.

Over the last two years, I’ve alluded to the fact that retrofitting evaluation, while possible, is not ideal.  Granted, sometimes programs are already in place and it is important to report the difference the program made, so evaluation needs to be retrofitted.  Sometimes programs have been in place a long time and need to show long-term outcomes (even if they are called impacts).  In cases like that, yes, evaluation needs to be retrofitted.  What this colleague was talking about was a NEW program, one that has never been presented before.

There are lots of ways to get the answer to the question, “What difference is this program making?”  We are not going to talk about methods today, though.  We are going to talk about programs and how programs relate to evaluation.

When I start to talk about evaluation with a faculty member, I ask what they expect to happen.  If they understand the program theory, they can describe what outcome is expected.  This is when I pull out the model below.

[Figure: logic model linking inputs → outputs → outcomes]

This model shows the logical linkage between what is expected (outcomes) and what was done with whom (outputs) using what resources (inputs), if you follow the arrows right to left.  If, however, you follow the arrows left to right, you see what resources you need to conduct what activities with whom to expect what outcomes.  Each part of the model (situation, inputs, outputs, outcomes) has an evaluative activity that accompanies it.

In the situation, a needs assessment is the evaluative activity; here you are determining the gap between what is and what should be.  With the resources, you can do a variety of activities: specifically, you can determine whether you had enough; you can also do a cost analysis (there are several kinds), or a process evaluation.  In outputs, you can determine whether you did what you said you would do, in the time you said you would do it, and with the target audience; I have always called this a progress evaluation.  In outcomes, you actually determine what difference the program made in the lives of the target audience; for teaching purposes, I have called this a product evaluation.  Here, you want to know whether what they know is different, what they do is different, and whether the conditions in which they work, live, and play are different.  You do that by thinking first about what the program will do.

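For those who like to see the pairings spelled out, here is one way to write down the component-to-activity mapping just described.  A minimal sketch in Python; the component names follow the model above, while the guiding questions are my own shorthand, not Ellen’s.

```python
# Each logic-model component paired with the evaluative activity described above.
# The guiding questions are my own shorthand, purely illustrative.
LOGIC_MODEL = [
    ("situation", "needs assessment",
     "What is the gap between what is and what should be?"),
    ("inputs (resources)", "resource review / cost analysis",
     "Did we have enough, and at what cost?"),
    ("outputs (activities, participation)", "process / progress evaluation",
     "Did we do what we said we would, on time, with the target audience?"),
    ("outcomes", "product evaluation",
     "What difference did the program make in what people know, do,"
     " and the conditions in which they live?"),
]

# Read left to right to plan from resources toward expected outcomes;
# read right to left to link an expected outcome back to activities and resources.
for component, activity, question in LOGIC_MODEL:
    print(f"{component:38} -> {activity:30} {question}")
```
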
Now this is all very well and good–if you have some idea about what the specific and measurable outcomes are.  Sometimes you won’t know this, because the program has never been done before in quite the way you are doing it OR because the program is developing as you provide it.  (I’m sure there is a third reason–there always is–only I can’t think of one as I type.)

This is why planning evaluation when you are planning the program is important.

Ellen Taylor-Powell, UWEX Evaluation Specialist Emeritus, presented via webinar from Rome to the WECT (say “west”) cohorts today.  She talked about program planning and logic modeling.  The logic model format that Ellen developed was picked up by USDA (now NIFA) and disseminated across Extension.  That dissemination had an amazing effect on Extension, so much so that most Extension faculty know the format and can use it for their programs.

Ellen went further today than the resources located through hyperlinks on the UWEX website.  She cited the work of Sue Funnell and Patricia J. Rogers, Purposeful Program Theory: Effective Use of Theories of Change and Logic Models, published in March 2011.  Here is what the publisher (Jossey-Bass, an imprint of Wiley) says:

Between good intentions and great results lies a program theory—not just a list of tasks but a vision of what needs to happen, and how. Now widely used in government and not-for-profit organizations, program theory provides a coherent picture of how change occurs and how to improve performance. Purposeful Program Theory shows how to develop, represent, and use program theory thoughtfully and strategically to suit your particular situation, drawing on the fifty-year history of program theory and the authors’ experiences over more than twenty-five years.

Two reviewers whom I have mentioned before, Michael Quinn Patton and E. Jane Davidson, say the following:

“From needs assessment to intervention design, from implementation to outcomes evaluation, from policy formulation to policy execution and evaluation, program theory is paramount. But until now no book has examined these multiple uses of program theory in a comprehensive, understandable, and integrated way. This promises to be a breakthrough book, valuable to practitioners, program designers, evaluators, policy analysts, funders, and scholars who care about understanding why an intervention works or doesn’t work.” —Michael Quinn Patton, author, Utilization-Focused Evaluation

“Finally, the definitive guide to evaluation using program theory! Far from the narrow ‘one true way’ approaches to program theory, this book provides numerous practical options for applying program theory to fulfill different purposes and constraints, and guides the reader through the sound critical thinking required to select from among the options. The tour de force of the history and use of program theory is a truly global view, with examples from around the world and across the full range of content domains. A must-have for any serious evaluator.” —E. Jane Davidson, PhD, Real Evaluation Ltd.

Jane is the author of Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation, published by Sage.  This book “…provides a step-by-step guide for doing a real evaluation.  It focuses on the main kinds of ‘big picture’ questions that evaluators usually need to answer, and how the nature of such questions is linked to evaluation methodology choices.”  And although Ellen didn’t specifically mention this book, it is a worthwhile resource for nascent evaluators.

Two other resources were mentioned today.  One was Jonny Morell’s book, Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable, published by Guilford Press.  Ellen also mentioned John Mayne and his work on contribution analysis.  A quick web search provided this reference: Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. ILAC Brief No. 16. Rome, Italy: Institutional Learning and Change (ILAC) Initiative.  I’ll talk more about contribution analysis next week in TIMELY TOPICS.

If those of you who listened to Ellen remember other sources that she mentioned, let me know and I’ll put them here next week.