An important question that evaluators ask is, “What difference is this program making?” followed quickly by, “How do you know?”

Recently, I happened on a blog called {grow}, where the author, Mark Schaefer, had a post called, “Did this blog make a difference?”  Since this is a question I, as an evaluator, am always asking, I jumped on the page.  Mr. Schaefer is a marketing expert, and he describes his blog this way: “You’re in marketing for one reason: Grow. Grow your company, reputation, customers, impact, profits. Grow yourself. This is a community that will help. It will stretch your mind, connect you to fascinating people, and provide some fun along the way.”  So I wondered how relevant this blog would be to me and to other evaluators, whether they blog or not.

Mr. Schaefer is taking stock of his blog–a good thing to do for a blog that has been running for a while.  Although he lists four innovations, he asks the reader to “…be the judge if it made a difference in your life, your outlook, and your business.”  The four innovations are:

  1. Paid contributing columnists.  He actually paid the folks who contributed to his blog–not something those of us in Extension can do.
  2. {growtoons}. Cartoons designed specifically for the blog that “…adds an element of fun and unique social media commentary.”  Hmmm…
  3. New perspectives. He showcased fresh, deserving voices–some that he agreed with and some that he did not.  A possibility.
  4. Video. He did many video blogs and that gave him the opportunity to “…shine the light on some incredible people…”  He interviews folks and posts the short video.  Yet another possibility.

His approach seems really different from what I do.  Maybe it is the content; maybe it is the cohort; maybe it is something else.  Maybe there is something to be learned from what he does.  Maybe this blog is making a difference.  Only I don’t know.  So, I take a cue from Mr. Schaefer and ask you to judge whether it has made a difference in what you do–then let me know.  I’ve embedded a link to a quick survey that will NOT link to you nor in any way identify you.  I will only be using the findings for program improvement.  Please let me know.  Click here to link to the survey.

 

Oh, and I won’t be posting next week–spring break and I’ll be gone.

 

A colleague asked an interesting question, one that I am often asked as an evaluation specialist:  “without a control group is it possible to show that the intervention had anything to do with a skill increase?”  The answer to the question “Do I need a control group to do this evaluation?” is, “It all depends.”

It depends on what question you are asking.  Are you testing a hypothesis–a question posed in a null form of no difference?  Or answering an evaluative question–what difference was made?  The methodology you use depends on the question.  If you want to know how effective or efficient a program (aka intervention) is, you can determine that without a control group.  Campbell and Stanley, in their now well-read 1963 volume, Experimental and Quasi-Experimental Designs for Research, discuss quasi-experimental designs that do not use a control group.  Yes, there are threats to internal validity; yes, there are stronger designs; yes, the controls are not as rigorous as in a double-blind, crossover design (considered the gold standard by some groups).  We are talking here about evaluation, people, NOT research.  We are not asking questions of efficacy (research); rather, we want to know what difference is being made; we want to know the answer to “so what?”  Remember, the root of evaluation is value, not cause.
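To make that concrete, here is a minimal sketch of the kind of one-group pretest-posttest analysis Campbell and Stanley describe–no control group, just the same participants measured before and after the program.  All of the scores and names below are hypothetical, invented purely for illustration.

```python
# One-group pretest-posttest design (Campbell and Stanley): no control
# group, just the same participants measured before and after the program.
# All scores below are hypothetical, for illustration only.
from scipy import stats

pre_scores = [52, 61, 48, 70, 55, 63, 58, 49, 66, 54]    # skill before program
post_scores = [60, 68, 55, 74, 62, 70, 61, 57, 71, 59]   # skill after program

# Paired t-test: did participants' scores change from pre to post?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")

# A significant gain shows that change occurred; it does not rule out
# threats to internal validity (maturation, testing effects), but it does
# speak to the evaluative question: what difference was made?
```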

This is certainly a quandary–how to determine cause for the desired outcome.  John Mayne has recognized this quandary and has approached the question of attributing the outcome to the intervention through his use of contribution analysis.  In community-based work, like what Extension does, attributing cause is difficult at best.  Why?  Because there are factors which Extension cannot control, and identifying a control group may not be ethical, appropriate, or feasible.  Use something else that is ethical, appropriate, and feasible (see Campbell and Stanley).

Using a logic model to guide your work helps to defend your premise of “If I have these resources, then I can do these activities with these participants; if I do these activities with these participants, then I expect (because the literature says so–the research has already been done) that the participants will learn these things, do these things, change these conditions.”  The likelihood of achieving world peace with your intervention is low at best; the likelihood of changing something (learning, practices, conditions) if you have a defensible model (road map) is high.  Does that mean your program caused that change?  Probably not.  Can you take credit for the change?  Most definitely.

Last weekend, I was in Florida visiting my daughter at Eckerd College.  The College was offering an Environmental Film Festival and I had the good fortune to see Green Fire, a film about Aldo Leopold and the land ethic.  I had seen it at OSU and was impressed because it was not all doom and gloom; rather, it celebrated Aldo Leopold as one of the three leading early conservationists (the other two being John Muir and Henry David Thoreau).  Dr. Curt Meine, who narrates the film and is a conservation biologist, was leading the discussion again; I had heard him at OSU.  Early at the showing, I was able to chat with him about the film and its effects.  I asked him how he knew he was being effective.  His response was to tell me about the new memberships in the Foundation, the number of showings, and the size of the audience seeing the film.  Appropriate responses for my question.  What I really wanted to know was how he knew he was making a difference.  That is a different question, one which talks about change.  Change is what programs like Green Fire are all about.  It is what Aldo Leopold was all about (read A Sand County Almanac to understand Leopold’s position).

 

Change is what evaluation is all about.  But did I ask the right question?  How could I have phrased it differently to get at what change had occurred in the viewers of the film?  Did new memberships in the Foundation demonstrate change?  Knowing what question to ask is important for program planners as well as evaluators.  There are often multiple levels of questions that could be asked–individual, programmatic, organizational, regional, national, global.  Are they all equally important?  Do they provide a means for gathering pertinent data?  How are you going to use these data once you’ve gathered them?  How carefully do you think about the questions you ask when you craft your logic model?  When you draft a survey?  When you construct questions for focus groups?  Asking the right question will yield relevant answers.  It will show you what difference you’ve made in the lives of your target audience.

 

Oh, and if you haven’t seen the film, Green Fire, or read the book, A Sand County Almanac–I highly recommend them.

I regularly follow Harold Jarche’s blog.

Much of what he writes would not fall under the general topic of evaluation.  Yet his post for February 18 does.  It is titled “Why is learning and the sharing of information so important?”

I see that as intimately related to evaluation, especially given Michael Quinn Patton’s focus on use.  The way I see it, something can’t be used effectively unless one learns about it.  Oh, I know you can use just about anything for anything–and I am reminded of the adage that when you have a hammer, everything looks like a nail, even if it isn’t.

That is not the kind of use I’m talking about.

I’m talking about rational, logical, systematic use based on thoughtful inquiry, critical thinking, and problem solving.  I’m talking about making a difference because you have learned something new and valuable (remember the root of evaluation?).  In his post, Jarche cites David Johnston, the Governor General of Canada, and Johnston’s article recently published in The Globe and Mail, a Toronto newspaper.  What Johnston says makes sense.  Evaluators in this context are diplomats, making learning accessible and sharing knowledge.

Sharing knowledge is what statistics is all about.  If you think the field of statistics is boring, I urge you to check out the video called The Joy of Stats, presented by Swedish scholar Hans Rosling.  I think you will have a whole new appreciation of statistics and the knowledge that can be conveyed.  If you find Hans Rosling compelling (or even if you don’t), I urge you to check out his TED Talk.  It is an eye-opener.

I think he makes a compelling argument about learning and sharing information.  About making a difference.  That is what evaluation is all about.

 

 

I have a quandary.  Perhaps you have a solution.

I am the evaluator on a program where the funding agency wants clear, measurable, and specific outcomes.  (OK, you say) The funding agency program people were asked to answer the question, “What do you expect to happen as a result of the program?”

These folks responded with the programmatic equivalent of “world peace.”  I virtually rolled my eyes.  IMHO, there was no way that this program would end in world peace.  Not even an end to hunger (a necessary precursor to world peace).  After I suggested that perhaps that goal was unattainable given the resources and activities intended, they came out of the fantasy world in which they were living and said, realistically, “We don’t know, exactly.”  I probed further.  The sites (several) were all different; the implementation processes (also several) were all different; the resources were all different (depending on site); and the list goes on.  Oh, and the program was to be rolled out soon in another site without an evaluation of the previous sites.  BUT THEY WANTED CLEAR, MEASURABLE, AND SPECIFIC OUTCOMES.

What would you do in this situation?  (I know what I proposed–it got a lukewarm response.  I have an idea of what would work–although the approach was not mainstream evaluation and these were mainstream folks.)  So I turn to you, Readers.  What would you do?  Let me know.  PLEASE.

 

Oh, and Happy Groundhog Day.  I understand there will be six more weeks of winter (there was serious frost this morning in Corvallis, OR).

 

 

 

Recently, I’ve been dealing with several different logic models, all of which use the box format–you know, the one that Ellen Taylor-Powell advocated in her UWEX tutorial.  We are all familiar with this approach, and we all know that it helps conceptualize a program, identify program theory, and identify possible outcomes (maybe even world peace).  Yet there is much more that can be done with logic models that isn’t in the tutorial.  The tutorial starts us off with this diagram.

Inputs are what is invested; outputs are what is done; and outcomes are what results.  And we assume (you KNOW what assumptions do, right?) that all the inputs lead to all the outputs, which lead to all the outcomes, because that is what the arrows show.  NOT.  One of the best approaches to logic modeling that I’ve seen and learned in the last few years is to make the inputs specific to the outputs and the outputs specific to the outcomes.  It IS possible that volunteers are NOT the input you need to have the outcome you desire (change in social conditions); or it may be.  OR volunteers will lead to an entirely different outcome–for example, only a change in knowledge, not in conditions.  Connecting the elements specifically helps clarify for program people what is expected, with what will be done, and with what resources.

Connecting those points with individual arrows and feedback loops (if appropriate) makes sense.

Jonny Morell suggests that these relationships may be 1:1, 1:many, many:1, or many:many, and/or be classified by precedence (which he describes as A before B, A and B simultaneously, or agnostic with respect to precedence).  If these relationships exist, and I believe they do, then just filling boxes isn’t a good idea.  (If you want to check out his PowerPoint presentation at the AEA site, you will have to join AEA, because the presentation is in the non-public eLibrary available only to members.)  However, I was able to copy and include the slide to which I refer (with permission).
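To see why just filling boxes isn’t enough, one way to think about it is to treat the logic model as a graph with explicit edges.  The sketch below is hypothetical–all of the program elements are invented for illustration–but it shows how 1:1, 1:many, and many:1 relationships can be made explicit rather than implied by one big arrow.

```python
# A logic model as an explicit graph instead of three undifferentiated
# boxes. All program elements here are hypothetical, for illustration.
logic_model = {
    # inputs -> outputs
    "volunteers": ["community workshops"],                   # 1:1
    "curriculum": ["community workshops", "online modules"], # 1:many
    # outputs -> outcomes (two outputs converge on one outcome: many:1)
    "community workshops": ["knowledge change"],
    "online modules": ["knowledge change"],
    # outcomes can chain toward longer-term outcomes
    "knowledge change": ["practice change", "condition change"],
}

def trace(element, model, depth=0):
    """Print every result an element is expected to lead to, and by what path."""
    print("  " * depth + element)
    for nxt in model.get(element, []):
        trace(nxt, model, depth + 1)

# Which results is the curriculum expected to lead to?
trace("curriculum", logic_model)
```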



As you can see, it all depends.  Depends on the resources, the planned outputs, the desired outcomes.  Relationships are key.

And you thought logic models were simple.

 

For my new year’s post, I mentioned that AEA is running a series of blog posts in aea365 written by evaluators who blog.  Susan Kistler has compiled a schedule of who will be blogging in aea365 and when.  This link will take you to the full series and will be updated as new posts come online:

http://aea365.org/blog/?s=bloggers+series&submit=Go.  The result of Susan’s request is that evaluators who blog will post to aea365 one week a month, starting the last week in December.  January posts will run January 22–27; February posts, February 12–17; March, the 18th–23rd; April, the 22nd–25th.

I’ve mentioned aea365 before.  I’ll mention it again.  You can subscribe either by email or by RSS feed.  The posts are archived.  They are not specific to any one aspect of evaluation.  Sometimes they are interesting and helpful; sometimes not.  The variety is rich; the effort, tremendous; and the resources, useful.  Check it out.

A colleague made a point last week that I want to bring to your attention.  The comment made it clear that, when planning a program, it is important to think about how to determine what difference the program is making at the beginning of the program, not at the end.

Over the last two years, I’ve alluded to the fact that retrofitting evaluation, while possible, is not ideal.  Granted, sometimes programs are already in place and it is important to report the difference the program made, so evaluation needs to be retrofitted.  Sometimes programs have been in place a long time and need to show long-term outcomes (even if they are called impacts).  In cases like that, yes, evaluation needs to be retrofitted.  What this colleague was talking about was a NEW program, one that has never been presented before.

There are lots of ways to get the answer to the question, “What difference is this program making?”  We are not going to talk about methods today, though.  We are going to talk about programs and how programs relate to evaluation.

When I start to talk about evaluation with a faculty member, I ask them what they expect to happen.  If they understand the program theory, they can describe what outcome is expected.  This is when I pull out the model below.

This model shows the logical linkage between what is expected (outcomes), what was done and with whom (outputs), and what resources were used (inputs), if you follow the arrow right to left.  If, however, you follow the arrow left to right, you see what resources you need to conduct what activities, with whom, to expect what outcomes.  Each box (inputs, outputs, outcomes) has an evaluative activity that accompanies it.  In the situation, a needs assessment is the evaluative activity; here you are determining what needs to change–the gap between what is and what should be.  With the inputs, you can do a variety of activities; specifically, you can determine whether you had enough resources.  You can also do a cost analysis (there are several kinds).  You can also do a process evaluation.  With the outputs, you can determine whether you did what you said you would do, in the time you said you would do it, and with the target audience.  I have always called this a progress evaluation.  With the outcomes, you actually determine what difference the program made in the lives of the target audience–for teaching purposes, I have called this a product evaluation.  Here, you want to know whether what they know is different, whether what they do is different, and whether the conditions in which they work, live, and play are different.  You do that by thinking first about what the program will do.
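As a summary of the pairing just described, here is a small sketch–the labels are mine, not a standard taxonomy–mapping each component of the model to the evaluative activity that accompanies it.

```python
# Each component of the logic model paired with its evaluative activity,
# as described above. Labels are illustrative, not a standard taxonomy.
evaluative_activities = {
    "situation": "needs assessment: the gap between what is and what should be",
    "inputs": "resource review, cost analysis, process evaluation",
    "outputs": "progress evaluation: did you do what you said, on time, "
               "with the target audience?",
    "outcomes": "product evaluation: changes in knowledge, practice, conditions",
}

for component, activity in evaluative_activities.items():
    print(f"{component:>9}: {activity}")
```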

 

Now this is all very well and good–if you have some idea about what the specific and measurable outcomes are.  Sometimes you won’t know this because the program has never been done before in quite the way you are doing it OR because the program is developing as you provide it.  (I’m sure there is a third reason–there always is–only I can’t think of one as I type.)

This is why planning evaluation when you are planning the program is important.

 

Starting this week, aea365 is posting a series of posts authored by evaluators who blog.  Check it out!

 

There will be a lot of different approaches starting with Susan Kistler, Executive Director of the American Evaluation Association, who blogs every Saturday for aea365.  She has been doing this for almost two years.

 

So even though I’m not blogging on a topic this week (see last week’s post), I wanted to share this with you.  What a good way to start a new year–new resources for evaluators.

 

I’ll be gone next week, so this is the last post of 2011, and some reflection on 2011 is in order, I think.

For each of you, 2011 was an amazing year.  I know you are thinking, “Yeah, right.”  Truly, 2011 was amazing–and I invoke Dickens here–because 2011 “…was the best of times, it was the worst of times.”  Kiplinger’s magazine used as its masthead for many years the saying “We live in interesting times.”  So even if your joys were great; your sorrows, overwhelming; your adventures, amazing; the day-to-day, a grind–we live in interesting times, and because of that 2011 was an amazing year.  Think about it…you’ll probably agree.  (If not, that is an evaluative question–what criteria are you using; what biases have inadvertently appeared; what value was at stake?)

So let’s look forward to 2012.

Some folks believe that 2012 marks the end of the Mayan Long Count calendar, the advent of cataclysmic or transformative events, and, with the end of the Long Count calendar, the end of the world on December 21, 2012.  Possibly; probably not.  Everyone has some end-of-the-world scenario in mind.  For me, the end of the world as I know it happened when atmospheric carbon passed 350 parts per million (the reading for November was 390.31 ppm).  Let’s think evaluation.

Jennifer Greene, the outgoing AEA president, looking forward and keeping in mind the 2011 global catastrophes (of which there were many), asks, “…what does evaluation have to do with these contemporary global catastrophes and tribulations?”  She says:

  • “If you’re not part of the solution, then you’re part of the problem” (Eldridge Cleaver). Evaluation offers opportunities for inclusive engagement with the key social issues at hand. (Think 350.org, Heifer Foundation, Habitat for Humanity, and any other organization reflecting social issues.)
  • Most evaluators are committed to making our world a better place. Most evaluators wish to be of consequence in the world.

Are you going to be part of the problem or part of the solution?  How will you make the world a better place?  What difference will you make?  What new year’s resolutions will you make to answer these questions?  Think on it.

 

May 2012 bring you all another amazing year!