Matt Keene, AEA's thought leader for June 2012, says, "Wisdom, rooted in knowledge of thyself, is a prerequisite of good judgment. Everybody who's anybody says so – Philo Judaeus, Socrates, Lao-tse, Plotinus, Paracelsus, Swami Ramdas, and Hobbs."

I want to focus on "wisdom is a prerequisite of good judgment" and talk about how that relates to evaluation.  I also liked the list of "everybody who's anybody."  (Although I don't know whom Matt means by Hobbs: is that Hobbes, or the English philosopher for whom that well-known figure was named, Thomas Hobbes, or someone else I couldn't find?)  But I digress…

 

"Wisdom is a prerequisite for good judgement."  Evaluators use judgement daily; it results in a determination of the value, merit, and/or worth of something.  We come to these judgements through experience: experience with people, activities, programs, contributions, LIFE.  Everything we do provides us with experience; it is what we do with that experience that results in wisdom and, therefore, leads to good judgements.

Experience is a hard teacher: demanding, exacting, and often obtuse.  My 19 y/o daughter is going to summer school at OSU.  She got approval to take two courses and to have those courses transfer to her academic record at her college.  She was excited about the subject, got the book, read ahead, and looked forward to class, which started yesterday.  After class, she was the most disappointed individual I had ever seen.  She found the material uninteresting (it was mostly review because she had read ahead) and the instructor uninspiring (possibly due to a class size of 35).  To me, it was obvious that she needed to re-frame this experience into something positive; she needed to find something she could learn from this experience that would lead to wisdom.  I suggested that she think of this experience as a cross-cultural exchange, challenging because of cultural differences.  In truth, a large state university is very different from a small liberal arts college; it is truly a different culture.  She has four weeks to pull some wisdom from this experience; four weeks to learn how to make a judgement that is beneficial.  I am curious to see what happens.

Not all evaluations result in beneficial judgements; often the answer, the judgement, is NOT what the stakeholders want to hear.  When that is the case, one needs to re-frame the experience so that learning occurs (for the individual evaluator as well as the stakeholders), so that the next time the hard-won wisdom will lead to "good" judgement, even if the answer is not what the stakeholders want to hear.  Matt started his discussion with the saying that "wisdom, rooted in knowledge of self, is a prerequisite for good judgement."  Knowing yourself is no easy task; you can only control what you say, what you do, and how you react (a form of doing/action).  The study of those things is a lifelong adventure, especially when you consider how hard it is to change yourself.  Just having knowledge isn't enough for a good judgement; the evaluator needs to integrate that knowledge into the self and own it.  Then the result will be "good judgements"; the result will be wisdom.

I started this post back in April.  I had an idea that needed to be remembered…it had to do with the unit of analysis, a question which often occurs in evaluation.  To increase sample size and, therefore, power, evaluators often choose to run analyses on the larger number of units when the aggregate (i.e., the smaller number) is probably the "true" unit of analysis.  Let me give you an example.

A program is randomly assigned to fifth grade classrooms in three different schools.  School A has three classrooms; school B has two classrooms; and school C has one classroom.  All together, there are approximately 180 students, six classrooms, and three schools.  What is the appropriate unit of analysis?  Many people use students because of the sample size issue.  Some people will use classrooms because each got a different treatment.  Occasionally, some evaluators will use schools because that is the unit of randomization.  This issue elicits much discussion.  Some folks say that because students are in the school they are really the unit of analysis; they are embedded in the randomization unit.  Some folks say that students are the best unit of analysis because there are more of them; that certainly is the convention.  What you need to do is decide what the unit is and be able to defend that choice.  Even though I would lose power, I think I would go with the unit of randomization.  Which leads me to my next point: truth.
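To make the trade-off concrete, here is a minimal sketch in Python (a scaled-down toy version of the example above, with fabricated scores; nothing here comes from an actual study) showing how the effective sample size shrinks when you move from students to the cluster that was actually randomized:

# Toy illustration: compare the sample size at the student level with the
# sample size at the cluster (classroom or school) that was randomized.
import statistics

# Fabricated scores keyed by (school, classroom); real data would come from
# the program's assessment instrument.
scores = {
    ("A", 1): [72, 75, 71], ("A", 2): [78, 74, 70], ("A", 3): [69, 73, 68],
    ("B", 1): [66, 71, 67], ("B", 2): [70, 69, 74],
    ("C", 1): [65, 72, 68],
}

# Student-level n: larger, but students within a classroom are not
# independent observations.
student_n = sum(len(v) for v in scores.values())  # 18 in this toy example

# Cluster-level analysis: one mean per classroom (or per school, if the
# school is the unit of randomization), so n equals the number of clusters.
classroom_means = {k: statistics.mean(v) for k, v in scores.items()}

print("student-level n   =", student_n)
print("classroom-level n =", len(classroom_means))
for (school, room), m in sorted(classroom_means.items()):
    print(f"school {school}, classroom {room}: mean = {m:.1f}")

With 18 students but only six classrooms, the classroom-level analysis has far fewer degrees of freedom; that is exactly the power the paragraph above talks about giving up by analyzing at the unit of randomization.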

At the end of the first paragraph, I used the word "true" in quotation marks.  The Kirkpatricks, in their most recent blog, opened with a quote from the US CIA headquarters in Langley, Virginia: "And ye shall know the truth, and the truth shall make you free."  (We won't talk about the fiction in the official discourse today…)  (Don Kirkpatrick developed the four levels of evaluation specifically in the training and development field.)  Jim Kirkpatrick, Don's son, posits that, "Applied to training evaluation, this statement means that the focus should be on discovering and uncovering the truth along the four levels path."  I will argue that the truth is how you (the principal investigator, program director, etc.) see the answer to the question.  Is that truth with an upper case "T" or is that truth with a lower case "t"?  What do you want it to mean?

Like history (history is what is written, usually by the winners, not what happened), truth becomes what you want the answer to mean.  Jim Kirkpatrick offers an addendum (also from the CIA), that of "actionable intelligence."  He goes on to say that, "Asking the right questions will provide data that gives (sic) us information we need (intelligent) upon which we can make good decisions (actionable)."  I agree that asking the right question is important; it is probably the foundation on which an evaluation is based.  Making "good decisions" is in the eye of the beholder: what do you want it to mean?

An important question that evaluators ask is, “What difference is this program making?”  Followed quickly with, “How do you know?”

Recently, I happened on a blog called {grow}, whose author, Mark Schaefer, had a post called, "Did this blog make a difference?"  Since this is a question I, as an evaluator, am always asking, I jumped to the page.  Mr. Schaefer is in marketing, and as a marketing expert he says the following: "You're in marketing for one reason: Grow. Grow your company, reputation, customers, impact, profits. Grow yourself. This is a community that will help. It will stretch your mind, connect you to fascinating people, and provide some fun along the way."  So I wondered how relevant this blog would be to me and to other evaluators, whether they blog or not.

Mr. Schaefer is taking stock of his blog, a good thing to do for a blog that has been running for a while.  He lists four innovations and asks the reader to "…be the judge if it made a difference in your life, your outlook, and your business."  The four innovations are:

  1. Paid contributing columnists.  He actually paid the folks who contributed to his blog; not something those of us in Extension can do.
  2. {growtoons}. Cartoons designed specifically for the blog that “…adds an element of fun and unique social media commentary.”  Hmmm…
  3. New perspectives. He showcased fresh deserving voices; some that he agreed with and some that he did not.  A possibility.
  4. Video. He did many video blogs and that gave him the opportunity to “…shine the light on some incredible people…”  He interviews folks and posts the short video.  Yet another possibility.

His approach seems really different from what I do.  Maybe it is the content; maybe it is the cohort; maybe it is something else.  Maybe there is something to be learned from what he does.  Maybe this blog is making a difference.  Only I don't know.  So, I take a cue from Mr. Schaefer and ask you to judge whether it has made a difference in what you do, then let me know.  I've embedded a link to a quick survey that will NOT link to you nor in any way identify you.  I will only be using the findings for program improvement.  Please let me know.  Click here to link to the survey.

 

Oh, and I won’t be posting next week–spring break and I’ll be gone.

 

A colleague made a point last week that I want to bring to your attention.  The comment made it clear that when planning a program, it is important to think about how to determine what difference the program is making at the beginning of the program, not at the end.

Over the last two years, I've alluded to the fact that retrofitting evaluation, while possible, is not ideal.  Granted, sometimes programs are already in place and it is important to report the difference the program made, so evaluation needs to be retrofitted.  Sometimes programs have been in place a long time and need to show long-term outcomes (even if they are called impacts).  In cases like that, yes, evaluation needs to be retrofitted.  What this colleague was talking about was a NEW program, one that has never been presented before.

There are lots of ways to get the answer to the question, “What difference is this program making?”  We are not going to talk about methods today, though.  We are going to talk about programs and how programs relate to evaluation.

When I start to talk about evaluation with a faculty member, I ask what they expect to happen.  If they understand the program theory, they can describe what outcome is expected.  This is when I pull out the model below.

This model shows the logical linkage between what is expected (outcomes) and what was done to whom (outputs) with what resources (inputs), if you follow the arrow right to left.  If, however, you follow the arrow left to right, you see what resources you need to conduct what activities to whom to expect what outcomes.  Each box (inputs, outputs, outcomes) has an evaluative activity that accompanies it.  For the situation, a needs assessment is the evaluative activity; here you are determining the gap between what is and what should be.  For the resources, you can do a variety of activities; specifically, you can determine if you had enough, you can do a cost analysis (there are several kinds), and you can do a process evaluation.  For outputs, you can determine if you did what you said you would do, in the time you said you would do it, and with the target audience; I have always called this a progress evaluation.  For outcomes, you actually determine what difference the program made in the lives of the target audience; for teaching purposes, I have called this a product evaluation.  Here you want to know if what they know is different, what they do is different, and whether the conditions in which they work, live, and play are different.  You do that by thinking first about what the program will do.
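As a quick summary of that mapping, here is a minimal sketch in Python (the component names follow the paragraph above; the structure itself is just my illustration, not part of the original model graphic):

# Illustrative mapping of logic-model components to the evaluative activity
# described above; the wording summarizes the post, the dict is hypothetical.
logic_model_evaluation = {
    "situation": "needs assessment: the gap between what is and what should be",
    "inputs": "resource check, cost analysis, process evaluation",
    "outputs": "progress evaluation: did you do what you said, on time, with the target audience",
    "outcomes": "product evaluation: changes in knowledge, behavior, and conditions",
}

for component, activity in logic_model_evaluation.items():
    print(f"{component:>9} -> {activity}")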

 

Now this is all very well and good–if you have some idea about what the specific and  measurable outcomes are.  Sometimes you won’t know this because the program has never been done before in quite the way you are doing it OR because the program is developing as you provide it.  (I’m sure there is a third reason–there always is–only I can’t think of one as I type.)

This is why planning evaluation when you are planning the program is important.

 

I'll be gone next week, so this is the last post of 2011, and some reflection on 2011 is in order, I think.

For each of you, 2011 was an amazing year.  I know you are thinking, "Yeah, right."  Truly, 2011 was amazing, and I invoke Dickens here, because 2011 "…was the best of times, it was the worst of times."  Kiplinger's magazine used the saying "We live in interesting times" as its masthead for many years.  So even if your joys were great; your sorrows, overwhelming; your adventures, amazing; the day-to-day, a grind; we live in interesting times, and because of that 2011 was an amazing year.  Think about it…you'll probably agree.  (If not, that is an evaluative question: what criteria are you using; what biases have inadvertently appeared; what value was at stake?)

So let's look forward to 2012.

Some folks believe that 2012 marks the end of the Mayan long count calendar, the advent of cataclysmic or transformative events, and, with the end of the long count calendar, the end of the world on December 21, 2012.  Possibly; probably not.  Everyone has some end-of-the-world scenario in mind.  For me, the end of the world as I know it happened when atmospheric carbon passed 350 parts per million (the reading for November was 390.31 ppm).  Let's think evaluation.

Jennifer Greene, the outgoing AEA president, looking forward while keeping in mind the 2011 global catastrophes and tribulations (of which there were many), asks, "…what does evaluation have to do with these contemporary global catastrophes and tribulations?"  She says:

  • "If you're not part of the solution, then you're part of the problem" (Eldridge Cleaver). Evaluation offers opportunities for inclusive engagement with the key social issues at hand. (Think 350.org, Heifer Foundation, Habitat for Humanity, and any other organization reflecting social issues.)
  • Most evaluators are committed to making our world a better place. Most evaluators wish to be of consequence in the world.

Are you going to be part of the problem or part of the solution?  How will you make the world a better place?  What difference will you make?  What new year’s resolutions will you make to answer these questions?  Think on it.

 

May 2012 bring you all another amazing year!

I came across this quote from Viktor Frankl today (thanks to a colleague):

“…everything can be taken from a man (sic) but one thing: the last of the human freedoms – to choose one’s attitude in any given set of circumstances, to choose one’s own way.” Viktor Frankl (Man’s Search for Meaning – p.104)

I realized that, especially at this time of year, attitude is everything: good, bad, or indifferent, the choice is always yours.

How we choose to approach anything depends upon our previous experiences, what I call personal and situational bias.  Sadler* has three classifications for these biases.  He calls them value inertias (unwanted distorting influences which reflect background experience), ethical compromises (actions for which one is personally culpable), and cognitive limitations (not knowing, for whatever reason).

When we approach an evaluation, our attitude leads the way.  If we are reluctant, if we are resistant, if we are excited, if we are uncertain, all these approaches reflect where we’ve been, what we’ve seen, what we have learned, what we have done (or not).  We can make a choice how to proceed.

The American Evaluation Association (AEA) has a long history of supporting difference.  That value is embedded in the guiding principles.  The two principles which address supporting differences are:

  • Respect for People:  Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
  • Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

AEA also has developed a Cultural Competence statement.  In it, AEA affirms that “A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation.”

Both of these documents provide a foundation for the work we do as evaluators, and both relate to our personal and situational biases.  Considering them as we make the choice about attitude will help minimize the biases we bring to our evaluation work.  The evaluative question from all this: When have your personal and situational biases interfered with your work in evaluation?

Attitude is always there, and it can change.  It is your choice.

Sadler, D. R. (1981). Intuitive data processing as a potential source of bias in naturalistic evaluations. Educational Evaluation and Policy Analysis, 3, 25-31.

Happy Thanksgiving.  A simple evaluative statement if ever there was one.

Did you know that there are eight countries in the world that have a holiday dedicated to giving thanks?  That's not very many.  (If you want to know which ones, go to this site; the site also has a nice image.)

Thanksgiving could be considered the evaluator’s holiday.  We take the time, hopefully, to recognize what is of value, what has merit, what has worth in our lives and to be grateful for those contributions, opportunities, friends, family members, and (of course, in the US) the food (although I know that this is not necessarily the case everywhere).

My daughters and I, living in a vegetarian household, have put a different twist on Thanksgiving: we serve foods for which we are thankful, foods we have especially enjoyed over the year.  Sometimes they are the same foods, like chocolate pecan pie; sometimes not.  One year we had all green foods; we had a good laugh that year.  This year, my younger daughter is home from boarding school and has asked (!!!) for kale and white bean soup (I've modified it some).  A dear friend of mine would have new foods for which the opportunity to enjoy has presented itself (like in this recipe).

Whatever you choose to have on your table, remember the folks who helped to put that food there; remember the work that it took to make the feast; and, most of all, remember that there is value in being grateful.

 

 

I've long suspected I wasn't alone in recognizing that the term impact is used inappropriately in most evaluations.

Terry Smutlyo sings a song about impact during an outcome mapping seminar he conducted.  Terry Smutlyo is the Director of Evaluation at the International Development Research Centre in Ottawa, Canada.  He ought to know a few things about evaluation terminology.  He has two versions of this song, Impact Blues, on YouTube; his comments speak to this issue.  Check it out.

 

Just a gentle reminder to use your words carefully.  Make sure everyone knows what you mean and that everyone at the table agrees with the meaning you use.

 

This week the post is  short.  Terry says it best.

Next week I’ll be at the American Evaluation Association annual meeting in Anaheim, CA, so no post.  No Disneyland visit either…sigh

 

 

I am reading the book, Eaarth, by Bill McKibben (a NY Times review is here).  He writes about making a difference in the world on which we live.  He provides numerous  examples that have all happened in the 21st century, none of them positive or encouraging. He makes the point that the place in which we live today is not, and never will be again, like the place in which we lived when most of us were born.  He talks about not saving the Earth for our grandchildren but rather how our parents needed to have done things to save the earth for them–that it is too late for the grandchildren.  Although this book is very discouraging, it got me thinking.

 

Isn’t making a difference what we as Extension professionals strive to do?

Don't we, like McKibben, need criteria to determine what difference can/could/would be made and what it would look like?

And if we have those criteria well established, won't we be able to make a difference, hopefully a positive one (think hand washing here)?  And, like this graphic, won't that difference be worth the effort we have put into the attempt?  Especially if we thoughtfully plan how to determine what that difference is?

 

We might not be able to recover the Earth the way it was when most of us were born (according to McKibben, we won't); I think we can still make a difference, a positive difference, in the lives of the people with whom we work.  That is an evaluative opportunity.

 

 

A colleague asks for advice on handling evaluation stories so that they don't get brushed aside as mere anecdotes.  She goes on to say of the AEA365 post she read, "I read the steps to take (hot tips), but don't know enough about evaluation, perhaps, to understand how to apply them."  Her question raises an interesting topic.  Much of what Extension does can be captured in stories (i.e., qualitative data) rather than in numbers (i.e., quantitative data).  Dick Krueger, former Professor and Evaluation Leader (read: specialist) at the University of Minnesota, has done a lot of work in the area of using stories as evaluation.  Today's post summarizes his work.

 

At the outset, Dick asks the following question:  What is the value of stories?  He provides these three answers:

  1. Stories make information easier to remember
  2. Stories make information more believable
  3. Stories can tap into emotions.

There are all types of stories.  The type we are interested in for evaluation purposes are organizational stories.  Organizational stories can do the following things for an organization:

  1. Depict culture
  2. Promote core values
  3. Transmit and reinforce the culture
  4. Provide instruction to employees
  5. Motivate, inspire, and encourage

He suggests six common types of organizational stories:

  1. Hero stories  (someone in the organization who has done something beyond the normal range of achievement)
  2. Success stories (highlight organizational successes)
  3. Lessons learned stories (what major mistakes and triumphs teach the organization)
  4. "How it works around here" stories (highlight core organizational values reflected in actual practice)
  5. "Sacred bundle" stories (a collection of stories that together depict the culture of an organization; core philosophies)
  6. Training and orientation stories (assist new employees in understanding how the organization works)

To use stories as evaluation, the evaluator needs to consider how stories might be used; that is, do they depict how people experience the program?  Do they help us understand program outcomes?  Do they give insight into program processes?

You (as evaluator) need to think about how the story fits into the evaluation design (think logic model; program planning).  Ask yourself these questions:  Should you use stories alone?  Should you use stories that lead into other forms of inquiry?  Should you use stories that augment or illustrate results from other forms of inquiry?

You need to establish criteria for stories.  Rigor can be applied to stories even though the data are narrative.  Criteria include the following:  Is the story authentic, that is, is it truthful?  Is the story verifiable, with a trail of evidence back to the source of the story?  Is there a need to consider confidentiality?  What was the original intent, the purpose behind the original telling?  And finally, what does the story represent: other people or locations?
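As a rough illustration of applying those criteria, here is a minimal checklist sketch in Python (the field names are my own shorthand for the criteria above, not Krueger's terminology, and the example story is fabricated):

# Hypothetical screening checklist for candidate evaluation stories,
# following the rigor criteria summarized above.
from dataclasses import dataclass

@dataclass
class StoryCheck:
    authentic: bool           # is the story truthful?
    verifiable: bool          # is there a trail of evidence back to the source?
    confidentiality_ok: bool  # have confidentiality needs been addressed?
    intent_known: bool        # is the purpose behind the original telling clear?
    representative: bool      # does it represent other people or locations?

    def passes(self) -> bool:
        # A story is kept only if it meets every criterion.
        return all([self.authentic, self.verifiable, self.confidentiality_ok,
                    self.intent_known, self.representative])

# Example use with fabricated judgments about one story.
story = StoryCheck(True, True, True, False, True)
print("include in evaluation?", story.passes())  # False: original intent unclear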

You will need a plan for capturing the stories.  Ask yourself these questions:  Do you need help capturing the stories?  What strategy will you use for collecting the stories?  How will you ensure documentation and record keeping?  (Sequence the questions; write them down; note the type: set-up, conversational, etc.)  You will also need a plan for analyzing and reporting the stories, as you, the evaluator, are responsible for finding meaning.