Yesterday was the 236th anniversary of US independence from England (and George III, in his infinite wisdom, is said to have remarked that nothing important happened that day…right…oh, all right, how WOULD he have known anything had happened several thousand miles away?).  And yes, I saw fireworks.  More importantly, though, I thought a lot about what independence means.  And then, because I'm posting here, what does independence mean for evaluation and evaluators?

In thinking about independence, I am reminded of intercultural communication and the contrast between individualism and collectivism.  To make this distinction clear, think "I-centered" vs. "we-centered".  Think western Europe and the US vs. Asia and Japan.  To me, individualism is reflective of independence, and collectivism is reflective of networks–systems, if you will.  When we talk about independence, the words "freedom" and "separate" and "unattached" are bandied about, and that certainly applies to the anniversary celebrated yesterday.  Yet when I contrast it with collectivism and think of the words that are often used in that context ("interdependence", "group", "collaboration"), I become aware of other concepts.

Like, what is missing when we are independent?  What have we lost being independent?  What are we avoiding by being independent?  Think “Little Red Hen”.  And conversely, what have we gained by being collective, by collaborating, by connecting?  Think “Spock and Good of the Many”.

There is in AEA a topical interest group for Independent Consulting.  This TIG is home to those evaluators who function outside of an institution and who have made their own organization; who work independently, on contract.  In their mission statement, they purport to "Foster a community of independent evaluators…"  So by being separate, are they missing community and need to foster that aspect?  They insist that they are "…great at networking", which doesn't sound very independent; it sounds almost collective.  A small example, and probably not the best.

I think about the way the western world is today; other than your children and/or spouse/significant other, are you connected to a community? a network? a group?  Not just in membership (like at a church or club), but really connected (like in an extended family–whether of the heart or of the blood)?  Although the Independent Consulting TIG says its members are great at networking, and some even work in groups, are they connected?  (Social media doesn't count.)  Is the "I" identity a product of being independent?  It certainly is a characteristic of individualism.  Can you measure the value, merit, or worth of the work you do by the level of independence you possess?  Do internal evaluators garner all the benefits of being connected?  (As an internal evaluator, I'm pretty independent, even though there is a critical mass of evaluators where I work.)

Although being an independent evaluator has its benefits–less bias, a different perspective (dare I say, more objective?)–is the distance created, the competition for position, and the risk taking worth the lack of relational harmony that can accompany relationships?  Is the US better off as its own country?  I'd say probably.  My musings only…what do you think?


Matt Keene, AEA's thought leader for June 2012, says, "Wisdom, rooted in knowledge of thyself, is a prerequisite of good judgment. Everybody who's anybody says so – Philo Judaeus, Socrates, Lao-tse, Plotinus, Paracelsus, Swami Ramdas, and Hobbs."

I want to focus on the "wisdom is a prerequisite of good judgement" part and talk about how that relates to evaluation.  I also liked the list of "everybody who's anybody."  (Although I don't know who Matt means by Hobbs–is that the cartoon Hobbes, or Thomas Hobbes, the English philosopher for whom that well-known figure was named, or someone else I couldn't find and don't know?)  But I digress…


“Wisdom is a prerequisite for good judgement.”  Judgement is used daily by evaluators.  It results in the determination of value, merit, and/or worth of something.  Evaluators make a judgement of value, merit, and/or worth.  We come to these judgements through experience.  Experience with people, activities, programs, contributions, LIFE.  Everything we do provides us with experience; it is what we do with that experience that results in wisdom and, therefore, leads to good judgements.

Experience is a hard teacher; demanding, exacting, and often obtuse.  My 19-year-old daughter is going to summer school at OSU.  She got approval to take two courses and to have those courses transfer to her academic record at her college.  She was excited about the subject; got the book; read ahead; and looked forward to class, which started yesterday.  After class, I had never seen a more disappointed individual.  She found the material uninteresting (it was mostly review because she had read ahead) and the instructor uninspiring (possibly due to the class size of 35).  To me, it was obvious that she needed to re-frame this experience into something positive; she needed to find something she could learn from it that would lead to wisdom.  I suggested that she think of this experience as a cross-cultural exchange; challenging because of cultural differences.  In truth, a large state college is very different from a small liberal arts college; truly a different culture.  She has four weeks to pull some wisdom from this experience; four weeks to learn how to make a judgement that is beneficial.  I am curious to see what happens.

Not all evaluations result in beneficial judgements; often, the answer, the judgement, is NOT what the stakeholders want to hear.  When that is the case, one needs to re-frame the experience so that learning occurs (for the individual evaluator as well as the stakeholders), so that the next time the learning, the hard-won wisdom, will lead to "good" judgement, even if the answer is not what the stakeholders want to hear.  Matt started his discussion with the saying that "wisdom, rooted in knowledge of self, is a prerequisite for good judgement".  Knowing your self is no easy task; you can only control what you say, what you do, and how you react (a form of doing/action).  The study of those things is a lifelong adventure, especially when you consider how hard it is to change yourself.  Just having knowledge isn't enough for a good judgement; the evaluator needs to integrate that knowledge into the self and own it; then the result will be "good judgements"; the result will be wisdom.

I started this post back in April.  I had an idea that needed to be remembered…it had to do with the unit of analysis, a question which often occurs in evaluation.  To increase sample size and, therefore, power, evaluators often choose to run analyses on the larger number when the aggregate, i.e., the smaller number, is probably the "true" unit of analysis.  Let me give you an example.

A program is randomly assigned to fifth grade classrooms in three different schools.  School A has three classrooms; school B has two classrooms; and school C has one classroom.  Altogether, there are approximately 180 students, six classrooms, and three schools.  What is the appropriate unit of analysis?  Many people use students, because of the sample size issue.  Some people will use classrooms because each got a different treatment.  Occasionally, some evaluators will use schools because that is the unit of randomization.  This issue elicits much discussion.  Some folks say that because students are in the school, embedded in the randomization unit, they are really the unit of analysis.  Some folks say that students are the best unit of analysis because there are more of them.  That certainly is the convention.  What you need to decide is what the unit is, and be able to defend that choice.  Even though I would lose power, I think I would go with the unit of randomization.  Which leads me to my next point–truth.
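The aggregation question above can be sketched in a few lines of code.  This is a minimal, hypothetical illustration (the classroom labels and scores are invented, not from the example): pooling students gives a large n of correlated observations, while aggregating to classroom means gives a smaller n that respects the nesting.

```python
# Sketch of choosing the unit of analysis (hypothetical data).
# Students are nested in classrooms; using classroom means avoids
# treating correlated students as independent observations.
from statistics import mean

# classroom -> invented student scores (3 per room for brevity;
# the post's example has ~30 students per classroom)
classrooms = {
    "A1": [72, 75, 71], "A2": [80, 78, 83], "A3": [69, 70, 74],
    "B1": [88, 85, 90], "B2": [77, 79, 76],
    "C1": [84, 82, 86],
}

# The common (convenient) choice: pool every student
student_level_n = sum(len(scores) for scores in classrooms.values())

# The more defensible choice here: one mean per classroom
classroom_means = {room: mean(scores) for room, scores in classrooms.items()}

print(student_level_n)       # 18 pooled students
print(len(classroom_means))  # 6 classroom-level observations
```

The trade-off the post describes is visible in the two sample sizes: power comes from the 18 (or 180) students, but defensibility comes from the six classrooms.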

At the end of the first paragraph, I used the word "true" in quotation marks.  The Kirkpatricks in their most recent blog opened with a quote from the US CIA headquarters in Langley, Virginia: "And ye shall know the truth, and the truth shall make you free".  (We won't talk about the fiction in the official discourse today…)  (Don Kirkpatrick developed the four levels of evaluation specifically in the training and development field.)  Jim Kirkpatrick, Don's son, posits that, "Applied to training evaluation, this statement means that the focus should be on discovering and uncovering the truth along the four levels path."  I will argue that the truth is how you (the principal investigator, program director, etc.) see the answer to the question.  Is that truth with an upper case "T" or is that truth with a lower case "t"?  What do you want it to mean?

Like history (history is what is written, usually by the winners, not what happened), truth becomes what you want the answer to mean.  Jim Kirkpatrick offers an addendum (also from the CIA), that of "actionable intelligence".  He goes on to say that, "Asking the right questions will provide data that gives (sic) us information we need (intelligent) upon which we can make good decisions (actionable)."  I agree that asking the right question is important–probably the foundation on which an evaluation is based.  Making "good decisions" is in the eye of the beholder–what do you want it to mean?

“Resilience = Not having all of your eggs in one basket.

Abundance = having enough eggs.”

Borrowed from and appearing in the blog by Harold Jarche, Models, flows, and exposure, posted April 28, 2012.


In January, John Hagel blogged in  Edge Perspectives:  “If we are not enhancing flow, we will be marginalized, both in our personal and professional life. If we want to remain successful and reap the enormous rewards that can be generated from flows, we must continually seek to refine the designs of the systems that we spend time in to ensure that they are ever more effective in sustaining and amplifying flows.”

That is a powerful message.  Just how do we keep from being marginalized, especially when there is a shifting paradigm?  How does that relate to evaluation?  What exactly do we need to do to keep evaluation skills from being lost in the shift and be marginalized?  Good questions.

The priest at the church I attend is retiring after 30 years of service.  This is a significant and unprecedented change (at least in my tenure there).  Before he left for summer school in Minnesota, he gave the governing board a pep talk that has relevance to evaluation.  He posited that we should not focus on what we need, but rather on the strengths and assets we currently have, and build on them.  No easy task, to be sure.  And not the usual approach for an interim.  The usual approach is: what do we want; what do we need for this interim?  See the shifting paradigm?  I hope so.

Needs assessment often takes the same approach–what do you want; what do you need?  (Notice the use of the word "you" in this sentence; more on that in another post.)  A well-intentioned evaluator recognizes that something is missing or lacking and conducts a needs assessment documenting that need/lack/deficit.  What would happen, do you think, if the evaluator documented what assets existed and developed a program to build that capacity?  Youth leadership development programs have been building assets for many years (see citations below).  The approach taken by youth development professionals is that there are certain skills, or assets, which, if strengthened, build resilience.  By building resilience, needs are mitigated; problems are solved or avoided; goals are met.

So what would happen if, when conducting a “needs” assessment, an evaluator actually conducted an asset assessment and developed programs to benefit the community by building capacity which strengthened assets and built resiliency?  Have you ever tried that approach?

By focusing on strengths and assets instead of weaknesses and liabilities, programs could be built that would benefit more than a vocal minority.  The greater whole could benefit.  Wouldn’t that be novel?  Wouldn’t that be great!

Citations:

1.  Benson, P. L. (1997).  All Kids are Our Kids.  San Francisco:  Jossey-Bass Publishers.

2.  Silbereisen, R. K. & Lerner, R. M. (2007).  Approaches to Positive Youth Development. Los Angeles: Sage Publications.


They [CEOs] simply expect unpredictability. For them, there is no "new normal." This is why perpetual Beta is a constant theme here. It is a necessary perspective in dealing with increasing complexity.

This is a piece of interesting information I picked up on a blog about letting go.  I found it interesting reading, especially when viewed wearing my evaluator's hat.  Increasingly, the programs we devise address problems which are complex.  No longer are we willing to look at a small slice of a problem, find one solution, and address it (control, according to Jarche).  Rather, all problems are interrelated; consequently, all solutions must also be interrelated (Jarche talks about letting go [of control]).

Every year, Corvallis Sister Cities–Gondar holds a walk for water.  The idea is to raise funds to support the activities of the Sister Cities program (in this case clean, potable, available water, or planting trees in treeless areas).  I asked the president of the organization if, in their teaching activities, they were talking about population control.  He answered, sadly, no.  Even though he could see the connection between clean water and population, that step hadn't been taken in the educational efforts of the program.  But providing funding for individual wells to provide clean water was.  I see this as simplification of a complex problem.  (Your point, Engle?)  Right.  Point.

All the above relates to program planning and developing a logic model for that program.  It reminds folks to expect the unexpected when planning a program to address a problem, and to think about building the unexpected into the model.  Yes, you might get clean water if you teach well-digging skills; yet that clean water will reduce infant mortality.  With the reduction of infant mortality, the population reaching adulthood will increase.  An increase in population will tax already limited resources (food, water, shelter–not to mention consumer resources); that leads back to the basic problem–no or little access to clean water.  Teaching well-drilling skills addresses only part of the problem; the rest of the problem is unexpected.  By building that into the model, one can see the relation to the bigger picture; the relation to the system.  Even if the program will address only that one small part (of the problem), identifying the unexpected, the unpredictable, will help the program planner see clearly.



Once again, it is the whole ‘balance’ thing…(we) live in ordinary life and that ordinary life is really the only life we have…I’ll take it. It has some great moments…


These wise words come from the insights of Buddy Stallings, Episcopal priest in charge of a large parish in a large city in the US.  True, I took them out of context; the important thing is that they resonated with me from an evaluation perspective.

Too often, faculty and colleagues come to me and wonder what the impact is of this or that program.  I wonder, What do they mean?  What do they want to know? Are they only using words they have heard–the buzz words?  I ponder how this fits into their ordinary life. Or are they outside their ordinary life, pretending in a foreign country?

A faculty member at Oregon State University equated history to a foreign country.  I was put in mind that evaluation is a foreign country to many (most) people, even though everyone evaluates every day, whether they know it or not.  Individuals visit that country because they are required to visit; to gather information; to report what they discovered.  They do this without any special preparation.  Visiting a foreign country entails preparation (at least it does for me).  A study of customs, mores, foods, language, behavior, and tools (I'm sure I'm missing something important in this list) is needed; not just necessary, but mandatory.  Because although the foreign country may be exotic and unique and novel to you, it is ordinary life for everyone who lives there.  The same is true for evaluation.  There are customs; students are socialized to think and act in a certain way.  Mores are constantly being called into question; language, behaviors, and tools which are not known to you in your ordinary life present themselves.  You are constantly presented with opportunities to be outside your ordinary life.  Yet I wonder, what are you missing by not seeing the ordinary; by pretending that it is extraordinary?  By not doing the preparation to make evaluation part of your ordinary life, something you do without thinking?

So I ask you, What preparation have you done to visit this foreign country called EVALUATION?  What are you currently doing to increase your understanding of this country?  How does this visit change your ordinary life or can you get those great moments by recognizing that this is truly the only life you have?   So I ask you, What are you really asking when you ask, What are the impacts?


All of this has significant implications for capacity building.

Today, I’m writing about several random thoughts which have occurred to me in the last two weeks–some prompted by comments; some not.  And before I forget–I was sick last week and didn’t blog (did you miss me?); I’ll be gone next week and won’t blog (will you miss me?).  I’ll be back the week of May 21.

Random thought 1.

I'm doing this evaluation capacity building program for folks who are interested in evaluation.  Several (more than two) participants have commented to me, in person or electronically, that they are being asked to use the evaluation expertise garnered from this program in projects that are in their content area but are not their programs–serving as evaluation consultants, if you will.  This evaluation capacity building program draws Extension professionals from across the western states and includes folks from natural resources, agriculture, family and community science, nutrition, and 4-H.  It is another example in my life where I put information together (like this blog) and send it out and rarely know what happens at the other end of the send.  Getting comments like this is heartwarming.  Capturing these stories will be part of our summative evaluation.

Random thought 2.

Although I am a program evaluator, specializing in community-based educational programs, there are other kinds of evaluation.  (Note: This is not a discussion of evaluation models, like naturalistic evaluation or developmental evaluation–that is another discussion.)  Evaluations can be conducted on products, processes, policies (including proposals, plans, and possibilities), performance, and personnel, as well as programs.  Michael Scriven, in his Evaluation Thesaurus, discusses all of these.  The goal of all forms of evaluation is to determine the merit, worth, or effectiveness of the thing being evaluated (the evaluand).

Random thought 3.

I love getting comments about what I write.  If you comment, I may not respond, but I always read them.  One comment was about subscribing.  You can subscribe to the blog through the RSS feed button in the upper left side of the blog (under the word, Subscribe).  You can also subscribe by email.  The blog postings are archived (thank you, OSU), so you can go back and see what I've said.  I try not to repeat myself.

Random thought 4.

Because I blog as part of my work, and because I work at Oregon State University, I use the resources available to me there (like WordPress for this blog).  It is written using Firefox, NOT Internet Explorer.  The appearance may be different depending on the browser used.  Try viewing the blog in a different browser if you are having trouble seeing it.

Random thought 5.

Most folks who are program people know a lot about their content area, whether it is invasive species, viticulture, woodlot management, nutrition, or youth development.  What I'm finding is that knowing what to do with the data they have about their programs isn't as well known.  The data from most of what Extension does can be analyzed using relatively simple statistics, such as chi-square, t-test, and ANOVA.  Sometimes an ANCOVA is needed.  There are many useful resources available for broadening understanding of statistics.  One I really like is a TED Talk by Hans Rosling.
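To show how simple "simple statistics" can be, here is a sketch of the two-sample t-test mentioned above, computed by hand with nothing beyond the Python standard library.  The control and program scores are invented for illustration, and this is the equal-variance (pooled) form of the test; real program data would call for checking that assumption (or using a statistics package).

```python
# Minimal, stdlib-only two-sample t-test (pooled variance form).
from statistics import mean, variance
from math import sqrt

def pooled_t(a, b):
    """Return the two-sample t statistic and its degrees of freedom."""
    na, nb = len(a), len(b)
    # Pooled estimate of the common variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical end-of-program scores for a comparison and a program group
control = [70, 72, 68, 75, 71]
program = [78, 80, 76, 79, 82]

t_stat, df = pooled_t(control, program)
print(round(t_stat, 2), df)  # t ≈ -5.10 with 8 degrees of freedom
```

The negative t simply means the program group's invented mean is higher than the control's; the statistic would then be compared against a t distribution with 8 degrees of freedom to get a p-value.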


CAVEAT:  This may be too political for some readers.

Sometimes, there are ideas that appear in other blogs that may or may not be directly related to my work in evaluation.  Because I read them, I see evaluative relations and think they are important enough to pass along.  Today is one of those days.  I’ll try to connect the dots  between what I read and share here and evaluation.  (For those of you who are interested in the Connect the Dots, a major event day on climate change and weather on May 5, 2012, go here.)

First, Valerie Williams, in the AEA365 blog of April 18, 2012, says, "…Many environmental education programs struggle with the question of whether environmental education is a means to an end (e.g. increased stewardship) or an end itself. This question has profound implications for how programs are evaluated, and specifically the measures used to determine program success."

I think that many educational programs (whether environmentally focused or not) struggle with this question.  Is the program a means to an end or the end itself?  I am reminded of programs which are instituted for cost savings and then the program designers want that program evaluated.  Means or end?

Williams also offers comments about evaluability assessment–that evaluation task which helps evaluators decide whether to evaluate a new program, especially if that new program's readiness for evaluation is in question. (She provides resources if you are interested.)  She offers reasons for conducting an evaluability assessment.  Specifically:

  • Surfacing disagreements among stakeholders about the program theory, design and/or structure;
  • Highlighting the need for changes in program design; and
  • Clarifying the type of evaluation most helpful to the program.

Evaluability assessment is a topic for future discussion.

Second, a colleague offered the following CDC reference, saying, "The purpose of this workbook is to help public health program managers, administrators, and evaluators develop an effective evaluation plan in the context of the planning process. It is intended to assist in developing an evaluation plan but is not intended to serve as a complete resource on how to implement program evaluation."  I offer it here because I know that evaluation plans are often added after the program has been implemented.  Although its focus is public health programs, one source familiar with this work commented that there is enough in the workbook that can be applied to a variety of settings.  Check it out; the link is below.


Next, Nigerian novelist Chimamanda Ngozi Adichie is quoted as saying, “The single story creates stereotypes, and the problem with stereotypes is not that they are untrue, but that they are incomplete. They make one story become the only story.”

Given that

  • Extension uses story to evaluate a lot of programs; and
  • Story is used to convince legislators of Extension’s value; and
  • Story, if done right, is a powerful tool;

Then it behooves us all to remember this–are we using the story because it captures the effect or because it is the only story?  If only story, is it promoting a stereotype?  Adichie, though a novelist, may be an evaluator at heart.

Finally, there is this quote, also from an AEA365 blog (Steve Mayer) “There are elements of Justice and Injustice everywhere – in society, in reform efforts, and in the evaluation of reform efforts. The choice of outcomes to be assessed is a political act. “Noticing progress” probably takes us further than “measuring impact,” always being mindful of who benefits.”

We often are stuck on "measuring impact"; after all, isn't that what everyone wants to know?  If world peace is the ultimate impact, then what is the likelihood of measuring that?  I think that "noticing progress" (i.e., change) will take us much further because of the justice it can capture (or not–and that is telling).  And by capturing "noticing progress", we can make explicit who benefits.

This runs long today.


I wonder (as y’all know) if anyone reads this; if the blog makes a difference; and should I keep writing (because blogging is hard work).

Over the last two weeks, I’ve received over 50 comments about my posts, from folks who are not subscribed and who read the post.  I don’t know if their search engine has optimized my blog so it pops up or if they are really interested in evaluation.  Some of the comments appear genuine; some seem specious at best.  Please know I read them all.  And I appreciate the feedback.  There were some questions posted in the comments.  Here are some answers, not in any particular order.

  1. AEA365 is a blog sponsored by the American Evaluation Association.  It invited known evaluators who blog (like me) to contribute a post to their AEA365.  Susan Kistler is AEA’s executive director; she has really good ideas.  I wouldn’t be surprised if this was one of them.
  2. To be an evaluator who blogs, you first need to be an evaluator.  You get to be an evaluator by studying evaluation.  There are numerous places to do that–universities, Evaluator’s Institute, AEA’s summer institute, on the job training.  I went to university; I got a Ph.D in program evaluation.  Most people who come to evaluation come through some social science–sociology, psychology, social work, anthropology, other disciplines.  If you want to know more, I’ll be happy to elaborate in a future blog.
  3. When I preview my post, the graphics look fine.  I don’t have to click on them more than once; they just are there.  My IT person says it might be the browser being used.  I use Firefox; I am a PC user.  I don’t know how this looks on a Mac.
  4. Although I try to stay off my political soap box when I post, there are times when the topic (Viktor Frankl, for example) is both political and evaluative.  For those of you new to evaluation, evaluation is a political discipline.  I have a few passions in my life that I return to again and again, as they have been with me for a long time (some as long as 50 years).  Evaluation is one of those passions, even though I've been a professional evaluator for only 30 years. (I've probably been a lay evaluator for as long as I've known my passions.)
  5. 500 words seems to be a good length.

I’m working with my IT person to make this blog better.  Since I’m a technopeasant, learning something new is hard work for me.  Next week I’ll talk about evaluation again.  I promise.  Hopefully, y’all will see the sun where you are. Here in Oregon, we are eager for the sun.   Even if it is the sunset in Florida.

Before Spring break, I blogged about making a difference.  I realize that many who subscribe to this blog were on break last week when the post came.  So I'm sending an extra post this week:  PLEASE COMPLETE THE SURVEY that was posted through an embedded hyperlink in the post two weeks ago.  I plan to close the survey on Friday, COB.  The URL, if the link above doesn't work, is http://www.surveymonkey.com/s/ZD33HFS.  You can copy and paste the URL into your browser.  PLEASE…:)