The topic of complexity has appeared several times over the last few weeks.  Brian Pittman wrote about it in an AEA365 post; Charles Gasper used it as the topic of his most recent blog post.  Much food for thought, especially as it relates to the work evaluators do.

Simultaneously, Harold Jarche talks about connections.  To me, connections and complexity are two sides of the same coin.  Something which is complex typically has multiple parts.  Something which has multiple parts is connected to the other parts.  Certainly the work done by evaluators has multiple parts; certainly those parts are connected to each other.  The challenge we face is logically defending those connections and, in doing so, making the parts explicit.  Sound easy?  It's not.

That’s why I stress modeling your project before you implement it.  If the project is modeled, the model often leads you to discover that what you thought would happen because of what you do won’t.  You then have time to fix the model, fix the program, and fix the evaluation protocol.  Even if your model is defensible and logical, you may still find that the program doesn’t get you where you want to go.  Jonny Morell writes about this in his book, Evaluation in the Face of Uncertainty.  There are worse things than having to fix the program or fix the evaluation protocol before implementation.  Keep in mind that connections are key; complexity is everywhere.  Perhaps you’ll have an Aha! moment.

I’ll be on holiday and there will not be a post next week.  Last week was an odd week–an example of complexity and connections leading to unanticipated outcomes.

Evaluation costs:  A few weeks ago, I posted a summary about evaluation costs.  A recent AEA LinkedIn discussion was on the same topic (see this link).  If you have not connected with other evaluators, there are groups besides AEA’s that have a LinkedIn presence.  You might want to join one that is relevant.

New topic:  The video on surveys posted last week generated a flurry of comments (though not on this blog).  I think it is probably appropriate to revisit the topic of surveys.  As I was deciding to revisit this topic, an AEA365 post from the Wilder Research group talked about data coding related to longitudinal data.

Now, many surveys, especially Extension surveys, focus on cross-sectional data, not longitudinal data.  They may, however, involve a large number of participants, and the hot tips that are provided apply to coding surveys.  Whether the surveys Extension professionals develop involve 30, 300, or 3,000 participants, these tips are important, especially if the participants are divided into groups on some variable.  Although the hot tips in the Wilder post talk about coding, not surveys specifically, they are relevant to surveys and I’m repeating them here.  (I’ve also adapted the original tips to Extension use.)

  • Anticipate different groups.  If you do this ahead of time and write it down in a data dictionary or coding guide (see the sketch after this list), your coding will be easier.  If the raw data are dropped, or for some other reason scrambled (like a flood, hurricane, or a sleepy night), you will be able to make sense of the data more quickly.
  • Sometimes there is preexisting identifying information (like the location of the program) that has a logical code.  Use that code.
  • Precoding by location or site helps keep the raw data organized and makes coding easier.
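
Here is a minimal sketch of what such a data dictionary or coding guide might look like in practice.  It is my own illustration, not from the Wilder post; the site names, group labels, and variable names are hypothetical.

```python
# A minimal coding-guide sketch (hypothetical sites, groups, and variables,
# not from the Wilder post). Writing the codes down before data collection
# means anyone can reconstruct the groups later, even if the raw data get
# scrambled.
CODING_GUIDE = {
    "site": {"Benton County": 1, "Lane County": 2, "Marion County": 3},
    "group": {"control": 0, "treatment": 1},
}

def code_record(raw):
    """Translate one raw survey record into its precoded numeric values."""
    return {
        "site_code": CODING_GUIDE["site"][raw["site"]],
        "group_code": CODING_GUIDE["group"][raw["group"]],
        "responses": raw["responses"],
    }

# Example: one (hypothetical) respondent from a treatment group in Benton County
print(code_record({"site": "Benton County", "group": "treatment", "responses": [4, 5, 3]}))
```

The point is not the code itself but the habit: decide the codes ahead of time, write them down, and apply them consistently.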

Over the rest of the year, I’ll be revisiting surveys on a regular basis.  Surveys are often used by Extension.  Developing a survey that provides information you want, that you can use, and that makes sense is a useful goal.

New topic:  I’m thinking of varying the format of the blog or offering alternative formats with evaluation information.  I’m curious as to what would help you do your work better.  Below are a few options.  Let me know what you’d like.

  • Videos in blogs
  • Short concise (i.e., 10-15 minute) webinars
  • Guest writers/speakers/etc.
  • Other ideas

A few weeks ago I mentioned that a colleague of mine shared with me some insights she had about survey development.  She had an Aha! moment.  We had a conversation about that Aha! moment and videotaped the conversation.  To see the video, click here.

In thinking about what Linda learned, I realized that Aha! moments could be a continuing series…so watch for more.

Let me know what you think.  Feedback is always welcome.

Oh–I want to remind you about an excellent resource for surveys: Dillman’s current book, Internet, mail, and mixed-mode surveys: The tailored design method, a Wiley publication by Don A. Dillman, Jolene D. Smyth, and Leah Melani Christian.  It needs to be on your desk if you do any kind of survey work.

You can control four things–what you say, what you do, and how you act and react (both subsets of what you do).  So when is the best action a quick reaction, and when are you not waiting (because waiting is an act of faith)?  And how is this an evaluation question?

The original post was in reference to an email response going astray (go see what his suggestions were); it is not likely that emails regarding an evaluation report will fall into that category.  Though not likely, it is possible.  Say you send the report to someone who doesn’t want/need/care about the report and is really not a stakeholder, just someone on the distribution list you copied from a previous post.  And oops, you goofed.  Yet the report is important; some people who needed/wanted/cared about it got it.  You need to correct for those others.  You can remedy the situation by following his suggestion: “Alert senders right away when you (send or) receive sensitive (or not so sensitive) emails not intended for you, so the sender can implement serious damage control.” (Parenthetical added.)

Emails seem to be a topic of conversation this week.  A blog I follow regularly (Harold Jarche) cited two studies about the amount of time spent reading and dealing with email.  According to one of the studies he cites (in the Atlantic Monthly), the average worker spends 28% of a day’s work time reading email.  Think of all the unnecessary email you get THAT YOU READ.  How is that cluttering your life?  How is it decreasing your efficiency when it comes to the evaluation work you do?  Email is most of my work these days; it used to be that the phone and face-to-face meetings took up a lot of my time…not so much today.  I even use social media for capacity building; my browser is always open.  So between email and the web, a lot of time is spent intimate with technology.

The last thought I had for this week was about the use of words–not unrelated to emails–especially as it relates to evaluation.  Evaluation often refers to efficacy (producing the desired effect), effectiveness (producing the desired effect under specific conditions), efficiency (producing the desired effect under specific conditions with available resources), and fidelity (following the plan).  I wonder, if someone were to evaluate what we do, would we be able to say we are effective and efficient, let alone faithful to the plan?

AEA hosts a free online listserv, open to anyone, called EVALTALK, managed by the University of Alabama.  This week, Felix Herzog posted the following question.

How much can/should an evaluation cost? 

This is a question I often get asked, especially by Extension faculty.  It is especially relevant because more Extension faculty are responding to requests for proposals (RFPs) or requests for contracts (RFCs) that call for an evaluation, and questions arise about how one budgets for the evaluation.  Felix compiled what he discovered and I’ve listed it below.  It is important to note that this is not just the evaluator’s salary; rather, it is all expenses that relate to evaluating the program–data collection instrument development, pilot testing, data entry, data management, data analysis, and report writing, as well as the evaluator’s salary and the salaries of those who do the above-mentioned tasks.  Felix thoughtfully provided references with sources so that you can read them.  He did note that the most useful citation (Rieder, 2011) is in German.  (The sketch after the list turns these rules of thumb into dollar figures.)

  • The benefit of the evaluation should be at least as high as its cost (Rieder, 2011)
  • “Rule of thumb”: 1–10% of the costs of a policy program (personal communication from an administrator)
  • 5–7% of a program (Kellogg Foundation, p. 54)
  • 1–15% of the total cost of a program (Rieder, 2011, 5 quotes in Table 5, p. 82)
  • 0.5% of a program (EC, 2004, p. 32 ff.)
  • Up to 10% of a program (EC, 2008, p. 47 ff.)
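
To make those percentages concrete, here is a minimal sketch that applies each compiled rule of thumb to a hypothetical $250,000 program budget (my own illustration; the budget figure is an assumption, and the ranges simply mirror the list above).

```python
# A rough sketch: evaluation budget ranges implied by the rules of thumb above,
# applied to a hypothetical program budget. The (low, high) fractions mirror
# the list; EC (2004) is a single point estimate and EC (2008) an upper bound.
RULES_OF_THUMB = {
    "Administrator rule of thumb (1-10%)": (0.01, 0.10),
    "Kellogg Foundation (5-7%)":           (0.05, 0.07),
    "Rieder, 2011 (1-15%)":                (0.01, 0.15),
    "EC, 2004 (0.5%)":                     (0.005, 0.005),
    "EC, 2008 (up to 10%)":                (0.00, 0.10),
}

program_budget = 250_000  # hypothetical total program cost, in dollars

for source, (low, high) in RULES_OF_THUMB.items():
    print(f"{source}: ${program_budget * low:,.0f} to ${program_budget * high:,.0f}")
```

Remember that whatever range you land on has to cover everything listed above–instrument development, pilot testing, data entry, analysis, report writing, and salaries–not just the evaluator’s time.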

REFERENCES

EC (2004). Evaluating EU activities: A practical guide for the Commission services. Brussels. http://ec.europa.eu/dgs/secretariat_general/evaluation/docs/eval_activities_en.pdf

EC (2008). EVALSED: The resource for the evaluation of socioeconomic development. Brussels. http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evalsed/downloads/guide2008_evalsed.pdf

Kellogg Foundation. (1984). W. K. Kellogg Foundation Evaluation Handbook. <http://www.wkkf.org/~/media/62EF77BD5792454B807085B1AD044FE7.ashx>

Rieder, S. (2011). Kosten von Evaluationen. LEGES, 2011(1), 73–88.

Recognizing the value of your evaluation work, being able to put a dollar value on that work, and being able to communicate it all help build organizational capacity in evaluation.

Bright ideas are often the result of “Aha!” moments.  Aha! moments are “the sudden understanding or grasp of a concept…an event that is typically rewarding and pleasurable.  Usually, the insights remain in our memory as lasting impressions.” –Senior News Editor for Psych Central.

How often have you had an “Aha!” moment when you are evaluating?  A colleague had one, maybe several, that made an impression on her.  Talk about building capacity–this did.  She has agreed to share that experience soon (the bright idea).

Not only did it make an impression on her; her telling me made an impression on me.  I am once again reminded of how much I take evaluation for granted.  Because evaluation is an everyday activity, I often assume that people know what I’m talking about.  We all know what happens when we assume something….  I am also reminded how many people don’t know what I consider basic evaluation information, like constructing a survey item (Got Dillman on your shelf yet?).

What is this symbol (√) called?  No, it is not the square root sign–although that is its function.  “It’s called a radical…because it gets at the root…the definition of radical is: of or going to the root or origin.” –Guy McPherson

How radical are you?  How does that relate to evaluation, you wonder?  Telling truth to power is a radical concept (the definition here is a departure from the usual or traditional), one to which evaluators who hold integrity sacrosanct adhere.  (It is the third AEA guiding principle.)  Evaluators often, if they are doing their job right, have to speak truth to power–because the program wasn’t effective, or it resulted in something different than what was planned, or it cost too much to replicate, or it just didn’t work out.  Funders, supervisors, and program leaders need to know the truth as you found it.


“Those who seek to isolate will become isolated themselves.” –Diederick Stoel.  This sage piece of advice is the lead for Jim Kirkpatrick’s quick tip for evaluating training activities.  He says, “Attempting to isolate the impact of the formal training class at the start of the initiative is basically discounting and disrespecting the contributions of other factors…Instead of seeking to isolate the impact of your training, gather data on all of the factors that contributed to the success of the initiative, and give credit where credit is due. This way, your role is not simply to deliver training, but to create and orchestrate organizational success. This makes you a strategic business partner who contributes to your organization’s competitive advantage and is therefore indispensable.”  Extension faculty conduct a lot of trainings and want to take credit for the training’s effectiveness.  It is important to recognize that there may be other factors at work–mitigating factors, intermediate factors, even confounding factors.  As much as Extension faculty want to isolate (i.e., take credit), it is important to share the credit.

Harold Jarche says that “most learning happens informally on the job.  Formal instruction, or training, accounts for less than 20%, and some research shows it is about 5% of workplace learning.” He divides learning into dependent, interdependent, and independent–that is, formal instruction like you get in school; social and collaborative learning like you get when you engage colleagues; and learning supported by tools and information.

As an evaluator, what do you do with that other 95%?  Do you read? Tweet? Talk to folks?  Just how do you learn more about evaluation?  I don’t think there is one best way.  I think individuals need to look at what their strengths are (assets, if you will), where their passions lie, and where their questions occur (and those may or may not be needs–shift the paradigm, people).  Sometimes learning emerges from a place never before explored.  A good example: I’ve been charged with the evaluation of an organizational change.  Although I’ve looked at references on organizational change, and actually had a course in organizational behavior in graduate school, I hadn’t really gone looking for answers…until this evaluation was assigned.  Then, at this year’s AEA annual conference, one of the professional development sessions captured much of what I’ve been puzzling over–not that it will have answers, but maybe I’ll learn something, something I can take back with me, something I could use, perhaps even something in that 95%.  This professional development session (informal and interdependent learning) will afford me an opportunity for learning, for content I haven’t experienced.  I’d put it in the other 95%.

Social media falls into the category of the other 95%–it connects folks.  It provides information.  It builds community where one has not existed before.  Can it take the place of formal education?  No, I don’t think so.  Can it provide a source of information?  Possibly (it then becomes a matter of reliability).  My takeaway for today: explore other types of learning; share what you know.

Yesterday was the 236th anniversary of US independence from England (and George III, in his infinite wisdom, is said to have said nothing important happened…right…oh, all right, how WOULD he have known anything had happened several thousand miles away?).  And yes, I saw fireworks.  More importantly, though, I thought a lot about what independence means.  And then, because I’m posting here, what does independence mean for evaluation and evaluators?

In thinking about independence, I am reminded of intercultural communication and the contrast between individualism and collectivism.  To make this distinction clear, think “I-centered” vs. “we-centered”.  Think western Europe and the US vs. Asia and Japan.  To me, individualism is reflective of independence, and collectivism is reflective of networks–systems, if you will.  When we talk about independence, the words “freedom,” “separate,” and “unattached” are bandied about, and they certainly apply to the anniversary celebrated yesterday.  Yet when I contrast it with collectivism and think of the words that are often used in that context (“interdependence,” “group,” “collaboration”), I become aware of other concepts.

Like, what is missing when we are independent?  What have we lost being independent?  What are we avoiding by being independent?  Think “Little Red Hen”.  And conversely, what have we gained by being collective, by collaborating, by connecting?  Think “Spock and Good of the Many”.

There is in AEA a topical interest group on Independent Consulting.  This TIG is home to those evaluators who function outside an institution and who have made their own organization–who work independently, on contract.  In their mission statement, they purport to “Foster a community of independent evaluators…”  So by being separate, are they missing community and need to foster that aspect?  They insist that they are “…great at networking,” which doesn’t sound very independent; it sounds almost collective.  A small example, and probably not the best.

I think about the way the western world is today: other than your children and/or spouse/significant other, are you connected to a community? A network? A group?  Not just in membership (like at church or a club), but really connected (like in an extended family–whether of the heart or of the blood)?  Although the Independent Consulting TIG says they are great at networking, and some even work in groups, are they connected?  (Social media doesn’t count.)  Is the “I” identity a product of being independent?  It certainly is a characteristic of individualism.  Can you measure the value, merit, or worth of the work you do by the level of independence you possess?  Do internal evaluators garner all the benefits of being connected?  (As an internal evaluator, I’m pretty independent, even though there is a critical mass of evaluators where I work.)

Although being an independent evaluator has its benefits–less bias, a different perspective (do I dare say, more objectivity?)–are the distance created, the competition for position, and the risk taking worth the loss of relational harmony that can accompany relationships?  Is the US better off as its own country?  I’d say probably.  My musings only…what do you think?

Matt Keene, AEA’s thought leader for June 2012, says, “Wisdom, rooted in knowledge of thyself, is a prerequisite of good judgment. Everybody who’s anybody says so – Philo Judaeus, Socrates, Lao-tse, Plotinus, Paracelsus, Swami Ramdas, and Hobbs.”

I want to focus on “wisdom is a prerequisite of good judgement” and talk about how that relates to evaluation.  I also liked the list of “everybody who’s anybody.”  (Although I don’t know whom Matt means by Hobbs–is that Hobbes, or the English philosopher for whom the well-known previous figure was named, Thomas Hobbes, or someone else that I couldn’t find and don’t know?)  But I digress…

“Wisdom is a prerequisite for good judgement.”  Judgement is used daily by evaluators.  It results in the determination of value, merit, and/or worth of something.  Evaluators make a judgement of value, merit, and/or worth.  We come to these judgements through experience.  Experience with people, activities, programs, contributions, LIFE.  Everything we do provides us with experience; it is what we do with that experience that results in wisdom and, therefore, leads to good judgements.

Experience is a hard teacher: demanding, exacting, and often obtuse.  My 19-year-old daughter is going to summer school at OSU.  She got approval to take two courses and for those courses to transfer to her academic record at her college.  She was excited about the subject, got the book, read ahead, and looked forward to class, which started yesterday.  After class, I had never seen a more disappointed individual.  She found the material uninteresting (it was mostly review because she had read ahead), and she found the instructor uninspiring (possibly due to the class size of 35).  To me, it was obvious that she needed to re-frame this experience into something positive; she needed to find something she could learn from this experience that would lead to wisdom.  I suggested that she think of this experience as a cross-cultural exchange, challenging because of cultural differences.  In truth, a large state college is very different from a small liberal arts college; truly a different culture.  She has four weeks to pull some wisdom from this experience, four weeks to learn how to make a judgement that is beneficial.  I am curious to see what happens.

Not all evaluations result in beneficial judgements; often the answer, the judgement, is NOT what the stakeholders want to hear.  When that is the case, one needs to re-frame the experience so that learning occurs (both for the individual evaluator and for the stakeholders), so that the next time the learning, the hard-won wisdom, will lead to “good” judgement, even if the answer is not what the stakeholders want to hear.  Matt started his discussion with the saying that “wisdom, rooted in knowledge of self, is a prerequisite for good judgement.”  Knowing yourself is no easy task; you can only control what you say, what you do, and how you react (a form of doing/action).  The study of those things is a lifelong adventure, especially when you consider how hard it is to change yourself.  Just having knowledge isn’t enough for a good judgement; the evaluator needs to integrate that knowledge into the self and own it; then the result will be “good judgements”; the result will be wisdom.

I started this post back in April.  I had an idea that needed to be remembered…it had to do with the unit of analysis, a question which often occurs in evaluation.  To increase sample size and, therefore, power, evaluators often choose to run analyses on the larger number when the aggregate (i.e., the smaller number) is probably the “true” unit of analysis.  Let me give you an example.

A program is randomly assigned to fifth-grade classrooms in three different schools.  School A has three classrooms; school B has two classrooms; and school C has one classroom.  Altogether, there are approximately 180 students, six classrooms, and three schools.  What is the appropriate unit of analysis?  Many people use students, because of the sample size issue.  Some people will use classrooms because each got a different treatment.  Occasionally, some evaluators will use schools because that is the unit of randomization.  This issue elicits much discussion.  Some folks say that because students are in the school, they are really the unit of analysis because they are embedded in the randomization unit.  Some folks say that students are the best unit of analysis because there are more of them.  That certainly is the convention.  What you need to decide is what the unit is and be able to defend that choice.  Even though I would lose power, I think I would go with the unit of randomization.
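
Here is a minimal sketch (hypothetical scores; my own illustration) of what that choice means in practice: the same 180 students can be analyzed as 180 individual observations, 6 classroom means, or 3 school means, and the n you actually analyze shrinks accordingly.

```python
# A minimal sketch of the unit-of-analysis choice: ~180 students nested in
# 6 classrooms nested in 3 schools. All scores are hypothetical.
from statistics import mean

# Hypothetical raw data: {school: {classroom: [student scores]}}
data = {
    "A": {"A1": [72, 80, 75] * 10, "A2": [68, 74, 70] * 10, "A3": [81, 77, 79] * 10},
    "B": {"B1": [65, 70, 72] * 10, "B2": [78, 82, 80] * 10},
    "C": {"C1": [74, 76, 73] * 10},
}

students   = [s for school in data.values() for room in school.values() for s in room]
classrooms = [mean(room) for school in data.values() for room in school.values()]
schools    = [mean([s for room in school.values() for s in room]) for school in data.values()]

print(f"students as the unit:   n = {len(students)}")    # 180
print(f"classrooms as the unit: n = {len(classrooms)}")  # 6
print(f"schools as the unit:    n = {len(schools)}")     # 3 (the unit of randomization)
```

Which leads me to my next point–truth.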

At the end of the first paragraph, I used the word “true” in quotation marks.  The Kirkpatricks, in their most recent blog, opened with a quote from the US CIA headquarters in Langley, Virginia: “And ye shall know the truth, and the truth shall make you free.”  (We won’t talk about the fiction in the official discourse today…)  (Don Kirkpatrick developed the four levels of evaluation specifically for the training and development field.)  Jim Kirkpatrick, Don’s son, posits that, “Applied to training evaluation, this statement means that the focus should be on discovering and uncovering the truth along the four levels path.”  I will argue that the truth is how you (the principal investigator, program director, etc.) see the answer to the question.  Is that truth with an upper-case “T” or truth with a lower-case “t”?  What do you want it to mean?

Like history (history is what is written, usually by the winners, not what happened), truth becomes what you want the answer to mean.  Jim Kirkpatrick offers an addendum (also from the CIA), that of “actionable intelligence.”  He goes on to say that, “Asking the right questions will provide data that gives (sic) us information we need (intelligent) upon which we can make good decisions (actionable).”  I agree that asking the right question is important–probably the foundation on which an evaluation is based.  Making “good decisions” is in the eye of the beholder–what do you want it to mean?