Having just read Harold Jarche’s April 27, 2014 blog post, “Making Sense of the Network Era,” about personal knowledge mastery (PKM), I am once again reminded of the challenge of evaluation. I am often asked, “Do you have a form I could use about…?” My nutrition and exercise questions notwithstanding (I do have notebooks of those), this makes evaluation sound routine, standardized, or prepackaged rather than individualized, customized, or specific. For me, evaluation is about the exceptions to the rule: how the evaluation this week may have similarities to something I’ve done before (after all this time, I would hope so…), yet is so different; unique, specific.

You can’t expect to find a pre-made form for your individual program (unless, of course, you are replicating a previously established program). Evaluations are unique, and the evaluation approach needs to match that program’s uniqueness. Whether the evaluation uses a survey, a focus group, or an observation (or any other data-gathering approach), that approach needs to focus on the evaluation question you want answered. You can start with “What difference did the program make?” Only you, the evaluator, can determine if you have enough resources to conduct the evaluation that answers the specific questions that follow from “What difference did the program make?” You probably do not have enough resources to determine if the program led your target audience to world peace; you might have enough resources to determine if the intention to do something different is there. You probably have enough resources to decide how to use your findings. It is so important that the findings be used; use may be how world peace is accomplished.

There are a few commonalities in data collection; those are the demographics, the data that tell you what your target audience looks like: things like gender, age, marital status, education level, socioeconomic status (SES), and probably a few other things depending on the program. Make sure that when you ask for demographic information, a “choose not to answer” option is provided in the survey. Sometimes you have to ask; observations don’t always provide the answer. Include demographics in your survey, as most journals want to know what the target audience looked like.
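If it helps to make that concrete, here is a minimal sketch in Python of demographic items that each carry a “choose not to answer” option. The field names and answer categories are hypothetical placeholders, not a recommended standard:

```python
# A minimal sketch of demographic survey items, each with an explicit
# "choose not to answer" option. Field names and categories are hypothetical
# placeholders; adapt them to your own program and target audience.

OPT_OUT = "Choose not to answer"

demographic_items = {
    "gender": ["Female", "Male", "Prefer to self-describe", OPT_OUT],
    "age_range": ["18-24", "25-34", "35-49", "50-64", "65+", OPT_OUT],
    "marital_status": ["Single", "Married/partnered", "Divorced", "Widowed", OPT_OUT],
    "education": ["High school", "Some college", "Bachelor's", "Graduate degree", OPT_OUT],
    "ses": ["Low", "Middle", "High", OPT_OUT],
}

# Before the survey goes out, check that every item offers the opt-out choice.
for field, choices in demographic_items.items():
    assert OPT_OUT in choices, f"{field} is missing a 'choose not to answer' option"
```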

Readers, what makes your evaluations different, unique, special? I’d like to hear about that. Oh and while you are at it…like and share this post, if you do.


The question of surveys came up the other day. Again.

I got a query from a fellow faculty member and a query from the readership. (No not a comment; just a query–although I now may be able to figure out why the comments don’t work.)

So surveys; a major part of evaluation work. (My go-to book on surveys is Dillman’s 3rd edition; I understand there is a 4th edition coming later this year.)

After getting a copy of Dillman for your desk, this is what I suggest: start with what you want to know.

This may be in the form of statements or questions. If the result is complicated, see if you can simplify it by breaking it into more than one statement or question. Recently, I got a “what we want to know” in the form of complicated research questions. I’m not sure that the resulting survey questions answered the research questions, because of the complexity. (I’ll have to look at the research questions and the survey questions side by side to see.) Multiple simple statements/questions are easier to match to your survey questions; it is easier to see whether you have survey questions that answer what you want to know. Remember: if you will not use the answer (data), don’t ask the question. Less can actually be more, in this case, and just because something would be interesting to know doesn’t mean the data will answer your “what you want to know” question.
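As an entirely hypothetical sketch of that side-by-side check, in Python: every “what we want to know” statement should map to at least one survey question, and any question that maps to nothing is a candidate to cut.

```python
# A hypothetical sketch of matching "what we want to know" statements to
# survey questions. The statements and questions are made up; the point is
# the side-by-side check described above.

want_to_know = {
    "W1": "Do participants intend to change a practice after the program?",
    "W2": "Which parts of the program did participants find most useful?",
}

# Each survey question points back at the statement it answers (or None).
survey_questions = {
    "Q1": ("W1", "After this program, I intend to do something differently."),
    "Q2": ("W2", "Which session was most useful to you?"),
    "Q3": (None, "How far did you travel to attend?"),  # interesting, but answers nothing
}

covered = {w for w, _ in survey_questions.values() if w is not None}
for w, text in want_to_know.items():
    if w not in covered:
        print(f"{w} ('{text}') has no survey question that answers it")
for q, (w, text) in survey_questions.items():
    if w is None:
        print(f"{q} ('{text}') answers nothing you want to know; consider cutting it")
```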

Evaluators strive for evaluation use. (See Patton, M. Q. (2008). Utilization-Focused Evaluation (4th ed.). Thousand Oaks, CA: Sage Publications, and/or Patton, M. Q. (2011). Essentials of Utilization-Focused Evaluation. Thousand Oaks, CA: Sage Publications.) See also The Program Evaluation Standards, which lists utility (use) as the first attribute and standard for evaluators (Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The Program Evaluation Standards: A Guide for Evaluators and Evaluation Users (3rd ed.). Thousand Oaks, CA: Sage Publications).

Evaluation use is related to stated intention to change, about which I’ve previously written. If your statements/questions about what you want to know will lead you to use the evaluation findings, then stating the question in a way that promotes use will foster use, i.e., intention to change. Don’t do the evaluation for the sake of doing an evaluation. If you want to improve the program, evaluate. If you want to know about the program’s value, merit, and worth, evaluate. Then use. One way to make sure that you will follow through is to frame your initial statements/questions in a way that will facilitate use. Ask simply.

I’ve just read Ernie House’s book, Regression to the Mean. It is a NOVEL about evaluation politics. A publisher’s review says, “Evaluation politics is one of the most critical, yet least understood aspects of evaluation. To succeed, evaluators must grasp the politics of their situation, lest their work be derailed. This engrossing novel illuminates the politics and ethics of evaluation, even as it entertains. Paul Reeder, an experienced (and all too human) evaluator, must unravel political, ethical, and technical puzzles in a mysterious world he does not fully comprehend. The book captures the complexities of evaluation politics in ways other works do not. Written expressly for learning and teaching, the evaluation novel is an unconventional foray into vital topics rarely explored.”

Many luminaries (Patton, Lincoln, Scriven, Weiss) made pre-publication comments. Although I found the book fascinating, I found the included quote attributed to Freud compelling: “The voice of the intellect is a soft one, but it does not rest until it has gained a hearing. Ultimately, after endless rebuffs, it succeeds. This is one of the few points in which we can be optimistic about the future of mankind (sic).” Although Freud wasn’t speaking about evaluation, House contends that this statement applies, and goes on to say, “Sometimes you have to persist against your emotions as well as the emotions of others. None of us are rational.”

So how does rationality fit into evaluation? I would contend that it doesn’t. Although the intent of evaluation is to be objective, none of us can be, because of what I call personal and situational bias, known in the literature as cognitive bias. I contend that if one has cognitive bias (and everyone does), then it prevents us from being rational, try as we might. Our emotions get in the way. House’s comment (above) seems fitting to evaluation: evaluators must persist against personal emotions as well as the emotions of others. I would add: persist against personal and situational bias. I believe it is important to make personal and situational bias explicit prior to commencing an evaluation. By clarifying the assumptions held by the stakeholders and the evaluator, surprises are minimized, and the evaluation may be more useful to program people.

Warning: This post may contain information that is controversial.

Schools (local public schools) were closed (still are).

The University (which never closes) was closed for four days (now open).

The snow kept falling and falling and falling. (Thank you, Sandra Thiesen, for the photo of snow in Corvallis, February 2014.)

Eighteen inches. Then freezing rain. It is a mess (although as I write this, the sun is shining, and it is 39°F and supposed to get to 45°F by this afternoon).

This is a complex messy system (thank you, Dave Bella). It isn’t getting better. This is the second snow Corvallis has experienced in as many months, with increasing amounts.

It rains in the valley in Oregon; IT DOES NOT SNOW.

Another example of a complex messy system is what is happening in the UK.

These are examples of extreme events; examples of climate chaos.

Evaluating complex messy systems is not easy. There are many parts. If you hold one part constant, what happens to the others? If you don’t hold one part constant, what happens to the rest of the system? Systems thinking and systems evaluation have come of age with the 21st century; there were always people who viewed the world as a system: one part linked to another, indivisible. Systems theory dates back at least to von Bertalanffy, who developed general systems theory and published the book by the same name in 1968 (ISBN 0-8076-0453-4).

One way to view systems is in the Wikipedia diagram “Systems thinking about the society.”

Evaluating systems is complicated and complex.

Bob Williams, along with Iraj Imam, edited the volume Systems Concepts in Evaluation (2007) and, along with Richard Hummelbrunner, wrote the volume Systems Concepts in Action: A Practitioner’s Toolkit (2010). He is a leader in systems and evaluation.

These two books relate to my political statement at the beginning and complex messy systems.  According to Amazon, the second book “explores the application of systems ideas to investigate, evaluate, and intervene in complex and messy situations”.

If you think your program works in isolation, think again.  If you think your program doesn’t influence other programs, individuals, stakeholders, think again.  You work in a complex messy system. Because you work in a complex messy system, you might want to simplify the situation (I know I do); only you can’t.  You have to work within the system.

It might be worthwhile to get von Bertalanffy’s book; worthwhile to get Williams’s books; worthwhile to get a copy of Gunderson and Holling’s book, Panarchy: Understanding Transformations in Human and Natural Systems.

After all, nature is a complex messy system.

On February 1 at 12:00 pm PT, I will be holding my annual virtual tea party. This is something I’ve been doing since February of 1993. I was in Minnesota, and the winter was very cold; although not as bleak as winter in Oregon, I was missing my friends who did not live near me. I had a tea party for the folks who were local and wanted to think that those who were not local were enjoying the tea party as well. So I created a virtual tea party. At that time, the internet was not available; all this was done in hard copy (to this day, I have one or two friends who do not have internet…sigh…). Today, the internet makes the tea party truly virtual; well, the invitation is; you have to have a real cup of tea wherever you are.
Virtual Tea Time 2014


How is this evaluative?  Gandhi says that only you can be the change you want to see…this is one way you can make a difference.  How will you know?

I know because my list of invitees has grown exponentially. And some of them share the invitation. They pass it on. I started with a dozen or so friends. Now my address list is over three pages long, including my daughters and daughters of my friends (maybe sons, too, for that matter…).

Other ways: design an evaluation plan; develop a logic model; create a metric/rubric. Report the difference. This might be a good place for using an approach other than a survey or Likert scale; one possible sketch follows. Think about it.
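Here is one such sketch, in Python: a tiny rubric for the tea party. The criteria, levels, and this year’s observations are all hypothetical; the point is that “report the difference” needs an explicit yardstick, and a rubric can be that yardstick where a survey or Likert scale would be overkill.

```python
# A hypothetical rubric for "did the virtual tea party make a difference?"
# Criteria, levels, and observations are illustrative only.

rubric = {
    "reach": {
        1: "invitee list unchanged",
        2: "invitee list grew",
        3: "invitee list grew AND invitees pass the invitation on",
    },
    "engagement": {
        1: "no replies",
        2: "some replies",
        3: "replies plus photos of real cups of tea",
    },
}

observations = {"reach": 3, "engagement": 2}  # what was observed this year

for criterion, level in observations.items():
    print(f"{criterion}: level {level} of {max(rubric[criterion])}"
          f" ({rubric[criterion][level]})")
```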

Did you know that there are at least 11 winter holidays besides Christmas, many of them related to light or the return of light?

One needs evaluation tools to determine the merit or worth, to evaluate the holiday’s value to you. For me, any that return light are important. So for me, there is Hanukkah (and eight candles), Solstice (and bonfires and yule logs), Christmas (and Advent wreaths with five candles), and Kwanzaa (and the kinara’s seven candles). Sometimes Diwali falls late enough in November to be included (it is the ancient Hindu festival of lights, a movable feast like Hanukkah).

I have celebrations for Hanukkah (I have several menorahs), for Solstice (I have two special candelabra that hold 12 candles; a mini-bonfire, to be sure), for Advent/Christmas (I make a wreath each year), and for Kwanzaa (a handmade kinara). And foods for each celebration as well. Because I live in a multicultural household, it is important that everyone understand that no holiday is more important than any other; all talk about returning light (literal or figurative). Sometimes the holidays overlap: Hanukkah, Solstice, and Christmas all in the same week…phew, I’m exhausted just thinking about it. Sometimes it seems hard to keep them separate; then I realize that returning the light is not separate; it is light returning. It is an evaluative task.

So welcome the newborn sun/son…the light returns. Evaluation continues.

Happy Holidays…all of them!

I’m taking two weeks holiday–will see you in the new year.

I was reminded about the age of this blog (see comment below).  Then it occurred to me:  I’ve been writing this blog since December 2009.  That is 4 years of almost weekly posts.  And even though evaluation is my primary focus, I occasionally get on my soap box and do something different (White Christmas Pie, anyone?).  My other passion besides evaluation is food and cooking.  I gave a Latke party on Saturday and the food was pretty–and it even tasted good.  I was more impressed by the visual appeal of my table; my guests were more impressed by the array of tastes, flavors, and textures.  I’d say the evening was a success.  This blog is a metaphor for that table.  Sometimes I’m impressed with the visual appeal; sometimes I’m impressed with the content.  Today is an anniversary.  Four years.  I find that amazing (visual appeal).  The quote below (a comment offered by a reader on the post “Is this blog making a difference?”, a post I made a long time ago) is about content.

“Judging just from the age of your blog I must speculate that you’ve done something right. If not then I doubt you’d still be writing regularly. Evaluation of your progress is important but pales in comparison to the importance of writing fresh new content on a regular basis. Content that can be found no place else is what makes a blog truly useful and indeed helps it make a difference.”

Audit or evaluation?

I’m an evaluator; I want to know what difference the “program” is making in the lives of the participants. The local school district where I live, work, and send my children to school has provided middle school children with iPads. They want to “audit” their use. I commend the school district for that initiative (both giving the iPads as well as wanting to determine their effectiveness). I wonder if they really want to know what difference the electronics are making in the lives of the students. I guess I need to go re-read Tom Schwandt’s 1988 book, “Linking Auditing and Metaevaluation,” a book he wrote with Ed Halpern, as well as see what has happened in the last 25 years (and it is NOT that I do not have anything else to read…smiley). I think it is important to note the sentence (taken from the foreword), “Nontraditional studies are found not only in education, but also in…divers fields…” (and the list they provide is a who’s who in social science). The problem of such studies is “establishing their merit.” That is always a problem with evaluation: establishing the merit, worth, value of a program (study).

We could spend a lot of time debating the merit, worth, and value of using electronics in the pursuit of learning. (In fact, Jeffrey Selingo writes about the need to personalize instruction using electronics in his 2013 book College (Un)bound; very readable, recommended.) I do not think counting the number of apps or the number of page views is going to answer the question posed. I do not think counting the number of iPads returned in working condition will either. This is an interesting experiment. How, reader, would you evaluate the merit, worth, and value of giving iPads to middle school children? All ideas are welcome; let me know, because I do not have an answer, only an idea.

For the first time in my lifetime, the first day of Hanukkah is also Thanksgiving. The pundits are sagely calling the event Thanksgivukkah. According to the referenced source, the overlap will not happen again for over 70,000 years. However, according to another source, it could happen again in 2070 and 2165. Although I do not think I’ll be around in 2070, my children could be (they are 17 and 20 as of this writing). I find this phenomenon really interesting: Thanksgiving usually starts the US holiday season, and Hanukkah falls later, during Advent. Not so this year. I wonder how people combine latkes and Thanksgiving (even without the turkey). Loaded latkes? Thanksgivukkah latkes? (My appreciation to Kia.)

So I’m sure you are wondering, HOW EXACTLY DOES THIS RELATE TO EVALUATION?

I decided that it was time to revisit my blog title, Evaluation is an Everyday Activity. Every day you evaluate something. Although you do not necessarily articulate out loud the criteria against which you are determining merit, worth, and value, you have those criteria. I have them for latkes AND Thanksgiving. Our latkes must be crispy and made of winter vegetables, including potatoes. This allows me to use a variety of winter vegetables I may have gotten in my CSA. (Beet latkes? Sweet potato latkes? Celeriac latkes? You bet!) Our Thanksgiving is to have foods for which we are truly thankful. That allows us to think about gratitude. Each year our menu is different because each year we are thankful for different things. (I must confess, however, we always have pie: pumpkin, which I make from home-grown pumpkin/squash, and chocolate pecan, which is an original old family recipe.) One year when we put all the food on the table, all the food was green. We didn’t plan it that way; it just happened because those were foods for which we were thankful. This year, we will have mashed potatoes (by the Queen of mashed potatoes), Celebration Filo, both the gluten-free version (made with rice wrappers and no onion, garlic, or dairy) and the glutened version (the one we renamed, in the link above), and something else that will probably be green. This year I’m thankful for my gluten-free, dairy-free friend who will join us for Thanksgiving, and I’m working up alternatives to accommodate her and still satisfy the rest of us.

So you see, even when I’m thinking about Thanksgiving, latkes, and gratitude, I’m thinking about evaluation. What merit does the “program” have? What is its worth? What is its value? Those are all evaluative questions that apply to Thanksgiving (and latkes and gratitude).

So you see, Evaluation is an Everyday Activity.

I won’t be blogging next week. Enjoy. Be grateful.


Variables.

We all know about independent variables and dependent variables. We probably even learned about moderator variables, control variables, and intervening variables. But have you heard of confounding variables? These are variables over which you have no (or very little) control. They correlate with both the independent and the dependent variable, and the resulting spurious relationship plays havoc with analyses, program outcomes, and logic models. You see them often in social programs.
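To see that spurious relationship concretely, here is a small simulation sketch in Python. Everything in it (variable names, effect sizes) is illustrative, not from any real program: the confounder Z drives both X and Y, X has no effect on Y at all, and yet X and Y correlate.

```python
# A hypothetical simulation of a confounding variable. Z (the confounder)
# influences both X (the independent variable) and Y (the dependent variable);
# X has NO direct effect on Y, yet X and Y still correlate.

import random

random.seed(42)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]      # confounder (uncontrolled)
x = [zi + random.gauss(0, 1) for zi in z]       # driven partly by Z
y = [2 * zi + random.gauss(0, 1) for zi in z]   # driven only by Z, never by X

def corr(a, b):
    """Pearson correlation, computed from scratch to keep the sketch self-contained."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(f"corr(X, Y) = {corr(x, y):.2f}")  # roughly 0.6, though X never causes Y
```

Compare X and Y only within narrow bands of Z and the apparent relationship largely disappears; that is what “controlling for” a variable buys you, when you know the variable is there.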

Ever encounter one? (Let me know.) Need an example? Here is one a colleague provided. A program was developed to assist children removed from their biological mothers (even though the courts typically favor mothers), to improve the children’s choices and chances of success. The program included training of key stakeholders (judges, social services, potential foster parents). The confounding variable that wasn’t taken into account was the sudden appearance of the biological father. Judges assumed that he was no longer present (and most of the time he wasn’t); social services established fostering without taking the biological father’s presence into consideration; potential foster parents were not alerted in their training to the possibility. Needless to say, the program failed. When biological fathers appeared (as often happened), the program had no control over the effect they had. Fathers had not been included in the program’s equation.

Reviews.

Recently, I was asked to review a grant proposal; the award would amount to several hundred thousand dollars (and in today’s economy, that is no small change). The PI’s passion came through in the proposal’s text. However, the PI and the PI’s colleagues did some major lumping in the text that confounded the proposed outcomes. I didn’t see how what was being proposed would result in what was said to happen. This is an evaluative task. I was charged with evaluating the proposal on technical merit, possibility of impact (certainly not world peace), and achievability. The proposal was lofty and meant well. The likelihood that it would accomplish what it proposed was unclear, despite the PI’s passion. When reviewing a proposal, it is important to think big picture as well as small picture. Most proposals will not be sustainable after the end of funding. Will the proposed project really be able to make an impact (and I’m not talking here about world peace)?

Conversations.

I attended a meeting recently that focused on various aspects of diversity. (Now, among the confounding things here is what one means by diversity; is it only the intersection of gender and race/ethnicity, or something bigger, more?) One of the presenters talked about how, just by entering into the conversation, the participants would be changed. I wondered: how can that change be measured? How would you know that a change took place? Any ideas? Let me know.

Focus groups.

A colleague asked whether a focus group could be conducted via email. I had never heard of such a thing (virtual, yes; email, no). Dick Krueger and Mary Ann Casey only talk about electronic reporting in the 4th edition of their focus group book. If I go to Wikipedia (keep in mind it is a wiki…), there is a discussion of online focus groups, but nothing offered about email focus groups. So I ask you, readers: is it a focus group if it is conducted by email?


What follows is a primer, one of the first things evaluators learn when developing a program evaluation. This is something that cannot be said enough. Program evaluation is about the program. NOT about the person who leads the program; NOT about the policy behind the program; NOT about the people who are involved in the program. IT IS ABOUT THE PROGRAM!

Phew. Now that I’ve said that, I’ll take a deep breath and elaborate.


“Anonymity, or at least a lack of face-to-face dialogue, leads people to post personal attacks…” (This was said by Nina Bahadur, Associate Editor, HuffPost Women.) Although she was speaking about blogs, not specifically program evaluation, this applies to program evaluations. Evaluations are handed out at the end of a program; because they do not ask for identifying information, they often lead to personal attacks. Personal attacks are not helpful to the program lead, the program, or the participants’ learning.

The program lead really wants to know ABOUT THE PROGRAM, not slams about what s/he did or didn’t do, said or didn’t say. There are some things about a program over which the program lead doesn’t have any control: the air handling at the venue; the type of chairs used; the temperature of the room; sometimes, even the venue. The program lead does have control over the choice of venue (usually), the caterer (if food is offered), the materials (the program) offered to the participants, and how s/he looks (grumpy or happy; serious or grateful). I’ve just learned that how the “teacher” looks at the class makes a big difference in participants’ learning.

What a participant must remember is that they agreed to participate. It may have been a requirement of their job; it may have been encouraged by their boss; it may have been required by their boss. Whatever the reason, they agreed to participate. They must be accountable for their participation. Commenting on those things over which the program lead has no control may make them feel better in the short run; it doesn’t do any good to improve the program or to determine if the program made a difference, that is, had merit, worth, value. (Remember: the root word of evaluation is VALUE.)

Personal grousing doesn’t add to the program’s value. The questions that must be remembered when filling out an evaluation are, “Would this comment be said in real life (not on paper)? Would you tell the person this comment?” If not, it doesn’t belong in your evaluation. Program leads want to build a good and valuable program. The only way they can do that is to receive critical feedback about the program. So if the food stinks and the program lead placed the order with the caterer, tell the program lead not to use the caterer again; don’t tell the program lead that her/his taste in food is deplorable. How does that improve the program? If the chairs are uncomfortable, tell the program lead to tell the venue that participants found the chairs uncomfortable; the program lead didn’t deliberately make the chairs uncomfortable. If there wasn’t enough time for sharing, tell the program lead to increase the sharing time, because sometimes sharing personal experiences is just what is needed to make the program meaningful to participants.