I was reading a blog post by Harold Jarche, who stated, "Donald Taylor notes that 'everyone has a memory that is particularly attuned to learning some things very easily.' In his post, Donald says that the context in which we learn something, as well as how it is presented and received, are all important aspects of whether we will remember something."

Jennifer Greene, a long-time colleague currently at the University of Illinois Urbana-Champaign, addresses context when she says, "We all know that the contexts in which our evaluands [the things we evaluate] take place are inextricably intertwined with the program as envisioned, implemented, experienced, and judged. And regarding this program context, Saville Kushner has profoundly challenged us to ask not, 'how well are participants doing in the program?' but rather 'how well does the program serve, respect, and respond to these participants' needs, hopes, and dreams in this place?'"

My friend and colleague, Patricia Rogers, says of cognitive bias, "It would be good to think through these in terms of systematic evaluation approaches and the extent to which they address these." This was in response to an article on cognitive bias. The article says that the human brain is capable of 10 to the 16th power (a big number) processes per second. Despite being faster than a speeding bullet, etc., the human brain has "annoying glitches [that] cause us to make questionable decisions and reach erroneous conclusions."

Bias is something that evaluators deal with all the time. There is desired-response bias, non-response bias, recency and immediacy bias, measurement bias, and…need I say more? Aren't evaluation, and evaluators, supposed to be "objective"? Aren't we supposed to behave in an ethical manner? To have dealt with potential bias and conflicts of interest? That is where cognitive biases appear, and you might not know it at all.
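
As a toy illustration of how just one of these, non-response bias, can quietly skew a finding, here is a minimal sketch in Python. Everything in it is hypothetical: the satisfaction scores, the response rates, and the assumption that satisfied participants are more likely to return the survey.

```python
import random

random.seed(42)

# Hypothetical program: 1,000 participants rate satisfaction 1-5.
# The "true" mean is what we would see if everyone responded.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(1000)]
true_mean = sum(population) / len(population)

# Non-response bias (assumed): satisfied participants (4-5) return
# the survey 80% of the time, dissatisfied ones only 20% of the time.
respondents = [
    score for score in population
    if random.random() < (0.8 if score >= 4 else 0.2)
]
observed_mean = sum(respondents) / len(respondents)

print(f"True mean satisfaction:      {true_mean:.2f}")
print(f"Observed mean (respondents): {observed_mean:.2f}")
```

No individual answer here is wrong; the distortion comes entirely from who answered, which is exactly why this kind of bias is so easy to miss.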

KASA. You've heard the term many times. Have you really stopped to think about what it means? What evaluation approach will you use if you want to determine a difference in KASA? What analyses will you use? How will you report the findings?

Probably not. You just know that you need to measure KNOWLEDGE, ATTITUDE, SKILLS, and ASPIRATIONS.

The Encyclopedia of Evaluation (edited by Sandra Mathison) says that KASA influence the adoption of selected practices and technologies (i.e., programs). Claude Bennett uses KASA in his TOP model, the Bennett Hierarchy. I'm sure there are other sources.
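
If you did want to determine a difference in KASA, one common analysis is a paired pre/post comparison of self-reported scores. Here is a minimal sketch, with hypothetical knowledge scores from ten participants (the data, the 1-5 scale, and the choice of a paired t-test are all illustrative assumptions, not a prescribed method):

```python
from scipy import stats

# Hypothetical pre/post knowledge scores (1-5 scale) from the same
# ten participants, collected before and after a program.
pre  = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
post = [4, 4, 3, 3, 5, 3, 4, 4, 2, 3]

# Paired t-test: did knowledge change for the same individuals?
t_stat, p_value = stats.ttest_rel(post, pre)
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"Mean change: {mean_change:.2f} points on a 5-point scale")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same pattern works for attitude, skill, and aspiration items; reporting the mean change alongside the test keeps the finding readable for non-statisticians.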

First, let me say that getting to world peace will not happen in my lifetime (sigh…), and world peace is the ultimate impact. Everything else is an outcome. It may be a long-term outcome, that is, a condition change (social, economic, environmental, or civic), or it may not. Just because the powers that be use a term doesn't mean the term is being used correctly!

Then let me say that evaluation is the way to know you got to that impact…ultimately, world peace. Ultimately. In the meantime, you will need to find approximate (proxy) measures.

Last week, I attended the Engagement Scholarship Consortium conference in State College, PA, home of Penn State. I had the good fortune to see long-time friends, meet new people, and get a few new ideas. One of the long-time friends I was able to visit with was Nancy Franz, Professor Emeritus, Iowa State University. She presented a session called "Four steps to measuring and articulating engagement impact."

Basically, she reduced program evaluation to four steps (hence the title). And since engagement scholarship is a "program," it needs to be evaluated to make sure it is making a difference. Folks are slowly coming around to that idea, if the attendance at her session (it was full) is any indication. She used different words than I would have used; I found myself adding parenthetical comments to her words.

I want to share in words what she shared graphically:

  1. In order to be able to conduct these four steps, you need evaluation training, evaluation support, and successful models;
  2. STEP 1: You need to map the intended program (my parenthetical was the “logic model” for which she provided the UWEX web site);
  3. STEP 2: You need to determine what “impact” will be measured (input vs. outcome);
  4. STEP 3: You need to collect and analyze data (qualitative and quantitative; a minimal sketch of this step follows the list);
  5. STEP 4: You need to tell the story (when, what, so what, now what; the public value);
  6. If you do these four steps, she believes that you will enhance paid and volunteer staff performance; increase program quality; and improve impact reporting (be persuasive).
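
To make STEPS 3 and 4 concrete, here is a minimal sketch of what "analyze the data, then tell the story" might look like. The adoption figures and comment themes are entirely made up, and the what/so what/now what framing follows her STEP 4 outline:

```python
from collections import Counter

# Hypothetical STEP 3 data: one quantitative indicator (did the
# participant adopt the practice?) plus open-ended comments already
# coded into themes (a simple stand-in for qualitative analysis).
adopted = [True, True, False, True, True, False, True, True]
themes = ["confidence", "time savings", "confidence",
          "new partnerships", "confidence", "time savings"]

# STEP 3: analyze - one quantitative summary, one qualitative tally.
adoption_rate = sum(adopted) / len(adopted)
theme_counts = Counter(themes)
top_theme, top_count = theme_counts.most_common(1)[0]

# STEP 4: tell the story - what, so what, now what.
print(f"What:     {adoption_rate:.0%} of participants adopted the practice.")
print(f"So what:  '{top_theme}' came up {top_count} times in comments.")
print(f"Now what: follow up on the less-cited themes: "
      f"{[t for t, _ in theme_counts.most_common()[1:]]}")
```

Pairing one number with one theme like this is a crude version of her "public value" point: the quantitative result says what changed, and the qualitative result says why it mattered.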

She had a few good suggestions; specifically:

  1. Since most people don’t like to analyze data (because they do not know how?), she holds a data party to look at what was found; and
  2. Case studies have value; use them.
  3. I added, “If you aren’t going to use the data, do not collect it. It only obfuscates the impact.”

Think about what you do when you evaluate a program. Do you do these four steps? Do you know what impact you are trying to achieve? And if you can’t get to world peace, that’s OK. Each step will bring you closer.

my two cents.

molly.