I was reading a blog post by Harold Jarche, who stated that “Donald Taylor notes that, ‘everyone has a memory that is particularly attuned to learning some things very easily’. In his post, Donald says that the context in which we learn something, as well as how it is presented and received, are all important aspects of whether we will remember something.”

Jennifer Greene, a long-time colleague currently at the University of Illinois Urbana-Champaign, addresses context when she says, “We all know that the contexts in which our evaluands* take place are inextricably intertwined with the program as envisioned, implemented, experienced, and judged. And regarding this program context, Saville Kushner has profoundly challenged us to ask not, ‘how well are participants doing in the program?’ but rather ‘how well does the program serve, respect, and respond to these participants’ needs, hopes, and dreams in this place?’”

My friend and colleague, Patricia Rogers, says of cognitive bias, “It would be good to think through these in terms of systematic evaluation approaches and the extent to which they address these.” This was in response to the article here on cognitive bias. The article says that the human brain is capable of 10 to the 16th power (a big number) processes per second. Despite being faster than a speeding bullet, etc., the human brain has “annoying glitches (that) cause us to make questionable decisions and reach erroneous conclusions.”

Bias is something that evaluators deal with all the time. There are desired response bias, non-response bias, recency and immediacy bias, measurement bias, and…need I say more? Isn’t evaluation, and aren’t evaluators, supposed to be “objective”? That we as evaluators behave in an ethical manner? That we have dealt with potential bias and conflicts of interest? That is where cognitive biases appear. And you might not know it at all.
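If you want to check one of those biases empirically, here is a minimal sketch of a common non-response bias check, assuming you know a characteristic (say, age group) for everyone in your sampling frame; the column names and values below are hypothetical, not from any real survey.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical sampling frame: one row per sample member, with a known
# characteristic (age_group) and whether that person responded to the survey.
frame = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "35-54", "55+", "55+", "55+", "35-54"],
    "responded": [False, True, True, False, True, True, True, False],
})

# Cross-tabulate the known characteristic by response status.
table = pd.crosstab(frame["age_group"], frame["responded"])
print(table)

# Chi-square test of independence: a small p-value suggests respondents
# differ from non-respondents on this characteristic, a non-response warning.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```

The point is not the test itself; it is that a little deliberate checking is one way to keep our own glitches from slipping into the findings.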

KASA. You’ve heard the term many times. Have you really stopped to think about what it means? What evaluation approach will you use if you want to determine a difference in KASA? What analyses will you use? How will you report the findings?

Probably not. You just know that you need to measure KNOWLEDGE, ATTITUDE, SKILLS, and ASPIRATIONS.

The Encyclopedia of Evaluation (edited by Sandra Mathison) says that KASA influence the adoption of selected practices and technologies (i.e., programs). Claude Bennett uses KASA in his TOP model (the Bennett Hierarchy). I’m sure there are other sources.
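To make the “what analyses” question above a little more concrete, here is a minimal sketch, assuming paired pre/post ratings on a 5-point scale for the knowledge items of a survey; the scores are invented for illustration and this is one possible analysis, not a prescribed KASA method.

```python
from scipy.stats import wilcoxon

# Hypothetical paired ratings (1-5 scale) on the "knowledge" items of a
# retrospective pre/post survey; one pair per participant.
knowledge_pre  = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
knowledge_post = [4, 4, 3, 3, 5, 4, 3, 4, 2, 4]

# Mean change gives the plain-language finding for the report.
mean_change = sum(post - pre for pre, post in zip(knowledge_pre, knowledge_post)) / len(knowledge_pre)
print(f"Mean change in knowledge rating: {mean_change:.1f} points (1-5 scale)")

# Wilcoxon signed-rank test: a paired, nonparametric test suited to ordinal ratings.
stat, p_value = wilcoxon(knowledge_pre, knowledge_post)
print(f"Wilcoxon signed-rank p = {p_value:.3f}")
```

The same pattern would repeat for the attitude, skills, and aspirations items; the reporting piece is the size of the change and whether it is likely to be more than chance.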

First, let me say that getting to world peace will not happen in my lifetime (sigh…), and world peace is the ultimate impact. Everything else is an outcome. It may be a long-term outcome, that is, a condition change (social, economic, environmental, or civic), or not. Just because the powers that be use a term doesn’t mean the term is being used correctly!

Then let me say that evaluation is the way to know you got to that impact…ultimately, world peace. Ultimately. In the meantime, you will need to find approximate (proxy) measures.

Last week, I attended the Engagement Scholarship Consortium conference in State College, PA, home of Penn State. I had the good fortune to see long-time friends, meet new people, and get a few new ideas. One of the long-time friends I was able to visit with was Nancy Franz, Professor Emeritus, Iowa State University. She presented a session called “Four steps to measuring and articulating engagement impact”.

Basically, she reduced program evaluation into four steps (hence the title). And since engagement scholarship is a “program,” it needs to be evaluated to make sure it is making a difference. Folks are slowly coming around to that idea, if the (full) attendance at her session is any indication. She used different words than I would have used; I found myself adding parenthetical comments to her words.

I want to share in words what she shared graphically:

  1. In order to be able to conduct these four steps, you need evaluation training, evaluation support, and successful models;
  2. STEP 1: You need to map the intended program (my parenthetical was the “logic model” for which she provided the UWEX web site);
  3. STEP 2: You need to determine what “impact” will be measured (input vs. outcome);
  4. STEP 3: You need to collect and analyze data (qualitative and quantitative; a minimal analysis sketch follows this list);
  5. STEP 4: You need to tell the story (when, what, so what, now what; the public value);
  6. If you do these four steps, she believes, you will enhance paid and volunteer staff performance; increase program quality; and improve impact reporting (be persuasive).
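As promised in STEP 3, here is a minimal sketch of what “collect and analyze data” might look like once the data are in hand; the column names, codes, and values are hypothetical, and the point is simply that both the quantitative and the qualitative pieces can be summarized with a few lines of code.

```python
import pandas as pd

# Hypothetical post-program responses: a yes/no adoption item, a 1-5
# confidence rating, and an open-ended answer already coded into themes.
responses = pd.DataFrame({
    "practice_adopted": [1, 1, 0, 1, 0, 1, 1, 1],      # 1 = yes, 0 = no
    "confidence_post":  [4, 5, 3, 4, 2, 5, 4, 3],      # 1-5 scale
    "theme":            ["time", "cost", "time", "support",
                         "time", "cost", "support", "time"],
})

# Quantitative: simple descriptive statistics for the report.
print(f"Adoption rate: {responses['practice_adopted'].mean():.0%}")
print(f"Mean confidence (1-5): {responses['confidence_post'].mean():.1f}")

# Qualitative: frequency of the coded themes from the open-ended question.
print(responses["theme"].value_counts())
```

Numbers and theme counts like these are exactly the kind of thing to bring to the data party mentioned below.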

She had a few good suggestions; specifically:

  1. Since most people don’t like to analyze data (because they do not know how?), she holds a data party to look at what was found; and
  2. Case studies have value; use them.
  3. I added, “If you aren’t going to use the data, do not collect it. It only obfuscates the impact.”

Think about what you do when you evaluate a program. Do you do these four steps? Do you know what impact you are trying to achieve? And if you can’t get to world peace, that’s OK. Each step will bring you closer.

my two cents.

molly.

I’ve been stuck.

I haven’t blogged for three weeks. I haven’t blogged because I don’t have a topic. Oh, I’ve plenty to say (I am never at a loss for words… 🙂 ). I want something that relates to evaluation. Relates clearly. Without question. Evaluation.

So after 5 years, I’m going to start over. Evaluation is an everyday activity!

Evaluative thinking is something you do every day; probably all day. (I don’t know about when you are asleep, so I said probably.) I think evaluative thinking is one of those skills that everyone needs to learn systematically. I think everyone learns at least a part of evaluative thinking as they grow; the learning may not be systematic. I would put that skill in the same category as critical (not negative but thoughtful) thinking, team building, leadership, communication skills (both verbal and written), and technological facility, as well as some others which escape me right now. I would add systematic evaluative thinking.

Everyone has criteria on which decisions are based. Look at how you choose a package of cookies or a can of corn at the grocery store. What criteria do you use for choosing? Yet that wasn’t taught to you; it was just something you developed. Evaluative thinking is more than just choosing what you want for dinner. AARP lists problem solving as part of the critical thinking skills. I think it is more than just problem solving; I do agree that it is a critical thinking skill (see the “Core Critical Thinking Skills” graphic from Grant Tilus, Rasmussen College).

So you think thoughtfully about most events/activities/things that you do throughout the day. And you learn over time what works and what doesn’t; what has value and what doesn’t. You learn to discern the conditions under which something works; you learn what changes the composition of the outcome. You begin to think evaluatively about most things. One day you realize that you are critically thinking about what you can, will, and need to do. Evaluative thinking has become systematic. You realize that it depends on many factors. You realize that evaluative thinking is a part of who you are. You are an evaluator, even if you are a psychologist or geologist or engineer or educator first.

 

my two cents.

molly.

I just got back from a road trip across Southern Alabama with my younger daughter. We started from Birmingham and drove a very circuitous route ending in Mobile and the surrounding areas, then returned to Birmingham for her to start her second year at Birmingham-Southern College.

As we traveled, I read a book by Bill McKibben (one of many) called Oil and Honey: The Education of an Unlikely Activist. It is a memoir, a personal recounting of the early years of this decade, which corresponded with the years my older daughter was in college (2011-2014). I met Bill McKibben, who is credited with starting the non-profit 350.org in 2008 and is currently listed as “senior adviser and co-founder”. He is a passionate, soft-spoken man who believes that the world is on a short fuse. He really seems to believe that there is a better way to have a future. He, like Gandhi, is taking a stand. Oil and Honey puts into action Gandhi’s saying about being the change you want to see. As the subtitle indicates, McKibben is an unlikely activist. He is a self-described non-leader who led and advises the global effort to increase awareness of climate change/chaos. When your belief is on the line, you do what has to be done.

Evaluators are the same way. When your belief is on the line, you do what has to be done. And, hopefully, in the process you are the change that you want to see in the world. But know it cannot happen one pipeline at a time. The fossil fuel industry has too much money. So what do you do? You start a campaign. That is what 350.org has done: “There are currently fossil fuel divestment campaigns at 308 colleges and universities, 105 cities and states, and 6 religious institutions.” (Wikipedia, 350.org; scroll down to the heading “Fossil Fuel Divestment” to see the complete discussion.) Those are clear numbers, hard data for consumption. (Unfortunately, the divestment campaign at OSU failed.)

So I see the question as one of impact, though not specifically world peace (my ultimate impact). If there is no planet on which to work for world peace, there is no need for world peace. Evaluators can help. They can look at data critically. They can read the numbers. They can gather the words. This may be the best place for the use of pictures (they are, after all, worth 1000 words). Perhaps by combining efforts, the outcome will be an impact that benefits all humanity and builds a tomorrow for the babies born today.

my two cents.

molly.

 

The use of the term impact is problematic, as I see it. If you (or any evaluator) are going to have an impact, if your program is going to have an impact, if you are going to do anything other than focus on the outcomes, how will you know? Scriven, in his Thesaurus, says an impact evaluation is an evaluation which focuses on outcomes rather than process, progress (delivery), or implementation. (Is that an example of using the word to define the word?) Is an impact evaluation the same as an evaluation which captures the outcomes?

Ignorance is a choice.

Not knowing may be “easier”; you know, less confusing, less intimidating, less fearful, less embarrassing.

I remember when I first asked the question, “Is it easier not knowing?” What I was asking was “By choosing to not know, did I really make a choice, or was it a default position?” Because if you consciously avoid knowing, do you really not know, or are you just ignoring the obvious? Perhaps it goes back to the saying common on social media today: “Great people talk about ideas; average people talk about things; small people talk about other people” (which is a variation of what Eleanor Roosevelt said).

The use of a survey is a valuable evaluation tool, especially in the world of electronic media. The survey allows individuals to gather data (both qualitative and quantitative) easily and relatively inexpensively. When I want information about surveys, I turn to the 4th edition of the Dillman book (Dillman, Smyth, & Christian, 2014*). Dillman has advocated the “Tailored Design Method” for a long time. (I first became aware of his method, which he called the “Total Design Method,” in his 1978 first edition, a thin, 320-page volume [as opposed to the 509-page fourth edition].)

Today I want to talk about the “Tailored Design” method (originally known as the Total Design Method).

In the 4th edition, Dillman et al. say that “…in order to minimize total survey error, surveyors have to customize or tailor their survey designs to their particular situations.” They are quick to point out (through various examples) that the same procedures won’t work for all surveys. The “Tailored Design Method” refers to customizing survey procedures for each separate survey. It is based upon the topic of the survey and the audience being surveyed, as well as the resources available and the timeline in use. In his first edition, Dillman indicated that the TDM (then the Total Design Method) would produce a response rate of 75% for mail surveys, and that an 80%-90% response rate was possible for telephone surveys. Although I cannot easily find the same numbers in the 4th edition, I can provide an example (from the 4th edition, pages 21-22) where the response rate was 77% after combined mail and email contact over one month’s time. They used five contacts of both hard and electronic copy.
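If you want to see how your own numbers stack up against rates like those, here is a minimal sketch of computing a response rate and a rough 95% margin of error for a proportion; the counts are invented for illustration and are not Dillman’s.

```python
import math

# Hypothetical counts: sample members contacted (mail + email, five contacts)
# and usable completed questionnaires.
invited = 600
completed = 462

response_rate = completed / invited
print(f"Response rate: {response_rate:.0%}")   # -> 77%

# Rough 95% margin of error for a proportion (worst case p = 0.5),
# ignoring finite-population correction and any nonresponse bias.
p = 0.5
margin_of_error = 1.96 * math.sqrt(p * (1 - p) / completed)
print(f"Margin of error: +/- {margin_of_error:.1%}")
```

A higher response rate helps on both fronts: more completed questionnaires shrink the margin of error, and fewer non-respondents leave less room for non-response bias.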

This is impressive. (Most surveys that I and the people I work with conduct have a response rate of less than 50%.) Dillman et al. indicate that there are three fundamental considerations in using the TDM. They are:

  1. Reducing four sources of survey error–coverage, sampling, nonresponse, and measurement;
  2. Developing a set of survey procedures that interact and work together to encourage all sample members to respond; and
  3. Taking into consideration elements such as survey sponsorship, nature of survey population, and the content of the survey questions.

The use of a social exchange perspective suggests that respondent behavior is motivated by the return that behavior is expected, and usually does, bring. This perspective affects the decisions made regarding coverage and sampling, the way questions are written and questionnaires are constructed, and the way contacts are designed to produce the intended sample.

If you don’t have a copy of this book (yes, there are other survey books out there) on your desk, get one! It is well worth the cost ($95.00, Wiley; $79.42, Amazon).

* Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

my two cents.

molly.

“Fate is chance; destiny is choice.”

I went looking for who said that originally so that I could give credit. The closest saying I found was this: “Destiny is no matter of chance. It is a matter of choice: It is not a thing to be waited for, it is a thing to be achieved.”

William Jennings Bryan

 

Evaluation is like destiny. There are many choices to make. How do you choose? What do you choose?

Would you listen to the dictates of the Principal Investigator even if you know there are other, perhaps better, ways to evaluate the program?

What about collecting data? Are you collecting it because it would be “nice”? OR are you collecting it because you will use the data to answer a question?

What tools do you use to make your choices? What resources do you use?

I’m really curious. It is summer and, although I have a (long, to be sure) reading list, I wonder what else is out there, specifically relating to making choices. (And yes, I could use my search engine; I’d rather hear from my readers!)

Let me know. PLEASE!

my two cents.

molly.