Sheila Robinson has an interesting post titled “Outputs are for programs. Outcomes are for people.” Sounds like a logic model to me.

Evaluating something (a strategic plan, an administrative model, a range management program) can be problematic, especially if all you do is count. So ask yourself: “Do you want to count?” or “Do you want to determine what difference you made?” I think it all comes down to outputs and outcomes.

Logic model

The model below explains the difference between outputs and outcomes.

[Logic model graphic.] (I tried to find a link on the University of Wisconsin website and UNFORTUNATELY it is no longer there…go figure. Thanks to Sheila, I found this link, which talks about outputs and outcomes.) I think this model makes clear the difference between Outputs (activities and participation) and Outcomes/Impact (learning, behavior, and conditions).
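One way to keep the distinction straight is to treat the logic model as a data structure. Here is a minimal sketch in Python; the categories follow the logic model described above, while the program content (a hypothetical range management program) is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    inputs: list[str]                 # what is invested
    outputs_activities: list[str]     # what the program does
    outputs_participation: list[str]  # whom the program reaches
    outcomes_learning: list[str]      # short term: knowledge, attitudes, skills
    outcomes_behavior: list[str]      # medium term: practice, decisions
    outcomes_conditions: list[str]    # long term: social, economic, environmental

# A hypothetical range management program expressed in the model:
range_program = LogicModel(
    inputs=["staff time", "grant funds"],
    outputs_activities=["workshops", "field demonstrations"],
    outputs_participation=["ranchers who attend"],
    outcomes_learning=["improved grazing knowledge"],
    outcomes_behavior=["adoption of rotational grazing"],
    outcomes_conditions=["healthier rangeland"],
)
```

Everything in the first two output fields can be counted; everything in the three outcome fields has to be measured as a change.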

Probable? Maybe. Making a difference is always possible.

The Oxford English Dictionary defines possible as capable of being (may or can exist, be done, or happen). It defines probable as worthy of acceptance, believable.

Ray Bradbury: “I define science fiction as the art of the possible. Fantasy is the art of the impossible.”

Somebody asked me what the difference was between science fiction and fantasy. Certainly the simple approach is that science fiction deals with the possible (if you can think it, it can happen). Fantasy deals with monsters, fairies, goblins, and other mythical creatures, i.e., majic and majical creatures.

(Disclaimer: I personally believe in majic; much of fantasy deals with magic.) I love the Arthurian legend (it could be fantasy; it has endured for so long it is believable). It is full of majic. I especially like the Marion Zimmer Bradley book, The Mists of Avalon. (I find the feminist perspective refreshing.)

Is fantasy always impossible, as Bradbury suggests, or is it just improbable? (Do the rules of physics apply?) After that minor digression, this takes me back to Bradbury’s quote and to evaluation. Bradbury also says that “Science fiction, again, is the history of ideas, and they’re always ideas that work themselves out and become real and happen in the world.” Not unlike evaluation. Evaluation works itself out and becomes real and happens. Usually.

Evaluation and the possible.

Often, I am invited to be the evaluator of record after the program has started. I sigh. Then I have a lot of work to do. I must teach folks that evaluation is not an “add-on” activity. I must also teach folks how to identify the difference the program made. Then there is the issue of outputs (activities, participants) vs. outcomes (learning, behavior, conditions). Many principal investigators want to count differences pre/post.

Does “how many” provide a picture of what difference the program made? If you start with no or few participants and end with many, have you made a difference? Yes, it is possible to count. Counts often meet reporting requirements. They are possible. So is documenting the change in knowledge, behavior, and conditions; it takes more work and more money, but it is possible. Will you get to world peace? Probably not, even if you can think it. World peace may be probable; it may not be possible (at least in my lifetime).
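To make the contrast concrete, here is a minimal sketch, with made-up numbers, of both questions asked of the same program:

```python
# Hypothetical pre/post knowledge scores (1-5 scale) for the same six
# participants; real scores would come from matched pre/post instruments.
pre  = [2, 3, 2, 4, 3, 2]
post = [4, 4, 3, 5, 4, 4]

# "Do you want to count?" -- an output:
print(f"Participants served: {len(post)}")

# "Do you want to determine what difference you made?" -- an outcome:
gains = [b - a for a, b in zip(pre, post)]
print(f"Mean pre/post gain: {sum(gains) / len(gains):.2f} points")
```

The count answers a reporting requirement; the gain begins to answer the difference question.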

my two cents.

molly.


“It always seems impossible until it’s done.” ~Nelson Mandela

How many times have you shaken your head in wonder? in confusion? in disbelief?

Regularly throughout your life, perhaps. (Now if you are a wonder kid, you probably have just ignored the impossible and moved on to something else.) Most of us have been in awe, uncertainty, incredulity. Most of us will always look at that which seems impossible and then be amazed when it is done. (Mandela was such a remarkable man, with such amazing insights.)

KASA. You’ve heard the term many times. Have you really stopped to think about what it means? What evaluation approach will you use if you want to determine a difference in KASA? What analyses will you use? How will you report the findings?

Probably not. You just know that you need to measure KNOWLEDGE, ATTITUDES, SKILLS, and ASPIRATIONS.

The Encyclopedia of Evaluation (edited by Sandra Mathison) says that they influence the adoption of selected practices and technologies (i.e., programs). Claude Bennett uses KASA in his TOP model (the Bennett hierarchy). I’m sure there are other sources.
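If you do stop to think about measuring KASA, one simple starting point is a pre/post item per domain. A minimal sketch follows, with hypothetical (not validated) items on a 1-5 agreement scale:

```python
# Hypothetical survey items, one per KASA domain, scored pre and post
# on a 1-5 agreement scale. The items are illustrative, not validated.
kasa_items = {
    "knowledge":   "I can describe the recommended practice.",
    "attitudes":   "The recommended practice is worth my time.",
    "skills":      "I can carry out the recommended practice unaided.",
    "aspirations": "I intend to use the recommended practice this season.",
}

def kasa_change(pre: dict[str, int], post: dict[str, int]) -> dict[str, int]:
    """Per-domain pre/post change; a first pass at 'what difference?'"""
    return {domain: post[domain] - pre[domain] for domain in kasa_items}

print(kasa_change(
    pre={"knowledge": 2, "attitudes": 3, "skills": 2, "aspirations": 3},
    post={"knowledge": 4, "attitudes": 4, "skills": 3, "aspirations": 5},
))
```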

“fate is chance; destiny is choice.”

Went looking for who said that originally so that I could give credit. Found this as the closest saying: “Destiny is no matter of chance. It is a matter of choice: It is not a thing to be waited for, it is a thing to be achieved.”

William Jennings Bryan


Evaluation is like destiny. There are many choices to make. How do you choose? What do you choose?

Would you listen to the dictates of the Principal Investigator even if you know there are other, perhaps better, ways to evaluate the program?

What about collecting data? Are you collecting it because it would be “nice”? OR are you collecting it because you will use the data to answer a question?

What tools do you use to make your choices? What resources do you use?

I’m really curious. It is summer, and although I have a (long, to be sure) reading list, I wonder what else is out there, specifically relating to making choices. (And yes, I could use my search engine; I’d rather hear from my readers!)

Let me know. PLEASE!

my two cents.

molly.

There has been a somewhat lengthy discussion regarding logic models on EvalTalk, an evaluation listserv sponsored by the American Evaluation Association. (Check out the listserv archives.) The discussion ran under the subject line “Logic model for the world?” and started on January 26, 2015. The most telling comment (at least to me) appeared January 30, 2015:

“The problem is not the instrument. All instruments can be mastered as a matter of technique. The problem is that logic models mistake the nature of evaluative knowledge – which is neither linear nor rational.” (Saville Kushner, EvalTalk, January 30, 2015).

The follow-up to this discussion talks about tools, specifically hammers (Bill Fear, EvalTalk, January 30, 2015). Fear says, “Logic is only a tool. It does not exist outside of the construction of the mind.”

Recently, I drafted a paper about capacity building; I’ll be presenting it at the 2014 AEA conference. The example on which I was reporting was regional and voluntary; it took dedication, a commitment, from participants. During the drafting of that paper, I had to think about the parts of the program: what would be necessary for individuals who were interested in evaluation but did not have a degree? I went back to the competencies listed in the AJE article (March 2005) that I cited in a previous post. I found it interesting that the choices I made (after consulting with evaluation colleagues) were listed in the competencies identified by Stevahn et al., yet they list so much more. So the question that occurs to me is: to be competent, to build institutional evaluation capacity, are all of those needed?

What? So what? Now what?

Sounds like an evaluation problem.

King and Stevahn (in press) tell us the first query requires thoughtful observation of a situation; the second query, a discussion of possible options and the implications of those options; and the third query, the creation of a list of potential next steps.

Yet these are the key words for “adaptive action.” (If you haven’t looked at the website, I suggest you do.) One quote that is reflective of adaptive action: “Adaptive Action reveals how we can be proactive in managing today and influencing tomorrow.” (David W. Jamieson, University of St. Thomas). Adaptive action can help you (a sketch of the cycle follows the list):

  • Understand the sources of uncertainty in your chaotic world
  • Explore opportunities for action and their implications as they occur
  • Learn a simple process that cuts through complexity
  • Transform the work of individuals, teams, organizations and communities
  • Take on any challenge, as large as a strategic plan or as small as a messy meeting
  • Take action to improve productivity, collaboration and sustainability
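As promised above, here is a minimal sketch of the What? / So what? / Now what? cycle as an iterative loop. The step functions are placeholders for the real work of observation, sense-making, and action planning.

```python
def what(situation: str) -> list[str]:
    """Thoughtful observation: gather evidence about the situation."""
    return [f"observation about {situation}"]

def so_what(observations: list[str]) -> list[str]:
    """Discuss possible options and the implications of each."""
    return [f"implication of {obs}" for obs in observations]

def now_what(implications: list[str]) -> list[str]:
    """Create a list of potential next steps."""
    return [f"next step from {imp}" for imp in implications]

situation = "a messy meeting"
for _ in range(3):  # adaptive action repeats; it is not a single pass
    next_steps = now_what(so_what(what(situation)))
    situation = next_steps[0]  # acting changes what you observe next
    print(situation)
```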

Evaluation is (usually) a proactive activity. (Oh, I know that sometimes evaluation is flying by the seat of your pants and totally reactive; the cartoon is by Laurence Musgrove.) People are now recognizing that evaluation will benefit them, their programs, and their organizations, and that it isn’t personal (although that fear is still out there).

Although the site is directed toward leadership in organizations, the key questions are evaluative. You can’t determine “what” without evidence (data); you can’t determine “so what” unless you have a plan (logic model); and you can’t think about “now what” unless you have an outcome you can move toward. These questions are evaluative in contemporary times because there are no simple problems anymore. (Panarchy approaches similar situations using a similar model, the adaptive cycle.) Complex situations face program people and evaluators all the time. Using adaptive action may help. Panarchy may help, too (the book, Panarchy, is edited by Gunderson and Holling).

Just think of adaptive action as another model of evaluation.

my two cents.

molly.

A colleague asked, “How do you design an evaluation that can identify unintended consequences?” This was based on a statement about methodologies that “only measure the extent to which intended results have been achieved and are not able to capture unintended outcomes” (see AEA365). (The cartoon on unintended consequences is attributed to Rob Cottingham.)

Really good question. Unintended consequences are just that: outcomes that are not what you think will happen with the program you are implementing. This is where program theory comes into play. When you model the program, you think of what you want to happen. What you want to happen is usually supported by the literature, not your gut (intuition may be useful for the unintended, however). A logic model lists the “intended” outcomes (consequences). So you run your program and you get something else, not necessarily bad, just not what you expected; the outcome is unintended.

Program theory can advise you that other outcomes could happen. How do you design your evaluation so that you can capture those? Mazmanian, in his 1998 study on intention to change, had an unintended outcome, one that has applications to any adult learning experience (1). So what method do you use to get at these? A general, open-ended question? Perhaps. Many (most?) people won’t respond to open-ended questions; they take too much time. OK, I can live with that. So what do you do instead? Look at what the literature says could happen, even if you didn’t design the program for that outcome, and ask that question, along with the questions about what you expect to happen.

How would you represent this in your logic model? By the ubiquitous “other”? Perhaps; it is certainly easy that way. Again, look at program theory. What does it say? Then use what is said there. Or use “other,” but then you are back to open-ended questions and run the risk of not getting a response. And if you only model “other,” do you really know what that “other” is?

I know that I won’t be able to get to world peace, so I look for what I can evaluate, and since I doubt I’ll have enough money to actually go and observe behaviors (certainly the ideal), I have to ask a question. In your question asking, you want a response, right? Then ask the specific question. Ask it in a way that elicits program influence: How confident is the respondent that X happened? How confident is the respondent that they can do X? How confident is the respondent that this outcome could have happened? You could ask whether X happened (yes/no) and then ask the confidence questions (confidence questions are also known as self-efficacy questions). Bandura will be proud. (See Bandura’s work on social cognitive theory, social learning theory, and self-efficacy.)
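Here is a minimal sketch of that question design. The item wording is hypothetical; the pattern is the one described above: a yes/no outcome item paired with a 1-5 confidence (self-efficacy) follow-up, plus an item drawn from what the literature says could happen.

```python
# Hypothetical survey items illustrating the pattern described above:
# a yes/no outcome question paired with a confidence (self-efficacy)
# follow-up, plus an item for a literature-suggested unintended outcome.
questions = [
    {
        "outcome": "Did you change how you plan your work after the program? (yes/no)",
        "confidence": "How confident are you that you can sustain that change? "
                      "(1 = not at all confident ... 5 = completely confident)",
    },
    {
        # Not a designed-for outcome, but one the literature says could happen:
        "outcome": "Did you share program materials with colleagues? (yes/no)",
        "confidence": "How confident are you that the program influenced this? (1-5)",
    },
]

for item in questions:
    print(item["outcome"])
    print("  follow-up:", item["confidence"])
```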

my two cents.

molly.

1. Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.

On May 9, 2014, Dr. Don Kirkpatrick died at the age of 90. His approach to evaluation (called a model) was developed in 1954 and has served the training and development arena well since then; it continues to do so.

For those of you who are not familiar with the Kirkpatrick model, here is a primer, albeit short. (There are extensive training programs for getting certified in this model, if you want to know more.)

Don Kirkpatrick, Ph.D., developed the Kirkpatrick model when he was a doctoral student; it was the subject of his dissertation, which he defended in 1954. There are four levels (they are color-coded on the Kirkpatrick website), and I quote:

Level 1: Reaction
To what degree participants react favorably to the training

Level 2: Learning
To what degree participants acquire the intended knowledge, skills, attitudes, confidence and commitment based on their participation in a training event

Level 3: Behavior
To what degree participants apply what they learned during training when they are back on the job

Level 4: Results
To what degree targeted outcomes occur as a result of the training event and subsequent reinforcement

Sounds simple, right? (Reminiscent of a logic model’s short-, medium-, and long-term outcomes.) He was the first to admit that it is difficult to get to Level 4 (no world peace for this guy, unfortunately). We all know that behavior can be observed and reported, although self-report is fraught with problems (self-selection, desired responses, other cognitive biases, etc.).
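For anyone who wants to keep the levels straight while planning an evaluation, here is a minimal sketch of them as data. The example question at each level is my paraphrase of the definitions quoted above, not Kirkpatrick’s official wording.

```python
# The four Kirkpatrick levels as data. The questions paraphrase the
# definitions quoted above; they are illustrative, not official wording.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "Did participants react favorably to the training?"),
    2: ("Learning", "Did participants acquire the intended knowledge, "
                    "skills, attitudes, confidence, and commitment?"),
    3: ("Behavior", "Do participants apply what they learned back on the job?"),
    4: ("Results",  "Did the targeted outcomes occur as a result of the "
                    "training and subsequent reinforcement?"),
}

for level, (name, question) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): {question}")
```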