Filed Under (Methodology, program evaluation) by Molly on 27-06-2014

A colleague asked, “How do you design an evaluation that can identify unintended consequences?” This was based on a statement about methodologies that “only measure the extent to which intended results have been achieved and are not able to capture unintended outcomes” (see AEA365). (The cartoon is attributed to Rob Cottingham.)

Really good question. Unintended consequences are just that: outcomes that are not what you think will happen with the program you are implementing. This is where program theory comes into play. When you model the program, you think of what you want to happen, and what you want to happen is usually supported by the literature, not your gut (intuition may be useful for the unintended, however). A logic model lists the “intended” outcomes (consequences). So you run your program and you get something else, not necessarily bad, just not what you expected; the outcome is unintended.

Program theory can advise you that other outcomes could happen. How do you design your evaluation so that you can capture those? Mazmanian, in his 1998 study on intention to change, found an unintended outcome, one that has applications to any adult learning experience (1). So what method do you use to get at these? A general, open-ended question? Perhaps. Many (most?) people won’t respond to open-ended questions; they take too much time. OK, I can live with that. So what do you do instead? Ask what the literature says could happen, even if you didn’t design the program for that outcome. Ask that question, along with the questions about what you expect to happen.

How would you represent this in your logic model? By the ubiquitous “other”? Perhaps; it is certainly easy that way. Again, look at program theory. What does it say? Then use what is said there. Or use “other,” but then you are back to open-ended questions and run the risk of not getting a response. If you only model “other,” do you really know what that “other” is?

I know that I won’t be able to get to world peace, so I look for what I can evaluate, and since I doubt I’ll have enough money to actually go and observe behaviors (certainly the ideal), I have to ask a question. In your question asking, you want a response, right? Then ask the specific question. Ask it in a way that elicits program influence: How confident is the respondent that X happened? How confident is the respondent that they can do X? How confident is the respondent that this outcome could have happened? You could ask if X happened (yes/no) and then ask the confidence questions (confidence questions are also known as self-efficacy questions). Bandura will be proud. See Bandura’s social cognitive theory, social learning theory, or self-efficacy (for discussions of self-efficacy and social learning).
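As an illustration (mine, not from the post), here is a minimal sketch of that question design: each outcome, whether intended or suggested by the literature as a possible unintended consequence, gets paired with a yes/no item and a confidence (self-efficacy) item. The outcome wording and the 1–5 scale are hypothetical assumptions for the sketch.

```python
# Hypothetical outcomes; "unintended" here means an outcome the literature
# suggests could happen even though the program was not designed for it.
ITEMS = [
    {"outcome": "apply the new practice on the job", "kind": "intended"},
    {"outcome": "share the practice with colleagues", "kind": "unintended"},
]

def build_questions(items):
    """Expand each outcome into a yes/no item plus a confidence item."""
    questions = []
    for item in items:
        # Did X happen? (yes/no)
        questions.append({
            "outcome": item["outcome"],
            "kind": item["kind"],
            "text": f"Did you {item['outcome']}? (yes/no)",
        })
        # Self-efficacy: how confident is the respondent that they can do X?
        questions.append({
            "outcome": item["outcome"],
            "kind": item["kind"],
            "text": f"How confident are you that you can {item['outcome']}? (1-5)",
        })
    return questions
```

The point of the pairing is that even when a respondent answers “no” to the yes/no item, the confidence item can still pick up program influence.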

My two cents.


1. Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.

Filed Under (program evaluation) by Molly on 17-06-2014

“Engaged scholarship most commonly refers to a range of collaborative research, teaching, and learning initiatives rooted in sustained community-university partnerships and pursued across various disciplines and social and cultural contexts.” That is a commonly agreed-upon definition of engaged scholarship. So let me ask you: Are blogs engaging?

If I tease apart the definition this way:

  1. it is collaborative IF (and only if) you consider that reading a lot of other blogs is collaborative learning (someone writes + someone reads = collaboration);
  2. it is sustained in a “partnership” that consists of other bloggers who are in the community and bloggers who are in the university;
  3. it is pursued across various disciplines (remember, I read a lot of other blogs); and
  4. it occurs in a variety of social and cultural contexts (remember, I read a lot of other blogs);

then I would say yes…and no…because what is really collaborative learning? Or collaborative teaching? (I’m certainly not doing research.) To me, collaboration is an agreed-upon working together in an intellectual effort. I would guess that in this case there is tacit agreement: I write, and the reader agrees to read, either by subscription or random search engine optimization. Somehow that feels really one-sided, until I realize that I’m still getting comments on posts that I wrote two or three years ago. So something must be engaging, even if it is random SEO (though I’m not sure it is collaborative in the usual sense of collaboration).

And since this is a blog about evaluation, my question is, “How do you know?” Chris Lysy has a cartoon about knowing that always makes me smile (actually several: “evaluation and project working,” “research v. evaluation,” and an “I have evidence” cartoon). Remember, you need a rubric to determine if something works; you cannot work on gut impressions. Only through rigorous evaluation can you determine if a program has merit, value, worth (the root of evaluation is value), and only if you have evidence do you know it works. Can I use comments (many over the last four years of this blog) to say the blog is engaging, that I have evidence? One comment received in the last week suggested that since folks are still commenting on this post, perhaps that is evidence…I don’t know.

Any thoughts? Comments are welcome, please.

My two cents.





Filed Under (Methodology, program evaluation) by Molly on 11-06-2014

On May 9, 2014, Dr. Don Kirkpatrick died at the age of 90. His approach (called a model) to evaluation was developed in 1954 and has served the training and development arena well since then; it continues to do so.

For those of you who are not familiar with the Kirkpatrick model, here is a primer, albeit short. (There are extensive training programs for getting certified in this model, if you want to know more.)

Don Kirkpatrick, Ph.D., developed the Kirkpatrick model when he was a doctoral student; it was the subject of his dissertation, which was defended in 1954. There are four levels (they are color-coded on the Kirkpatrick website), and I quote:

Level 1: Reaction

To what degree participants react favorably to the training

Level 2: Learning

To what degree participants acquire the intended knowledge, skills, attitudes, confidence and commitment based on their participation in a training event

Level 3: Behavior

To what degree participants apply what they learned during training when they are back on the job

Level 4: Results

To what degree targeted outcomes occur as a result of the training event and subsequent reinforcement

Sounds simple, right? (Reminiscent of a logic model’s short-, medium-, and long-term outcomes.) He was the first to admit that it is difficult to get to Level 4 (no world peace for this guy, unfortunately). We all know that behavior can be observed and reported, although self-report is fraught with problems (self-selection, desired responses, other cognitive biases, etc.).
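For readers who want the four levels in a form they can use when tagging evaluation items, here is a small sketch. The descriptions paraphrase the quoted text above; the lookup structure and function are my own illustration, not part of the Kirkpatrick materials.

```python
# The four Kirkpatrick levels as a simple lookup: level number ->
# (name, guiding question). Descriptions are paraphrased from the
# quoted definitions; the structure itself is illustrative.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "Do participants react favorably to the training?"),
    2: ("Learning", "Do participants acquire the intended knowledge, "
        "skills, attitudes, confidence, and commitment?"),
    3: ("Behavior", "Do participants apply what they learned back on the job?"),
    4: ("Results", "Do targeted outcomes occur as a result of the training?"),
}

def level_name(level):
    """Return the name of a Kirkpatrick level (1-4)."""
    return KIRKPATRICK_LEVELS[level][0]
```

Tagging each evaluation question with its level makes it easy to see, for instance, that an instrument is all Level 1 and 2 items with nothing at Level 3 or 4.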

Filed Under (program evaluation) by Molly on 03-06-2014

Recently, I received the following comment: “In today’s world it’s virtually impossible to keep up with facebook, twitter, news, tv, movies email, texts, etc.”

It was in response to a blog post about making a difference. How do you know? Given that most of what was suggested happens in the virtual world, the play on words is interesting. How is it impossible? Because there is too much information? Because you are too distracted by the virtual part of all the information and get lost? Because virtuality is not clearly understood? Because of something else? I personally find I can get lost when I spend all day online (virtual). It isn’t real, actually. I have no sense of what is happening and what isn’t happening. Even with news feeds, I find I have to double-check my facts. Yet even as I say this, the virtual is expanding (go here). I had heard about Web 2.0; I hadn’t heard about IoE (the Internet of Everything). The CEO of Cisco (John Chambers) stated that the IoE depends on the architecture, the systems integration. Is virtual the way of the world? It certainly isn’t the future any more; it is now. I have to ask, though: what about people? Given that much evaluation is now being done with virtual tools, are we really understanding what difference is being made? Or are there just connections?

The individual continued the comment by saying, “Keep up your small voice. Some are listening.” Those “listening” are certainly reflected in the number of comments I received on the posts about making a difference in the last two days (over 45). This may certainly be a way of engaging; I know it is outreach. It is only my small voice; it is rewarding to know that some are listening/reading, even if they only stay a short while.

My two cents (my small voice).