A colleague asked, “How do you design an evaluation that can identify unintended consequences?” The question was prompted by a statement about methodologies that “only measure the extent to which intended results have been achieved and are not able to capture unintended outcomes” (see AEA365). (The cartoon is attributed to Rob Cottingham.)
Really good question. Unintended consequences are just that: outcomes that are not what you think will happen with the program you are implementing. This is where program theory comes into play. When you model the program, you think of what you want to happen. What you want to happen is usually supported by the literature, not your gut (although intuition may be useful for spotting the unintended). A logic model lists the “intended” outcomes (consequences). So you run your program and you get something else–not necessarily bad, just not what you expected; that outcome is unintended.
Program theory can advise you that other outcomes could happen. How do you design your evaluation so that you can capture them? Mazmanian, in his 1998 study on intention to change, found an unintended outcome–one that has applications to any adult learning experience (1). So what method do you use to get at these? A general, open-ended question? Perhaps. Many (most?) people won’t respond to open-ended questions–it takes too much time. OK. I can live with that. So what do you do instead? Ask what the literature says could happen, even if you didn’t design the program for that outcome. Ask that question, along with the questions about what you expect to happen.
How would you represent this in your logic model–by the ubiquitous “other”? Perhaps. It is certainly easy that way. Again, look at program theory. What does it say? Then use what is said there. Or use “other”–but then you are back to open-ended questions and run the risk of not getting a response. And if you only model “other,” do you really know what that “other” is?
I know that I won’t be able to get to world peace, so I look for what I can evaluate, and since I doubt I’ll have enough money to actually go and observe behaviors (certainly the ideal), I have to ask a question. In asking questions, you want a response, right? Then ask the specific question. Ask it in a way that elicits program influence: How confident is the respondent that X happened? How confident is the respondent that they can do X? How confident is the respondent that this outcome could have happened? You could ask whether X happened (yes/no) and then ask the confidence questions (confidence questions are also known as self-efficacy questions). Bandura will be proud. (See discussions of self-efficacy and social learning.)
molly.
1. Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.