I started this post the third week in July, but technical difficulties prevented me from completing it. Hopefully, those difficulties are now in the past.
A colleague asked me what we can do when we can’t measure actual behavior change in our evaluations. Most evaluations can capture knowledge change (short-term outcomes); some can capture behavior change (intermediate or medium-term outcomes); very few can capture condition change (long-term outcomes, often called impacts, though not by me). I thought about that. Intention to change behavior can be measured. Confidence (self-efficacy) to change behavior can be measured. For me, all evaluations need to address those two points.
Paul Mazmanian, Associate Dean for Continuing Professional Development and Evaluation Studies at Virginia Commonwealth University, has studied changing practice patterns for several years. One study, published in 1998, reported that “…physicians in both study and control groups were significantly more likely to change (47% vs. 7%, p < .001) if they indicated intent to change immediately following the lecture” (Academic Medicine, 1998; 73:882-886). Mazmanian and his co-authors conclude that “successful change in practice may depend less on clinical and barriers information than on other factors that influence physicians’ performance. To further develop the commitment-to-change strategy in measuring effects of planned change, it is important to isolate and learn the powers of individual components of the strategy as well as their collective influence on physicians’ clinical behavior.”
What are the implications for Extension and other complex organizations? It makes sense to extrapolate from the continuing medical education literature: physicians are adults, and most of Extension’s audience is adults. If behavior change is highly predictable from intention to change stated “immediately following the lecture” (i.e., the continuing education program), then soliciting stated intention to change from Extension program participants immediately following program delivery should increase the likelihood of behavior change. One of the outcomes Extension wants to see is change in behavior (medium-term outcomes). Measuring those behavior changes directly (through observation or some other method) is often beyond the resources available; measuring intended behavior changes is within the scope of Extension resources. Using a time frame (such as six months) helps bound the anticipated behavior change. In addition, intention to change can be coupled with confidence to implement the behavior change, giving the evaluator information about the effect of the program. The desired result is high confidence to change and willingness to implement the change within the specified time frame. If Extension professionals find that result, it would be safe to say the program is successful.
1. Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.
2. Mazmanian, P. E., & Mazmanian, P. M. (1999). Commitment to change: Theoretical foundations, methods, and outcomes. The Journal of Continuing Education in the Health Professions, 19, 200-207.
3. Mazmanian, P. E., Johnson, R. E., Zhang, A., Boothby, J., & Yeatts, E. J. (2001). Effects of a signature on rates of change: A randomized controlled trial involving continuing medical education and the commitment-to-change model. Academic Medicine, 76(6), 642-646.
Hopefully, the technical difficulties with images are no longer a problem, and I can now share the answers to the history quiz I had hoped to post last week. So, as promised, here are the answers to the quiz I posted the week of July 5. The keyed responses are in BOLD.
7. James W. Altschuld is the go-to person for needs assessment. He is the editor of the Needs Assessment Kit (or everything you wanted to know about needs assessment and didn’t know where to find the answer). He is also the co-author, with Belle Ruth Witkin, of two needs assessment books.
11. Ellen Taylor-Powell, the former Evaluation Specialist at University of Wisconsin Extension Service, is credited with developing the logic model later adopted by the USDA for use by the Extension Service. To go to the UWEX site, click on the words “logic model”.
15. Thomas A. Schwandt, a philosopher at heart who started as an auditor, has written extensively on evaluation ethics. He is also the co-author (with Edward S. Halpern) of Linking Auditing and Metaevaluation.
19. William R. Shadish co-authored (with Thomas Cook and Laura C. Leviton) Foundations of Program Evaluation: Theories of Practice. His work in theories of evaluation practice earned him the Paul F. Lazarsfeld Award for Evaluation Theory from the American Evaluation Association in 1994.
Although I’ve only listed 20 leaders, movers, and shakers in the evaluation field, there are others who also deserve mention: John Owen, Deb Rog, Mark Lipsey, Mel Mark, Jonathan Morell, Midge Smith, Lois-Ellin Datta, Patricia Rogers, Sue Funnell, Jean King, Laurie Stevahn, John McLaughlin, Michael Morris, Nick Smith, Don Dillman, and Karen Kirkhart, among others.
If you want to meet the movers and shakers, I suggest you attend the American Evaluation Association annual meeting. In 2011, it will be held in Anaheim, CA, November 2-5; professional development sessions are being offered October 31, November 1 and 2, and November 6. More conference information can be found here.