Filed Under (program evaluation) by Molly on 24-08-2011

I started this post the third week in July.  Technical difficulties prevented me from completing the post.  Hopefully, those difficulties are now in the past.

A colleague asked me what we can do when we can’t measure actual behavior change in our evaluations.  Most evaluations can capture knowledge change (short-term outcomes); some evaluations can capture behavior change (intermediate or medium-term outcomes); very few can capture condition change (long-term outcomes, often called impacts–though not by me).  I thought about that.  Intention to change behavior can be measured.  Confidence (self-efficacy) to change behavior can be measured.  For me, all evaluations need to address those two points.

Paul Mazmanian, Associate Dean for Continuing Professional Development and Evaluation Studies at Virginia Commonwealth University, has studied changing practice patterns for several years.  One study, conducted in 1998, reported that “…physicians in both study and control groups were significantly more likely to change (47% vs. 7%, p < .001) if they indicated intent to change immediately following the lecture” (Academic Medicine, 1998; 73:882-886).  Mazmanian and his co-authors say in their conclusions that “successful change in practice may depend less on clinical and barriers information than on other factors that influence physicians’ performance.  To further develop the commitment-to-change strategy in measuring effects of planned change, it is important to isolate and learn the powers of individual components of the strategy as well as their collective influence on physicians’ clinical behavior.”

What are the implications for Extension and other complex organizations?  It makes sense to extrapolate from the continuing medical education literature: physicians are adults, and most of Extension’s audience is adults.  If behavior change is highly predictable from intention stated “immediately following the lecture” (i.e., the continuing education program), then soliciting stated intention to change from participants immediately following Extension program delivery gives a good indication of the likelihood of behavior change.  One of the outcomes Extension wants to see is change in behavior (medium-term outcomes).  Measuring those behavior changes directly (through observation or some other method) is often outside the resources available; measuring intended behavior changes is within the scope of Extension resources.  Using a time frame (such as 6 months) helps bound the anticipated behavior change.  In addition, intention to change can be coupled with confidence to implement the behavior change to provide the evaluator with information about the effect of the program.  The desired effect is high confidence to change and willingness to implement the change within the specified time frame.  If Extension professionals find that result, it would be safe to say that the program is successful.

REFERENCES

1.  Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P.  (1998).  Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change.  Academic Medicine, 73(8), 882-886.

2.  Mazmanian, P. E., & Mazmanian, P. M.  (1999).  Commitment to change: Theoretical foundations, methods, and outcomes.  The Journal of Continuing Education in the Health Professions, 19, 200-207.

3.  Mazmanian, P. E., Johnson, R. E., Zhang, A., Boothby, J., & Yeatts, E. J.  (2001).  Effects of a signature on rates of change: A randomized controlled trial involving continuing medical education and the commitment-to-change model.  Academic Medicine, 76(6), 642-646.

Filed Under (program evaluation) by Molly on 04-08-2011

Hopefully, the technical difficulties with images are no longer a problem and I will be able to post the answers to the history quiz along with the post I had hoped to publish last week.  So, as promised, here are the answers to the quiz I posted the week of July 5.  The keyed responses are in BOLD.

1.  Michael Quinn Patton is the author of Utilization-Focused Evaluation, the new book Developmental Evaluation, and the classic Qualitative Evaluation and Research Methods.

2.  Michael Scriven is best known for his concept of formative and summative evaluation.  He has also advocated that evaluation is a transdiscipline.  He is the author of the Evaluation Thesaurus.

3.  Hallie Preskill is the co-author (with Darlene Russ-Eft) of Evaluation Capacity Building.

4.  Robert E. Stake has advanced work in case study methodology and is the author of the books Multiple Case Study Analysis and The Art of Case Study Research.

5.  David M. Fetterman is best known for his advocacy of empowerment evaluation and the book of that name, Foundations of Empowerment Evaluation.

6.  Daniel Stufflebeam developed the CIPP (context, input, process, product) model, which is discussed in the book Evaluation Models.

7.  James W. Altschuld is the go-to person for needs assessment.  He is the editor of the Needs Assessment Kit (or everything you wanted to know about needs assessment and didn’t know where to find the answer).  He is also the co-author, with Belle Ruth Witkin, of two needs assessment books.

8.  Jennifer C. Greene is the current President of the American Evaluation Association and the author of a book on mixed methods.

9.  Ernest R. House is a leader in the work of evaluation policy and the author of an evaluation novel, Regression to the Mean.

10.  Lee J. Cronbach is a pioneer in education evaluation and the reform of that practice.  He co-authored with several associates the book Toward Reform of Program Evaluation.

11.  Ellen Taylor-Powell is the former Evaluation Specialist at University of Wisconsin Extension Service and is credited with developing the logic model later adopted by the USDA for use by the Extension Service.

12.  Yvonna Lincoln, with her husband Egon Guba (see below), co-authored the book Naturalistic Inquiry.  She is currently the co-editor (with Norman K. Denzin) of the Handbook of Qualitative Research.

13.  Egon Guba, with his wife Yvonna Lincoln, is the co-author of Fourth Generation Evaluation.

14.  Blaine Worthen has championed certification for evaluators.  With Jody L. Fitzpatrick and James R. Sanders, he co-authored Program Evaluation: Alternative Approaches and Practical Guidelines.

15.  Thomas A. Schwandt, a philosopher at heart who started as an auditor, has written extensively on evaluation ethics.  He is also the co-author (with Edward S. Halpern) of Linking Auditing and Metaevaluation.

16.  Peter H. Rossi, co-author with Howard E. Freeman and Mark W. Lipsey of Evaluation: A Systematic Approach, is a pioneer in evaluation research.

17.  W. James Popham, a leader in educational evaluation, authored the volume Educational Evaluation.

18.  Jason Millman was a pioneer of teacher evaluation and author of the Handbook of Teacher Evaluation.

19.  William R. Shadish co-authored (with Laura C. Leviton and Thomas Cook) Foundations of Program Evaluation: Theories of Practice.  His work in theories of evaluation practice earned him the American Evaluation Association’s Paul F. Lazarsfeld Award for Evaluation Theory in 1994.

20.  Laura C. Leviton, co-author (with Will Shadish and Tom Cook; see above) of Foundations of Program Evaluation: Theories of Practice, has pioneered work in participatory evaluation.


Although I’ve listed only 20 leaders, movers, and shakers in the evaluation field, there are others who also deserve mention: John Owen, Deb Rog, Mark Lipsey, Mel Mark, Jonathan Morell, Midge Smith, Lois-Ellin Datta, Patricia Rogers, Sue Funnell, Jean King, Laurie Stevahn, John McLaughlin, Michael Morris, Nick Smith, Don Dillman, and Karen Kirkhart, among others.

If you want to meet the movers and shakers, I suggest you attend the American Evaluation Association annual meeting.  In 2011, it will be held in Anaheim, CA, November 2-5; professional development sessions are being offered October 31, November 1 and 2, and also November 6.  More conference information can be found on the AEA website.