Can there be inappropriate use of evaluation studies?
Jody Fitzpatrick¹ and her co-authors Jim Sanders and Blaine Worthen, in Program Evaluation: Alternative Approaches and Practical Guidelines (2011), provide several examples of inappropriate evaluation use. Before giving the examples, they share some wise words from Nick Smith². Smith says there are two broad categories of reasons for declining to conduct an evaluation: “1) when the evaluation could harm the field of evaluation, or 2) when it would fail to support the social good.” Fitzpatrick, Sanders, and Worthen (2011) go on to say that “these problems may arise when it is likely that the ultimate quality of the evaluation will be questionable, major clients would be alienated or misled concerning what evaluation can do, resources will be inadequate, or ethical principles would be violated” (p. 265).
The examples they provide are:
- Evaluation would produce trivial information;
- Evaluation results will not be used;
- Evaluation cannot yield useful, valid information;
- Type of evaluation is premature for the stage of the program; and
- Propriety of evaluation is doubtful.
When I study these examples (there may be others; I am quoting Fitzpatrick, Sanders, and Worthen, 2011), I find they appear often in the published literature. As a reviewer, I find “show and tell” evaluations of little value because they produce trivial information. They report a study that has limited or insufficient impact and little or no potential for continuation. The cost of conducting a formal evaluation would easily outweigh the value (merit or worth) of the program, if monetized, and would yield little information useful to others in the field. The intention might be good; the product is less than ideal.
Use (Utility) is the first standard listed in the Program Evaluation Standards. To me, that says use is important if evaluation is really to make a difference. (The additional standards are Feasibility, Propriety, Accuracy, and Evaluation Accountability.) The overview of this standard reinforces its importance to stakeholders and suggests that “examining the variety of possible uses for evaluation processes, findings, and products” is a good place to start understanding it. New faculty/evaluators (who are given the charge to make sure their programs work) are often overwhelmed by the enormity of evaluating all their programs. I advise them to choose one program this year and a different one next year, so that in five years all programs will have been evaluated. That increases the meaningfulness of the evaluation and increases its possible use. Fitzpatrick, Sanders, and Worthen (2011) say, “Evaluators should avoid meaningless, ritualistic evaluations or pro forma exercises in which evaluation only appears to justify decisions actually made for personal or political reasons” (p. 226). They give the example of the DARE program evaluation.
There is a wealth of information in these examples; I will discuss the other three reasons next week.
molly.
- Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines. Boston, MA: Pearson Education, Inc.
- Smith, N. (1998). Professional reasons for declining an evaluation contract. American Journal of Evaluation, 19, 177-190.
Great blog; it helps me to have a better understanding of evaluation, and even of how to plan while evaluating.
I hope you find evaluation exciting and stimulating; it is a wonderful field. Planning is important when one sets out to evaluate (and one is evaluating all the time!).