About two years ago, I conducted a 17-month hybrid evaluation preparation program for the Western Region Extension Service faculty. There were over 30 individuals involved. I was the evaluation expert; Jim Lindstrom (who was at WSU at the time) was the cheerleader, the encourager, the professional development person. I really couldn’t have done it without him. (Thank you, Jim.) Now, to maximize this program and make it available to others who were not able to participate, I’ve been asked to explore an option for creating an on-line version of the WECT (say “west”) program. It would be offered through the OSU professional and continuing education (PACE) venue. To that end, I am calling on those of you who participated in the original program (and any other readers) to give me feedback on the following:

  1. What was useful?
  2. What needed to be added?
  3. What could be covered in more depth?
  4. What could be deleted?
  5. Other comments?

Please be as specific as possible.

I can go to the competency literature (of which there is a lot) and redevelop WECT from those guidelines. (For more information on competencies, see: King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22(2), 229-247.) Or I could use the Canadian system as a foundation. (For more information, see this link.)

I doubt that I can develop an on-line version that would cover (or do justice to) all those competencies.

So I turn to you, my readers. Let me know what you think.

my two cents.

molly.

Last week, I started a discussion on inappropriate evaluations. I was using the Fitzpatrick, Sanders, and Worthen text for the discussion (Program Evaluation: Alternative Approaches and Practical Guidelines, 2011. See here.) That text gives three other examples:

  1. Evaluation cannot yield useful, valid information;
  2. Type of evaluation is premature for the stage of the program; and
  3. Propriety of evaluation is doubtful.

I will cover them today.

First, if the evaluation doesn’t (or isn’t likely to) produce relevant information, don’t do it. If factors like inadequate resources (personnel, funding, time), lack of administrative support, impossible evaluation tasks, or inaccessible data are in play (all typically outside the evaluator’s control), give it a pass; each of these makes it unlikely that the evaluation will yield useful, valid information. Fitzpatrick, Sanders, and Worthen say, “A bad evaluation is worse than none at all…”.

Then consider the type of evaluation that is requested. Should you do a formative, a summative, or a developmental evaluation? The tryout phase of a program typically demands a formative evaluation, not a summative evaluation, despite the need to demonstrate impact. You may not demonstrate an effect at all because of timing. Consider running the program for a while (more than once or twice in a month). Then decide whether you will use the results for programmatic improvement only or for programmatic improvement AND impact.

Finally, consider whether the evaluation meets the propriety standard. Propriety is the third standard in the Joint Committee’s Program Evaluation Standards (3rd ed.). Propriety helps establish evaluation quality by protecting the rights of those involved in the evaluation: the target audience, the evaluators, program staff, and other stakeholders. If you haven’t read the Standards, I recommend that you do.


New Topic (and timely): Comments.

It has been a while since I’ve commented on any feedback I get in the form of comments on blog posts. I read every one. I get them both here on the blog and as an email. Sometimes they are in a language I don’t read or understand and, unfortunately, the on-line translators don’t always make sense. Sometimes they are encouraging comments (keep writing; keep blogging; thank you; etc.). Sometimes there are substantive comments that lead me to think differently about things evaluation. Regardless of what the message is: THANK YOU for commenting! Remember, I read each one.

my two cents.

molly.

Can there be inappropriate use of evaluation studies?

Jody Fitzpatrick¹ and her co-authors Jim Sanders and Blaine Worthen, in Program Evaluation: Alternative Approaches and Practical Guidelines (2011), provide several examples of inappropriate evaluation use. Before they give the examples, they share some wise words from Nick Smith². Nick says there are two broad categories of reasons for declining to conduct an evaluation: “1) when the evaluation could harm the field of evaluation, or 2) when it would fail to support the social good.” Fitzpatrick, Sanders, and Worthen (2011) go on to say that “these problems may arise when it is likely that the ultimate quality of the evaluation will be questionable, major clients would be alienated or misled concerning what evaluation can do, resources will be inadequate, or ethical principles would be violated” (p. 265).

The examples provided are

  1. Evaluation would produce trivial information;
  2. Evaluation results will not be used;
  3. Evaluation cannot yield useful, valid information;
  4. Type of evaluation is premature for the stage of the program; and
  5. Propriety of evaluation is doubtful.

When I study these examples (there may be others; I’m quoting Fitzpatrick, Sanders, and Worthen, 2011), I find that they appear often in the published literature. As a reviewer, I find “show and tell” evaluations of little value because they produce trivial information. They report a study that has limited or insufficient impact and little or no potential for continuation. The cost of conducting a formal evaluation would easily outweigh the value, if monetized, of the program (its merit or worth) and would yield little information useful to others in the field. The intention might be good; the product is less than ideal.