Taken from the Plexus Calls email for Friday, May 29, 2015: “What is a simple rule? Royce Holladay has described simple rules as the ‘specific, uncomplicated instructions that guide behavior and create the structure within which human beings can live their lives.’ ” How do individuals, organizations, and businesses identify their simple rules? What are the guidelines that can help align their values and their actions?

First, a little about Royce Holladay, also from the same email: Royce Holladay is co-author, with Mallary Tytel, of Simple Rules: Radical Inquiry into Self, a book that aids recognition of the patterns that show up repeatedly in our lives. With that knowledge, individuals and groups are better able to use stories, metaphors, and other tools to examine the interactions that influence the course of our lives and careers.

What if you substituted “evaluator” for “human beings”? (Yes, I know that evaluators are humans first and then evaluators.) What would simple rules look like for evaluators? What guidelines can help align evaluators’ values and actions?

Last week I spoke of the AEA Guiding Principles and the Joint Committee Program Evaluation Standards. Perhaps they serve as the simple rules for evaluators? They are simple rules (though not prescriptive, just suggestive). The AEA isn’t the ethics police, only a guide. Go online and read the Guiding Principles. They are simple. They are clear. There are only five.

    1. Systematic Inquiry: Evaluators conduct systematic, data-based inquiries.
    2. Competence: Evaluators provide competent performance to stakeholders.
    3. Integrity/Honesty: Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process.
    4. Respect for People: Evaluators respect the security, dignity and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
    5. Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

The Program Evaluation Standards are also clear. There are also five (though each has several sub-parts, so they are not as simple).

  1. Utility (8 sub-parts)
  2. Feasibility (4 sub-parts)
  3. Propriety (7 sub-parts)
  4. Accuracy (8 sub-parts)
  5. Evaluation Accountability (3 sub-parts)

You can download the Guiding Principles from the AEA website. You can get the Standards book here or here. If they are not on your shelf, they need to be. They are simple rules.

my two cents.

molly.

I didn’t blog last week. I made a choice: get work off my desk before I left for my daughter’s commencement, or write this week’s blog. I chose to get work off my desk, and even then didn’t get all the work done. Choices are tough. I often wonder if there is a right answer or just an answer with fewer consequences. I don’t know. I will continue to make choices; I will continue to weigh my options. I hope that I am doing “good”. I often wonder whether I am.

In evaluation, there are a lot of choices to make at any stage, beginning to end. Since most of the programs I evaluate have an educational focus, I found this quote meaningful. It comes from something David Foster Wallace is credited with saying (he is the author known for “How’s the water?”): the “overall purpose of higher education is to be able to consciously choose how to perceive others, think about meaning, and act appropriately in everyday life.” Wallace argues that the “true freedom acquired through education is the ability to be adjusted, conscious, and sympathetic.”

Although he was not speaking specifically to evaluators, I think his thoughts are germane to evaluation (substitute evaluation for higher education/education). Today I read a piece on some social media venue that reminded us to try to see life/things/items as others see them (the example used was a black/red book). I was reminded to consciously choose how to view things as others perceive them. And as a result of perceiving things as others see them, perhaps I can act appropriately. The AEA Guiding Principles and the Program Evaluation Standards help evaluators to hear the voices of others; to act appropriately; to consciously choose what needs to be done. No easy task.


my two cents.

molly.

Like many people, I find change hard. In fact, I really don’t like change. I think this is the result of a high school experience: one-third of my classmates left each year. (I was a military offspring; we changed assignments every three years.)

Yet, in today’s world change is probably the only constant. Does that make it fun? Not necessarily. Does that make it easy? Nope. Does that make it necessary? Yep.

Evaluators deal with change regularly. New programs are required; those must be evaluated. Old programs are revised; those must be evaluated. New approaches are developed and presented to the field. (When I first became an evaluator, there wasn’t a systems approach to evaluation; there wasn’t developmental evaluation; I could continue.) New technologies are available and must be used even if the old one wasn’t broken (even for those of us who are techno-peasants).

I just finished a major qualitative evaluation that involved real-time virtual focus groups. When I researched this topic, I found a lot of information about asynchronous focus groups, focus groups using conferencing software, even synchronous focus groups without pictures. I didn’t find anything about using real-time synchronous virtual focus groups. Unfortunately, we didn’t have much money, even though there are services available.

At a loss for what to write, I once again went to one of my favorite books, Michael Scriven’s Evaluation Thesaurus. This time when I opened the volume randomly, I came upon the entry for meta-evaluation. This is a worthy topic, one that isn’t addressed often. So this week, I’ll talk about meta-evaluation and quote Scriven as I do.

First, what is meta-evaluation? It is the evaluation of evaluations (and “indirectly, the evaluation of evaluators”). Scriven suggests the application of an evaluation-specific checklist or a Key Evaluation Checklist (KEC) (p. 228). Although this approach can be used to evaluate one’s own work, the results are typically unreliable, which suggests (if one can afford it) using an independent evaluator to conduct a meta-evaluation of your evaluations.

Scriven goes on to make the following key points:

  • Meta-evaluation is the professional imperative of evaluation;
  • Meta-evaluation can be done formatively or summatively or both; and
  • Use the KEC to generate a new evaluation OR apply the checklist to the original evaluation as a product.

He lists the parts of a KEC involved in a meta-evaluation; the process includes 13 steps (pp. 230-231).

He gives the following reference:

Stufflebeam, D. (1981). Meta-evaluation: Concepts, standards, and uses. In R. Berk (Ed.), Educational evaluation methodology: The state of the art. Baltimore, MD: Johns Hopkins University Press.


About two years ago, I conducted a 17-month hybrid evaluation preparation program for Western Region Extension Service faculty. There were over 30 individuals involved. I was the evaluation expert; Jim Lindstrom (who was at WSU at the time) was the cheerleader, the encourager, the professional development person. I really couldn’t have done it without him. (Thank you, Jim.) Now, to maximize this program and make it available to others who were not able to participate, I’ve been asked to explore an option for creating an online version of the WECT (say “west”) program. It would be offered through the OSU Professional and Continuing Education (PACE) venue. To that end, I am calling on those of you who participated in the original program (and any other readers) to provide me with feedback on the following:

  1. What was useful?
  2. What needed to be added?
  3. What could be covered in more depth?
  4. What could be deleted?
  5. Other comments?

Please be as specific as possible.

I can go to the competency literature (of which there is a lot) and redevelop WECT from those guidelines. (For more information on competencies, see: King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22(2), 229-247.) Or I could use the Canadian system as a foundation. (For more information, see this link.)

I doubt that I can develop an online version that would cover (or do justice to) all those competencies.

So I turn to you my readers. Let me know what you think.

my two cents.

molly.

Last week, I started a discussion on inappropriate evaluations. I was using the Fitzpatrick, Sanders, and Worthen text for the discussion (Program Evaluation: Alternative Approaches and Practical Guidelines, 2011; see here). That text gave three other examples:

  1. Evaluation cannot yield useful, valid information;
  2. Type of evaluation is premature for the stage of the program; and
  3. Propriety of evaluation is doubtful.

I will cover them today.

First, if the evaluation doesn’t (or isn’t likely to) produce relevant information, don’t do it. If factors that are typically outside the evaluator’s control are present–inadequate resources (personnel, funding, time), lack of administrative support, impossible evaluation tasks, or inaccessible data–give it a pass, as all of these factors make the likelihood that the evaluation will yield useful, valid information slim. Fitzpatrick, Sanders, and Worthen say, “A bad evaluation is worse than none at all…”.

Then consider the type of evaluation that is requested. Should you do a formative, a summative, or a developmental evaluation? The tryout phase of a program typically demands a formative evaluation, not a summative evaluation, despite the need to demonstrate impact. You may not demonstrate an effect at all because of timing. Consider running the program for a while (more than once or twice in a month). Decide if you are going to use the results for programmatic improvement only, or for programmatic improvement AND impact.

Finally, consider whether the propriety of the evaluation is in doubt. Propriety is the third standard in the Joint Committee Standards. Propriety helps establish evaluation quality by protecting the rights of those involved in the evaluation–the target audience, the evaluators, program staff, and other stakeholders. If you haven’t read the Standards, I recommend that you do.


New Topic (and timely): Comments.

It has been a while since I’ve commented on any feedback I get in the form of comments on blog posts. I read every one. I get them both here as I write and as email. Sometimes they are in a language I don’t read or understand and, unfortunately, the online translators don’t always make sense. Sometimes they are encouraging comments (keep writing; keep blogging; thank you; etc.). Sometimes there are substantive comments that lead me to think differently about evaluation. Regardless of the message: THANK YOU for commenting! Remember, I read each one.

my two cents.

molly.

Can there be inappropriate use of evaluation studies?

Jody Fitzpatrick and her co-authors Jim Sanders and Blaine Worthen, in Program Evaluation: Alternative Approaches and Practical Guidelines (2011), provide several examples of inappropriate evaluation use. Before they give the examples, they share some wise words from Nick Smith. Nick says there are two broad categories of reasons for declining to conduct an evaluation: “1) when the evaluation could harm the field of evaluation, or 2) when it would fail to support the social good.” Fitzpatrick, Sanders, and Worthen (2011) go on to say that “these problems may arise when it is likely that the ultimate quality of the evaluation will be questionable, major clients would be alienated or misled concerning what evaluation can do, resources will be inadequate, or ethical principles would be violated” (p. 265).

The examples provided are:

  1. Evaluation would produce trivial information;
  2. Evaluation results will not be used;
  3. Evaluation cannot yield useful, valid information;
  4. Type of evaluation is premature for the stage of the program; and
  5. Propriety of evaluation is doubtful.

When I study these examples (there may be others; I’m quoting Fitzpatrick, Sanders, and Worthen, 2011), I find that they are often found in the published literature. As a reviewer, I find “show and tell” evaluations of little value because they produce trivial information. They report a study that has limited or insufficient impact and little or no potential for continuation. The cost of conducting a formal evaluation would easily outweigh the value (merit or worth) of the program–if monetized–and would yield little information useful for others in the field. The intention might be good; the product is less than ideal.

I would like to think that the world is a better place than it was 50 years ago. In many ways I suppose it is; I wish that were true in all ways.

Human rights were violated in the name of religious freedom in Indiana last week (not to mention the other 19 states that have a Religious Freedom Restoration Act). “The statute shows every sign of having been carefully designed to put new obstacles in the path of equality; and it has been publicly sold with deceptive claims that it is ‘nothing new’.” (Thank you, Garrett Epps.) Then there are those states that follow the Hobby Lobby ruling, a different set.

The eve of Passover is Friday (which also happens to be Good Friday). Passover (or Pesach) is a celebration commemorating the Israelites’ freedom from the slavery imposed by ancient Egypt. Putting an orange on the Seder plate helps us remember that liberation, specifically the liberation of the marginalized.

Passover is the only holiday that celebrates human rights and individual freedoms.

Does anyone else see the irony of this Indiana law?

This is an evaluation issue. How can you make a difference if you restrict liberation (like the recently passed Indiana law)? What is the merit, the worth, the value of restriction? I don’t think there is any.

my two cents.

molly.

Today is the middle of Spring Break at Oregon State University.

What did you do today that involved thinking evaluatively?

Did you decide to go to work?

Did you decide to go to the beach?

Did you decide you were sick?

Did you decide you would work in the yard/garden?

Did you decide to stop and smell the roses?

How many of you are planning to attend the American Evaluation Association (AEA) conference in Chicago this November? AEA just closed its call for proposals on Monday, March 16. Hopefully, you were able to submit prior to the deadline. Notifications of acceptance will be announced in July. It is a lot of work to review those proposals, schedule them, and make sure that there is a balance of topics and presentation types across the week.

I hope anyone (everyone) interested in program evaluation and all the evaluation permutations (of which there are many) will make an effort to attend. I plan to be there.

AEA is my professional home. The first meeting I attended was in 1981 in Austin, Texas. I was a graduate student; several of us drove from Tucson to Austin. (Let me tell you, West Texas is quite an experience; certainly a bucket-list opportunity.) That meeting was a combined meeting of the Evaluation Research Society and the Evaluation Network. It had about 200 attendees. Quite a difference from meetings experienced in the 21st century. AEA (the name and the organization) became official with the vote of the membership in 1986. Who would have thought that AEA would be the leading evaluation association in the country, possibly in the world? The membership page says that members come from 60 foreign countries. I have met marvelous folks there. I count some of my best friends as AEA members. Certainly the landscape of attendees has changed over the years. As a founding member, I have found that evolution interesting to watch. As a board member and as a past president (among other roles), being part of the organizational change has been exciting. I urge you to attend; I urge you to get involved.

Hope to see you in Chicago in November.


If you haven’t taken my survey, please do. It is found here.

my two cents.

molly.