Taken from the Plexus Calls email for Friday, May 29, 2015. “What is a simple rule? Royce Holladay has described simple rules as the ‘specific, uncomplicated instructions that guide behavior and create the structure within which human beings can live their lives.’ ” How do individuals, organizations and businesses identify their simple rule? What are the guidelines that can help align their values and their actions?
First, a little about Royce Holladay, also from the same email: Royce Holladay is co-author, with Mallary Tytel, of Simple Rules: Radical Inquiry into Self, a book that aids recognition of the patterns that show up repeatedly in our lives. With that knowledge, individuals and groups are better able to use stories, metaphors and other tools to examine the interactions that influence the course of our lives and careers.
What if you substituted “evaluator” for “human beings”? (Yes, I know that evaluators are humans first and then evaluators.) What would you say about simple rules as evaluators? What guidelines can help align evaluators’ values and actions?
Last week I spoke of the AEA Guiding Principles and the Joint Committee Program Evaluation Standards. Perhaps they serve as the simple rules for evaluators? They are simple rules (though not prescriptive, just suggestive). The AEA isn’t the ethics police; it is only a guide. Go online and read the Guiding Principles. They are simple. They are clear. There are only five.
- Systematic Inquiry: Evaluators conduct systematic, data-based inquiries.
- Competence: Evaluators provide competent performance to stakeholders.
- Integrity/Honesty: Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process.
- Respect for People: Evaluators respect the security, dignity and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
- Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.
The Program Evaluation Standards are also clear. There are also five (and those five have several parts so they are not as simple).
- Utility (8 sub-parts)
- Feasibility (4 sub-parts)
- Propriety (7 sub-parts)
- Accuracy (8 sub-parts)
- Evaluation Accountability (3 sub-parts)
You can download the Guiding Principles from the AEA website. You can get the Standards book here or here. If they are not on your shelf, they need to be. They are simple rules.
I didn’t blog last week. I made a choice: get work off my desk before I left for my daughter’s commencement, or write this week’s blog. I chose to get work off my desk, and even then didn’t get all the work done. Choices are tough. I often wonder if there is a right answer or just an answer with fewer consequences. I don’t know. I will continue to make choices; I will continue to weigh my options. I hope that I am doing “good”. I often wonder whether I am.
In evaluation, there are a lot of choices to make at any stage of the evaluation, beginning to end. Since most of the programs I evaluate have an educational focus, I found this quote meaningful. It is credited to David Foster Wallace (the author known for “how’s the water”): “Overall purpose of higher education is to be able to consciously choose how to perceive others, think about meaning, and act appropriately in everyday life.” Wallace argues that the “true freedom acquired through education is the ability to be adjusted, conscious, and sympathetic.”
Although he was not speaking specifically to evaluators, I think his thoughts are germane to evaluation (substitute evaluation for higher education/education). Today I read a piece on some social media venue that reminded us to try to see life/things/items as others see them (the example used was a black/red book). I was reminded to consciously choose how to view things as others perceive them. And as a result of perceiving things as others see them, perhaps I can act appropriately. The AEA Guiding Principles and the Program Evaluation Standards help evaluators to hear the voices of others; to act appropriately; to consciously choose what needs to be done. No easy task.
Like many people, I find change hard. In fact, I really don’t like change. I think this is the result of a high school experience; one-third of my classmates left each year. (I was a military offspring; we changed assignments every three years.)
Yet, in today’s world change is probably the only constant. Does that make it fun? Not necessarily. Does that make it easy? Nope. Does that make it necessary? Yep.
Evaluators deal with change regularly. New programs are required; those must be evaluated. Old programs are revised; those must be evaluated. New approaches are developed and presented to the field. (When I first became an evaluator, there wasn’t a systems approach to evaluation; there wasn’t developmental evaluation; I could continue.) New technologies are available and must be used even if the old one wasn’t broken (even for those of us who are techno-peasants).
I just finished a major qualitative evaluation that involved real-time virtual focus groups. When I researched this topic (virtual focus groups), I found a lot of information about non-synchronous focus groups, focus groups using conferencing software, even synchronous focus groups without pictures. I didn’t find anything about using real-time synchronous virtual focus groups. Unfortunately, we didn’t have much money, even though there are services available.
At a loss for what to write, I once again went to one of my favorite books, Michael Scriven’s Evaluation Thesaurus. This time when I opened the volume randomly, I came upon the entry for meta-evaluation. This is a worthy topic, one that isn’t addressed often. So this week, I’ll talk about meta-evaluation and quote Scriven as I do.
First, what is meta-evaluation? It is an evaluation approach that is the evaluation of evaluations (and “indirectly, the evaluation of evaluators”). Scriven suggests the application of an evaluation-specific checklist or a Key Evaluation Checklist (KEC) (p. 228). Although this approach can be used to evaluate one’s own work, the results are typically unreliable, which suggests engaging an independent evaluator (if one can afford it) to conduct a meta-evaluation of your evaluations.
Scriven goes on to make the following key points:
- Meta-evaluation is the professional imperative of evaluation;
- Meta-evaluation can be done formatively or summatively or both; and
- Use the KEC to generate a new evaluation OR apply the checklist to the original evaluation as a product.
He lists the parts of a KEC involved in a meta-evaluation; this process includes 13 steps (pp. 230-231).
He gives the following reference:
Stufflebeam, D. (1981). Meta-evaluation: Concepts, standards, and uses. In R. Berk (Ed.), Educational evaluation methodology: The state of the art. Baltimore, MD: Johns Hopkins.