Knowledge is personal!
A while ago I read a blog post by Harold Jarche about knowledge management (the field in which he works). That field claims that knowledge can be transferred; he claims that knowledge cannot be transferred. He goes on to say that we can share (transfer) information; we can share data; we cannot share knowledge. I say that once we share the information, the other person has the choice to make that shared information part of her/his knowledge or not. Stories help individuals see (albeit briefly) others’ knowledge.
Now, puzzling over the phrase “Knowledge is personal,” I would say, “The only thing ‘they’ can’t take away from you is knowledge.” (The corollary to that is “They may take your car, your house, your life; they cannot take your knowledge!”)
So when I remember that knowledge is personal and cannot be taken away from you, I am reminded that there are evaluation movements and models established to empower people with knowledge, specifically evaluation knowledge. I must wonder, then: by sharing the information, are we sharing knowledge? Are people really empowered? To be sure, we share information (in this case, about how to plan, implement, analyze, and report an evaluation). Is that sharing knowledge?
Fetterman (and Wandersman, in their 2005 Guilford Press volume*) says that “empowerment evaluation is committed to contributing to knowledge creation”. (Yes, they are citing Lentz, et al., 2005*, and Nonaka & Takeuchi, 1995*, just to be transparent.) So I wonder: if knowledge is personal and known only to the individual, how can “they” say that empowerment evaluation is contributing to knowledge creation? Is it because knowledge is personal and every individual creates her/his own knowledge through that experience? Or does empowerment evaluation contribute NOT to knowledge creation but to information creation? (NOTE: This is not a criticism of empowerment evaluation, only an example, using empowerment evaluation, of the dissonance I’m experiencing; in fact, Fetterman defines empowerment evaluation as “the use of evaluation concepts, techniques, and findings to foster improvement and self-determination”. It is only later in the volume cited that the statement about knowledge creation appears.)
Given that knowledge is personal, it would make sense that knowledge is implicit, and implicit knowledge requires interpretation to make sense of it. Hence stories, because stories can help share implicit knowledge. As each individual seeks information, that same individual makes the information into knowledge, and that knowledge is implicit. Jarche says, “As each person seeks information, makes sense of it through reflection and articulation, and then shares it through conversation…” I would add, “and it is shared as information”.
Keep that in mind the next time you want to measure knowledge as part of KASA (knowledge, attitudes, skills, and aspirations) on a survey.
Thinking for yourself is a key competency for evaluators. Scriven says that critical thinking is “The name of an approach to or a subject within the curriculum that might equally well be called ‘evaluative thinking…’”
Certainly, one of the skills I taught my daughters from an early age is to evaluate experiences both qualitatively and quantitatively. They got so good at this exercise that they often preempted me with their reports. They learned early that critical thinking is evaluative, and that critical doesn’t mean being negative; rather, it means being thoughtful or analytical. Scriven goes on to say, “The result of critical thinking is in fact often to provide better support for a position under consideration or to create and support a new position.” I usually asked my girls to evaluate an experience to determine if we would do that experience (or want to do it) again. Recently, I had the opportunity to do just that. My younger daughter had not been to the Ringling Museum in Sarasota, FL; my older daughter had (she went to college in FL). She agreed, after she took me, that we needed to go as a family. We did. We all agreed that it was worth the price of admission. An example of critical thinking: we provided support for a position under consideration.
Could we have done this without the ability to think critically? Maybe. Could we have come to an agreement that it was worth seeing more than once without this ability? Probably not. Since the premise of this blog is that evaluation is something that everyone (whether they know it or not) does every day, would it follow that critical thinking is done every day? Probably. Yet I wonder: do you need this skill to get out of bed? To decide what to eat for breakfast? To develop the content of a blog? Do I need analysis and/or thoughtfulness to develop the content of a blog? It may help. Often, the content is whatever happens to catch my attention or stick in my craw the day I start my blog. Yet, I wonder…
Evaluation is an activity that requires thoughtfulness and analysis. Thoughtfulness in planning and implementing; analysis in implementing and data examination. Both in final report preparation and presentation. This is a skill that all evaluators need. It is not acquired as a function of birth; it is learned through application. But people may not have all the information they need. Can people (evaluators) be critical thinkers if they are not informed? Can people (evaluators) be thoughtful and analytical if they are not informed? Or just impassioned? Does information just cloud the thoughtfulness and analysis? Something to ponder…
Chris Lysy, at Fresh Spectrum, had a guest contributor, Rakesh Mohan, in his most recent blog post.
Rakesh says “…evaluators forget that evaluation is inherently political because it involves making judgment about prioritization, distribution, and use of resources.”
I agree that evaluators can make judgments about prioritization, distribution, and resource use. I wonder if making judgments is built into the role of evaluator; is it even taught to the nascent evaluator? I also wonder if the Principal Investigator (PI) has much to say about the judgments. What if the evaluator interprets the findings one way and the PI doesn’t agree? Is that political? Or not? Does the PI have the final say about what the outcomes mean (the prioritization, distribution, and resource use)? Does the evaluator make recommendations, or does the evaluator only draw conclusions? Then where do comments on the prioritization, the distribution, and the resource use come into the discussion? Are they recommendations or are they conclusions?
I decided I would see what my library says about politics: Scriven’s Thesaurus* talks about the politics of evaluation; Fitzpatrick, Sanders, and Worthen* have a chapter on “Political, Interpersonal, and Ethical Issues in Evaluation” (chapter 3); Rossi, Lipsey, and Freeman* have a section on political context (pp. 18-20) and a section on political process (pp. 381-393) that includes policy and policy implications. The 1982 Cronbach* volume (Designing Evaluations of Educational and Social Programs) has a brief discussion (of multiple perspectives), and the classic 1980 volume, Toward Reform of Program Evaluation, also addresses the topic*. Lest I neglect to include those authors who ascribe to the naturalistic approaches, Guba and Lincoln talk about the politics of evaluation (pp. 295-299) in their 1981 volume, Effective Evaluation. The political aspects of evaluation have been part of the field for a long time.
So, because politics has been and continues to be part of evaluation, perhaps what Mohan says is relevant. When I look at Scriven’s comments in the Thesaurus, the comment that stands out is, “Better education for the citizen about–and in–evaluation, may be the best route to improvement, short of a political leader with the charisma to persuade us of anything and the brains to persuade us to improve our critical thinking.” Since the likelihood that we will see such a political leader is slim, perhaps education is the best approach. And like Mohan says, invite them to the conference. (After all, education comes in all sizes and experiences.) Perhaps then policy makers, politicians, press, and public will be able to understand and make a difference BECAUSE OF EVALUATION!
*Scriven, M. (1991). Evaluation thesaurus. Newbury Park, CA: Sage.
*Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.
*Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.) Thousand Oaks, CA: Sage.
*Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass.
*Cronbach, L. J., et al. (1980). Toward reform of program evaluation. San Francisco, CA: Jossey-Bass.
*Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation. San Francisco, CA: Jossey-Bass.
Taken from the Plexus Calls email for Friday, May 29, 2015. “What is a simple rule? Royce Holladay has described simple rules as the ‘specific, uncomplicated instructions that guide behavior and create the structure within which human beings can live their lives.’ ” How do individuals, organizations and businesses identify their simple rule? What are the guidelines that can help align their values and their actions?
First, a little about Royce Holladay, also from the same email: Royce Holladay is co-author, with Mallary Tytel, of Simple Rules: A Radical Inquiry into Self, a book that aids recognition of the patterns that show up repeatedly in our lives. With that knowledge, individuals and groups are better able to use stories, metaphors, and other tools to examine the interactions that influence the course of our lives and careers.
What if you substituted “evaluator” for “human beings”? (Yes, I know that evaluators are humans first and then evaluators.) What would you say about simple rules as evaluators? What guidelines can help align evaluators’ values and actions?
Last week I spoke of the AEA Guiding Principles and the Joint Committee Program Evaluation Standards. Perhaps they serve as the simple rules for evaluators? They are simple rules (though not prescriptive, just suggestive). The AEA isn’t the ethics police, only a guide. Go online and read the Guiding Principles. They are simple. They are clear. There are only five.
The Program Evaluation Standards are also clear. There are also five (and those five have several parts so they are not as simple).
Like many people, I find change hard. In fact, I really don’t like change. I think this is the result of a high school experience; one-third of my classmates left each year. (I was a military offspring; we changed assignments every three years.)
Yet, in today’s world change is probably the only constant. Does that make it fun? Not necessarily. Does that make it easy? Nope. Does that make it necessary? Yep.
Evaluators deal with change regularly. New programs are required; those must be evaluated. Old programs are revised; those must be evaluated. New approaches are developed and presented to the field. (When I first became an evaluator, there wasn’t a systems approach to evaluation; there wasn’t developmental evaluation; I could continue.) New technologies are available and must be used even if the old one wasn’t broken (even for those of us who are techno-peasants).
I just finished a major qualitative evaluation that involved real-time virtual focus groups. When I researched this topic (virtual focus groups), I found a lot of information about non-synchronous focus groups, focus groups using a conferencing software, even synchronous focus groups without pictures. I didn’t find anything about using real-time synchronous virtual focus groups. Unfortunately, we didn’t have much money even though there are services available.
At a loss for what to write, I once again went to one of my favorite books, Michael Scriven’s Evaluation Thesaurus. This time when I opened the volume randomly, I came upon the entry for meta-evaluation. This is a worthy topic, one that isn’t addressed often. So this week, I’ll talk about meta-evaluation and quote Scriven as I do.
First, what is meta-evaluation? This is an evaluation approach which is the evaluation of evaluations (and “indirectly, the evaluation of evaluators”). Scriven suggests the application of an evaluation-specific checklist or a Key Evaluation Checklist (KEC) (p. 228). Although this approach can be used to evaluate one’s own work, the results are typically unreliable, which suggests (if one can afford it) engaging an independent evaluator to conduct a meta-evaluation of your evaluations.
Then, Scriven goes on to make the following key points:
He lists the parts of a KEC involved in a meta-evaluation; the process includes 13 steps (pp. 230-231).
He gives the following reference:
Stufflebeam, D. (1981). Meta-evaluation: Concepts, standards, and uses. In R. Berk (Ed.), Educational evaluation methodology: The state of the art. Baltimore, MD: Johns Hopkins.
About two years ago, I conducted a 17-month hybrid evaluation preparation program for the Western Region Extension Service faculty. There were over 30 individuals involved. I was the evaluation expert; Jim Lindstrom (who was at WSU at the time) was the cheerleader, the encourager, the professional development person. I really couldn’t have done it without him. (Thank you, Jim.) Now, to maximize this program and make it available to others who were not able to participate, I’ve been asked to explore an option for creating an on-line version of the WECT (say west) program. It would be loaded through the OSU professional and continuing education (PACE) venue. To that end, I am calling on those of you who participated in the original program (and any other readers) to provide me with feedback on the following:
Please be as specific as possible.
I can go to the competency literature (of which there is a lot) and redevelop WECT from those guidelines. (For more information on competencies see: King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22(2), 229-247.) Or I could use the Canadian system as a foundation. (For more information see this link.)
I doubt I can develop an on-line version that would cover (or do justice to) all those competencies.
So I turn to you my readers. Let me know what you think.
Last week, I started a discussion on inappropriate evaluations. I was using the Fitzpatrick, Sanders, and Worthen text for the discussion (Program Evaluation: Alternative approaches and practical guidelines, 2011. See here.) There were three other examples given in that text; I will cover them today.
First, if the evaluation doesn’t (or isn’t likely to) produce relevant information, don’t do it. If factors like inadequate resources (personnel, funding, time), lack of administrative support, impossible evaluation tasks, or inaccessible data (all typically outside the evaluator’s control) are present, give it a pass, as all of these factors make the likelihood that the evaluation will yield useful, valid information slim. Fitzpatrick, Sanders, and Worthen say, “A bad evaluation is worse than none at all…”.
Then consider the type of evaluation that is requested. Should you do a formative, a summative, or a developmental evaluation? The tryout phase of a program typically demands a formative evaluation and not a summative evaluation, despite the need to demonstrate impact. You may not demonstrate an effect at all because of timing. Consider running the program for a while (more than once or twice in a month). Decide if you are going to use the results for only programmatic improvement or for programmatic improvement AND impact.
Finally, consider the propriety of the evaluation. Propriety is the third standard in the Joint Committee Standards. Propriety helps establish evaluation quality by protecting the rights of those involved in the evaluation: the target audience, the evaluators, program staff, and other stakeholders. If you haven’t read the Standards, I recommend that you do.
New Topic (and timely): Comments.
It has been a while since I’ve commented on any feedback I get in the form of comments on blog posts. I read every one. I get them both here as I write and as an email. Sometimes they are in a language I don’t read or understand and, unfortunately, the on-line translators don’t always make sense. Sometimes they are encouraging comments (keep writing; keep blogging; thank you; etc.). Sometimes there are substantive comments that lead me to think differently about things evaluation. Regardless of what the message is: THANK YOU for commenting! Remember, I read each one.
Can there be inappropriate use of evaluation studies?
Jody Fitzpatrick¹ and her co-authors Jim Sanders and Blaine Worthen, in Program Evaluation: Alternative Approaches and Practical Guidelines (2011), provide several examples of inappropriate evaluation use. Before they give the examples, they share some wise words from Nick Smith². Nick says there are two broad categories of reasons for declining to conduct an evaluation: “1) when the evaluation could harm the field of evaluation, or 2) when it would fail to support the social good.” Fitzpatrick, Sanders, and Worthen (2011) go on to say that “these problems may arise when it is likely that the ultimate quality of the evaluation will be questionable, major clients would be alienated or misled concerning what evaluation can do, resources will be inadequate, or ethical principles would be violated” (p. 265).
The examples provided are:
When I study these examples (there may be others; I’m quoting Fitzpatrick, Sanders, and Worthen, 2011), I find that they are often found in the published literature. As a reviewer, I find “show and tell” evaluations of little value because they produce trivial information. They report a study that has limited or insufficient impact and that has little or no potential for continuation. The cost of conducting a formal evaluation would easily outweigh the value, if monetized (merit or worth), of the program and would yield little information useful for others in the field. The intention might be good; the product is less than ideal.
I would like to think that the world is a better place than it was 50 years ago. In many ways I suppose it is; I wish that were true in all ways.
Human rights were violated in the name of religious freedom in Indiana last week (not to mention the other 19 states which have a Religious Freedom Restoration Act). “The statute shows every sign of having been carefully designed to put new obstacles in the path of equality; and it has been publicly sold with deceptive claims that it is ‘nothing new’.” (Thank you, Garrett Epps.) Then there are those states which follow the Hobby Lobby ruling, a different set.
The eve of Passover is Friday (which also happens to be Good Friday). Passover (or Pesach) is a celebration commemorating the Israelites’ freedom from slavery imposed by ancient Egypt. Putting an orange on the Seder plate helps us remember that liberation, specifically the liberation of the marginalized.
Passover is the only holiday that celebrates human rights and individual freedoms.
Does anyone else see the irony with this Indiana law?
This is an evaluation issue. How can you make a difference if you restrict liberation (like the recently passed Indiana law)? What is the merit, the worth, the value of restriction? I don’t think there is any.