My friend and colleague, Patricia Rogers, says of cognitive bias, “It would be good to think through these in terms of systematic evaluation approaches and the extent to which they address these.” This was in response to the article here. The article says that the human brain is capable of 10 to the 16th power (a big number) processes per second. Despite being faster than a speeding bullet, etc., the human brain has “annoying glitches [that] cause us to make questionable decisions and reach erroneous conclusions.”

Bias is something that evaluators deal with all the time. There is desired response bias, non-response bias, recency and immediacy bias, measurement bias, and…need I say more? Isn’t evaluation, and aren’t evaluators, supposed to be “objective”? Aren’t we as evaluators supposed to behave in an ethical manner? To have dealt with potential bias and conflicts of interest? That is where cognitive biases appear. And you might not know it at all.

I just got back from a road trip across southern Alabama with my younger daughter. We started from Birmingham and drove a very circuitous route ending in Mobile and the surrounding areas, then returned to Birmingham for her to start her second year at Birmingham-Southern College.

As we traveled, I read a book by Bill McKibben (one of many) called Oil and Honey: The Education of an Unlikely Activist. It is a memoir, a personal recounting of the early years of this decade, which corresponded with the years my older daughter was in college (2011-2014). I met Bill McKibben, who is credited with starting the non-profit 350.org in 2008 and is currently listed as “senior adviser and co-founder”. He is a passionate, soft-spoken man who believes that the world is on a short fuse. He really seems to believe that there is a better way to have a future. He, like Gandhi, is taking a stand. Oil and Honey puts into action Gandhi’s saying about being the change you want to see. As the subtitle indicates, McKibben is an unlikely activist. He is a self-described non-leader who led and advises the global effort to increase awareness of climate change/chaos. When your belief is on the line, you do what has to be done.

Evaluators are the same way. When your belief is on the line, you do what has to be done. And, hopefully, in the process you are the change that you want to see in the world. But know it cannot happen one pipeline at a time. The fossil fuel industry has too much money. So what do you do? You start a campaign. That is what 350.org has done: “There are currently fossil fuel divestment campaigns at 308 colleges and universities, 105 cities and states, and 6 religious institutions” (Wikipedia, 350.org; scroll down to the heading “Fossil Fuel Divestment” to see the complete discussion). Those are clear numbers, hard data for consumption. (Unfortunately, the divestment campaign at OSU failed.)

So I see the question as one of impact, though not specifically world peace (my ultimate impact). If there is no planet on which to work for world peace, there is no need for world peace. Evaluators can help. They can look at data critically. They can read the numbers. They can gather the words. This may be the best place for the use of pictures (they are, after all, worth 1000 words). Perhaps by combining efforts, the outcome will be an impact that benefits all humanity and builds a tomorrow for the babies born today.

my two cents.

molly.

 

Ignorance is a choice.

Not knowing may be “easier”; you know, less confusing, less intimidating, less fearful, less embarrassing.

I remember when I first asked the question, “Is it easier not knowing?” What I was asking was, “By choosing to not know, did I really make a choice, or was it a default position?” Because if you consciously avoid knowing, do you really not know, or are you just ignoring the obvious? Perhaps it goes back to the saying common on social media today: “Great people talk about ideas; average people talk about things; small people talk about other people” (which is a variation of what Eleanor Roosevelt said).

“Fate is chance; destiny is choice.”

I went looking for who said that originally so that I could give credit. This is the closest saying I found: “Destiny is no matter of chance. It is a matter of choice: It is not a thing to be waited for, it is a thing to be achieved.”

William Jennings Bryan

 

Evaluation is like destiny. There are many choices to make. How do you choose? What do you choose?

Would you listen to the dictates of the Principal Investigator even if you know there are other, perhaps better, ways to evaluate the program?

What about collecting data? Are you collecting it because it would be “nice”? OR are you collecting it because you will use the data to answer a question?

What tools do you use to make your choices? What resources do you use?

I’m really curious. It is summer and although I have a (long, to be sure) reading list, I wonder what else is out there, specifically relating to making choices. (And yes, I could use my search engine; I’d rather hear from my readers!)

Let me know. PLEASE!

my two cents.

molly.

Knowledge is personal!

A while ago I read a blog post by Harold Jarche. He was talking about knowledge management (the field in which he works). That field makes the claim that knowledge can be transferred; he makes the claim that knowledge cannot be transferred. He goes on to say that we can share (transfer) information; we can share data; we cannot share knowledge. I say that once we share the information, the other person has the choice to make that shared information part of her/his knowledge or not. Stories help individuals see (albeit briefly) others’ knowledge.

Now, puzzling over the phrase “Knowledge is personal”, I would say, “The only thing ‘they’ can’t take away from you is knowledge.” (The corollary to that is, “They may take your car, your house, your life; they cannot take your knowledge!”)

So when I remember that knowledge is personal and cannot be taken away from you, I am reminded that there are evaluation movements and models established to empower people with knowledge, specifically evaluation knowledge. I must wonder, then, whether by sharing the information we are sharing knowledge. Whether people are really empowered. To be sure, we share information (in this case about how to plan, implement, analyze, and report an evaluation). Is that sharing knowledge?

Fetterman (and Wandersman, in their 2005 Guilford Press volume*) says that “empowerment evaluation is committed to contributing to knowledge creation”. (Yes, they are citing Lentz et al., 2005*, and Nonaka & Takeuchi, 1995*, just to be transparent.) So I wonder: if knowledge is personal and known only to the individual, how can “they” say that empowerment evaluation is contributing to knowledge creation? Is it because knowledge is personal and every individual creates her/his own knowledge through that experience? Or does empowerment evaluation contribute NOT to knowledge creation but to information creation? (NOTE: This is not a criticism of empowerment evaluation, only an example, using empowerment evaluation, of the dissonance I’m experiencing; in fact, Fetterman defines empowerment evaluation as “the use of evaluation concepts, techniques, and findings to foster improvement and self-determination”. It is only later in the cited volume that the statement about knowledge creation appears.)

Given that knowledge is personal, it would make sense that knowledge is implicit, and implicit knowledge requires interpretation to make sense of it. Hence stories, because stories can help share implicit knowledge. As each individual seeks information, that individual makes the information into knowledge, and that knowledge is implicit. Jarche says, “As each person seeks information, makes sense of it through reflection and articulation, and then shares it through conversation…” I would add, “and it is shared as information”.

Keep that in mind the next time you want to measure knowledge as part of KASA (knowledge, attitudes, skills, and aspirations) on a survey.

my two cents.

molly.

  1. *Fetterman, D. M. & Wandersman, A. (Eds.) (2005). Empowerment evaluation principles in practice. New York: Guilford Press.
  2. *Lentz, B. E., Imm, P. S., Yost, J. B., Johnson, N. P., Barron, C., Lindberg, M. S. & Treistman, J. (2005). In D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in practice. New York: Guilford Press.
  3. *Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company. New York: Oxford University Press.

Chris Lysy, at Fresh Spectrum, had a guest contributor, Rakesh Mohan, in his most recent blog post.

Rakesh says “…evaluators forget that evaluation is inherently political because it involves making judgment about prioritization, distribution, and use of resources.”

I agree that evaluators can make judgments about prioritization, distribution, and resource use. I wonder if making judgments is built into the role of evaluator; is it even taught to the nascent evaluator? I also wonder if the Principal Investigator (PI) has much to say about the judgments. What if the evaluator interprets the findings one way and the PI doesn’t agree? Is that political, or not? Does the PI have the final say about what the outcomes mean (the prioritization, distribution, and resource use)? Does the evaluator make recommendations, or does the evaluator only draw conclusions? Then where do comments on the prioritization, the distribution, and the resource use come into the discussion? Are they recommendations or are they conclusions?

I decided I would see what my library says about politics. Scriven’s Thesaurus* talks about the politics of evaluation; Fitzpatrick, Sanders, and Worthen* have a chapter on “Political, Interpersonal, and Ethical Issues in Evaluation” (chapter 3); Rossi, Lipsey, and Freeman* have a section on political context (pp. 18-20) and a section on political process (pp. 381-393) that includes policy and policy implications. The 1982 Cronbach* volume (Designing Evaluations of Educational and Social Programs) has a brief discussion (of multiple perspectives), and the classic 1980 volume, Toward Reform of Program Evaluation*, also addresses the topic. Lest I neglect to include those authors who ascribe to the naturalistic approaches, Guba and Lincoln* talk about the politics of evaluation (pp. 295-299) in their 1981 volume, Effective Evaluation. The political aspects of evaluation have been part of the field for a long time.

So–because politics has been and continues to be part of evaluation, perhaps what Mohan says is relevant. When I look at Scriven’s comments in the Thesaurus, the comment that stands out is, “Better education for the citizen about–and in–evaluation, may be the best route to improvement, short of a political leader with the charisma to persuade us of anything and the brains to persuade us to improve our critical thinking.” Since the likelihood that we will see such a political leader is slim, perhaps education is the best approach. And, as Mohan says, invite them to the conference. (After all, education comes in all sizes and experiences.) Perhaps then policy makers, politicians, press, and public will be able to understand and make a difference BECAUSE OF EVALUATION!

 

*Scriven, M. (1991). Evaluation thesaurus. Newbury Park, CA: Sage.

*Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

*Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.

*Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass.

*Cronbach, L. J., et al. (1980). Toward reform of program evaluation. San Francisco, CA: Jossey-Bass.

*Guba, E. G. & Lincoln, Y. S. (1981). Effective evaluation. San Francisco, CA: Jossey-Bass.

 

At a loss for what to write, I once again went to one of my favorite books, Michael Scriven’s Evaluation Thesaurus. This time when I opened the volume randomly, I came upon the entry for meta-evaluation. This is a worthy topic, one that isn’t addressed often. So this week, I’ll talk about meta-evaluation and quote Scriven as I do.

First, what is meta-evaluation? It is an evaluation approach which is the evaluation of evaluations (and “indirectly, the evaluation of evaluators”). Scriven suggests the application of an evaluation-specific checklist or a Key Evaluation Checklist (KEC) (p. 228). Although this approach can be used to evaluate one’s own work, the results are typically unreliable, which suggests (if one can afford it) using an independent evaluator to conduct a meta-evaluation of your evaluations.

Then, Scriven goes on to make the following key points:

  • Meta-evaluation is the professional imperative of evaluation;
  • Meta-evaluation can be done formatively or summatively or both; and
  • Use the KEC to generate a new evaluation OR apply the checklist to the original evaluation as a product.

He lists the parts of a KEC involved in a meta-evaluation; the process includes 13 steps (pp. 230-231).

He gives the following reference:

Stufflebeam, D. (1981). Meta-evaluation: Concepts, standards, and uses. In R. Berk (Ed.), Educational evaluation methodology: The state of the art. Baltimore, MD: Johns Hopkins.

 

Last week, I started a discussion on inappropriate evaluations. I was using the Fitzpatrick, Sanders, and Worthen text (Program Evaluation: Alternative Approaches and Practical Guidelines, 2011; see here) for the discussion. There were three other examples given in that text:

  1. Evaluation cannot yield useful, valid information;
  2. Type of evaluation is premature for the stage of the program; and
  3. Propriety of evaluation is doubtful.

I will cover them today.

First, if the evaluation doesn’t (or isn’t likely to) produce relevant information, don’t do it. If factors like inadequate resources–personnel, funding, time–lack of administrative support, impossible evaluation tasks, or inaccessible data (which are typically outside the evaluator’s control) are present, give it a pass, as all of these factors make the likelihood slim that the evaluation will yield useful, valid information. Fitzpatrick, Sanders, and Worthen say, “A bad evaluation is worse than none at all…”.

Then consider the type of evaluation that is requested. Should you do a formative, a summative, or a developmental evaluation? The tryout phase of a program typically demands a formative evaluation, not a summative evaluation, despite the need to demonstrate impact. You may not demonstrate an effect at all because of timing. Consider running the program for a while (more than once or twice in a month). Decide if you are going to use the results only for programmatic improvement or for programmatic improvement AND impact.

Finally, consider whether the propriety of the evaluation is doubtful. Propriety is the third standard in the Joint Committee Standards. Propriety helps establish evaluation quality by protecting the rights of those involved in the evaluation–the target audience, the evaluators, program staff, and other stakeholders. If you haven’t read the Standards, I recommend that you do.

 

New Topic (and timely): Comments.

It has been a while since I’ve commented on any feedback I get in the form of comments on blog posts. I read every one. I get them both here as I write and as email. Sometimes they are in a language I don’t read or understand and, unfortunately, the on-line translators don’t always make sense. Sometimes they are encouraging comments (keep writing; keep blogging; thank you; etc.). Sometimes there are substantive comments that lead me to think about things evaluation differently. Regardless of what the message is: THANK YOU for commenting! Remember, I read each one.

my two cents.

molly.

Can there be inappropriate use of evaluation studies?

Jody Fitzpatrick and her co-authors Jim Sanders and Blaine Worthen, in Program Evaluation: Alternative Approaches and Practical Guidelines (2011), provide several examples of inappropriate evaluation use. Before they give the examples, they share some wise words from Nick Smith. Nick says there are two broad categories for declining to conduct an evaluation: “1) when the evaluation could harm the field of evaluation, or 2) when it would fail to support the social good.” Fitzpatrick, Sanders, and Worthen (2011) go on to say that “these problems may arise when it is likely that the ultimate quality of the evaluation will be questionable, major clients would be alienated or misled concerning what evaluation can do, resources will be inadequate, or ethical principles would be violated” (p. 265).

The examples provided are

  1. Evaluation would produce trivial information;
  2. Evaluation results will not be used;
  3. Evaluation cannot yield useful, valid information;
  4. Type of evaluation is premature for the stage of the program; and
  5. Propriety of evaluation is doubtful.

When I study these examples (there may be others; I’m quoting Fitzpatrick, Sanders, and Worthen, 2011), I find that these are examples often found in the published literature. As a reviewer, I find “show and tell” evaluations of little value because they produce trivial information. They report a study that has limited or insufficient impact and little or no potential for continuation. The cost of conducting a formal evaluation would easily outweigh the value–if monetized–(the merit or worth) of the program and would yield little information useful to others in the field. The intention might be well-meant; the product is less than ideal.

A recent blog post (not mine) talked about the client’s evaluation use. The author says that she feels “…successful…if the client is using the data…” This statement made me stop, pause, and think about data use. The author continues with a comment about the difference between “…facilitating the client’s understanding of the data in order to create plans and telling the client exactly what the data means and what to do with it.”

I work with Extension professionals who may or may not understand the methodology, the data analysis, or the results. How does one communicate with Extension professionals who may be experts in their content area (cereal crops, nutrition, aging, invasive species) and know little about the survey on which they worked? Is my best guess (not knowing the content area) a good guess? Do Extension professionals really use the evaluation findings? If I suggest that the findings could say this, or suggest that the findings could say that, am I preventing a learning opportunity from happening?