My friend and colleague, Patricia Rogers, says of cognitive bias, “It would be good to think through these in terms of systematic evaluation approaches and the extent to which they address these.” This was in response to the article here on cognitive bias. The article says that the human brain is capable of 10 to the 16th power (a big number) processes per second. Despite being faster than a speeding bullet, etc., the human brain has “annoying glitches [that] cause us to make questionable decisions and reach erroneous conclusions.”

Bias is something that evaluators deal with all the time. There is desired response bias, non-response bias, recency and immediacy bias, measurement bias, and…need I say more? Aren’t evaluation and evaluators supposed to be “objective”? Aren’t we as evaluators supposed to behave in an ethical manner? To have dealt with potential bias and conflicts of interest? That is where cognitive biases appear. And you might not know it at all.

KASA. You’ve heard the term many times. Have you really stopped to think about what it means? What evaluation approach will you use if you want to determine a difference in KASA? What analyses will you use? How will you report the findings?

Probably not. You just know that you need to measure KNOWLEDGE, ATTITUDE, SKILLS, and ASPIRATIONS.

The Encyclopedia of Evaluation (edited by Sandra Mathison) says that KASA influence the adoption of selected practices and technologies (i.e., programs). Claude Bennett uses KASA in his TOP model (the Bennett Hierarchy). I’m sure there are other sources.

I’ve been stuck.

I haven’t blogged for three weeks. I haven’t blogged because I don’t have a topic. Oh, I’ve plenty to say (I am never at a loss for words… 🙂 ). I want something that relates to evaluation. Relates clearly. Without question. Evaluation.

So after 5 years, I’m going to start over. Evaluation is an everyday activity!

Evaluative thinking is something you do every day; probably all day. (I don’t know about when you are asleep, so I said probably.) I think evaluative thinking is one of those skills that everyone needs to learn systematically. I think everyone learns at least a part of evaluative thinking as they grow; the learning may not be systematic. I would put that skill in the same category as critical (not negative but thoughtful) thinking, team building, leadership, communication skills (both verbal and written), and technological facility, as well as some others which escape me right now. I would add systematic evaluative thinking.

Everyone has criteria on which decisions are based. Look at how you choose a package of cookies or a can of corn at the grocery store. What criteria do you use for choosing? Yet that wasn’t taught to you; it was just something you developed. Evaluative thinking is more than just choosing what you want for dinner. AARP lists problem solving as part of critical thinking skills. I think it is more than just problem solving; I do agree that it is a critical thinking skill (see graphic, from Grant Tilus, Rasmussen College).

So you think thoughtfully about most events/activities/things that you do throughout the day. And you learn over time what works and what doesn’t; what has value and what doesn’t. You learn to discern the conditions under which something works; you learn what changes the composition of the outcome. You begin to think evaluatively about most things. One day you realize that you are critically thinking about what you can, will, and need to do. Evaluative thinking has become systematic. You realize that it depends on many factors. You realize that evaluative thinking is a part of who you are. You are an evaluator, even if you are a psychologist or geologist or engineer or educator first.

 

my two cents.

molly.

I just got back from a road trip across Southern Alabama with my younger daughter. We started from Birmingham and drove a very circuitous route ending in Mobile and the surrounding areas, then returned to Birmingham for her to start her second year at Birmingham-Southern College.

As we traveled, I read a book by Bill McKibben (one of many) called Oil and Honey: The Education of an Unlikely Activist. It is a memoir, a personal recounting of the early years of this decade, which corresponded with the years my older daughter was in college (2011-2014). I met Bill McKibben, who is credited with starting the non-profit 350.org in 2008 and is currently listed as “senior adviser and co-founder”. He is a passionate, soft-spoken man who believes that the world is on a short fuse. He really seems to believe that there is a better way to have a future. He, like Gandhi, is taking a stand. Oil and Honey puts into action Gandhi’s saying about being the change you want to see. As the subtitle indicates, McKibben is an unlikely activist. He is a self-described non-leader who led and advises the global effort to increase awareness of climate change/chaos. When your belief is on the line, you do what has to be done.

Evaluators are the same way. When your belief is on the line, you do what has to be done. And, hopefully, in the process you are the change that you want to see in the world. But know it cannot happen one pipeline at a time. The fossil fuel industry has too much money. So what do you do? You start a campaign. That is what 350.org has done: “There are currently fossil fuel divestment campaigns at 308 colleges and universities, 105 cities and states, and 6 religious institutions.” (Wikipedia, 350.org) (Scroll down to the heading “Fossil Fuel Divestment” to see the complete discussion.) Those are clear numbers, hard data for consumption. (Unfortunately, the divestment campaign at OSU failed.)

So I see the question as one of impact, though not specifically world peace (my ultimate impact). If there is no planet on which to work for world peace, there is no need for world peace. Evaluators can help. They can look at data critically. They can read the numbers. They can gather the words. This may be the best place for the use of pictures (they are, after all, worth 1000 words). Perhaps by combining efforts, the outcome will be an impact that benefits all humanity and builds a tomorrow for the babies born today.

my two cents.

molly.

 

“Fate is chance; destiny is choice.”

Went looking for who said that originally so that I could give credit. Found this as the closest saying: “Destiny is not a matter of chance. It is a matter of choice: It is not a thing to be waited for, it is a thing to be achieved.”

William Jennings Bryan

 

Evaluation is like destiny. There are many choices to make. How do you choose? What do you choose?

Would you listen to the dictates of the Principal Investigator even if you know there are other, perhaps better, ways to evaluate the program?

What about collecting data? Are you collecting it because it would be “nice”? OR are you collecting it because you will use the data to answer a question?

What tools do you use to make your choices? What resources do you use?

I’m really curious. It is summer and although I have a list (long, to be sure) of reading, I wonder what else is out there, specifically relating to making choices. (And yes, I could use my search engine; I’d rather hear from my readers!)

Let me know. PLEASE!

my two cents.

molly.


Erma Bombeck said, “You have to love a nation that celebrates its independence every July 4th not with a parade of guns, tanks, and soldiers, who file by the White House in a show of strength and muscle, but with family picnics, where kids throw frisbees, potato salad gets iffy, and the flies die from happiness. You may think you’ve overeaten, but it’s patriotism.”

I heard this quote on my way back from Sunriver, OR, on Splendid Table, an American Public Media show I don’t get to listen to very often; it has wonderful tidbits of information, not necessarily evaluative. Since I had just celebrated July 4th, this quote was most apropos! I also heard snippets of a broadcast (probably on NPR) that talked about patriotism/being patriotic. For me, tradition is patriotic. You know: blueberry pie on the 4th of July; potato salad; pasta; and of course, fireworks (unless the fire danger is extreme [like it was in Sunriver], and then all you can hope is that people will be VERY VERY careful!).

So what do you think makes for patriotism? What do you do to be patriotic? Certainly, for me, it wouldn’t be the 4th of July without blueberry pie and my “redwhiteblue” t-shirt. I don’t need fireworks or potato salad… 🙂 What makes this celebratory for me is the fact that I am assured freedom from want, freedom of worship, freedom from fear, and freedom of speech, and I realize that they are only as free as I make them.

Franklin Delano Roosevelt said it clearly in his speech to Congress, January 6, 1941: “In the future days, which we seek to make secure, we look forward to a world founded upon four essential human freedoms.

The first is freedom of speech and expression — everywhere in the world.

The second is freedom of every person to worship God in his (sic) own way — everywhere in the world.

The third is freedom from want — which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants — everywhere in the world.

The fourth is freedom from fear — which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbor– anywhere in the world…”

This is an exercise in evaluative thinking. What do you think (about patriotism)? What criteria do you use to think this?

my two cents.

molly.

Knowledge is personal!

A while ago I read a blog by Harold Jarche. He was talking about knowledge management (the field in which he works). That field makes the claim that knowledge can be transferred; he makes the claim that knowledge cannot be transferred. He goes on to say that we can share (transfer) information; we can share data; we cannot share knowledge. I say once we share the information, the other person has the choice to make that shared information part of her/his knowledge or not. Stories help individuals see (albeit briefly) others’ knowledge.

Now, I am puzzling over the phrase, “Knowledge is personal.” I would say, “The only thing ‘they’ can’t take away from you is knowledge.” (The corollary to that is, “They may take your car, your house, your life; they cannot take your knowledge!”)

So I am reminded, when I remember that knowledge is personal and cannot be taken away from you, that there are evaluation movements and models which are established to empower people with knowledge, specifically evaluation knowledge. I must wonder, then, if by sharing the information, we are sharing knowledge? If people are really empowered? To be sure, we share information (in this case about how to plan, implement, analyze, and report an evaluation). Is that sharing knowledge?

Fetterman (and Wandersman in their 2005 Guilford Press volume*) says that “empowerment evaluation is committed to contributing to knowledge creation”. (Yes, they are citing Lentz, et al., 2005*, and Nonaka & Takeuchi, 1995*, just to be transparent.) So I wonder, if knowledge is personal and known only to the individual, how can “they” say that empowerment evaluation is contributing to knowledge creation? Is it because knowledge is personal and every individual creates her/his own knowledge through that experience? Or does empowerment evaluation contribute NOT to knowledge creation but to information creation? (NOTE: This is not a criticism of empowerment evaluation, only an example, using empowerment evaluation, of the dissonance I’m experiencing; in fact, Fetterman defines empowerment evaluation as “the use of evaluation concepts, techniques, and findings to foster improvement and self-determination”. It is only later in the cited volume that the statement about knowledge creation appears.)

Given that knowledge is personal, it would make sense that knowledge is implicit, and implicit knowledge requires interpretation to make sense of it. Hence, stories, because stories can help share implicit knowledge. As each individual seeks information, that same individual makes the information into knowledge, and that knowledge is implicit. Jarche says, “As each person seeks information, makes sense of it through reflection and articulation, and then shares it through conversation…” I would add, “and shared as information”.

Keep that in mind the next time you want to measure knowledge as part of KASA on a survey.

my two cents.

molly.

  1. *Fetterman, D. M., & Wandersman, A. (Eds.). (2005). Empowerment evaluation principles in practice. New York: Guilford Press.
  2. *Lentz, B. E., Imm, P. S., Yost, J. B., Johnson, N. P., Barron, C., Lindberg, M. S., & Treistman, J. (2005). In D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in practice. New York: Guilford Press.
  3. *Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company. New York: Oxford University Press.

Thinking for yourself is a key competency for evaluators. Scriven says that critical thinking is “The name of an approach to or a subject within the curriculum that might equally well be called ‘evaluative thinking…’”.

Certainly, one of the skills I taught my daughters from an early age is to evaluate experiences both qualitatively and quantitatively. They got so good at this exercise, they often preempted me with their reports. They learned early that critical thinking is evaluative, that critical doesn’t mean being negative; rather, it means being thoughtful or analytical. Scriven goes on to say, “The result of critical thinking is in fact often to provide better support for a position under consideration or to create and support a new position.” I usually asked my girls to evaluate an experience to determine if we would do that experience (or want to do it) again. Recently, I had the opportunity to do just that. My younger daughter had not been to the Ringling Museum in Sarasota, FL; my older daughter had (she went to college in FL). She agreed, after she took me, that we needed to go as a family. We did. We all agreed that it was worth the price of admission. An example of critical thinking: we provided support for a position under consideration.

Could we have done this without the ability to think critically? Maybe. Could we have come to an agreement that it was worth seeing more than once without this ability? Probably not. Since the premise of this blog is that evaluation is something that everyone (whether they know it or not) does every day, then would it follow that critical thinking is done every day? Probably. Yet, I wonder if you need this skill to get out of bed? To decide what to eat for breakfast? To develop the content of a blog? Do I need analysis and/or thoughtfulness to develop the content of a blog? It may help. Often, the content is whatever happens to catch my attention or stick in my craw the day I start my blog. Yet, I wonder…

Evaluation is an activity that requires thoughtfulness and analysis. Thoughtfulness in planning and implementing; analysis in implementing and data examination. Both in final report preparation and presentation. This is a skill that all evaluators need. It is not acquired as a function of birth; rather, it is taught through application. But people may not have all the information they need. Can people (evaluators) be critical thinkers if they are not informed? Can people (evaluators) be thoughtful and analytical if they are not informed? Or just impassioned? Does information just cloud the thoughtfulness and analysis? Something to ponder…

 

my two cents.

molly.

Chris Lysy, at Fresh Spectrum, had a guest contributor in his most recent blog, Rakesh Mohan.

Rakesh says “…evaluators forget that evaluation is inherently political because it involves making judgment about prioritization, distribution, and use of resources.”

I agree that evaluators can make judgments about prioritization, distribution, and resource use. I wonder if making judgments is built into the role of the evaluator; is it even taught to the nascent evaluator? I also wonder if the Principal Investigator (PI) has much to say about the judgments. What if the evaluator interprets the findings one way and the PI doesn’t agree? Is that political? Or not? Does the PI have the final say about what the outcomes mean (the prioritization, distribution, and resource use)? Does the evaluator make recommendations or does the evaluator only draw conclusions? Then where do comments on the prioritization, the distribution, and the resource use come into the discussion? Are they recommendations or are they conclusions?

I decided I would see what my library says about politics: Scriven’s Thesaurus* talks about the politics of evaluation; Fitzpatrick, Sanders, and Worthen* have a chapter on “Political, Interpersonal, and Ethical Issues in Evaluation” (chapter 3); Rossi, Lipsey, and Freeman* have a section on political context (pp. 18-20) and a section on political process (pp. 381-393) that includes policy and policy implications. The 1982 Cronbach volume (Designing Evaluations of Educational and Social Programs)* has a brief discussion (of multiple perspectives), and the classic 1980 volume, Toward Reform of Program Evaluation*, also addresses the topic. Lest I neglect to include those authors who ascribe to the naturalistic approaches, Guba and Lincoln talk about the politics of evaluation (pp. 295-299) in their 1981 volume, Effective Evaluation*. The political aspects of evaluation have been part of the field for a long time.

So, because politics has been and continues to be part of evaluation, perhaps what Mohan says is relevant. When I look at Scriven’s comments in the Thesaurus, the comment that stands out is, “Better education for the citizen about–and in–evaluation, may be the best route to improvement, short of a political leader with the charisma to persuade us of anything and the brains to persuade us to improve our critical thinking.” Since the likelihood that we will see such a political leader is slim, perhaps education is the best approach. And, like Mohan says, invite them to the conference. (After all, education comes in all sizes and experiences.) Perhaps then policy makers, politicians, press, and public will be able to understand and make a difference BECAUSE OF EVALUATION!

 

*Scriven, M. (1991). Evaluation thesaurus. Newbury Park, CA: Sage.

*Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

*Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.

*Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass Inc. Publishers.

*Cronbach, L. J. et al. (1980). Toward reform of program evaluation. San Francisco, CA: Jossey-Bass Inc. Publishers.

*Guba, E. G. & Lincoln, Y. S. (1981). Effective evaluation. San Francisco, CA: Jossey-Bass Inc. Publishers.

 

 

Taken from the Plexus Calls email for Friday, May 29, 2015: “What is a simple rule? Royce Holladay has described simple rules as the ‘specific, uncomplicated instructions that guide behavior and create the structure within which human beings can live their lives.’” How do individuals, organizations, and businesses identify their simple rules? What are the guidelines that can help align their values and their actions?

First, a little about Royce Holladay, also from the same email: Royce Holladay is co-author, with Mallary Tytel, of Simple Rules: Radical Inquiry into Self, a book that aids recognition of the patterns that show up repeatedly in our lives. With that knowledge, individuals and groups are better able to use stories, metaphors, and other tools to examine the interactions that influence the course of our lives and careers.

What if you substituted “evaluator” for “human beings”? (Yes, I know that evaluators are humans first and then evaluators.) What would you say about simple rules as evaluators? What guidelines can help align evaluators’ values and actions?

Last week I spoke of the AEA Guiding Principles and the Joint Committee Program Evaluation Standards. Perhaps they serve as the simple rules for evaluators? They are simple rules (though not prescriptive, just suggestive). The AEA isn’t the ethics police, only a guide. Go online and read the Guiding Principles. They are simple. They are clear. There are only five.

    1. Systematic Inquiry: Evaluators conduct systematic, data-based inquiries.
    2. Competence: Evaluators provide competent performance to stakeholders.
    3. Integrity/Honesty:  Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process.
    4. Respect for People: Evaluators respect the security, dignity and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
    5. Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

The Program Evaluation Standards are also clear. There are also five (and those five have several parts so they are not as simple).

  1. Utility (8 sub-parts)
  2. Feasibility (4 sub-parts)
  3. Propriety (7 sub-parts)
  4. Accuracy (8 sub-parts)
  5. Evaluation Accountability (3 sub-parts)

You can download the Guiding Principles from the AEA website. You can get the Standards book here or here. If they are not on your shelf, they need to be. They are simple rules.

my two cents.

molly.