Recently, I read that 45% of individuals make New Year’s Resolutions and only 8% actually achieve success. Hmmm…not a friendly probability. Perhaps intentions about behavior are indeed more realistic. (I haven’t seen the statistics on that potential change. Mazmanian (et al., 1998) does say stated intention to change is the most significant behavioral indicator.) My intention for 2016 is to provide content related to or about evaluation that gives you something you didn’t have before you read the post (Point one). Examples follow: Continue reading

The US just celebrated Thanksgiving, the annual day of thankfulness. Canada celebrated Thanksgiving in mid-October (October 12). Although other countries celebrate versions of the holiday, both the US and Canada originally celebrated in honor of the previous harvest.

Certainly, the Guiding Principles and the Program Evaluation Standards provide evaluators with a framework to conduct evaluation work. The work for which I am thankful.

Continue reading

My friend and colleague, Patricia Rogers, says of cognitive bias, “It would be good to think through these in terms of systematic evaluation approaches and the extent to which they address these.” This was in response to the article here. The article says that the human brain is capable of 10 to the 16th power (a big number) processes per second. Despite being faster than a speeding bullet, etc., the human brain has “annoying glitches [that] cause us to make questionable decisions and reach erroneous conclusions.”

Bias is something that evaluators deal with all the time. There is desired response bias, non-response bias, recency and immediacy bias, measurement bias, and…need I say more? Isn’t evaluation, and aren’t evaluators, supposed to be “objective”? Aren’t we as evaluators supposed to behave in an ethical manner? To have dealt with potential bias and conflicts of interest? That is where cognitive biases appear. And you might not know it at all. Continue reading

KASA. You’ve heard the term many times. Have you really stopped to think about what it means? What evaluation approach you will use if you want to determine a difference in KASA? What analyses you will use? How you will report the findings?

Probably not. You just know that you need to measure KNOWLEDGE, ATTITUDE, SKILLS, and ASPIRATIONS.
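If, say, you measured the knowledge piece of KASA with the same rated items before and after a program, one common analysis is a paired pre/post comparison. Here is a minimal sketch in Python; the scores, scale, and sample size are all hypothetical, and this is one illustrative option, not the analysis any particular model prescribes:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical knowledge scores (1-5 scale) for ten participants,
# measured before and after a program.
pre = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4]
post = [4, 4, 3, 5, 4, 3, 4, 4, 3, 5]

# Paired differences: how much did each person's score change?
diffs = [b - a for a, b in zip(pre, post)]
d_mean = mean(diffs)
d_sd = stdev(diffs)
n = len(diffs)

# Paired t statistic (compare against a t table with n - 1 df).
t = d_mean / (d_sd / sqrt(n))
print(f"mean change = {d_mean:.2f}, t = {t:.2f}")
```

A retrospective pre/post design, different items, or a qualitative approach could each be the better choice, which is exactly the kind of decision the questions above are asking you to make deliberately.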

The Encyclopedia of Evaluation (edited by Sandra Mathison) says that KASAs influence the adoption of selected practices and technologies (i.e., programs). Claude Bennett uses KASA in his TOP model (the Bennett Hierarchy). I’m sure there are other sources. Continue reading

I’ve been stuck.

I haven’t blogged for three weeks. I haven’t blogged because I don’t have a topic. Oh, I’ve plenty to say (I am never at a loss for words… 🙂 ). I want something that relates to evaluation. Relates clearly. Without question. Evaluation.

So after 5 years, I’m going to start over. Evaluation is an everyday activity!

Evaluative thinking is something you do every day; probably all day. (I don’t know about when you are asleep, so I said probably.) I think evaluative thinking is one of those skills that everyone needs to learn systematically. Everyone learns at least a part of evaluative thinking as they grow; the learning may not be systematic. I would put that skill in the same category as critical (not negative but thoughtful) thinking, team building, leadership, communication skills (both verbal and written), and technological facility, as well as some others which escape me right now. I would add systematic evaluative thinking.

Everyone has criteria on which decisions are based. Look at how you choose a package of cookies or a can of corn at the grocery store. What criteria do you use for choosing? Yet that wasn’t taught to you; it was just something you developed. Evaluative thinking is more than just choosing what you want for dinner. AARP lists problem solving as one of the critical thinking skills. I think it is more than just problem solving; I do agree that it is a critical thinking skill (see the graphic on core critical thinking skills, from Grant Tilus, Rasmussen College).

So you think thoughtfully about most events/activities/things that you do throughout the day. And you learn over time what works and what doesn’t; what has value and what doesn’t. You learn to discern the conditions under which something works; you learn what changes the composition of the outcome. You begin to think evaluatively about most things. One day you realize that you are thinking critically about what you can, will, and need to do. Evaluative thinking has become systematic. You realize that it depends on many factors. You realize that evaluative thinking is a part of who you are. You are an evaluator, even if you are a psychologist or geologist or engineer or educator first.

 

my two cents.

molly.

Ignorance is a choice.

Not knowing may be “easier”; you know, less confusing, less intimidating, less frightening, less embarrassing.

I remember when I first asked the question, “Is it easier not knowing?” What I was asking was “By choosing to not know, did I really make a choice, or was it a default position?” Because if you consciously avoid knowing, do you really not know, or are you just ignoring the obvious? Perhaps it goes back to the saying common on social media today: “Great people talk about ideas; average people talk about things; small people talk about other people” (which is a variation of what Eleanor Roosevelt said). Continue reading

“Fate is chance; destiny is choice.”

Went looking for who said that originally so that I could give credit. Found this as the closest saying: “Destiny is no matter of chance. It is a matter of choice: It is not a thing to be waited for, it is a thing to be achieved.”

William Jennings Bryan

 

Evaluation is like destiny. There are many choices to make. How do you choose? What do you choose?

Would you listen to the dictates of the Principal Investigator even if you know there are other, perhaps better, ways to evaluate the program?

What about collecting data? Are you collecting it because it would be “nice”? OR are you collecting it because you will use the data to answer a question?

What tools do you use to make your choices? What resources do you use?

I’m really curious. It is summer and although I have a list (long to be sure) of reading, I wonder what else is out there, specifically relating to making choices? (And yes, I could use my search engine; I’d rather hear from my readers!)

Let me know. PLEASE!

my two cents.

molly.

Knowledge is personal!

A while ago I read a blog by Harold Jarche. He was talking about knowledge management (the field in which he works). That field makes the claim that knowledge can be transferred; he makes the claim that it cannot. He goes on to say that we can share (transfer) information; we can share data; we cannot share knowledge. I say once we share the information, the other person has the choice whether or not to make that shared information part of her/his knowledge. Stories help individuals see (albeit briefly) others’ knowledge.

Now, puzzling over the phrase “Knowledge is personal”, I would say, “The only thing ‘they’ can’t take away from you is knowledge.” (The corollary to that is “They may take your car, your house, your life; they cannot take your knowledge!”)

So I am reminded, when I remember that knowledge is personal and cannot be taken away from you, that there are evaluation movements and models which are established to empower people with knowledge, specifically evaluation knowledge. I must wonder, then, if by sharing the information, we are sharing knowledge? If people are really empowered? To be sure, we share information (in this case about how to plan, implement, analyze, and report an evaluation). Is that sharing knowledge?

Fetterman (and Wandersman, in their 2005 Guilford Press volume*) says that “empowerment evaluation is committed to contributing to knowledge creation”. (Yes, they are citing Lentz et al., 2005*, and Nonaka & Takeuchi, 1995*, just to be transparent.) So I wonder, if knowledge is personal and known only to the individual, how can “they” say that empowerment evaluation is contributing to knowledge creation? Is it because knowledge is personal and every individual creates her/his own knowledge through that experience? Or does empowerment evaluation contribute NOT to knowledge creation but to information creation? (NOTE: This is not a criticism of empowerment evaluation, only an example, using empowerment evaluation, of the dissonance I’m experiencing; in fact, Fetterman defines empowerment evaluation as “the use of evaluation concepts, techniques, and findings to foster improvement and self-determination”. It is only later in the volume cited that the statement about knowledge creation appears.)

Given that knowledge is personal, it would make sense that knowledge is implicit, and implicit knowledge requires interpretation to make sense of it. Hence, stories: stories can help share implicit knowledge. As each individual seeks information, that individual makes the information into knowledge, and that knowledge is implicit. Jarche says, “As each person seeks information, makes sense of it through reflection and articulation, and then shares it through conversation…” I would add, “and it is shared as information”.

Keep that in mind the next time you want to measure knowledge as part of KASA on a survey.

my two cents.

molly.

  1. * Fetterman, D. M., & Wandersman, A. (Eds.). (2005). Empowerment evaluation principles in practice. New York: Guilford Press.
  2. * Lentz, B. E., Imm, P. S., Yost, J. B., Johnson, N. P., Barron, C., Lindberg, M. S., & Treistman, J. (2005). In D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in practice. New York: Guilford Press.
  3. * Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company. New York: Oxford University Press.