The US just celebrated Thanksgiving, the annual day of thankfulness; Canada celebrated its Thanksgiving in mid-October (October 12). Although other countries celebrate versions of the holiday, in the US and Canada it originated as a celebration of the previous harvest.

Certainly, the Guiding Principles and the Program Evaluation Standards provide evaluators with a framework to conduct evaluation work. The work for which I am thankful.

KASA. You’ve heard the term many times. Have you really stopped to think about what it means? What evaluation approach will you use if you want to determine a difference in KASA? What analyses will you use? How will you report the findings?

Probably not. You just know that you need to measure KNOWLEDGE, ATTITUDE, SKILLS, and ASPIRATIONS.

The Encyclopedia of Evaluation (edited by Sandra Mathison) says that they influence the adoption of selected practices and technologies (i.e., programs). Claude Bennett uses KASA in his TOP model (the Bennett Hierarchy). I’m sure there are other sources.

Knowledge is personal!

A while ago I read a blog post by Harold Jarche. He was talking about knowledge management (the field in which he works). That field claims that knowledge can be transferred; he claims that it cannot. He goes on to say that we can share (transfer) information and we can share data; we cannot share knowledge. I say that once we share the information, the other person has the choice to make that shared information part of her/his knowledge or not. Stories help individuals see (albeit briefly) others’ knowledge.

Now, puzzling over the phrase “knowledge is personal,” I would say, “The only thing ‘they’ can’t take away from you is knowledge.” (The corollary to that is, “They may take your car, your house, your life; they cannot take your knowledge!”)

So, remembering that knowledge is personal and cannot be taken away from you, I am reminded that there are evaluation movements and models established to empower people with knowledge, specifically evaluation knowledge. I must wonder, then: by sharing the information, are we sharing knowledge? Are people really empowered? To be sure, we share information (in this case, about how to plan, implement, analyze, and report an evaluation). Is that sharing knowledge?

Fetterman and Wandersman (in their 2005 Guilford Press volume*) say that “empowerment evaluation is committed to contributing to knowledge creation.” (Yes, to be transparent, they are citing Lentz et al., 2005*, and Nonaka & Takeuchi, 1995*.) So I wonder: if knowledge is personal and known only to the individual, how can “they” say that empowerment evaluation contributes to knowledge creation? Is it because knowledge is personal and every individual creates her/his own knowledge through that experience? Or does empowerment evaluation contribute not to knowledge creation but to information creation? (NOTE: This is not a criticism of empowerment evaluation, only an example, using empowerment evaluation, of the dissonance I’m experiencing; in fact, Fetterman defines empowerment evaluation as “the use of evaluation concepts, techniques, and findings to foster improvement and self-determination.” It is only later in the cited volume that the statement about knowledge creation appears.)

Given that knowledge is personal, it would make sense that knowledge is implicit, and implicit knowledge requires interpretation to make sense of it. Hence stories, because stories can help share implicit knowledge. As each individual seeks information, that same individual turns that information into knowledge, and that knowledge is implicit. Jarche says, “As each person seeks information, makes sense of it through reflection and articulation, and then shares it through conversation…” I would add, “and it is shared as information.”

Keep that in mind the next time you want to measure knowledge as part of KASA on a survey.

my two cents.

molly.

  1. Fetterman, D. M., & Wandersman, A. (Eds.) (2005). Empowerment evaluation principles in practice. New York: Guilford Press.
  2. Lentz, B. E., Imm, P. S., Yost, J. B., Johnson, N. P., Barron, C., Lindberg, M. S., & Treistman, J. (2005). In D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in practice. New York: Guilford Press.
  3. Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company. New York: Oxford University Press.

People often say one thing and do another.

This came home clearly to me with a nutrition project conducted with fifth- and sixth-grade students over two consecutive semesters. We taught them various nutrition and fitness concepts (nutrient density, empty calories, food groups, energy requirements, etc.). At the beginning, we asked them which snack they would choose if they were with their friends (apple, carrots, peanut butter crackers, chocolate chip cookie, potato chips); at the end of the project we asked the same question. Both pre and post, they said they would choose an apple. On the pretest, the remaining choices in descending order were carrots, potato chips, chocolate chip cookies, and peanut butter crackers; on the posttest, chocolate chip cookies, carrots, potato chips, and peanut butter crackers. (Although the sample sizes were reasonable [i.e., greater than 30], I’m not sure that the difference between 13.0% [potato chips] and 12.7% [peanut butter crackers] was significant. I do not have those data.) Then we also asked them to choose one real snack. What they said and what they did were not the same, even at the end of the project: cookies won, hands down, in both the treatment and control groups. Discouraging to say the least; disappointing to be sure.
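As an aside, that significance question is easy to check once you assume group sizes. Below is a minimal sketch of a two-sided two-proportion z-test in plain Python; the counts (39/300 = 13.0% vs 38/300 ≈ 12.7%) are hypothetical stand-ins, since I do not have the actual n’s.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided p-value
    return z, p_value

# Hypothetical counts: 39/300 = 13.0% vs 38/300 ≈ 12.7% (actual n's unreported)
z, p = two_proportion_ztest(39, 300, 38, 300)
print(f"z = {z:.2f}, p = {p:.2f}")  # z ≈ 0.12, p ≈ 0.90: nowhere near significant
```

At anything like those sample sizes, a 0.3-percentage-point difference is statistical noise.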

Although this program ran from September through April, far longer than the typical half-day (or even full-day) professional development conference, what the students said was still different from what they did. We attempted to measure knowledge, attitude, and behavior. We did not measure intention to change.

That experience reminded me of a finding of Paul Mazmanian. (I know I’ve talked about him and his work before; his work bears repeating.) He did a randomized controlled trial involving continuing medical education and commitment to change. After all, any program worth its salt will result in behavior change, right? So Mazmanian set up this experiment involving doctors, perhaps the hardest group with whom to try to change behavior.

He found that “…physicians in both the study and the control groups were significantly more likely to change (47% vs 7%, p < 0.001) IF they indicated an INTENT (emphasis added in both cases) to change immediately following the lecture” (i.e., the continuing education program). A further study found that a signature stating that they would change did not increase the likelihood that they would change.
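For what it’s worth, an effect that large clears p < 0.001 under almost any plausible group size. A quick check, reusing the two_proportion_ztest sketch above with a hypothetical n = 100 per group (the actual group sizes are in the 1998 paper, not quoted here):

```python
# Hypothetical n = 100 per group; the study reports 47% vs 7%, p < 0.001.
z, p = two_proportion_ztest(47, 100, 7, 100)
print(f"z = {z:.2f}, p = {p:.1e}")  # z ≈ 6.37, p ≈ 1.9e-10, well under 0.001
```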

Bottom line: measure intention to change when evaluating your programs.

References:

Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (August 1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.

Mazmanian, P. E., Johnson, R. E., Zhang, A., Boothby, J., & Yeatts, E. J. (June 2001). Effects of a signature on rates of change: A randomized controlled trial involving continuing education and the commitment-to-change model. Academic Medicine, 76(6), 642-646.