Category Archives: Methodology
Conference evaluation
The OSU Extension Service conference started today (#OSUExtCon). There are concurrent sessions, plenary sessions, workshops, Twitter feeds (Jeff Hino is tweeting), tours, receptions, and meal gatherings. There are lots of activities, and they cover four days. But I want to talk about conference evaluation.
The thought occurs to me: “What difference is this making?” Ever the evaluator, I realize that the selection of sessions will be different next year (it was different last year), so I wonder how valuable it is to evaluate the concurrent sessions. Given that time doesn’t stand still (fortunately {or not, depending}), the plenary sessions will also be different. Basically, the conference this year will be different from the conference the next time. Yes, it will be valuable for the presenters to have feedback on what they have done, and it will be useful for conference planners to have feedback on various aspects of the conference. I still have to ask, “Did it make a difference?”
A longtime colleague of mine (formerly at Pennsylvania State University), Nancy Ellen Kiernan, proposed a method of evaluating conferences that I think is important to keep and use. She suggested the “listening post” as an evaluation method. She says, “The ‘Listening Posts’ consisted of a group of volunteer conference participants who agreed beforehand to ‘post’ themselves in the meeting rooms, corridors, and break rooms and record what conferees told them about the conference as it unfolded [not unlike Twitter, but with value; parenthetical added]. Employing listening posts is an informal yet structured way to get feedback at a conference or workshop without making participants use pencil and paper.” She described the method in Tipsheet #5 and published it in the Journal of Extension (JoE), the peer-reviewed monthly online publication.
Quoting from the abstract of the JoE article: “Extension agents often ask, ‘Isn’t there an informal but somewhat structured way to get feedback at a conference or workshop without using a survey?’ This article describes the use of ‘Listening Posts’ and the author gives a number of practical tips for putting this qualitative strategy to use. Benefits include: quality feedback, high participation and enthusiastic support from conferees and the chance to build program ownership among conference workers. Deficits: could exclude very shy persons or result in information most salient to participants.”
I’ve used this method. It works. It solicits information about what difference the conference made, not whether the participants liked or didn’t like the conference (which is what conference evaluations usually ask). Nancy Ellen suggests that the listening post collectors ask the following questions:
- “What did you think of the idea of …this conference?”
- “What is one idea or suggestion that you found useful for your professional work?” (the value/difference question)
- Then, she suggests, ask the participant to tell you anything else about the conference that is important for us to know.
Make sure the data collectors are distinctive (easily identifiable as listening posts). Make sure they do not ask any additional questions. The results will be interesting.
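Kiernan’s tipsheet specifies the prompts, not how to compile what the volunteers hear. If you want the notes to be consistent across posts, a simple structured log helps. Below is a minimal sketch in Python; the CSV layout and field names are my own illustration, not part of her method.

```python
import csv
from datetime import datetime

# Kiernan's three listening-post prompts (paraphrased from above).
PROMPTS = [
    "What did you think of the idea of ... this conference?",
    "What is one idea or suggestion you found useful for your professional work?",
    "Anything else about the conference that is important for us to know?",
]

def record_response(post_location, responses, outfile="listening_posts.csv"):
    """Append one conferee's responses (in PROMPTS order) to a running CSV log."""
    with open(outfile, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(timespec="minutes"), post_location, *responses]
        )

# Example: a volunteer stationed in the break room logs one conversation.
record_response("break room", [
    "Liked the regional mix of presenters",
    "Will try the retrospective pre/post format in my own workshops",
    "Wi-Fi in the breakout rooms was spotty",
])
```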
AEA and Evaluation 2015
I got back to the office Monday after spending last week in Chicago at the AEA annual conference, Evaluation 2015. Next year AEA will be in Atlanta, October 24-29, 2016. Mark your calendars!
I am tired. I take a breath (many breaths), try to catch up (I don’t), and continue to read my email (hundreds of emails). I’m sure there are some I will miss; I always do. In the meantime, I process what I experienced and pass the conference through my criteria for a successful conference. Did I
- See (and visit with) three longtime friends: yes.
- Get three new ideas: maybe.
- Meet three new people I’d like to add to my “friendlies” category: maybe.
Why three? It seemed like a good number: more than one (not representative) and fewer than five (too hard to remember). Continue reading
KASA
KASA. You’ve heard the term many times. Have you really stopped to think about what it means? What evaluation approach will you use if you want to determine a difference in KASA? What analyses will you use? How will you report the findings?
Probably not. You just know that you need to measure KNOWLEDGE, ATTITUDES, SKILLS, and ASPIRATIONS.
The Encyclopedia of Evaluation (edited by Sandra Mathison) says that knowledge, attitudes, skills, and aspirations influence the adoption of selected practices and technologies (i.e., programs). Claude Bennett uses KASA in his TOP model. I’m sure there are other sources. Continue reading
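If you do set out to measure a difference in KASA, one common design is a paired pre/post (or retrospective pre/post) survey with Likert-type items for each of the four domains. Here is a minimal sketch, with invented ratings; because the items are ordinal, it uses a Wilcoxon signed-rank test rather than a paired t-test. This is one defensible choice, not the only one.

```python
import numpy as np
from scipy.stats import wilcoxon

# Invented paired self-ratings (1-5 Likert) for one KASA domain (e.g., skills):
# each position is one participant's before/after rating.
before = np.array([2, 3, 2, 1, 3, 2, 4, 2, 3, 2])
after  = np.array([4, 4, 3, 3, 4, 3, 5, 3, 4, 4])

# Wilcoxon signed-rank test: a nonparametric test for paired ordinal data.
stat, p = wilcoxon(before, after)
median_change = np.median(after - before)

print(f"median change = {median_change}, W = {stat}, p = {p:.4f}")
```

Reporting the median change alongside the test keeps the focus on the size of the difference, not just its statistical significance.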
Taking a stand.
I just got back from a road trip across Southern Alabama with my younger daughter. We started from Birmingham and drove a very circuitous route ending in Mobile and the surrounding areas, then returned to Birmingham for her to start her second year at Birmingham-Southern College.
As we traveled, I read a book by Bill McKibben (one of many he has written) called Oil and Honey: The Education of an Unlikely Activist. It is a memoir, a personal recounting of the early years of this decade, which correspond with the years my older daughter was in college (2011-2014). I have met Bill McKibben, who is credited with starting the non-profit 350.org in 2008 and is currently listed as its “senior adviser and co-founder”. He is a passionate, soft-spoken man who believes that the world is on a short fuse. He really seems to believe that there is a better way to have a future. He, like Gandhi, is taking a stand. Oil and Honey puts into action Gandhi’s saying about being the change you want to see. As the subtitle indicates, McKibben is an unlikely activist: a self-described non-leader who led, and now advises, the global effort to increase awareness of climate change/chaos. When your belief is on the line, you do what has to be done.
Evaluators are the same way. When your belief is on the line, you do what has to be done. And, hopefully, in the process you are the change that you want to see in the world. But know that it cannot happen one pipeline at a time; the fossil fuel industry has too much money. So what do you do? You start a campaign. That is what 350.org has done: “There are currently fossil fuel divestment campaigns at 308 colleges and universities, 105 cities and states, and 6 religious institutions” (Wikipedia, 350.org; scroll down to the heading “Fossil Fuel Divestment” to see the complete discussion). Those are clear numbers, hard data for consumption. (Unfortunately, the divestment campaign at OSU failed.)
So I see the question as one of impact, though not specifically world peace (my ultimate impact). If there is no planet on which to work for world peace, there is no need for world peace. Evaluators can help. They can look at data critically. They can read the numbers. They can gather the words. This may be the best place for the use of pictures (they are, after all, worth 1,000 words). Perhaps by combining efforts, the outcome will be an impact that benefits all humanity and builds a tomorrow for the babies born today.
molly.
Knowledge is personal
Knowledge is personal!
A while ago I read a blog post by Harold Jarche, who works in the field of knowledge management. That field claims that knowledge can be transferred; he claims that knowledge cannot be transferred. He goes on to say that we can share (transfer) information, and we can share data; we cannot share knowledge. I say that once we share the information, the other person has the choice to make that shared information part of her/his knowledge or not. Stories help individuals see (albeit briefly) others’ knowledge.
Now, puzzling over the phrase “knowledge is personal”, I would say, “The only thing ‘they’ can’t take away from you is knowledge.” (The corollary to that is, “They may take your car, your house, your life; they cannot take your knowledge!”)
So I am reminded, when I remember that knowledge is personal and cannot be taken away from you, that there are evaluation movements and models established to empower people with knowledge, specifically evaluation knowledge. I have to wonder, then: by sharing the information, are we sharing knowledge? Are people really empowered? To be sure, we share information (in this case, about how to plan, implement, analyze, and report an evaluation). Is that sharing knowledge?
Fetterman (and Wandersman, in their 2005 Guilford Press volume*) says that “empowerment evaluation is committed to contributing to knowledge creation”. (Yes, they are citing Lentz et al., 2005, and Nonaka & Takeuchi, 1995, just to be transparent.) So I wonder: if knowledge is personal and known only to the individual, how can “they” say that empowerment evaluation contributes to knowledge creation? Is it because knowledge is personal and every individual creates her/his own knowledge through that experience? Or does empowerment evaluation contribute not to knowledge creation but to information creation? (NOTE: This is not a criticism of empowerment evaluation, only an example, using empowerment evaluation, of the dissonance I’m experiencing; in fact, Fetterman defines empowerment evaluation as “the use of evaluation concepts, techniques, and findings to foster improvement and self-determination”. It is only later in the cited volume that the statement about knowledge creation appears.)
Given that knowledge is personal, it would make sense that knowledge is implicit, and implicit knowledge requires interpretation to make sense of it. Hence, stories: stories can help share implicit knowledge. As each individual seeks information, that same individual turns the information into knowledge, and that knowledge is implicit. Jarche says, “As each person seeks information, makes sense of it through reflection and articulation, and then shares it through conversation…” I would add, “and it is shared as information”.
Keep that in mind the next time you want to measure knowledge as part of KASA on a survey.
molly.
- * Fetterman, D. M., & Wandersman, A. (Eds.). (2005). Empowerment evaluation principles in practice. New York: Guilford Press.
- Lentz, B. E., Imm, P. S., Yost, J. B., Johnson, N. P., Barron, C., Lindberg, M. S., & Treistman, J. (2005). In D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in practice. New York: Guilford Press.
- Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company. New York: Oxford University Press.
Change
Like many people, I find change hard. In fact, I really don’t like change. I think this is the result of a high school experience: one-third of my classmates left each year. (I was a military offspring; we changed assignments every three years.)
Yet, in today’s world change is probably the only constant. Does that make it fun? Not necessarily. Does that make it easy? Nope. Does that make it necessary? Yep.
Evaluators deal with change regularly. New programs are required; those must be evaluated. Old programs are revised; those must be evaluated. New approaches are developed and presented to the field. (When I first became an evaluator, there wasn’t a systems approach to evaluation; there wasn’t developmental evaluation; I could continue.) New technologies are available and must be used even if the old one wasn’t broken (even for those of us who are techno-peasants).
I just finished a major qualitative evaluation that involved real-time virtual focus groups. When I researched this topic (virtual focus groups), I found a lot of information about asynchronous focus groups, focus groups using conferencing software, even synchronous focus groups without video. I didn’t find anything about using real-time, synchronous, virtual focus groups. Unfortunately, we didn’t have much money, even though there are services available. Continue reading
Meta-evaluation
At a loss for what to write, I once again went to one of my favorite books, Michael Scriven’s Evaluation Thesaurus. This time, when I opened the volume at random, I came upon the entry for meta-evaluation. This is a worthy topic, one that isn’t addressed often. So this week I’ll talk about meta-evaluation, quoting Scriven as I do.
First, what is meta-evaluation? It is the evaluation of evaluations (and “indirectly, the evaluation of evaluators”). Scriven suggests applying an evaluation-specific checklist or a Key Evaluation Checklist (KEC) (p. 228). Although this approach can be used to evaluate one’s own work, the results of self-evaluation are typically unreliable, which implies (if one can afford it) engaging an independent evaluator to conduct a meta-evaluation of your evaluations.
Scriven goes on to make the following key points:
- Meta-evaluation is the professional imperative of evaluation;
- Meta-evaluation can be done formatively or summatively or both; and
- Use the KEC to generate a new evaluation OR apply the checklist to the original evaluation as a product.
He lists the parts of a KEC involved in a meta-evaluation; the process includes 13 steps (pp. 230-231).
He gives the following reference:
Stufflebeam, D. (1981). Meta-evaluation: Concepts, standards, and uses. In R. Berk (Ed.), Educational evaluation methodology: The state of the art. Baltimore, MD: Johns Hopkins.
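To make Scriven’s “apply the checklist to the original evaluation as a product” concrete, here is a minimal sketch of scoring an evaluation report against a checklist. The checkpoint names below are hypothetical placeholders, not Scriven’s actual 13 steps; consult the Thesaurus (pp. 230-231) for the real list.

```python
# Hypothetical checkpoints for illustration; Scriven's KEC has 13 steps
# (Evaluation Thesaurus, pp. 230-231).
CHECKPOINTS = [
    "Background and context described",
    "Evaluand clearly identified",
    "Values and criteria made explicit",
    "Methods match the evaluation questions",
    "Conclusions follow from the evidence",
]

def meta_evaluate(report):
    """Score an evaluation report (checkpoint -> met?) and return % met."""
    met = sum(report.get(checkpoint, False) for checkpoint in CHECKPOINTS)
    return 100 * met / len(CHECKPOINTS)

# Example: judging one evaluation report as a product.
report = {
    "Background and context described": True,
    "Evaluand clearly identified": True,
    "Values and criteria made explicit": False,
    "Methods match the evaluation questions": True,
    "Conclusions follow from the evidence": True,
}
print(f"{meta_evaluate(report):.0f}% of checkpoints met")  # 80%
```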
Inferential statistics
This is a link to an editorial in Basic and Applied Social Psychology (BASP). It announces that the journal will no longer accept inferential statistics from authors.
“What?”, you ask. Does that have anything to do with evaluation? Yes and no. Most of my readers will not publish there. They will publish in evaluation journals (of which there are many) or, if they are Extension professionals, in the Journal of Extension. And as far as I know, BASP is the only journal that has established an outright ban on inferential statistics. So evaluation journals and JoE still accept inferential statistics.
Still, if one journal can ban their use, can others?
What exactly does that mean, no inferential statistics? The journal editors define the ban this way: “…the null hypothesis significance testing procedure is invalid and thus authors would be not required to perform it.” That means that authors must remove all references to p-values, t-values, F-values, and any statements about significant differences (or the lack thereof) prior to publication. The editors go on to discuss the use of confidence intervals (no) and Bayesian methods (case-by-case) and what statistical procedures the journal does require. Continue reading
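Concretely, under the ban an author would report descriptive statistics and effect sizes and stop there. A minimal sketch with invented data is below: the first part is the kind of reporting the editorial asks for (strong descriptives, including effect sizes), and the commented-out t-test at the end is the kind of NHSTP output that would have to go.

```python
import numpy as np
# from scipy.stats import ttest_ind  # NHSTP: the procedure BASP banned

rng = np.random.default_rng(42)
control   = rng.normal(50, 10, size=40)   # invented outcome scores
treatment = rng.normal(55, 10, size=40)

# Descriptive statistics: still acceptable under the ban.
for name, group in [("control", control), ("treatment", treatment)]:
    print(f"{name}: n={group.size}, M={group.mean():.1f}, SD={group.std(ddof=1):.1f}")

# Cohen's d as an effect size (a descriptive quantity, not a significance test).
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")

# t, p = ttest_ind(treatment, control)    # p-values: no longer allowed
```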
Assets and Needs
Evaluators are often the key people identified to conduct a needs assessment. A needs assessment examines the situation that exists before the intervention is designed or implemented. Hopefully. Currently, there is discussion in the field that rather than focusing on needs (i.e., what is missing, what is needed), there should be discussion of assets (i.e., what is available, strengths). My favorite go-to person on needs assessment is Jim Altschuld, who has published a volume that talks about bridging the gap between the two. In it, he talks about the difference between the two. He says, “Need is a noun, a problem that should be attended to or resolved. It is a gap or discrepancy between the ‘what should be’ and the ‘what is’ conditions”. However, assets/capacity building (emphasis added) refers “…to building a culture in an organization or community so that it can grow and change in accord with its strengths…” Continue reading
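Altschuld’s definition of need as the discrepancy between “what should be” and “what is” lends itself to a simple computation. Here is a minimal sketch with invented ratings; the item names are hypothetical, and the needs-assessment literature offers weighted versions of the same idea.

```python
# Invented mean ratings (1-5 scale) across respondents for three program areas:
# each entry is ("what should be" rating, "what is" rating).
items = {
    "grant writing":        (4.6, 2.1),
    "program evaluation":   (4.2, 3.0),
    "volunteer management": (3.8, 3.5),
}

# Need = the gap between the desired condition and the current condition.
gaps = {name: should - current for name, (should, current) in items.items()}

# Rank areas by size of gap to set priorities.
for name, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{name}: gap = {gap:.1f}")
```

Ranking the gaps turns the assessment into a priority list: the largest discrepancy, not the loudest request, rises to the top.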