Evaluation is political. I am reminded of that fact when I least expect it.
In yesterday’s AEA 365 post, I was reminded that social justice and political activity are probably linked and probably share many common traits.
In that post the author lists some of the principles she used recently:
Evaluation is a trans-discipline, drawing from many other ways of thinking. We know that politics (or anything political) is socially constructed. We know that ‘doing to’ is inadequate because ‘doing with’ and ‘doing as’ are ways of sharing knowledge. (I would strive for ‘doing as’.) We also know that there are multiple ways of knowing.
(See Belenky, Clinchy, Goldberger, and Tarule, Women’s Ways of Knowing, Basic Books, 1986, as one example.)
(See Gilligan, In a Different Voice, Harvard University Press, 1982, among others.)
How do evaluation, social justice, and politics relate?
What if you do not bring representation of the participant groups to the table?
What if they are not asked to be at the table, or asked for their opinion?
What if you do not ask the questions that need to be asked of that group?
To whom ARE your questions being addressed?
Is that equitable?
Being equitable is one aspect of social justice. There are others.
Evaluation needs to be equitable.
I will be in Atlanta next week at the American Evaluation Association conference.
Maybe I’ll see you there!
The person who was facilitating the session provided the group with clear guidelines.
The Vision statement, defined as “the desired future condition,” describes what will happen in 2-5 years (i.e., what will change?). We defined the change occurring (i.e., in the environment, the economy, the people). The group also identified what future conditions would be possible. We would write the vision statement so that it would happen within 2-5 years and be practical, measurable, and realistic. OK…
And be short…because that is what vision statements are.
The Mission statement (once the Vision statement was written and accepted) defined “HOW” we would get to the vision statement. This reminded me of process–something that is important in evaluation. So I went to my thesaurus to find out what that source said about process. Scriven to the rescue, again.
Scriven, in his Evaluation Thesaurus, defines process as the activity that occurs “…between the input and the output, between the start and finish.” Sounds like “how” to me. Process relates to process evaluation. I suggest you read the section on process evaluation on page 277 of the above-mentioned source.
Process evaluation rarely functions as the sole evaluation tool because of weak connections between “output quantity and quality”. Process evaluations will probably not generalize to other situations.
However, PROCESS evaluation “…must be looked at as part of any comprehensive evaluation, not as a substitute for inspection of outcomes…” The factors include “the legality of the process, the morality, the enjoyability, the truth of any claims involved, the implementation…, and whatever clues…” that can be provided.
Describing “how” something is to be done is not easy. It is neither output nor outcome. Process is HOW something will be accomplished if you have specific inputs. It happens between the inputs and the outputs.
To me, the group needs to read about process evaluation while crafting the mission statement in order to get to the HOW.
Logic Model cartoons.
The author of the blog fresh spectrum has offered an alternative to presenting survey data. He has a wonderful cartoon for this.
He is a wonderful resource. Use him. You can contact him through his blog.
(Thank you for this thought, Plexus Institute.)
No, I’m not talking about the current political situation in the US. I’m talking about evaluation.
Art Markman makes this comment (the “how do we make decisions…” comment) here. He says, “If you dislike every choice you’ve got, you’ll look for one to reject rather than one to prefer—subtle difference, big consequences.” He bases this opinion on research, saying that the rejection mind-set allows us to focus on negative information about options and fixate on the one with the smallest downside.
I wrote a blog about making a difference. Many people have read the original post, recently. And there have been many comments about it and the follow-up posts. Most people have made supportive comments. For example:
Some people have made a less than supportive comment. For example:
Some people have made comments that do not relate to content yet are relevant. For example:
Making a difference. I will keep writing. Making a difference needs to be measured. I keep in mind that stories (comments) are data with soul.
A less than supportive comment: What is outdated? I need specific comments to which to respond, please. Also, the post being referred to is from April 2012…over four years ago.
Some other comments: I can’t teach how to blog faster, for I know nothing about Blogspot. I only know a little about WordPress. Stories are data with a soul–important to remember when dealing with qualitative data.
AEA365 is honoring living evaluators for Labor Day (Monday, September 5, 2016).
Some of the living evaluators I know (Jim Altschuld, Tom Chapel, Michael Patton, Karen Kirkhart, Mel Mark, Lois-Ellin Datta, Bob Stake); some of them I don’t know (Norma Martinez-Rubin, Nora F. Murphy, Ruth P. Saunders, Art Hernandez, Debra Joy Perez). One I’m not sure of at all (Mariana Enriquez). Over the next two weeks, AEA365 is hosting a recognition of living evaluator luminaries.
The wonderful thing is that this gives me an opportunity to check out those I don’t know; to read about how others see them and what makes them special. I know that the relationships that develop over the years are dear, very dear.
I also know that the contributions that these folks have made to evaluation cannot be captured in 450 words (although we try). They are living giants, legends if you will.
These living evaluators have helped move the field to where it is today. Documenting their contributions to evaluation enriches the field. We remember them fondly.
If you don’t know them, look for them at AEA ’16 in Atlanta. Check out their professional development sessions or their other contributions (papers, posters, round-tables, books, etc.). Many of them have been significant contributors to AEA; some have only been with AEA since the early part of this century. All have made a meaningful contribution to AEA.
Many evaluators could be mentioned and are not. Sheila B. Robinson suggests that “…we recognize that many, many evaluators could and should be honored as well as the 13 we feature this time, and we hope to offer another invitation next year for those who would like to contribute a post, so look for that around this time next year, and sign up!”
The 13 featured evaluators are: James W. Altschuld, Thomas J. Chapel, Norma Martinez-Rubin, Michael Quinn Patton, Nora F. Murphy, Ruth P. Saunders, Art Hernandez, Karen Kirkhart, Mel Mark, Lois-Ellin Datta, Debra Joy Perez, Bob Stake, and Mariana Enriquez.
Sheila Robinson has an interesting post which she titled “Outputs are for programs. Outcomes are for people.” Sounds like a logic model to me.
Evaluating something (a strategic plan, an administrative model, a range management program) can be problematic. Especially if all you do is count. So “Do you want to count?” OR “Do you want to determine what difference you made?” I think it all relates to outputs and outcomes.
The model below explains the difference between outputs and outcomes.
(I tried to find a link on the University of Wisconsin website and, unfortunately, it is no longer there…go figure. Thanks to Sheila, I found this link, which talks about outputs and outcomes.) I think this model makes clear the difference between Outputs (activities and participation) and Outcomes-Impact (learning, behavior, and conditions).
The Oxford English Dictionary defines possible as capable of being (may/can exist, be done, or happen). It defines probable as worthy of acceptance, believable.
Somebody asked me what the difference was between science fiction and fantasy. Certainly the simple approach is that science fiction deals with the possible (if you can think it, it can happen), while fantasy deals with monsters, fairies, goblins, and other mythical creatures, i.e., majic and majical creatures.
(Disclaimer: I personally believe in majic; much of fantasy deals with magic.) I love the Arthurian legend (it could be fantasy; it has endured for so long it is believable). It is full of majic. I especially like the Marion Zimmer Bradley book, The Mists of Avalon. (I find the feminist perspective refreshing.)
Is fantasy always impossible, as Bradbury suggests, or is it just improbable? (Do the rules of physics apply?) After that minor digression, this takes me back to Bradbury’s quote and to evaluation. Bradbury also says that “Science fiction, again, is the history of ideas, and they’re always ideas that work themselves out and become real and happen in the world.” Not unlike evaluation. Evaluation works itself out and becomes real and happens. Usually.
Often, I am invited to be the evaluator of record after the program has started. I sigh. Then I have a lot of work to do. I must teach folks that evaluation is not an “add on” activity. I must also teach the folks how to identify the difference the program made. Then there is the issue of outputs (activities, participants) vs. outcomes (learning, behavior, conditions). Many principal investigators want to count differences pre-post.
Does the “how many” provide a picture of what difference the program made? If you start with no or few participants and you end with many participants, have you made a difference? Yes, it is possible to count. Counts often meet reporting requirements. They are possible. So is documenting the change in knowledge, behavior, and conditions. It takes more work and more money. It is possible. Will you get to world peace? Probably not. Even if you can think it. World peace may be probable; it may not be possible (at least in my lifetime).
AEA365 ran a blog on vulnerability recently (August 5, 2016). It cited the TED talk by Brené Brown on the same topic. Although I really enjoyed the talk (I haven’t met a TED talk I didn’t like), it was more than her discussion of vulnerability that I enjoyed (although I certainly enjoyed learning that vulnerability is the birthplace of joy and that connection is why we are here).
She talked about story and its relationship to qualitative data. She says that she is a qualitative researcher and she collects stories. She says that “stories are just data with a soul”. That made a lot of sense to me.
See, I’ve been struggling to figure out how to turn the story into a meaningful outcome without reducing it to a number. (I do not have an answer, yet. If any of you have any ideas, let me know.) She says (quoting a former research professor) that if you cannot measure it, it does not exist. If it doesn’t exist, then is whatever you are studying a figment of your imagination? So is there a way to capture a story and aggregate that story with other similar stories to get an outcome WITHOUT REDUCING IT TO A NUMBER? Given that stories are often messy, often complicated, and rich in what they tell the researcher, it occurred to me that stories are more than themes and content analysis. Stories are “data with a soul.”
Yet in any book on qualitative data analysis you will see that there is confusion in the analysis process. Is it the analysis of qualitative data, OR is it the qualitative analysis of data? Where do you put the modifier “qualitative”? To understand the distinction, a 2×2 visual might be helpful. (Adapted from Bernard, H. R., & Ryan, G. W. (1996). Qualitative data, quantitative analysis. Cultural Anthropology Methods Journal, 8(1), 9-11. Copyright © 1996 Sage Publications.)
We are doing data analysis in all four quadrants. Cell A is the qualitative analysis of qualitative data; Cell B is the qualitative analysis of quantitative data; Cell C is the quantitative analysis of qualitative data; and Cell D is the quantitative analysis of quantitative data. We are analyzing and capturing the deeper meaning of the data in cell A. Yes, we are analyzing data in the other cells (B, C, and D), just not capturing the deeper meaning of those data. So the question becomes “Do you want deeper meaning from your data?” or “Do you want a number from your data?” (I’m still working on relating this to story!)
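To make the quadrant distinction concrete, here is a minimal sketch. The excerpts and theme labels are hypothetical, invented purely for illustration; they are not from any study or from the original post. Counting coded themes (a quantitative analysis of qualitative data) reduces each story to a number, while returning to the full text of each story (a qualitative analysis of qualitative data) is where the deeper meaning lives.

```python
from collections import Counter

# Hypothetical interview excerpts (qualitative data), each already coded
# with a theme by the analyst. Both excerpts and themes are invented.
coded_excerpts = {
    "I finally felt heard at the meeting.": "belonging",
    "Nobody asked for our opinion.": "exclusion",
    "They decided without us again.": "exclusion",
    "We changed how we run the program together.": "change",
}

# Cell C: quantitative analysis of qualitative data -- the stories are
# reduced to theme counts, and the "soul" of each story is lost.
theme_counts = Counter(coded_excerpts.values())
print(theme_counts["exclusion"])  # 2

# Cell A: qualitative analysis of qualitative data -- the analyst returns
# to the full text of each story to interpret its meaning; the counts
# above cannot recover what the reduction discarded.
for excerpt, theme in coded_excerpts.items():
    print(f"[{theme}] {excerpt}")
```

The point of the sketch is only the contrast: once the stories are collapsed into `theme_counts`, no amount of arithmetic brings the stories back.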
It all depends on what you want when you analyze your data. If you want to reduce the data to a number, focus on cells B, C, and D. If you want deeper meaning, focus on cell A. What you want (and how you interpret the data) is where personal and situational bias occurs. No, you cannot be the “objective and dispassionate” scientist. That doesn’t happen in today’s world (probably ever, though I can only speak of today’s world). Everyone has biases, and they rear their heads (perhaps ugly heads) when least expected.
You have to try. Regardless.
This quote is often attributed to Eleanor Roosevelt (1884-1962); some sites attribute it to Donald Anderson Laird (1897-1969), a psychologist and author. That attribution is probably more accurate. I’m not sure that the origin of the saying is really important. It may be enough to keep in mind the saying itself. (I know–how does this relate to evaluation? Trust me, it does.)
Before I was an evaluator, I was a child therapist (I also treated young women). I learned many skills as a therapist that have served me well as an evaluator: skills like listening, standing up for yourself, and looking at alternatives. Which leads me to this saying. I had to “handle” others all the time while at the same time handling myself. I could not “blow up” when reprimanded. I could not become discouraged when someone (the client, the funder) criticized me. I had to learn to laugh when the joke was on me. I had to keep my spirits up when things went wrong. I had to keep cool in emergencies. I had to learn to tune out gossip and negative comments from others. This was a hard time for me. I tend to be passionate when I have an opinion, and I have/had opinions (often).
As an evaluator, I am still passionate. Once my evaluation “on” button is pushed, it is hard to turn it off. Yet I still have to handle people. This morning, for example, I met with a fellow faculty member. I had to listen. I had to look for (and at) alternatives. I “handled” with my head; remember, I am passionate about evaluation. I provided her with alternatives and followed through with those alternatives. I handled with my heart.
When others are involved (and in evaluation there are always others), they must be handled with care, with the heart. It goes back to the standards (propriety) and the guiding principles (integrity/honesty, direct respect for people, and responsibilities for general and public welfare). In the current times, it is especially important to have direct respect for people. All people. (Regardless of race, ethnicity, religion, gender identity, sex, national origin, veteran status, and disability.) To be honest and have integrity. One way to make sure you have integrity is to handle with your heart.