Recently, I read that 45% of individuals make New Year’s Resolutions and only 8% actually achieve success. Hmmm…not a friendly probability. Perhaps intentions about behavior are indeed more realistic. (I haven’t seen the statistics on that potential change. Mazmanian et al. (1998) do say that stated intention to change is the most significant behavioral indicator.) My intention for 2016 is to provide content related to or about evaluation that gives you something you didn’t have before you read the post (point one). Examples follow:
First, let me say that getting to world peace will not happen in my lifetime (sigh…) and world peace is the ultimate impact. Everything else is an outcome. It may be a long-term outcome, that is, a condition change (social, economic, environmental, or civic), or not. Just because the powers that be use a term doesn’t mean the term is being used correctly!
Then let me say that evaluation is the way to know you got to that impact…ultimately, world peace. Ultimately. In the meantime, you will need to find approximate (proxy) measures.
Last week, I attended the Engagement Scholarship Consortium conference in State College, PA, home of Penn State. I had the good fortune to see long-time friends, meet new people, and get a few new ideas. One of the long-time friends I was able to visit with was Nancy Franz, Professor Emeritus, Iowa State University. She did a session called “Four steps to measuring and articulating engagement impact”.
Basically, she reduced program evaluation to four steps (hence the title). And since engagement scholarship is a “program”, it needs to be evaluated to make sure it is making a difference. Folks are slowly coming to that idea, if the attendance at her session (full) is any indication. She used different words than I would have used; I found myself adding parenthetical comments to her words.
I want to share in words what she shared graphically:
She had a few good suggestions; specifically:
Think about what you do when you evaluate a program. Do you do these four steps? Do you know what impact you are trying to achieve? And if you can’t get to world peace, that’s OK. Each step will bring you closer.
I just got back from a road trip across Southern Alabama with my younger daughter. We started from Birmingham and drove a very circuitous route ending in Mobile and the surrounding areas, then returned to Birmingham for her to start her second year at Birmingham-Southern College.
As we traveled, I read a book by Bill McKibben (one of many) called Oil and Honey: The Education of an Unlikely Activist. It is a memoir, a personal recounting of the early years of this decade, which corresponded with the years my older daughter was in college (2011-2014). I met Bill McKibben, who is credited with starting the non-profit 350.org in 2008, and who is currently listed as “senior adviser and co-founder”. He is a passionate, soft-spoken man who believes that the world is on a short fuse. He really seems to believe that there is a better way to have a future. He, like Gandhi, is taking a stand. Oil and Honey puts into action Gandhi’s saying about being the change you want to see. As the subtitle indicates, McKibben is an unlikely activist. He is a self-described non-leader who led and advises the global effort to increase awareness of climate change/chaos. When your belief is on the line, you do what has to be done.
Evaluators are the same way. When your belief is on the line, you do what has to be done. And, hopefully, in the process you are the change that you want to see in the world. But know it cannot happen one pipeline at a time. The fossil fuel industry has too much money. So what do you do? You start a campaign. That is what 350.org has done: “There are currently fossil fuel divestment campaigns at 308 colleges and universities, 105 cities and states, and 6 religious institutions.” (Wikipedia, 350.org) (Scroll down to the heading “Fossil Fuel Divestment” to see the complete discussion.) Those are clear numbers, hard data for consumption. (Unfortunately, the divestment campaign at OSU failed.)
So I see the question as one of impact, though not specifically world peace (my ultimate impact). If there is no planet on which to work for world peace, there is no need for world peace. Evaluators can help. They can look at data critically. They can read the numbers. They can gather the words. This may be the best place for the use of pictures (they are, after all, worth 1000 words). Perhaps by combining efforts, the outcome will be an impact that benefits all humanity and builds a tomorrow for the babies born today.
I keep getting comments about my posts asking, “Does this blog make a difference?”
I want to say thank you to all who read it.
I want to say thank you to all who follow this blog.
Mostly, I am continually amazed that people find what I have to say interesting enough to come back.
So: Thank you. For reading. For following. For coming back.
I think that is making a difference.
P. S. See you in two weeks!
The use of the term impact is problematic, as I see it. If you (or any evaluator) are going to have an impact, if your program is going to have an impact, if you are going to do anything other than focus on the outcomes, how will you know? Scriven, in his Thesaurus, says an impact evaluation is an evaluation which focuses on outcomes rather than process, progress (delivery), or implementation. (Is that an example of using the word to define the word?) Is an impact evaluation the same as an evaluation which captures the outcomes?
Erma Bombeck said, “You have to love a nation that celebrates its independence every July 4th not with a parade of guns, tanks, and soldiers, who file by the White House in a show of strength and muscle, but with family picnics, where kids throw frisbees, potato salad gets iffy, and the flies die from happiness. You may think you’ve overeaten, but it’s patriotism.”
I heard this quote on my way back from Sunriver, OR on Splendid Table, an American Public Media show I don’t get to listen to very often and that has wonderful tidbits of information, not necessarily evaluative. Since I had just celebrated July 4th, this quote was most apropos! I also heard snippets of a broadcast (probably on NPR) that talked about patriotism/being patriotic. For me, tradition is patriotic. You know: blueberry pie on the 4th of July; potato salad; pasta; and of course, fireworks (unless the fire danger is extreme [like it was in Sunriver], and then all you can hope is that people will be VERY VERY careful!).
So what do you think makes for patriotism? What do you do to be patriotic? Certainly, for me, it wouldn’t be 4th of July without blueberry pie and my “redwhiteblue” t-shirt. I don’t need fireworks or potato salad… What makes this celebratory for me is the fact that I am assured freedom from want, freedom of worship, freedom from fear, and freedom of speech and I realize that they are only as free as I make them.
Franklin Delano Roosevelt said it clearly in his speech to Congress, January 6, 1941: “In the future days, which we seek to make secure, we look forward to a world founded upon four essential human freedoms.
The first is freedom of speech and expression — everywhere in the world.
The second is freedom of every person to worship God in his (sic) own way — everywhere in the world.
The third is freedom from want — which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants — everywhere in the world.
The fourth is freedom from fear — which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbor– anywhere in the world…”
This is an exercise in evaluative thinking. What do you think (about patriotism)? What criteria do you use to think this?
Last week, I started a discussion on inappropriate evaluations. I was using the Fitzpatrick, Sanders, and Worthen text for the discussion (Program Evaluation: Alternative approaches and practical guidelines, 2011. See here.) Three other examples were given in that text.
I will cover them today.
First, if the evaluation doesn’t (or isn’t likely to) produce relevant information, don’t do it. If factors such as inadequate resources (personnel, funding, time), lack of administrative support, impossible evaluation tasks, or inaccessible data (all typically outside the evaluator’s control) are present, give it a pass, as all of these factors make the likelihood slim that the evaluation will yield useful, valid information. Fitzpatrick, Sanders, and Worthen say, “A bad evaluation is worse than none at all…”.
Then consider the type of evaluation that is requested. Should you do a formative, a summative, or a developmental evaluation? The tryout phase of a program typically demands a formative evaluation, not a summative evaluation, despite the need to demonstrate impact. You may not demonstrate an effect at all because of timing. Consider running the program for a while (more than once or twice in a month). Decide if you are going to use the results only for programmatic improvement or for programmatic improvement AND impact.
Finally, consider the propriety of the evaluation. Propriety is the third standard in the Joint Committee Standards. Propriety helps establish evaluation quality by protecting the rights of those involved in the evaluation: the target audience, the evaluators, program staff, and other stakeholders. If you haven’t read the Standards, I recommend that you do.
New Topic (and timely): Comments.
It has been a while since I’ve commented on any feedback I get in the form of comments on blog posts. I read every one. I get them both here as I write and as an email. Sometimes they are in a language I don’t read or understand and, unfortunately, the on-line translators don’t always make sense. Sometimes they are encouraging comments (keep writing; keep blogging; thank you; etc.). Sometimes there are substantive comments that lead me to think about evaluation differently. Regardless of what the message is: THANK YOU! For commenting. Remember, I read each one.
Can there be inappropriate use of evaluation studies?
Jody Fitzpatrick¹ and her co-authors Jim Sanders and Blaine Worthen, in Program Evaluation: Alternative Approaches and Practical Guidelines (2011), provide several examples of inappropriate evaluation use. Before they give the examples, they share some wise words from Nick Smith². Nick says there are two broad categories of reasons for declining to conduct an evaluation: “1) when the evaluation could harm the field of evaluation, or 2) when it would fail to support the social good.” Fitzpatrick, Sanders, and Worthen (2011) go on to say that “these problems may arise when it is likely that the ultimate quality of the evaluation will be questionable, major clients would be alienated or misled concerning what evaluation can do, resources will be inadequate, or ethical principles would be violated” (p. 265).
The examples provided are
When I study these examples (there may be others; I’m quoting Fitzpatrick, Sanders, and Worthen, 2011), I find that these are examples often found in the published literature. As a reviewer, I find “show and tell” evaluations of little value because they produce trivial information. They report a study that has limited or insufficient impact and that has little or no potential for continuation. The cost of conducting a formal evaluation would easily outweigh the value, if monetized (merit or worth), of the program and would yield little information useful for others in the field. The intention might be good; the product is less than ideal.
A uniquely American holiday (although it is celebrated in other countries as well: Canada, Liberia, the Netherlands, Norfolk Island).
For me it is an opportunity to be grateful, and I am, more than words can express. I am especially grateful for my daughters, bright, articulate, and caring children (who are also adults).
It all depends.
The classic evaluation response. In fact, it is the punch line for one of the few evaluation jokes I can remember (some-timers disease being what it is; if you want to know the joke, ask in your comment).
The response reminds me of something I heard (once again) while I was in Denver. One of the presenters at a session on competencies, certification, and credentialing (and, indirectly, accreditation) talked about a criterion for evaluators that is not taught in preparatory programs: the tolerance for ambiguity. (What do you see in this image?)
What is this tolerance? What is ambiguity?
According to Webster’s Seventh, tolerance is the noun form of the verb “to tolerate” and means “…the relative capacity to endure or adapt physiologically to an unfavorable environmental factor…” also defined as “…sympathy or indulgence for beliefs or practices differing from or conflicting with one’s own; the act of allowing something; allowable deviation from a standard…”.
Using the same source, ambiguity (also a noun) means “…the quality or state of being ambiguous in meaning…” OK. Going on to ambiguous (the root of the word), it is an adjective meaning “…doubtful or uncertain especially from obscurity or indistinctness…capable of being understood in two or more possible senses…”. Personally, I find the “capable of being understood in two or more possible senses…” relevant to evaluation and to evaluators.
Yet, I have to ask, What does all that mean? It all depends.