It might be useful for those of you who are interested in evaluation to review a list of evaluation conferences offered around the world this year. Sarah Baughman cited the list offered by Better Evaluation. You could spend all year coming and going. What a way to see the world. It certainly offers evaluative opportunities.
2014 beginning question.
One question I am asked by people new to evaluation is, “How often do I need to conduct an evaluation? How much budget/time/resources do I allot for evaluation?” My evaluative answer is, “It all depends.”
For new faculty who want to know if their programs are working (not impact, just working), identify your most important program and evaluate it. Next year, do another program, and so on. If you want to know impact, you will need to wait at least three years, maybe five, although some programs could show impact after one year. (We are not talking world peace here, only: did the program make a difference? Does it have merit, value, worth?)
For executive directors, my “it depends” answer is still important. They have different needs than program planners and those who implement programs. My friend Stan says executive directors need to know: What is the problem? What caused the problem? How do I solve the problem (in two sentences or less)? Executive directors don’t have a lot of time to devote to evaluation; yet they need to know.
For people who are continuing a program of long standing, I would suggest you answer the question that is most pressing. (It all depends…)
I think these categories mostly cover everybody. If you can think of other situations, let me know. I’ll tell you what I think.
Nelson Mandela died last week (Thursday, actually) at the age of 95. Invictus is the name of a movie that recounts the poem below. While in prison on Robben Island, he recited this poem to other prisoners and was empowered by its message of self-mastery. It is a powerful poem. Mandela was a powerful person. We and the world were blessed that he was with us for 95 years; that he was the master of his fate and captain of his soul.
Out of the night that covers me,
Black as the pit from pole to pole,
I thank whatever gods may be
For my unconquerable soul.
In the fell clutch of circumstance
I have not winced nor cried aloud.
Under the bludgeonings of chance
My head is bloody, but unbowed.
Beyond this place of wrath and tears
Looms but the horror of the shade,
And yet the menace of the years
Finds and shall find me unafraid.
It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate:
I am the captain of my soul.
~~William Ernest Henley
When I read this poem and think of Mandela (aka Madiba), I also think of the evaluator’s guiding principles, especially the last three: Honesty/Integrity, Respect for People, and Responsibilities for General and Public Welfare. Mandela could have been an evaluator, as even the first two principles could apply (Systematic Inquiry and Competence). He was certainly competent and he did systematic inquiry. He used these principles in an arena other than evaluation. Yet by doing what he did, he was able to determine its merit and worth. The world was lucky to have him for so long. He was the change he wished to see; and he changed the world.
I keep getting comments about my post “Is this post making a difference” and the subsequent posts related to it. (The survey is closed–has been for a long time.) Although I was looking for some tangible measure of difference (i.e., the merit, worth, value of this blog), I find that this post elicits more positive comments than not. Although I still get the random advertising comment on that post, I mostly get thought-provoking comments about how it is or can be making a difference. I also get, at least weekly, a comment that says my blog isn’t making any difference to the reader (perhaps the reader needs to follow the blog over time rather than just reading this one post)…sigh…
One comment I received this week said, “I can confirm that the information that you share has: 1) made sense; 2) made a difference; 3) been worthwhile. Keep it up; don’t stop.” Nice. Not specific; still nice. That the blog is being read is important. That people take the time to comment is important. That this venue is valuable to many is important.
I get positive comments on my writing; I get positive comments on my content. I’m an evaluator. I want to know what difference this information is making in your life. This is a program and it needs to be evaluated. And number of page views isn’t the answer.
So I ask you to keep commenting, that way I know the blog is being read. That way I know I am making a difference.
Gene Shackman shared these resources for “best practices” for doing survey research. Since survey methodology is used frequently and regularly by Extension professionals, these might be of interest. I’m not endorsing any of them; I am only passing them on to interested individuals. Gene posted them originally as a comment on the Evaluators Group LinkedIn page. LinkedIn is another evaluator resource.
Survey Research: A Summary of Best Practices (a brief summary). Leslie Altizer, Ethics Resource Center, December 31, 2004.

How to Produce a Quality Survey. AAPOR (American Association for Public Opinion Research).

Best Practices for Survey Research Reports: A Synopsis for Authors and Reviewers. JoLaine Reierson Draugalis and others, Am J Pharm Educ, 72(1), February 15, 2008. “This article provides a checklist and recommendations for authors and reviewers to use when submitting or evaluating manuscripts reporting survey research that used a questionnaire as the primary data collection tool.”

Achieving Quality Survey Research: Principles of Good Practice. Labour and Immigration Research Center, August 2012.

Best Practices for Survey Research. Ithaca College Survey Research Center. “This document provides recommendations on how to plan and administer a survey.”

Good Practice in the Conduct and Reporting of Survey Research (Methodology Matters). Kate Kelley, Belinda Clark, Vivienne Brown, and John Sitzia, International Journal for Quality in Health Care, 15(3), 2003, pp. 261–266.
Four weeks ago (January 17, 2013), I asked if this blog was making a difference and asked that y’all post specific examples of how it is making that difference–I was/am looking for change, specifically. I said I would summarize the responses and post a periodic update. This is the first update.
I’ve gotten many (more than 50) comments on that post. They are interesting. No one has offered me a specific example of how this blog is making a difference. Several agree that page views are NOT an adequate measure of effectiveness. Several (again) agreed that length of visit might be a good indicator. A few are reading the blog for marketing tips; a few are using the blog to entice me to visit their blog–I don’t think so, especially when the response is in another language that I have to translate. (I’m sure this sounds elitist–not my intention, to be sure–rather just the time involved in finding a translator.) Most comments just encourage me to keep up the writing because 1) it is clear; 2) it loads quickly; 3) they like/love the blog/blog content; or 4) it can be applied to their marketing strategy and their blog (that actually may be a change, only I’d have to do a lot of research to know if their site benefited). Some folks just make a comment that seems to be a non sequitur.
So I really don’t know. Judging from the comments (random though they may be), people seem to be reading it. I am curious how many people regularly go to this blog (regularly as in weekly, not once in a while). If I’m representative, I go to other blogs regularly, though not the same blogs each time, so I’m probably one of those once-in-a-while people–even with evaluation blogs. There are so many out there and the number is growing. What I’ve learned is that the title of an individual post is what captures folks. Coming up with catchy titles is difficult; coming up with catchy titles that are optimized for search engines is even harder.
I didn’t post a survey this time; maybe I should. I will post another update in about a month.
I’m an evaluator.
I want to know if something makes a difference; if the change is for the better; if it has value, merit, worth.
After all, the root of evaluation is value.
I haven’t answered individually the numerous comments that have been posted. I just continue to write and see what happens. I’m hoping that some of what I’ve said over the now more than three years has 1) made sense; 2) made a difference; and 3) been worthwhile. I also hope you, reader, have been able to use some of what you have read here. I don’t know.
Someone is keeping track of my analytic measures; that’s wonderful. Some blogs use that as a measure of making a difference; I don’t. I look at what people say. I read every comment even if I don’t respond. A lot of folks say that the information has been interesting; that the blog is well written; that I should continue. No one says how they use the material, or, for that matter, if they do. So, reader, I have a challenge:
Post a comment about how you have used the information you have read here. Post it next week when I won’t be blogging (see last week). Let me know. I’ll summarize the responses when I get back. I won’t do this for very long–two, maybe three weeks; a month at most. (When I previously posted a link to a quick on-line survey, I kept the survey open for only two weeks; not long enough for some folks.)
Other blog writers get comments not dissimilar to mine (I read a lot of blogs for ideas). I don’t see that folks are actually giving the writer specific information on what difference the blog has made in their lives. I must confess, I don’t let them know either. So since this is a new year, and everyone is trying new behaviors, the new behavior I’m asking for here is: Tell me what difference this blog has made/is making.
“How far you go in life depends on your being tender with the young, compassionate with the aged, sympathetic with the striving, and tolerant of the weak and the strong — because someday you will have been all of these.”
There is power in this comment by Carver. Are you thinking, “What does this have to do with evaluation?” Considering diversity when one conducts an evaluation is critical. AEA has built that into its foundation in its guiding principles as “Respect for People.” It is clearly defined in the AEA by-laws. It is addressed in AEA’s Statement on Cultural Competence. One of the Program Evaluation Standards (Propriety) addresses Human Rights and Respect (P3).
Yet diversity goes beyond the topics covered in these documents.
Daryl G. Smith, Professor at Claremont Graduate University, has developed a framework that provides a practical and valuable catalyst for considering diversity in the context of individual institutions. I think it has implications for evaluation whether you are at a university or a not-for-profit. I believe it has relevance especially for those of us who work in Extension.
This model appears in the document titled “Campus Diversity Initiative: Current Status, Anticipating the Future.” (The fine print names the book from which it is taken; if you want to read the book, copy and paste the title into your search engine.)
I’ve used this model a lot to help me see diversity in ways other than gender and race/ethnicity, the usual ways diversity is identified at a university. For example: urban vs. rural; new to something vs. been at that something for a while; engaged vs. outreached; doing as vs. doing to. There is a wealth of evaluation questions that can be generated when diversity is reconsidered.
Some examples are:
1. How accessible is the program to county officials?
2. What other measures of success could have been used?
3. How have the local economic conditions affected vitality? Would those conditions affect viability as well?
4. What characteristics were missed by not collecting educational level?
5. How could scholarship be redefined to be relevant to this program?
6. How welcoming and inclusive is this program?
7. How does background and county origin affect participation?
8. What difference does appointed as opposed to elected status make?
9. How accessible is the program to faculty across the Western Region?
10. What measures of success could be used?
11. How have the local economic conditions affected vitality? Would those conditions affect viability as well? (A question not specifically addressed.)
12. How welcoming and inclusive is this program?
13. How does background and program area affect participation?
Keep in mind that these questions were program specific and were not part of the formal agenda for assessing program effectiveness. My question is: Should they have been? Probably. At the least, they needed to be considered in the planning stages.
Today I’m reporting the results of the survey I ran for two weeks.
I asked five questions:
I don’t know how many people subscribe (I am a technopeasant, after all, and that I blog at all is close to miraculous), so the results I report may or may not reflect what is actually happening.
So what are the results?
1. Of the 22 people responding, 21 answered this question (one skipped it), and all 21 (100% of those answering) said that the blog is making a difference in what they do.
2. Of those 22 people responding, 15 (68.2%) said that they get new ideas; 15 (68.2%) said that they get new perspectives; 8 (36.4%) said that they get old information clarified; 13 (59.1%) said that they learn new information; and 11 (50.0%) said they review previously learned information. No one responded that the blog has not made a difference. [Phew...:) ] [Keep in mind that percentages will not add to 100% because multiple responses could be selected.]
3. Everyone who responded (N=22; 100%) said that 500 words is just right in length.
4. When respondents were asked how often they read the blog, 4 (18.2%) said weekly, as it is posted; 17 (77.3%) said regularly, depending on topic; 1 (4.5%) said rarely [although they obviously read it to respond...:) ].
5. When asked which topic they would like to see addressed in future blogs, 16 (72.7%) said methodology; 9 (40.9%) said quantitative data analysis; 12 (54.5%) said qualitative data analysis; 10 (45.5%) said data collection methods; 15 (68.2%) said survey development; 8 (36.4%) said program evaluation theory; and 8 (36.4%) said program evaluation models. Three people (13.6%) offered comments. [Keep in mind that percentages will NOT add to 100% because multiple responses could be selected.]
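The arithmetic behind these multiple-response figures can be sketched in a few lines (the counts are taken from the results above; the variable names are mine), which also shows why a “check all that apply” column sums to well over 100%:

```python
# Multiple-response ("check all that apply") items are reported as a
# percentage of respondents, not of total selections, so the column
# can legitimately sum to far more than 100%.
respondents = 22

topic_counts = {
    "methodology": 16,
    "quantitative data analysis": 9,
    "qualitative data analysis": 12,
    "data collection methods": 10,
    "survey development": 15,
    "program evaluation theory": 8,
    "program evaluation models": 8,
}

# Each percentage is count / number of respondents, rounded to one decimal.
percentages = {topic: round(100 * n / respondents, 1)
               for topic, n in topic_counts.items()}

for topic, pct in percentages.items():
    print(f"{topic}: {pct}%")

# The total across topics exceeds 100 because each respondent
# could select several topics.
print(round(sum(percentages.values()), 1))
```

Running this reproduces the figures reported above (72.7% for methodology, 36.4% for program evaluation theory, and so on), with the topic percentages summing to roughly 355%.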
Comments were about the graphics (hard to read; they could be eliminated). I’ll talk to my tech people about that. On my screen the graphics are clear. Maybe the browser; maybe something else, says the technopeasant. The other comment was about getting new ideas even though the ideas have not been implemented yet.
So what does this tell me? Given the small sample size, I am cautiously optimistic. (If I find out how many are subscribed, I’ll let y’all know.) I will continue to blog. I will figure out other ways to determine if I’m making a difference. And thanks to all of you who took the time to answer the survey and all of you who take the time to read my musings. Of the several things about which I am passionate, evaluation is close to the top. (Oh, and no graphics this time…)
The GAO (Government Accountability Office) has a long and respected history of evaluation. Many luminaries at AEA (American Evaluation Association) have spent/are spending their professional careers at GAO. The GAO has just published (January 2012) its handbook on evaluation. It is called DESIGNING EVALUATIONS 2012 Revision. Nancy Kingsbury, a longtime AEA luminary, wrote the preface. For those of us who receive Federal money in any form (grants, contracts, Extension), this will be a worthwhile read. Fortunately, it is a relatively short read (the text is 61 pages plus another 7 pages of appended material). This manuscript explains the “official” federal view of evaluation. It is always good to know what is expected. I highly recommend this read. The worst it could be is good bedtime reading…zzzzzz-zz-z.
Last week, I mentioned that I would address contribution analysis–an approach to exploring cause and effect. Although I had seen the topic appear several times over the last 3 – 4 years, I never pursued it. Recently, though, the issue has come to the forefront of many conversations. I hear Extension faculty saying that their program caused this outcome. This claim is implied when they come to ask how to write “good” impact statements, not acknowledging that the likelihood of actually having an impact is slim–long-term outcomes, maybe. Impact? Probably not. So finding a logical, defensible approach to discussing the lack of causality (as in the A-caused-B of randomized-control-trials-type causality) that is inherent in Extension programming is important. John Mayne, an independent advisor on public sector performance, writes articulately on this topic (citations are listed below).
The article I read, and on which this blog entry is based, was written in 2008. Mayne has been writing on this topic since 1999, when he was with the Canadian Office of the Auditor General. For him the question became critical when the use of randomized control trials (RCT) was not appropriate yet program performance needed to be addressed.
In that article, referenced below, he details six iterative steps in contribution analysis:
He loops step six back to step four (the iterative process).
By exploring the contribution the program is making to the observed results, one can address the attribution of those results to the program. He goes on to say that (and since I’m quoting, I’m using the Canadian spellings), “Causality is inferred from the following evidence:
He focuses on clearly defining the theory of change; modeling that theory of change, and revisiting that theory of change regularly across the life of the program.
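That emphasis on modeling and revisiting the theory of change can be illustrated with a toy sketch (the results chain, evidence ratings, and helper function below are all invented for illustration; this is not Mayne’s procedure): model the chain of expected results, rate the evidence behind each link, and let the weakest links set the agenda for the next round of evidence gathering.

```python
# Toy sketch of a contribution story: a results chain (theory of change)
# whose links each carry an evidence rating. The weakest links show where
# the next iteration of evidence gathering should focus. The chain and
# the ratings here are hypothetical.
from dataclasses import dataclass

@dataclass
class Link:
    from_step: str
    to_step: str
    evidence: str  # "strong", "moderate", or "weak"

# A hypothetical Extension-style results chain.
chain = [
    Link("workshops delivered", "knowledge gained", "strong"),
    Link("knowledge gained", "practices adopted", "moderate"),
    Link("practices adopted", "long-term outcome", "weak"),
]

def weakest_links(chain):
    """Return the links whose evidence most needs strengthening."""
    order = {"weak": 0, "moderate": 1, "strong": 2}
    lowest = min(order[link.evidence] for link in chain)
    return [link for link in chain if order[link.evidence] == lowest]

for link in weakest_links(chain):
    print(f"Gather more evidence: {link.from_step} -> {link.to_step}")
```

In this invented example the practices-to-outcome link is the weakest, so that is where the next iteration would look for additional evidence–mirroring the looping-back that Mayne describes, where the contribution story is repeatedly reassessed and strengthened.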
Mayne, J. (1999). Addressing Attribution Through Contribution Analysis: Using Performance Measures Sensibly. Available at: dsp-psd.pwgsc.gc.ca/Collection/FA3-31-1999E.pdf
Mayne, J. (2001). Addressing attribution through contribution analysis: Using performance measures sensibly. Canadian Journal of Program Evaluation, 16: 1–24. Available at: http://www.evaluationcanada.ca/secure/16-1-001.pdf
Mayne, J. & Rist, R. (2006). Studies are not enough: The necessary transformation of evaluation. Canadian Journal of Program Evaluation, 21: 93–120. Available at: http://www.evaluationcanada.ca/secure/21-3-093.pdf
Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. Institutional Learning and Change Initiative, Brief 16. Available at: http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief16_Contribution_Analysis.pdf