Gene Shackman shared these resources on “best practices” for doing survey research. Since survey methodology is used frequently by Extension professionals, these might be of interest. I’m not endorsing any of them; only passing them on to interested individuals. Gene posted them originally as a comment on the Evaluators Group LinkedIn page. LinkedIn is another evaluator resource.
Survey Research: A Summary of Best Practices
December 31, 2004, Ethics Resource Center, Leslie Altizer
A brief summary
AAPOR (American Association for Public Opinion Research)
How to produce a quality survey
Best Practices for Survey Research Reports: A Synopsis for Authors and Reviewers
JoLaine Reierson Draugalis and others
Am J Pharm Educ. v.72(1); Feb 15, 2008
“This article provides a checklist and recommendations for authors and reviewers to use when submitting or evaluating manuscripts reporting survey research that used a questionnaire as the primary data collection tool.”
Achieving Quality Survey Research: Principles of Good Practice
Labour and Immigration Research Center, August 2012
Ithaca College Survey Research Center
Best Practices for Survey Research
“This document provides recommendations on how to plan and administer a survey.”
International Journal for Quality in Health Care
2003; Volume 15, Number 3 pp. 261–266
Methodology Matters. Good practice in the conduct and reporting of survey research
Kate Kelley, Belinda Clark, Vivienne Brown, and John Sitzia
Four weeks ago (January 17, 2013), I asked if this blog was making a difference and asked that y’all post specific examples of how it is making that difference–I was/am looking for change, specifically. I said I would summarize the responses and post a periodic update. This is the first update.
I’ve gotten many (more than 50) comments on that post. They are interesting. No one has offered me a specific example of how this blog is making a difference. Several agree that page views are NOT an adequate measure of effectiveness. Several (again) agreed that length of time of a visit might be a good indicator. A few are reading the blog for marketing tips; a few are using the blog to entice me to go to their blog–I don’t think so, especially when the response is in another language that I have to translate. (I’m sure this sounds elitist–not my intention, to be sure–rather, it is just the time it takes to find a translator.) Most comments just encourage me to keep up the writing because 1) the writing is clear; 2) the page loads quickly; 3) they like or love the blog content; or 4) the content can be applied to their marketing strategy and their blog (that actually may be a change, only I’d have to do a lot of research to know if their site benefited). Some folks just make a comment that seems to be a non sequitur.
So I really don’t know. Judging from the comments (random though they may be), people seem to be reading it. I am curious how many people regularly go to this blog (regularly like weekly, not once in a while). If I’m representative, I go to other blogs regularly, though not the same blogs each time, so I’m probably one of those once-in-a-while people–even with evaluation blogs. There are so many out there, and the number is growing. What I’ve learned is that the title of an individual post is what captures folks. Coming up with catchy titles is difficult; coming up with catchy titles that are optimized for search engines is even harder.
I didn’t post a survey this time; maybe I should. I will post another update in about a month.
I’m an evaluator.
I want to know if something makes a difference; if the change is for the better; if it has value, merit, worth.
After all, the root of evaluation is value.
I haven’t answered individually the numerous comments that have been posted. I just continue to write and see what happens. I’m hoping that some of what I’ve said over the now more than three years has 1) made sense; 2) made a difference; and 3) been worthwhile. I also hope you, reader, have been able to use some of what you have read here. I don’t know.
Someone is keeping track of my analytic measures; that’s wonderful. Some blogs use that as a measure of making a difference; I don’t. I look at what people say. I read every comment even if I don’t respond. A lot of folks say that the information has been interesting; that the blog is well written; that I should continue. No one says how they use the material, or, for that matter, if they do. So, reader, I have a challenge:
Post a comment about how you have used the information you have read here. Post it next week, when I won’t be blogging (see last week). Let me know. I’ll summarize the responses when I get back. I won’t do this for very long–two, maybe three weeks; a month at most. (When I previously posted a link to a quick on-line survey, I kept the survey open for only two weeks; not long enough for some folks.)
Other blog writers get comments not dissimilar to mine (I read a lot of blogs for ideas). I don’t see that folks are actually giving the writer specific information on what difference the blog has made in their lives. I must confess, I don’t let them know either. So since this is a new year, and everyone is trying new behaviors, the new behavior I’m asking for here is this: tell me what difference this blog has made/is making.
“How far you go in life depends on your being tender with the young, compassionate with the aged, sympathetic with the striving, and tolerant of the weak and the strong — because someday you will have been all of these.”
There is power in this comment by Carver. Are you thinking, what does this have to do with evaluation? Considering diversity when one conducts an evaluation is critical. The AEA has built that into its foundation in its guiding principles as “Respect for People.” It is clearly defined in the AEA by-laws. It is addressed in AEA’s Statement on Cultural Competence. One of the Program Evaluation Standards (Propriety) addresses Human Rights and Respect (P3).
Yet diversity goes beyond the topics covered in these documents.
Daryl G. Smith, Professor at Claremont Graduate University, has developed an informed framework that provides a practical and valuable catalyst for considering diversity in the context of individual institutions. I think it has implications for evaluation whether you are at a university or a not-for-profit. I believe it has relevance especially for those of us who work in Extension.
This model was found in the document titled “Campus Diversity Initiative: Current Status, Anticipating the Future.” (In the fine print is the book from which it is taken; if you want to read the book, copy and paste the title into your search engine.)
I’ve used this model a lot to help me see diversity in ways other than gender and race/ethnicity, the usual ways diversity is identified in universities. For example: urban vs. rural; new to something vs. been at that something for a while; engaged vs. outreached; doing as vs. doing to. There is a wealth of evaluation questions that can be generated when diversity is reconsidered.
Some examples are:
1. How accessible is the program to county officials?
2. What other measures of success could have been used?
3. How have the local economic conditions affected vitality? Would those conditions affect viability as well?
4. What characteristics were missed by not collecting educational level?
5. How could scholarship be redefined to be relevant to this program?
6. How welcoming and inclusive is this program?
7. How does background and county origin affect participation?
8. What difference does appointed as opposed to elected status make?
9. How accessible is the program to faculty across the Western Region?
10. What measures of success could be used?
11. How have the local economic conditions affected vitality? Would those conditions affect viability as well? (A question not specifically addressed.)
12. How welcoming and inclusive is this program?
13. How does background and program area affect participation?
Keep in mind that these questions were program specific and were not part of the stated agenda for determining program effectiveness. My question is: Should they have been? Probably. At least they needed to be considered in the planning stages.
Today I’m reporting the results of the survey I ran for two weeks.
I asked five questions:
I don’t know how many people subscribe (I am a technopeasant, after all), and that I blog at all is close to miraculous, so the results I report may or may not be reflective of what is actually happening.
So what are the results?
1. Of the 22 people responding, 21 (100% of those answering this question; one person skipped it) said that the blog is making a difference in what they do.
2. Of those 22 people responding, 15 (68.2%) said that they get new ideas; 15 (68.2%) said that they get new perspectives; 8 (36.4%) said that they get old information clarified; 13 (59.1%) said that they learn new information; and 11 (50.0%) said they review previously learned information. No one responded that the blog has not made a difference. [Phew...:) ] [Keep in mind that percentages will not add to 100% because multiple responses could be selected.]
3. Everyone who responded (N=22; 100%) said that 500 words is just right in length.
4. When respondents were asked how often the blog was read by them, 4 (18.2%) said weekly, as it is posted; 17 (77.3%) said regularly, depending on topic; 1 (4.5%) said rarely [although they obviously read it to respond...:) ].
5. When asked which topic they would like to see addressed in future blogs, 16 (72.7%) said methodology; 9 (40.9%) said quantitative data analysis; 12 (54.5%) said qualitative data analysis; 10 (45.5%) said data collection methods; 15 (68.2%) said survey development; 8 (36.4%) said program evaluation theory; and 8 (36.4%) said program evaluation models. Three people (13.6%) offered comments. [Keep in mind that percentages will NOT add to 100% because multiple responses could be selected.]
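The multi-select percentages above are simple arithmetic: each count is divided by the number of respondents (22), not by the total number of selections, which is why they sum to well over 100%. A minimal sketch in Python, using the counts reported for question 5 (the variable names are mine, for illustration only):

```python
# Counts from question 5 above: each of the 22 respondents could select
# more than one topic, so each percentage is count / respondents, and the
# percentages are expected to total more than 100%.
responses = {
    "methodology": 16,
    "quantitative data analysis": 9,
    "qualitative data analysis": 12,
    "data collection methods": 10,
    "survey development": 15,
    "program evaluation theory": 8,
    "program evaluation models": 8,
}
n_respondents = 22

# Percentage of respondents (not of selections) choosing each topic.
percentages = {
    topic: round(count / n_respondents * 100, 1)
    for topic, count in responses.items()
}

for topic, pct in percentages.items():
    print(f"{topic}: {pct}%")
```

Running this reproduces the figures reported above (methodology 72.7%, survey development 68.2%, and so on), and the percentages total far more than 100%, as the bracketed caveat notes.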
One comment was about the graphics: they are hard to read and could be eliminated. I’ll talk to my tech people about that. On my reading, the graphics are clear. Maybe it’s the browser; maybe it’s something else, says the technopeasant. The other comment was about getting new ideas, even though those ideas have not been implemented yet.
So what does this tell me? Given the small sample size, I am cautiously optimistic. (If I find out how many are subscribed, I’ll let y’all know.) I will continue to blog. I will figure out other ways to determine if I’m making a difference. And thanks to all of you who took the time to answer the survey, and to all of you who take the time to read my musings. Of the several things about which I am passionate, evaluation is close to the top. (Oh, and no graphics this time…)
The GAO (Government Accountability Office) has a long and respected history of evaluation. Many luminaries at AEA (American Evaluation Association) have spent/are spending their professional careers at GAO. The GAO has just published (January 2012) its handbook on evaluation, called Designing Evaluations: 2012 Revision. Nancy Kingsbury, a longtime AEA luminary, wrote the preface. For those of us who receive Federal money in any form (grants, contracts, Extension), this will be a worthwhile read. Fortunately, it is a relatively short read (61 pages of text plus another 7 pages of appended material). This manuscript explains the “official” federal view of evaluation. It is always good to know what is expected. I highly recommend this read. The worst it could be is good bedtime reading…zzzzzz-zz-z.
Last week, I mentioned that I would address contribution analysis–an approach to exploring cause and effect. Although I had seen the topic appear several times over the last 3–4 years, I never pursued it. Recently, though, the issue has come to the forefront of many conversations. I hear Extension faculty saying that their program caused this outcome. This statement is implied when they come to ask how to write “good” impact statements, not acknowledging that the likelihood of actually having an impact is slim–long-term outcomes, maybe. Impact? Probably not. So finding a logical, defensible approach to discussing the lack of causality (as in the A-caused-B, randomized-control-trial type of causality) that is inherent in Extension programming is important. John Mayne, an independent advisor on public sector performance, writes articulately on this topic (citations are listed below).
The article I read, and on which this blog entry is based, was written in 2008. Mayne has been writing on this topic since 1999, when he was with the Canadian Office of the Auditor General. For him, the question became critical when the use of randomized control trials (RCTs) was not appropriate, yet program performance still needed to be addressed.
In that article, referenced below, he details six iterative steps in contribution analysis:
1. Set out the attribution problem to be addressed.
2. Develop a theory of change and the risks to it.
3. Gather the existing evidence on the theory of change.
4. Assemble and assess the contribution story, and the challenges to it.
5. Seek out additional evidence.
6. Revise and strengthen the contribution story.
He loops step six back to step four (the iterative process).
By exploring the contribution the program is making to the observed results, one can address the attribution of the program to the desired results. He goes on to say that (and since I’m quoting, I’m using the Canadian spellings), “Causality is inferred from the following evidence:
He focuses on clearly defining the theory of change; modeling that theory of change, and revisiting that theory of change regularly across the life of the program.
Mayne, J. (1999). Addressing attribution through contribution analysis: Using performance measures sensibly. Available at: http://dsp-psd.pwgsc.gc.ca/Collection/FA3-31-1999E.pdf
Mayne, J. (2001). Addressing attribution through contribution analysis: Using performance measures sensibly. Canadian Journal of Program Evaluation, 16: 1–24. Available at: http://www.evaluationcanada.ca/secure/16-1-001.pdf
Mayne, J. & Rist, R. (2006). Studies are not enough: The necessary transformation of evaluation. Canadian Journal of Program Evaluation, 21: 93-120. Available at: http://www.evaluationcanada.ca/secure/21-3-093.pdf
Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. Institutional Learning and Change Initiative, Brief 16. Available at: http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief16_Contribution_Analysis.pdf
Last week, the National Outreach Scholarship Conference was held on the Michigan State University campus. There was an impressive array of speakers and presentations. I had the luxury of attending Michael Quinn Patton’s session on Utilization-Focused Evaluation. And although the new edition of the book is 600+ pages, Michael distilled it down to the essentials. He also announced a new book (only 400+ pages) called Essentials of Utilization-Focused Evaluation. This volume is geared to practitioners as opposed to the classroom or the academic.
One take-away message for me was this: “Context changes the focus of ‘use.’” So if you have a context in which reports are only for accounting purposes, the report will look very different from one in which reports are for detailing the difference being made. Now, this sounds very intuitive. Like, DUH, Molly, tell me something I don’t know. Yet this is so important, because you, as the evaluator, have the responsibility and the obligation to prepare stakeholders to use data in OTHER ways than as a reporting activity. That responsibility and obligation is tied to the Program Evaluation Standards. The Joint Committee revised the standards after soliciting feedback from multiple sources. The 3rd edition addresses the now five standards with numerous examples and discussion. These standards are:
1. Utility
2. Feasibility
3. Propriety
4. Accuracy
5. Evaluation Accountability
Apparently, there was considerable discussion as the volume was being compiled that Accountability needed to be first. Think about it, folks. If Accountability was first, then evaluations would build on “the responsible use of resources to produce value.” Implementation, improvement, worth, and costs would drive evaluation. By placing utilization first, evaluators have the responsibility and obligation to base judgements “…on the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs…to examine the variety of possible uses for evaluation processes, findings, and products.”
This certainly validates use as defined in Utilization-Focused Evaluation. Take Michael’s workshop: the American Evaluation Association is offering it at its annual meeting in Anaheim, CA, on Wednesday, November 2. Go to eval.org and click on Evaluation Conference. If you can’t join the workshop, read the book! (Either one.) It is well worth it.
For the last three weeks, since I posted the history matching game, I’ve not been able to post with images. Every time I go to save the draft, the post vanishes. I’m working with the IT folks. They haven’t given me any alternatives. I’m posting this today without images to let you know that I am still here, that I still have thoughts, and that I will post something of substance again soon. Please be patient. Thank you.
Independence is an evaluative question.
Think about it while you enjoy the holiday.
Were the folks who fought the Revolutionary War truly revolutionaries? Or were they terrorists?
Was King George a despot or just a micromanager?
My favorite is this: Was the War Between the States the last battle of the War of/for Independence?
I’m sure there are other evaluative questions. Got a question that is evaluative? Let me know.