You implement a program. You think it is effective; that it makes a difference; that it has merit and worth. You develop a survey to determine the merit and worth of the program. You send the survey out to the target audience, which is an intact population–that is, all of the participants are in the target audience for the survey. You get a response rate of less than 40%. What does that mean? Can you use the results to say that the participants saw merit in the program? Do the results indicate that the program has value, that it made a difference, if only 40% let you know what they thought?
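To make the worry concrete, here is a minimal sketch (plain Python, hypothetical numbers) of why a sub-40% response rate can mislead: if the people who respond tend to be the ones who liked the program, the average you observe drifts away from the average of the whole intact population.

```python
import random

random.seed(1)

# Hypothetical intact population of 200 participants.
# True (unobserved) satisfaction scores on a 1-5 scale.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(200)]

# Assume satisfied participants are more likely to return the survey:
# response probability rises with the score (purely illustrative).
def responds(score):
    return random.random() < 0.10 + 0.08 * score

respondents = [s for s in population if responds(s)]

true_mean = sum(population) / len(population)
observed_mean = sum(respondents) / len(respondents)
response_rate = len(respondents) / len(population)

print(f"Response rate: {response_rate:.0%}")
print(f"True mean:     {true_mean:.2f}")
print(f"Observed mean: {observed_mean:.2f}  (pulled upward by self-selection)")
```

Under these made-up assumptions the observed mean looks rosier than the population mean, which is exactly why a low response rate alone can't tell you whether the program made a difference.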
I went looking for some insights on non-responses and non-responders. Of course, I turned to Dillman (my go-to book for surveys…). His bottom line: “…sending reminders is an integral part of minimizing non-response error” (pg. 360).
Dillman (of course) has a few words of advice. For example, on page 360, he says, “Actively seek means of using follow-up reminders in order to reduce non-response error.” How do you avoid burdening the target audience with reminders, which are “…the most powerful way of improving response rate…” (Dillman, pg. 360)? When reminders are sent, they need to be carefully worded and clearly related to the survey in question. Reminders stress the importance of the survey and the need for responding.
Dillman also says (on page 361) to “…provide all selected respondents with similar amounts and types of encouragement to respond.” Since most of the time incentives are not an option for you, the program person, you have to encourage the participants in other ways. So we are back to reminders again.
To explore non-response further, there is a book that deals with the topic (Groves, Robert M., Don A. Dillman, John Eltinge, and Roderick J. A. Little (eds.). 2002. Survey Nonresponse. New York: Wiley-Interscience). I don’t have it on my shelf, so I can’t speak to it; I found it while looking for information on this topic.
I also went online to EVALTALK and found this comment, which is relevant to evaluators attempting to determine if the program made a difference: “Ideally you want your non-response percents to be small and relatively even-handed across items. If the number of nonresponds is large enough, it does raise questions as to what is going [on] for that particular item, for example, ambiguous wording or a controversial topic. Or, sometimes a respondent would rather not answer a question than respond negatively to it. What you do with such data depends on issues specific to your individual study.” This comment was from Kathy Race of Race & Associates, Ltd., September 9, 2003.
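The item-level check Race describes is easy to automate. Here is a small sketch (hypothetical data and field names) that tallies the non-response percentage for each survey item, so an unusually high rate, which might signal ambiguous wording or a sensitive topic, stands out.

```python
# Hypothetical returned surveys: None marks a skipped item.
returned = [
    {"q1_useful": 4,    "q2_apply": 5,    "q3_improve": None},
    {"q1_useful": 5,    "q2_apply": None, "q3_improve": None},
    {"q1_useful": 3,    "q2_apply": 4,    "q3_improve": "More time"},
    {"q1_useful": None, "q2_apply": 4,    "q3_improve": None},
]

items = ["q1_useful", "q2_apply", "q3_improve"]

for item in items:
    skipped = sum(1 for r in returned if r.get(item) is None)
    rate = skipped / len(returned)
    flag = "  <-- check wording/topic" if rate > 0.25 else ""
    print(f"{item}: {rate:.0%} non-response{flag}")
```

The 25% threshold here is arbitrary; the point is simply to compare items against each other, the way the comment suggests, rather than to apply a fixed cutoff.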
The bottom line I would draw from all this is: respond. If it was important to you to participate in the program, then it is important for you to provide feedback to the program implementation team or person.