What do you really want to know? What would be interesting to know? What can you forget about?

When you sit down to write survey questions, keep these questions in mind.

• What do you really want to know?

You are doing an evaluation of the impact of your program. This particular project is peripherally related to two other projects you do. You think, “I could capture all projects with just a few more questions.” You are tempted to lump them all together. DON’T.

• What would be interesting to know?

There are many times when I’ve heard investigators and evaluators say something like, “It would be really interesting to see if abc or qrs happens.” Do you really need to know this? Probably not. Interesting is not a compelling reason to include a question. So–DON’T ASK.

I always ask the principal investigator, “Is this information necessary or just nice to know? Do you want to report that finding? Will the answer to that question REALLY add to the measure of impact you are so arduously trying to capture?” If the answer is probably not, DON’T ASK.

• What can you forget about?

Do you really want to know the marital status of your participants? Or whether possible participants are volunteers in some other program, school teachers, and students all at the same time? My guess is that this will not affect the overall outcome of the project, nor its impact. If not, FORGET IT!

Statistically significant is a term that is often bandied about. What does it really mean? Why is it important?

First–why is it important?

It is important because it helps the evaluator make decisions based on the data gathered.

That makes sense–evaluators have to make decisions so that the findings can be used. If there isn’t some way to set the findings apart from the vast morass of information, then it is only background noise. So those of us who do analysis have learned to look at the probability level (written as a “p” value, such as p=0.05). The “p” value helps us determine whether a finding is likely real rather than chance, not necessarily whether it is important.

Second–what does that number really mean?

Probability level answers the question–could this (fill in the blank here) have happened by chance? If it could easily have occurred by chance, say 95 times out of 100, then the change is probably not real. When evaluators look at probability levels, we want really small numbers. A small number says that the likelihood this change occurred by chance is really low. So a really small value (like p=0.05) means there is only a 5% probability that the change is due to chance alone–or, put the other way, a 95% level of confidence that the change is real. You can convert a p value by subtracting it from 1 (1 – 0.05 = 0.95, or 95%; the likelihood that this did not occur by chance).
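That conversion is simple enough to show in a couple of lines. Here is a minimal sketch in Python (the p value is just illustrative):

```python
# Convert a p value into a confidence level by subtracting it from 1.
p_value = 0.05               # probability the change is due to chance alone
confidence = 1 - p_value     # 0.95
print(f"{confidence:.0%}")   # prints 95%
```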

Convention has it that for something to be statistically significant, the value must be 0.05 or smaller. This convention comes from academic research. Smaller numbers aren’t necessarily better; they just indicate even less likelihood that the change occurred by chance. There are software programs (StatXact, for example) that can compute the exact probability, so you will see numbers like 0.047.

Exploratory research (as opposed to confirmatory) may use a higher p value, such as p=0.10. This means that the trend is moving in the desired direction. Some evaluators let the key stakeholders determine whether the probability level (p value) is at a level that indicates importance, for example, 0.062. Some would argue that 94 times out of 100 is not that much different from 95 times out of 100 of being true.
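To make “occurred by chance” concrete, a permutation test literally re-shuffles the data many times and counts how often a difference as large as the observed one turns up by chance. This is only an illustrative sketch–the scores and the function name are made up, not from any real evaluation–using just the Python standard library:

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=42):
    """Estimate how often a difference in group means this large
    could arise purely by chance (a two-sided permutation test)."""
    rng = random.Random(seed)
    n_a, n_b = len(group_a), len(group_b)
    observed = abs(sum(group_a) / n_a - sum(group_b) / n_b)
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # re-deal the scores into two random groups
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / n_b)
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical before/after program scores.
before = [3, 4, 2, 5, 3, 4]
after = [6, 7, 5, 8, 6, 7]
p = permutation_p_value(before, after)
print(p)  # a small p: a difference this big rarely happens by chance
```

A p value near 0.002 here would say the observed change shows up by chance only about 2 times in 1,000 shuffles–well under the conventional 0.05 cutoff.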


The question was raised about writing survey questions.

My short answer is that Don Dillman’s book, Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, is your best source. It is available from the publisher, John Wiley, or from Amazon. Chapter 4 in the book, “The basics of crafting good questions”, helps to focus your thinking. Dillman (and his co-authors Jolene D. Smyth and Leah Melani Christian) makes it clear that how the questions are constructed raises several methodological questions, and that not attending to those questions can affect how the question performs.

One consideration (among several) that Dillman et al. suggest be kept in mind every time is:

• The order of the questions (Chapter 6 has a section on ordering questions).

I will only touch briefly on the order of questions, using Dillman et al.’s guidelines. There are 22 guidelines in Chapter 6, “From Questions to a Questionnaire”; of those 22, five refer to ordering the questions. They are:

1. “Group related questions that cover similar topics, and begin with questions likely to be salient to nearly all respondents” (pg. 157). Doing this closely approximates a conversation, the goal in questionnaire development.
2. “Choose the first question carefully” (pg. 158). The first question is the one which will “hook” respondents into answering the survey.
3. “Place sensitive or potentially objectionable questions near the end of the questionnaire” (pg. 159). This placement increases the likelihood that respondents will be engaged in the questionnaire and will, therefore, answer sensitive questions.
4. “Ask questions about events in the order the events occurred” (pg. 159). Ordering the questions from most distant to most recent occurrence, or from least important to most important activity, presents a logical flow to the respondent.
5. “Avoid unintended question order effects” (pg. 160). Keep in mind that questions do not stand alone, that respondents may use previous questions as a foundation for the following questions. This can create an answer bias.

When constructing surveys, remember to always have other people read your questions–especially people similar to and different from your target audience.

More on survey question development later.

I know it is Monday, not Tuesday or Wednesday. I will not have internet access Tuesday or Wednesday, and I wanted to answer a question posed to me by a colleague and longtime friend who has just begun her evaluation career.

Her question is:

What are the best methods to collect outcome evaluation data?

Good question.

On what does the collection depend?

If your resources are endless (yeah, right… ), then you can hire people; use all the time you need; and collect a wealth of data. Most folks aren’t this lucky.

If you plan to use your findings to convince someone, you need to think about what will be most convincing. Legislators like the STORY that tugs at the heartstrings.

Administrators like, “Just the FACTS, ma’am.” Typically presented in a one-page format with bullets.

Program developers may want a little of both.

The question you want answered will determine how you collect the answer.

My friend, Ellen Taylor-Powell, at the University of Wisconsin-Extension has developed a handout of data collection methods (see: Methods for Collecting Information). The handout is in PDF form and can be downloaded. It is a comprehensive list of different data collection methods that can be adapted to answer your question within your available resources.

She also has a companion handout called Sources of Evaluation Information. I like this handout because it is clear and straightforward. I have found both very useful in the work I do.

Whole books have been written on individual methods. I can recommend some I like–let me know.