A reader asked how to choose a sample for a survey. Good question.

My daughters are both taking statistics (one in college, one in high school) and this question has been mentioned more than once. So I’ll give you my take on sampling. There are a lot of resources out there (you know, references and other sources). My favorite is Dillman’s 3rd edition, page 57.

Sampling is easier than most folks make it out to be. Most of the time you are dealing with an entire population. How, you ask, can that be?

You are dealing with an entire population when you survey the audience of a workshop (population 20, or 30, or 50). You are dealing with a population when you deal with a series of workshops (anything under 100). Typically, workshops involve a small number of people, only happen once or twice, and rarely include participants who are there because they have to be. If you have under 100 people, you have an entire population. They can all be surveyed.

Now if your workshop is a repeating event with different folks across the offerings, then you will have the opportunity to sample your population because it is over 100 (see Dillman, 3rd edition, page 57). If you have over 100 people to survey AND you have contact information for them, then you want to randomly sample from that population. Random selection (another name for random sampling) is very different from random assignment; I’m talking about random sampling.
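The sample sizes in Dillman’s table can be reproduced with the standard finite-population sample-size formula. Here is a minimal Python sketch of that arithmetic; the function name `dillman_n` and the use of 1.96 as the z-score for 95% confidence are my choices for illustration, not notation from the book:

```python
def dillman_n(N, p, B, C=1.96):
    """Completed sample size for a finite population.

    N: population size
    p: expected proportion, i.e., the "split" (0.5 for 50/50)
    B: acceptable margin of error (0.03 for plus or minus 3%)
    C: z-score for the confidence level (1.96 for 95%)
    """
    pq = p * (1 - p)
    return round(N * pq / ((N - 1) * (B / C) ** 2 + pq))

# For a population of 100, at 95% confidence:
for p, B in [(0.5, 0.03), (0.8, 0.03), (0.5, 0.05),
             (0.8, 0.05), (0.5, 0.10), (0.8, 0.10)]:
    print(f"{p:.0%}/{1 - p:.0%} split, +/-{B:.0%}: n = {dillman_n(100, p, B)}")
```

For a population of 100, this gives 92 and 87 cases at plus or minus 3%, 80 and 71 at 5%, and 49 and 38 at 10%, the same numbers discussed below.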

Random sampling is a process where everyone gets an identification number, assigned sequentially (so 1-100), and an equal chance of being selected. Then find a random number table, usually in the back of statistics books. Close your eyes and let your hand drop onto a number. Let’s say that number is 56997.

You know you need numbers between 1 and 100. According to Dillman, for a 95% confidence level with a plus or minus 3% margin of error, you will need at least 92 cases (participants) for a 50/50 split, OR 87 cases (participants) for an 80/20 split. So you look at the number and decide which two-digit number you will select (56, 69, 99, or 97). That is your first number. Let us say you chose 99, the third two-digit number found in the above random number (56 and 69 being the first two). So participant 99 will be on the randomly selected (random sampling) list.

Now you can go down the list, up the list, or to the left or right of the list, and identify the next two-digit number in the same position. For this example, using the random number table from my old Minium stat book (for which I couldn’t find a picture since it is OLD; the table was copied from the Rand Corporation, A Million Random Digits with 100,000 Normal Deviates, Glencoe, IL: The Free Press, 1955), the number going right is 41534, so I would choose participant number 53. Continuing right, with the number 01953, I would choose participant number 95, and so on. If you come across a number that you have already chosen, go to the next number. Do this until you get the required number of cases (either 92 or 87).

You can select fewer cases if you want a plus or minus 10% margin of error (49 for a 50/50 split, 38 for an 80/20 split) or a plus or minus 5% margin of error (80 and 71, respectively). (I always go for the least margin of error, though.) Once you have identified the required number of participants, drafted the survey, and secured IRB approval, you can send out the survey. We will talk about response rates next week.
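If you would rather not squint at a printed table, the same walk can be simulated in a few lines of Python. This is only a sketch of the process just described: `table_walk_sample` is a hypothetical helper of mine, and treating the pair “00” as participant 100 is my assumption, not something from Dillman. In practice, `random.sample` does the whole job in one line, as shown at the end.

```python
import random

def table_walk_sample(population_size, sample_size, seed=None):
    """Mimic the random-number-table walk described above: read
    five-digit entries, take the two-digit pair in a fixed position,
    and skip out-of-range values and repeats."""
    rng = random.Random(seed)
    chosen, seen = [], set()
    while len(chosen) < sample_size:
        entry = f"{rng.randrange(100000):05d}"    # e.g. "56997"
        pair = int(entry[2:4])                    # third two-digit pair: 99
        participant = pair if pair != 0 else 100  # assumption: "00" -> 100
        if participant > population_size or participant in seen:
            continue  # out of range or already chosen: go to the next entry
        seen.add(participant)
        chosen.append(participant)
    return chosen

# Draw 92 of 100 participants (95% confidence, +/-3% margin, 50/50 split):
print(table_walk_sample(100, 92, seed=2014))

# The modern one-liner with the same effect:
print(sorted(random.sample(range(1, 101), 92)))
```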

The question of surveys came up the other day. Again.

I got a query from a fellow faculty member and a query from the readership. (No, not a comment, just a query, although I may now be able to figure out why the comments don’t work.)

So, surveys: a major part of evaluation work. (My go-to book on surveys is Dillman’s 3rd edition; I understand there is a 4th edition coming later this year.)

After getting a copy of Dillman for your desk, here is what I suggest: start with what you want to know.

This may be in the form of statements or questions. If the result is complicated, see if you can simplify it by breaking it into more than one statement or question. Recently, I got a “what we want to know” in the form of complicated research questions. I’m not sure that the resulting survey questions answered the research questions because of the complexity. (I’ll have to look at the research questions and the survey questions side by side to see.) Multiple simple statements/questions are easier to match to your survey questions, making it easier to see whether you have survey questions that answer what you want to know. Remember: if you will not use the answer (data), don’t ask the question. Less can actually be more, in this case, and just because something would be interesting to know doesn’t mean the data will answer your “what you want to know” question.

Evaluators strive for evaluation use. (See Patton, M. Q. (2008). Utilization-Focused Evaluation (4th ed.). Thousand Oaks, CA: Sage Publications, Inc.; and/or Patton, M. Q. (2011). Essentials of Utilization-Focused Evaluation. Thousand Oaks, CA: Sage Publications, Inc.) See also The Program Evaluation Standards, which lists utility (use) as the first attribute and standard for evaluators. (Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The Program Evaluation Standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.)

Evaluation use is related to stated intention to change, about which I’ve previously written. If your statements/questions of what you want to know will lead you to using the evaluation findings, then stating the question in a way that promotes use will foster use, i.e., intention to change. Don’t do the evaluation for the sake of doing an evaluation. If you want to improve the program, evaluate. If you want to know about the program’s value, merit, and worth, evaluate. Then use. One way to make sure that you will follow through is to frame your initial statements/questions in a way that will facilitate use. Ask simply.

I’ve just read Ernie House’s book, Regression to the Mean. It is a NOVEL about evaluation politics. A publisher’s review says, “Evaluation politics is one of the most critical, yet least understood aspects of evaluation. To succeed, evaluators must grasp the politics of their situation, lest their work be derailed. This engrossing novel illuminates the politics and ethics of evaluation, even as it entertains. Paul Reeder, an experienced (and all too human) evaluator, must unravel political, ethical, and technical puzzles in a mysterious world he does not fully comprehend. The book captures the complexities of evaluation politics in ways other works do not. Written expressly for learning and teaching, the evaluation novel is an unconventional foray into vital topics rarely explored.”

Many luminaries (Patton, Lincoln, Scriven, Weiss) made pre-publication comments. Although I found the book fascinating, I found the included quote attributed to Freud compelling: “The voice of the intellect is a soft one, but it does not rest until it has gained a hearing. Ultimately, after endless rebuffs, it succeeds. This is one of the few points in which we can be optimistic about the future of mankind (sic).” Although Freud wasn’t speaking about evaluation, House contends that this statement applies, and goes on to say, “Sometimes you have to persist against your emotions as well as the emotions of others. None of us are rational.”

So how does rationality fit into evaluation? I would contend that it doesn’t. Although the intent of evaluation is to be objective, none of us can be because of what I have called personal and situational bias, known in the literature as cognitive bias. I contend that cognitive bias (and everyone has it) prevents us from being rational, try as we might. Our emotions get in the way. House’s comment (above) seems fitting to evaluation: evaluators must persist against personal emotions as well as the emotions of others. I would add: persist against personal and situational bias as well. I believe it is important to make personal and situational bias explicit prior to commencing an evaluation. By clarifying the assumptions held by the stakeholders and the evaluator, surprises are minimized, and the evaluation may be more useful to program people.

Intention to change

I’ve talked about intention to change and how stating that intention out loud and to others makes a difference. This piece of advice is showing up in some unexpected places. If you state your goal, there is a higher likelihood that you will be successful. That makes sense. If you confess publicly (or even to a priest), you are more likely to do the penance/make the change. What I find interesting is that this is so evaluation. What difference did the intervention make? How does that difference relate to the merit, worth, and value of the program?

Lent started March 5. That is 40 days of discipline–giving up or taking on. That is a program. What difference will it make? Can you go 40 days without chocolate?

New Topic:

I got my last comment in November 2013. I miss comments. Sure, most of them were “check out this other web site.” Still, there were some substantive comments, and I’ve read those and archived them. My IT person doesn’t know what the impetus was for this sudden stop. Perhaps Google changed its search algorithm and my key words no longer rank near the top. So I don’t know if what I write is meaningful, is worthwhile, or is resonating with you, the reader, in any way. I have been blogging now for over four years…this is no easy task. Comments and/or questions would be helpful and give me some direction.

New Topic:

Chris Lysy draws cartoons in his blog. This week he blogged about logic models. He only included logic models that are drawn with boxes. What if the logic model is circular? How would it be different? Can it still lead to outcomes? Non-linear thinkers/cultures would say so. How would you draw it? Given that mind mapping may also be a model, how do the two relate?

Have a nice weekend. The sun is shining again!


I’ve been reading about models lately: models that have been developed, models that are being used today, models that may be used tomorrow.

Webster’s (Seventh New Collegiate) Dictionary has almost two inches about models. I think my favorite definition is the fifth one: an example for imitation or emulation. It seems to be the most relevant to evaluation. What do evaluators do if not imitate or emulate others?

To that end, I went looking for evaluation models. Jim Popham’s book has a chapter (Chapter 2, Alternative approaches to educational evaluation) on models. Fitzpatrick, Sanders, and Worthen have numerous chapters on “approaches” (what Popham calls models). (I wonder if this is just semantics?)

Models have appeared in other blogs (not called models, though). In Life in Perpetual Beta, Harold Jarche provides this view of how organizations have evolved, calling them forms. (The image below is credited to David Ronfeldt.)

[Image: David Ronfeldt’s TIMN framework of organizational forms]

(Looks like a model to me. I wonder what evaluators could make of this.)

The reading is interesting because it is flexible. It approaches the “if it works, use it” paradigm, the one I use regularly.

I’ll just list the models Popham uses and discuss them over the next several weeks. (FYI: both Popham and Fitzpatrick et al. talk about the overlap of models.) Why is a discussion of models important, you may ask? I’ll quote Stufflebeam: “The study of alternative evaluation approaches is important for professionalizing program evaluation and for its scientific advancement and operation” (2001, p. 9).

Popham lists the following models:

  • Goal-Attainment models
  • Judgmental models emphasizing inputs
  • Judgmental models emphasizing outputs
  • Decision-Facilitation models
  • Naturalistic models

Popham does say that the model classification could have been done a different way. You will see that in the Fitzpatrick, Sanders, and Worthen volume, where they talk about the following approaches:

  • Expertise-oriented approaches
  • Consumer-oriented approaches
  • Program-oriented approaches
  • Decision-oriented approaches
  • Participant-oriented approaches

They have a nice table that does a comparative analysis of alternative approaches (Table 10.1, pp. 249-251).

Interesting reading.

References

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

Popham, W. J. (1993). Educational evaluation (3rd ed.). Boston, MA: Allyn and Bacon.

Stufflebeam, D. L. (2001). Evaluation models. New Directions for Evaluation, 89. San Francisco, CA: Jossey-Bass.