I have six references on case study in my library. Robert K. Yin wrote two seminal books on case studies, one in 1993 (now in its 2nd edition) and the other in 1989 (now in its 4th edition). I have the 1994 edition (the 2nd edition of the 1989 book), and in it Yin says that case studies “are increasingly commonplace in evaluation research…are the preferred strategy when ‘how’ and ‘why’ questions are being posed, when the investigator has little control over events, and when the focus is on a contemporary phenomenon within some real-life context.”

So what exactly is a case study?

A case study is typically an in-depth study of one or more individuals, institutions, communities, programs, or populations. Whatever the “case,” it is clearly bounded, and what is studied is what is happening and important within those boundaries. Case studies use multiple sources of information to build the case. For a more detailed review, see Wikipedia.

There are three types of case studies:

  • Explanatory
  • Exploratory
  • Descriptive

Over the years, the case study method has become more sophisticated.

Brinkerhoff has developed the Success Case Method, an evaluation approach that is “easier, faster, and cheaper than competing approaches, and produces compelling evidence decision-makers can actually use.” In short, this method is quick, inexpensive, and, most of all, produces useful results.

Robert E. Stake has taken the case study beyond the single case with his recent book, Multiple Case Study Analysis. It looks at cross-case analysis and can be used when broadly occurring phenomena, such as leadership or management, need to be explored.

I’ve mentioned four of the six books; if you want to know the others, let me know.

Extension has consistently used the survey as a method for collecting information.

A survey collects information through structured questionnaires, resulting in quantitative data. Don Dillman wrote the book Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Although mail and individual interviews were once the norm, internet survey software has changed that.

Other ways are often more expedient, less costly, and less resource intensive than surveys. When needing to collect information, consider some of these other ways:

  • Case study
  • Interviews
  • Observation
  • Group Assessment
  • Expert or peer review
  • Portfolio reviews
  • Testimonials
  • Tests
  • Photographs, slides, videos
  • Diaries, journals
  • Logs
  • Document analysis
  • Simulations
  • Stories
  • Unobtrusive measures

I’ll talk about these in later posts and provide resources for each.

When deciding what information collection method (or methods) to use, remember there are three primary sources of evaluation information. Those sources often dictate the methods of information collection. The three sources are:

  1. Existing information
  2. People
  3. Pictorial records and observation

When using existing information, what matters is developing a systematic approach to LOOKING at the information source.

When gathering information from people, ASKING them is the approach to use–and how that asking is structured matters.

When using pictorial records and observations, determine what you are looking for before you collect information.

What do you really want to know? What would be interesting to know? What can you forget about?

When you sit down to write survey questions, keep these questions in mind.

  • What do you really want to know?

You are doing an evaluation of the impact of your program. This particular project is peripherally related to two other projects you do. You think, “I could capture all projects with just a few more questions.” You are tempted to lump them all together. DON’T.

Keep your survey focused. Keep your questions specific. Keep it simple.


  • What would be interesting to know?

There are many times when I’ve heard investigators and evaluators say something like, “It would be really interesting to see if abc or qrs happens.” Do you really need to know this? Probably not. Interesting is not a compelling reason to include a question. So–DON’T ASK.

I always ask the principal investigator, “Is this information necessary or just nice to know? Do you want to report that finding? Will the answer to that question REALLY add to the measure of impact you are so arduously trying to capture?” If the answer is probably not, DON’T ASK.

Keep your survey focused. Keep your questions specific. Keep it simple.

  • What can you forget about?

Do you really want to know the marital status of your participants? Or whether possible participants are volunteers in some other program, school teachers, and students all at the same time? My guess is that this will not affect the overall outcome of the project, nor its impact. If not, FORGET IT!

Keep your survey focused. Keep your questions specific. Keep it simple.

The question was raised about writing survey questions.


My short answer is that Don Dillman’s book, Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, is your best source. It is available from the publisher, John Wiley, or from Amazon. Chapter 4, “The basics of crafting good questions,” helps to focus your thinking. Dillman (and his co-authors Jolene D. Smyth and Leah Melani Christian) make it clear that how questions are constructed raises several methodological issues, and not attending to those issues can affect how the question performs.

One consideration (among several) that Dillman et al. suggest be attended to every time is:

  • The order of the questions (Chapter 6 has a section on ordering questions).

I will touch only briefly on the order of questions, using Dillman et al.’s guidelines. There are 22 guidelines in Chapter 6, “From Questions to a Questionnaire”; of those 22, five refer to ordering the questions. They are:

  1. “Group related questions that cover similar topics, and begin with questions likely to be salient to nearly all respondents” (pg. 157). Doing this closely approximates a conversation, the goal in questionnaire development.
  2. “Choose the first question carefully” (pg. 158). The first question is the one which will “hook” respondents into answering the survey.
  3. “Place sensitive or potentially objectionable questions near the end of the questionnaire” (pg. 159). This placement increases the likelihood that respondents will be engaged in the questionnaire and will, therefore, answer sensitive questions.
  4. “Ask questions about events in the order the events occurred” (pg. 159). Ordering the questions from most distant to most recent occurrence, or from least important to most important activity, presents a logical flow to the respondent.
  5. “Avoid unintended question order effects” (pg. 160). Keep in mind that questions do not stand alone, that respondents may use previous questions as a foundation for the following questions. This can create an answer bias.

When constructing surveys, remember to always have other people read your questions–especially people similar to and different from your target audience.

More on survey question development later.

I know it is Monday, not Tuesday or Wednesday. I will not have internet access Tuesday or Wednesday, and I wanted to answer a question posed to me by a colleague and longtime friend who has just begun her evaluation career.

Her question is:

What are the best methods to collect outcome evaluation data?

Good question.

The answer:  It all depends.

On what does the collection depend?

  • Your question.
  • Your use.
  • Your resources.

If your resources are endless (yeah, right…), then you can hire people, use all the time you need, and collect a wealth of data. Most folks aren’t this lucky.

If you plan to use your findings to convince someone, you need to think about what will be most convincing. Legislators like the STORY that tugs at the heart strings.

Administrators like, “Just the FACTS, ma’am.” Typically presented in a one-page format with bullets.

Program developers may want a little of both.

The question you want answered will determine how you collect the answer.

My friend Ellen Taylor-Powell, at the University of Wisconsin Extension, has developed a handout of data collection methods (see: Methods for Collecting Information). This handout is in PDF form and can be downloaded. It is a comprehensive list of different data collection methods that can be adapted to answer your question within your available resources.

She also has a companion handout called Sources of Evaluation Information. I like this handout because it is clear and straightforward. I have found both very useful in the work I do.

Whole books have been written on individual methods. I can recommend some I like–let me know.

There are three topics on which I want to touch today.

  • Focus group participant composition
  • Systems diagrams
  • Evaluation report use

In reverse order:

Evaluation use: I neglected to mention Michael Quinn Patton’s book on evaluation use. Patton has advocated for use longer than almost anyone else. The title of his book is Utilization-Focused Evaluation. The 4th edition is available from the publisher (Sage) or from Amazon (and if I knew how to insert links to those sites, I’d do it…another lesson…).

Systems diagrams: I had the opportunity last week to work with a group of Extension faculty all involved in Watershed Education (called the WE Team). This was an exciting experience for me. I helped them visualize their concept of the WE Team by drawing a systems diagram. This is an exercise whereby individuals or small groups quickly draw a visualization of a system (in this case the WE Team). This is not art; it is not realistic; it is only a representation from one perspective.

This is a useful tool for evaluators because it can help them see where there are opportunities for evaluation, where there are opportunities for leverage, and where there might be resistance to change (force fields). It also helps evaluators see relationships and feedback loops. I have done workshops on using systems tools in evaluating multi-site systems (of which a systems diagram is one tool) with Andrea Hegedus for the American Evaluation Association. Although this isn’t the diagram the WE Team created, it is an example of what a systems diagram could look like. I used the software called Inspiration to create the WE Team diagram. Inspiration has a free 30-day download, and it is inexpensive (the download for v. 9 is $69.00).
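Inspiration is a drawing tool; if you would rather sketch a diagram in code, the open-source graphviz Python package is one option. Here is a minimal sketch (the nodes and feedback loop are invented for illustration, not the WE Team’s actual diagram):

```python
# A minimal systems-diagram sketch using the graphviz Python package.
# Requires: pip install graphviz, plus the Graphviz binaries on your PATH.
# The nodes and edges below are invented for illustration only.
from graphviz import Digraph

dot = Digraph(comment="Example program system")
dot.node("P", "Program activities")
dot.node("O", "Outcomes")
dot.node("E", "Evaluation")
dot.node("R", "Resistance to change (force field)")

dot.edge("P", "O", label="produces")
dot.edge("O", "E", label="informs")
dot.edge("E", "P", label="feedback loop")   # evaluation feeds back into the program
dot.edge("R", "P", label="constrains")

dot.render("system_diagram", format="png")  # writes system_diagram.png
```

Even a rough sketch like this makes the feedback loops and the likely pressure points visible at a glance.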

Focus group participant composition.

The composition of focus groups is very important if you want to get data that you can use AND that answers your study question(s). Focus groups tend to be homogeneous, with variations to allow for differing opinions. Since the purpose of the focus group is to elicit in-depth opinions, it is important to compose the group with similar demographics (depending on your topic) in:

  • age
  • occupation
  • use of program
  • gender
  • background

Comfort and use drive the composition. More on this later.

Hi–Today is Wednesday, not Tuesday…I’m still learning about this technology. Thanks for staying with me.

Today I want to talk about a checklist called The Fantastic Five. (Thank you, Amy Germuth, for bringing this to my attention.) This checklist presents five questions against which to judge any and all survey questions you have written.

The five questions are:

1. Can the question be consistently understood?

2. Does the question communicate what constitutes a good answer?

3. Do all respondents have access to the information needed to answer the question?

4. Is the question one which all respondents will be willing to answer?

5. Can the question be consistently communicated to respondents?

“Sure,” you say, “all my survey questions do that.” Do they really?

Let me explain.

1. When you ask about an experience, think about the focus of the question. Is it specific for what you want? I used the question, “When were you first elected to office?” and got all manner of answers. I wanted the year elected. I got months (“7 months ago”), years (both “2 years ago” and “in 2006”), and words (“at the last election”). A more specific question would have been, “In what year were you first elected to office?”

2. When I asked the election question above, I did not communicate what constitutes a good answer. I wanted a specific year so that I could calculate how long the respondent had been an elected official. Fortunately, I found this out in a pilot test, so the final question gave me the answers I wanted.
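To see why the specific year matters, think about tabulating the answers afterward. Here is a minimal sketch (the responses below are invented, like the kinds of answers I received):

```python
# Hypothetical answers to "When were you first elected to office?"
vague_answers = ["7 months ago", "2 years ago", "at the last election"]
# Hypothetical answers to "In what year were you first elected to office?"
year_answers = ["2006", "2004", "2008"]

survey_year = 2010  # assume the survey was administered in 2010

# The specific form computes tenure directly:
for answer in year_answers:
    tenure = survey_year - int(answer)
    print(f"First elected in {answer}: about {tenure} years in office")

# The vague answers each need hand-coding before any calculation:
for answer in vague_answers:
    try:
        int(answer)
    except ValueError:
        print(f"Cannot compute tenure from {answer!r}")
```

One wording change turns a hand-coding chore into simple arithmetic.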

3. If you are looking for information that is located on supplemental documents (for example, tax forms), let the respondent know that these documents will be needed to answer your survey. Without the supporting documentation ready, respondents will guess, reducing the reliability of your data.

4. Often, we must ask questions of a sensitive nature, which could be seen as a violation of privacy. Using Amy’s example, “Have you ever been tested for HIV?” involves a sensitive subject. Many people will not want to answer that question because of the implications. Asking instead, “Have you donated blood in the last 15 years?” gets you much the same information without violating privacy: the Red Cross began testing donated blood for HIV in 1985.

5. This point is especially important with mixed-mode surveys (paper and interview, for example). Again, being as specific as possible is the key. When asking a closed-ended question, make sure that the options included cover what you want to know. Also, make sure that the method of administration doesn’t affect the answer you want.

This post was based on a presentation by Amy Germuth, President of EvalWorks, LLC at a Coffee Break webinar sponsored by the American Evaluation Association. The full presentation can be found at:

http://comm.eval.org/EVAL/EVAL/Resources/ViewDocument/Default.aspx?DocumentKey=53951031-ef7a-4036-bc8f-3563f3946026.

The best source I’ve found for survey development is Don Dillman’s 3rd edition of “Internet, mail, and mixed-mode surveys: The tailored design method,” published in 2009 and available from the publisher (Wiley).

Welcome back!   For those of you new to this blog–I post every Tuesday, rain or shine…at least I have for the past 6 weeks…:) I guess that is MY new year’s resolution–write here every week; on Tuesdays…now to today’s post…

What one thing are you going to learn this year about evaluation?

Something about survey design?

OR logic modeling?

OR program planning?

OR focus groups?

OR…(fill in the blank and let me know…)

A colleague of mine asked me the other day about focus groups.

Specifically, the question was, “What makes a good focus group question?”

I went to Dick Krueger and Mary Anne Casey’s book (Focus Groups, 3rd ed., Sage Publications, 2000). On page 40, they have a section called “Qualities of Good Questions.” These make sense. They say: Good questions…


  1. …sound conversational.
  2. …use words participants would use.
  3. …are easy to say.
  4. …are clear.
  5. …are short.
  6. …are open-ended.
  7. …are one dimensional.
  8. …include good directions.

Let’s explore these a bit.

  1. Since focus groups are a social experience (albeit a data-gathering one), conversational questions help set an informal tone.
  2. If participants don’t/can’t understand your questions (because you use jargon, technical terms, etc.), you won’t get good information. Without good information, your focus group will not help answer your inquiry.
  3. You don’t want to stumble over the words, so avoid complicated sentences.
  4. Make sure your participants know what you are asking. Long introductions can be confusing, not clarifying. Messages may be mixed and thus interpreted in different ways. All this results in information that doesn’t answer your inquiry.
  5. Like being clear, short questions tend to avoid ambiguity and yield good data.
  6. To quote Dick and Mary Anne, “Open-ended questions are a hallmark of focus group interviewing.” You want an opinion. You want an explanation. You want rich description. Yes/No doesn’t give you good data.
  7. You might think synonyms add richness to questioning; in fact, using synonyms confuses the participant. Confused participants yield ambiguous data. Avoid synonyms–keeping questions one-dimensional keeps them clear.
  8. Participants need clear instructions when asked to do something in the focus group. “Make a list” needs to have “on the piece of paper in front of you” added. A list in the participant’s head may get lost, and you lose the data.

Before you convene your focus group, have several individuals (3–5) who are similar to, but not included in, your target audience review the focus group questions. It is always a good idea to pilot any question you use to gather data.

Ellen Taylor-Powell (at University of Wisconsin Extension) has a Quick Tips sheet on focus groups for more information. To access it go to: http://www.uwex.edu/ces/pdande/resources/pdf/Tipsheet5.pdf