A colleague of mine, trying to explain observation to a student, said, “Count the number of legs you see on the playground and divide by two. You have observed the number of students on the playground.” That is certainly one way to look at the topic.

I’d like to be a bit more precise than that, though. Observation is collecting information through the use of the senses–seeing, hearing, tasting, smelling, feeling. To gather observations, the evaluator must have a clearly specified protocol–a step-by-step approach to what data are to be collected and how. The evaluator typically gets the first exposure to collecting information by observation at a very young age–learning to talk (hearing); learning to feed oneself (feeling); I’m sure you can think of other examples. When the evaluator starts school and studies science, and the teacher asks the student to “OBSERVE” a phenomenon and record what is seen, the evaluator is exposed to another approach to the method of observation.

As the process becomes more sophisticated, all manner of instruments may assist the evaluator–thermometers, chronometers, GIS, etc. And for that process to be replicable (for validity), the steps become more and more precise.

Does that mean that looking at the playground, counting the legs, and dividing by two has no place? Those who decry data manipulation would agree that this form of observation yields information of questionable usefulness. Those who approach observation as an unstructured activity would disagree and say that exploratory observation could result in an emerging premise.
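For what it’s worth, the leg-counting heuristic above can be sketched as a (deliberately crude) structured protocol. This is a minimal illustration, not anyone’s published method; the function name and the two-legs assumption are mine.

```python
def students_from_legs(legs_observed: int) -> int:
    """Estimate the number of students on the playground from a leg count.

    The protocol assumes every student has exactly two legs and that only
    students are on the playground -- the source of its questionable
    usefulness as an observation method.
    """
    return legs_observed // 2

print(students_from_legs(48))  # 24 students, if the assumptions hold
```

The point of writing it down is that even a crude observation protocol makes its assumptions explicit, which is exactly what a step-by-step protocol is supposed to do.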

You will see observation as the basis for ethnographic inquiry. David Fetterman has a small volume (Ethnography: Step by Step), published by Sage, that explains how ethnography is used in field work. Take simple ethnography a step up and one can read about meta-ethnography by George W. Noblit and R. Dwight Hare. I think my anthropology friends would say that observation is a tool used extensively by anthropologists. It is a tool that can be used by evaluators as well.


How many times have you been interviewed?

How many times have you conducted an interview?

Did you notice any similarities?  Probably.

My friend and colleague Ellen Taylor-Powell has defined interviews as a method for collecting information by talking with and listening to people–a conversation, if you will. These conversations traditionally happen over the phone or face to face–with social media, they could also happen via chat, IM, or some other technology-based approach. A resource I have found useful is the Evaluation Cookbook.

Interviews can be structured (not unlike a survey with discrete responses) or unstructured (not unlike a conversation). You might also hear interviews described as consisting of closed-ended questions and open-ended questions.

Perhaps the most common place for interviews is in the hiring process (seen in personnel evaluation).

Another place for the use of interviews is in the performance review process (seen in performance evaluation).

Unless the evaluator is conducting personnel or performance evaluations, the most common place for interviews to occur is when survey methodology is employed.

Dillman (I’ve mentioned him in previous posts) has sections in his second (pg. 140-148) and third (pg. 311-314) editions that talk about the use of interviews in survey construction. He makes a point in his third edition that I think is important for evaluators to remember, and that is the issue of social desirability bias (pg. 313). Social desirability bias is the possibility that the respondent will answer with what s/he thinks the person asking the questions would want/hope to hear. Dillman goes on to say, “Because of the interaction with another person, interview surveys are more likely to produce socially desirable answers for sensitive questions, particularly for questions about potentially embarrassing behavior…”

Expect social desirability response bias with interviewing (and expect differences in social desirability when part of the interview is self-report and part is face-to-face).  Social desirability responses could (and probably will) occur when questions do not appear particularly sensitive to the interviewer; the respondent may have a different cultural perspective which increases sensitivity.  That same cultural difference could also manifest in increased agreement with interview questions often called acquiescence.

Interviews take time, cost more, and often yield a lot of data, which may be difficult to analyze. Sometimes, as with a pilot program, interviews are worth it. Interviews can be used for formative and summative evaluations. Consider whether interviews are the best source of evaluation data for the program in question.

I have six references on case study in my library. Robert K. Yin wrote two seminal books on case studies, one in 1993 (now in a 2nd edition; 1993 was the 1st edition) and the other in 1989 (now in the 4th edition; 1989 was the 1st edition). I have the 1994 edition (the 2nd edition of the 1989 book), and in it Yin says that case studies “are increasingly commonplace in evaluation research…are the preferred strategy when ‘how’ and ‘why’ questions are being posed, when the investigator has little control over events, and when the focus is on a contemporary phenomenon within some real-life context.”

So what exactly is a case study?

A case study is typically an in-depth study of one or more individuals, institutions, communities, programs, or populations. Whatever the “case,” it is clearly bounded, and what is studied is what is happening and important within those boundaries. Case studies use multiple sources of information to build the case. For a more detailed review, see Wikipedia.

There are three types of case studies:

  • Explanatory
  • Exploratory
  • Descriptive

Over the years, case method has become more sophisticated.

Brinkerhoff has developed the Success Case Method, an evaluation approach that is “easier, faster, and cheaper than competing approaches, and produces compelling evidence decision-makers can actually use.” As an evaluation approach, this method is quick and inexpensive and, most of all, produces useful results.

Robert E. Stake has taken case study beyond one to many with his recent book, Multiple Case Study Analysis.  It looks at cross-case analysis and can be used when broadly occurring phenomena need to be explored, such as leadership or management.

I’ve mentioned four of the six books; if you want to know the others, let me know.

Extension has consistently used the survey as a method for collecting information.

A survey collects information through structured questionnaires, resulting in quantitative data. Don Dillman wrote the book Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method. Although mail and individual interviews were once the norm, internet survey software has changed that.

Other methods are often more expedient, less costly, and less resource intensive than surveys. When needing to collect information, consider some of these other approaches:

  • Case study
  • Interviews
  • Observation
  • Group Assessment
  • Expert or peer review
  • Portfolio reviews
  • Testimonials
  • Tests
  • Photographs, slides, videos
  • Diaries, journals
  • Logs
  • Document analysis
  • Simulations
  • Stories
  • Unobtrusive measures

I’ll talk about these in later posts and provide resources for each.

When deciding what information collection method (or methods) to use, remember there are three primary sources of evaluation information. Those sources often dictate the methods of information collection. The three sources are:

  1. Existing information
  2. People
  3. Pictorial records and observation

When using existing information, developing a systematic approach to LOOKING at the information source is what is important.

When gathering information from people, ASKING them is the approach to use–and how that asking is structured.

When using pictorial records and observations, determine what you are looking for before you collect the information.