The question was raised recently: From whom am I not hearing?

Hearing from key stakeholders is important. Having as many perspectives as possible, as time and money allow, enhances the evaluation.

How often do you only target the recipients of the program in your evaluation, needs assessment, or focus groups?

If the only voices heard in planning the evaluation are those of the program team, what information will you miss? What valuable information is not being communicated?

I was the evaluator on a recovery program for cocaine-abusing moms and their children. The PI was a true academic and had all sorts of standardized measures to use to determine whether the program was successful. The PI had not thought to ask individuals like the recipients of the program what they thought. When we brought members of the program's target audience to the table and asked them, after explaining the proposed program, "How will you know that the program has worked; has been successful?", their answers did not include the standardized measures proposed by the PI. The evaluation was revised to include their comments and suggestions. Fortunately, this happened early in the planning stages, before implementation, and we were able to capture important information.

Ask yourself, "How can I seek out the voices that will capture the key perspectives of this evaluation?" Then figure out a way to include those stakeholders in the evaluation planning. Participatory evaluation at its best.

Spring break has started.

The sun is shining.

The sky is blue.

The daphne (Daphne odora) is heady.

All of this is evaluative.

Will be on holiday next week.  Enjoy!

Having spent the last week reviewing two manuscripts for a journal editor, I realized that writing is an evaluative activity.

How so?

The criteria for good writing are the 5 Cs: Clarity, Coherence, Conciseness, Correctness, and Consistency.

Evaluators write: survey questions, summaries of findings, reports, journal manuscripts. If they do not employ the 5 Cs to communicate to a naive audience what is important, then the value (remember, the root of evaluation is value) of their writing is lost, often never to be reclaimed.

In a former life, I taught scientific/professional writing to medical students, residents, junior professors, and other graduate students. I found many sources that were useful and valuable to me. The conclusion I came to is that training in scientific/professional (or non-fiction) writing is an essential tool for an evaluator. So I set about collecting useful (and, yes, valuable) resources. I offer them here.

Probably the single resource that every evaluator needs to have on hand is Strunk and White's slim volume called "The Elements of Style". It is now in its 4th edition; I still use the 3rd. Recently, a 50th anniversary edition was published that is a fancy version of the 4th edition. Amazon has the 50th anniversary edition as well as the 4th edition; the 3rd edition is out of print.

You also need the style guide (APA, MLA, Biomedical Editors, Chicago) that is used by the journal to which you are submitting your manuscript. Choose one. Stick with it. I have the 6th edition of the APA guide on my desk. It is available online as well.

Access to a dictionary and a thesaurus (now conveniently available online and through computer software) is essential. I prefer the hard-copy Webster's (I love the feel of books), yet would recommend the online version of the Oxford English Dictionary.

There are a number of helpful writing books (in no particular order or preference):

  • Turabian, K. L. (2007). A manual for writers of research papers, theses, and dissertations. Chicago: University of Chicago Press.
  • Thyer, B. A. (1994). Successful publishing in scholarly journals. Thousand Oaks, CA: Sage.
  • Berger, A. A. (1993). Improving writing skills. Thousand Oaks, CA: Sage.
  • Silvia, P. J. (2007). How to write a lot. Washington DC: American Psychological Association.
  • Zeiger, M. (1999). Essentials of writing biomedical research papers. New York: McGraw-Hill.

I will share William Safire's 17 lighthearted looks at grammar and good usage another day.


Last Friday, I had the opportunity to talk with a group of graduate students about evaluation as I have seen it (for almost 30 years now) and as I currently see it.

The previous day, I had finished an in-depth, three-day professional development session on differences. Now, I would guess you are wondering what these two activities have in common and how they relate to evaluation. All three are tied together through an individual's perspective. I was looking for a teachable moment, and I found one.

A response often given by evaluators when asked a question about the merit and worth of something (program, process, product, policy, personnel, etc.) is, “It all depends.”

And you wonder, “Depends on what?”

The answer is:  PERSPECTIVE.

Your experiences place you in a unique and original place. Your viewpoint is influenced by those experiences, as are your attitudes, your behaviors, your biases, your understanding of differences, your approach to problem solving, and your view of inquiry. All this is perspective. And when you make decisions about something, those experiences (i.e., your perspective) affect your decisions. Various dimensions of experience and birth (the dimensions of difference in the diversity wheel) affect what choices you make; affect how you approach a problem; affect what questions you ask; affect your interpretation of a situation.

The graduate students came from different employment backgrounds; they differed in age, gender, marital status, ethnicity, appearance, educational background, health status, income, geographic location, and probably other ways I couldn't tell from looking or listening. Their views of evaluation were different. They asked different questions, the answer to which was "It all depends." And even that (it all depends) is an evaluative activity, not unlike talking to graduate students, understanding perspective, or doing evaluation.

I was asked about the need for an evaluation plan to be reviewed by the institutional review board (IRB) office. In pausing to answer, the atrocities that have occurred and are occurring throughout the world registered once again with me…the Inquisition, the Crusades, Cortés, Auschwitz, the Nuremberg trials, Sudan, to name only a few. Now, although I know there is little or no evaluation in most of these situations, humans were abused in the guise of finding the truth. (I won't capitalize truth, although some would argue that Truth was the impetus for these acts.)

So what responsibility DO evaluators have for protecting individuals who participate in the inquiry we call evaluation? The American Evaluation Association has developed and endorsed for all evaluators a set of Guiding Principles. There are five principles: Systematic Inquiry, Competence, Integrity/Honesty, Respect for People, and Responsibilities for General and Public Welfare. An evaluator must perform the systematic inquiry competently and with integrity, respecting the individuals participating and recognizing the diversity of public interests and values. This isn't a mandated code; there are no evaluation police; an evaluator will not be sanctioned if these principles are not followed (the evaluator may not get repeat work, though). These guiding principles were established to "guide" the evaluator to do the best work possible within the limitations of the job.

The IRB is there to protect the participant first and foremost, then the investigator and the institution. So although there is not a direct congruence with the IRB principles of voluntary participation, confidentiality, and minimal risk, to me, evaluators following the guiding principles will be able to assure participants that they will be respected and that the inquiry will be conducted with integrity. No easy task…and a lot of work.

I think evaluators have a responsibility, embedded in the guiding principles, to assure individuals participating in evaluations that their participation is voluntary, that the information they provide will remain confidential, and that what is expected of them involves minimal risk. Securing IRB approval helps assure participants that this is so.

Although these are two different monitoring systems (one federal, one professional), I think it is important to meet both sets of expectations.

What do you really want to know? What would be interesting to know? What can you forget about?

When you sit down to write survey questions, keep these questions in mind.

  • What do you really want to know?

You are doing an evaluation of the impact of your program. This particular project is peripherally related to two other projects you do. You think, “I could capture all projects with just a few more questions.” You are tempted to lump them all together. DON’T.

Keep your survey focused. Keep your questions specific. Keep it simple.


  • What would be interesting to know?

There are many times when I've heard investigators and evaluators say something like, "It would be really interesting to see if abc or qrs happens." Do you really need to know this? Probably not. Interesting is not a compelling reason to include a question. So: DON'T ASK.

I always ask the principal investigator, "Is this information necessary or just nice to know? Do you want to report that finding? Will the answer to that question REALLY add to the measure of impact you are so arduously trying to capture?" If the answer is probably not, DON'T ASK.

Keep your survey focused. Keep your questions specific. Keep it simple.

  • What can you forget about?

Do you really want to know the marital status of your participants? Or if possible participants are volunteers in some other program, school teachers, and students all at the same time? My guess is that this will not affect the overall outcome of the project, nor its impact. If not, FORGET IT!

Keep your survey focused. Keep your questions specific. Keep it simple.

"Statistically significant" is a term that is often bandied about. What does it really mean? Why is it important?

First: why is it important?

It is important because it helps the evaluator make decisions based on the data gathered.

That makes sense: evaluators have to make decisions so that the findings can be used. If there isn't some way to set the findings apart from the vast morass of information, then they are only background noise. So those of us who do analysis have learned to look at the probability level (written as a "p" value, such as p = 0.05). The "p" value helps us determine whether a finding is likely to be real, not necessarily whether it is important.

Second: what does that number really mean?

Probability level answers the question: could this (fill in the blank here) have happened by chance? When evaluators look at probability levels, we want really small numbers. A small number says that the likelihood the change we observed was produced by chance alone (that is, that it is not a real change) is very small. So a small number like 0.05 means that only about 5 times out of 100 would chance alone produce a change like this; the other 95 times out of 100, something other than chance is the better explanation. You can convert a p value this way by subtracting it from 100 (100 - 5 = 95).

Convention has it that for something to be statistically significant, the p value must be 0.05 or smaller. This convention comes from academic research. Smaller numbers aren't necessarily better; they simply indicate that chance is an even less likely explanation for the change. There are software programs (StatXact, for example) that can compute the exact probability, so you will see numbers like 0.047.

Exploratory research (as opposed to confirmatory) may use a higher p value, such as p = 0.10. This suggests that the trend is moving in the desired direction. Some evaluators let the key stakeholders determine whether the probability level (p value) is at a level that indicates importance, for example, 0.062. Some would argue that 94 times out of 100 is not that much different from 95 times out of 100.
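
If you want to see the arithmetic in action, here is a minimal sketch in Python (my addition, using SciPy's paired t-test on invented before/after scores); the 0.05 and 0.10 cutoffs are simply the conventions described above.

    from scipy import stats

    # Invented pre- and post-program scores for the same eight participants.
    before = [12, 15, 11, 14, 13, 16, 12, 15]
    after = [15, 18, 14, 17, 16, 19, 15, 18]

    t_stat, p_value = stats.ttest_rel(after, before)  # paired t-test
    print(f"p = {p_value:.3f}")

    if p_value <= 0.05:
        print("Statistically significant at the conventional 0.05 level.")
    elif p_value <= 0.10:
        print("A trend worth noting under the looser exploratory cutoff of 0.10.")
    else:
        print("The change could easily have occurred by chance.")

Whatever software you use, the interpretation is the same: the smaller the p value, the less plausible chance alone is as an explanation.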


The question was raised about writing survey questions.


My short answer: Don Dillman's book, Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, is your best source. It is available from the publisher, John Wiley, or from Amazon. Chapter 4 in the book, "The basics of crafting good questions", helps to focus your thinking. Dillman (and his co-authors Jolene D. Smyth and Leah Melani Christian) make it clear that how the questions are constructed raises several methodological issues, and that not attending to those issues can affect how the question performs.

One consideration (among several) that Dillman et al. suggest be addressed every time is:

  • The order of the questions (Chapter 6 has a section on ordering questions).

I am only touching briefly on the order of questions, using Dillman et al.'s guidelines. There are 22 guidelines in Chapter 6, "From Questions to a Questionnaire"; of those 22, five refer to ordering the questions. They are:

  1. "Group related questions that cover similar topics, and begin with questions likely to be salient to nearly all respondents" (pg. 157). Doing this closely approximates a conversation, the goal in questionnaire development.
  2. “Choose the first question carefully” (pg. 158). The first question is the one which will “hook” respondents into answering the survey.
  3. “Place sensitive or potentially objectionable questions near the end of the questionnaire” (pg. 159). This placement increases the likelihood that respondents will be engaged in the questionnaire and will, therefore, answer sensitive questions.
  4. "Ask questions about events in the order the events occurred" (pg. 159). Ordering the questions from most distant to most recent occurrence, or from least important to most important activity, presents a logical flow to the respondent.
  5. “Avoid unintended question order effects” (pg. 160). Keep in mind that questions do not stand alone, that respondents may use previous questions as a foundation for the following questions. This can create an answer bias.
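
To make the ordering guidelines concrete, here is a minimal sketch (mine, not Dillman et al.'s) that applies guidelines 1 and 3 to a few invented questions: related topics stay grouped, salient topics come first, and sensitive questions move to the end.

    # Invented questions tagged by topic and sensitivity, for illustration only.
    questions = [
        {"text": "How often did you attend program sessions?", "topic": "participation", "sensitive": False},
        {"text": "What is your household income?", "topic": "demographics", "sensitive": True},
        {"text": "How useful was the program to you?", "topic": "participation", "sensitive": False},
        {"text": "What is your age?", "topic": "demographics", "sensitive": False},
    ]

    topic_order = ["participation", "demographics"]  # most salient topic first (guideline 1)

    # Non-sensitive questions first (guideline 3), grouped and ordered by topic (guideline 1).
    ordered = sorted(questions, key=lambda q: (q["sensitive"], topic_order.index(q["topic"])))

    for q in ordered:
        print(q["text"])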

When constructing surveys, remember to always have other people read your questions–especially people similar to and different from your target audience.

More on survey question development later.

I know it is Monday, not Tuesday or Wednesday. I will not have internet access Tuesday or Wednesday, and I wanted to answer a question posed to me by a colleague and longtime friend who has just begun her evaluation career.

Her question is:

What are the best methods to collect outcome evaluation data?

Good question.

The answer:  It all depends.

On what does the collection depend?

  • Your question.
  • Your use.
  • Your resources.

If your resources are endless (yeah, right…), then you can hire people; use all the time you need; and collect a wealth of data. Most folks aren't this lucky.

If you plan to use your findings to convince someone, you need to think about what will be most convincing. Legislators like the STORY that tugs at the heartstrings.

Administrators like, “Just the FACTS, ma’am.” Typically presented in a one-page format with bullets.

Program developers may want a little of both.

What question you want answered will determine how you collect the answer.

My friend Ellen Taylor-Powell, at University of Wisconsin-Extension, has developed a handout of data collection methods (see Methods for Collecting Information). This handout is in PDF form and can be downloaded. It is a comprehensive list of different data collection methods that can be adapted to answer your question within your available resources.

She also has a companion handout called Sources of Evaluation Information. I like this handout because it is clear and straightforward. I have found both very useful in the work I do.

Whole books have been written on individual methods. I can recommend some I like–let me know.

There are three topics on which I want to touch today.

  • Focus group participant composition
  • Systems diagrams
  • Evaluation report use

In reverse order:

Evaluation use: I neglected to mention Michael Quinn Patton's book on evaluation use. Patton advocated use before almost everyone else. The title of his book is Utilization-Focused Evaluation. The 4th edition is available from the publisher (Sage) or from Amazon (and if I knew how to insert links to those sites, I'd do it…another lesson…).

Systems diagrams: I had the opportunity last week to work with a group of Extension faculty all involved in Watershed Education (called the WE Team). This was an exciting experience for me. I helped them visualize their concept of the WE Team using a systems tool: drawing a systems diagram. This is an exercise in which individuals or small groups quickly draw a visualization of a system (in this case, the WE Team). This is not art; it is not realistic; it is only a representation from one perspective.

This is a useful tool for evaluators because it can help them see where there are opportunities for evaluation, where there are opportunities for leverage, and where there might be resistance to change (force fields). It also helps evaluators see relationships and feedback loops. I have done workshops with Andrea Hegedus for the American Evaluation Association on using systems tools (of which a systems diagram is one) in evaluating multi-site systems. I used the software called Inspiration to create the WE Team diagram. Inspiration has a free 30-day download, and it is inexpensive (the download for V. 9 is $69.00).
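
If you would rather sketch a quick systems diagram with code instead of Inspiration, a minimal example using the Python graphviz package might look like the following. The elements and the feedback loop shown here are invented for illustration; this is not the WE Team's actual diagram.

    from graphviz import Digraph  # pip install graphviz; the Graphviz binaries must also be installed

    # Invented elements of a watershed education system, for illustration only.
    diagram = Digraph("watershed_education", format="png")
    diagram.node("faculty", "Extension faculty")
    diagram.node("programs", "Education programs")
    diagram.node("landowners", "Landowners")
    diagram.node("watershed", "Watershed condition")

    diagram.edge("faculty", "programs", label="develop")
    diagram.edge("programs", "landowners", label="reach")
    diagram.edge("landowners", "watershed", label="practices affect")
    diagram.edge("watershed", "faculty", label="monitoring feeds back")  # the feedback loop

    diagram.render("systems_diagram_sketch")  # writes systems_diagram_sketch.png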

Focus group participant composition.

The composition of focus groups is very important if you want to get data that you can use AND that answers your study question(s). Focus groups tend to be homogeneous, with variations to allow for differing opinions. Since the purpose of the focus group is to elicit in-depth opinions, it is important to compose the group with similar demographics (depending on your topic) in

  • age
  • occupation
  • use of program
  • gender
  • background

Comfort and use drive the composition. More on this later.