The question was raised recently: From whom am I not hearing?

Hearing from key stakeholders is important. Gathering as many perspectives as time and money will allow enhances the evaluation.

How often do you target only the recipients of the program in your evaluation, needs assessment, or focus groups?

If the only voices heard in planning the evaluation are those of the program team, what information will you miss? What valuable information is not being communicated?

I was the evaluator on a recovery program for cocaine-abusing moms and their children. The PI was a true academic and had all sorts of standardized measures to use to determine that the program was successful. The PI had not thought to ask individuals like the recipients of the program what they thought. When we brought members of the program's target audience to the table, explained the proposed program, and asked them, "How will you know that the program has worked; has been successful?", their answers did not include the standardized measures proposed by the PI. The evaluation was revised to include their comments and suggestions. Fortunately, this happened early in the planning stages, before implementation, and we were able to capture important information.

Ask yourself, "How can I seek out the voices that will capture the key perspectives of this evaluation?" Then figure out a way to include those stakeholders in the evaluation planning. Participatory evaluation at its best.

Spring break has started.

The sun is shining.

The sky is blue.

Daphne is heady.

All of this is evaluative.

Will be on holiday next week. Enjoy!

Having spent the last week reviewing two manuscripts for a journal editor, I realized that writing is an evaluative activity.

How so?

The criteria for good writing are the 5 Cs: Clarity, Coherence, Conciseness, Correctness, and Consistency.

Evaluators write: survey questions, summaries of findings, reports, journal manuscripts. If they do not employ the 5 Cs to communicate what is important to a naive audience, then the value (remember, the root of evaluation is value) of their writing is lost, often never to be reclaimed.

In a former life, I taught scientific/professional writing to medical students, residents, junior professors, and other graduate students. I found many sources that were useful and valuable to me. The conclusion I came to is that taking a scientific/professional (or non-fiction) writing course is essential for an evaluator. So I set about collecting useful (and, yes, valuable) resources. I offer them here.

Probably the single resource that every evaluator needs to have on hand is Strunk and White's slim volume, "The Elements of Style." It is now in its 4th edition (I still use the 3rd). Recently, a 50th anniversary edition was published, a fancy version of the 4th edition. Amazon carries the 50th anniversary edition as well as the 4th edition; the 3rd edition is out of print.

You also need the style guide (APA, MLA, biomedical editors, Chicago) used by the journal to which you are submitting your manuscript. Choose one. Stick with it. I have the 6th edition of the APA guide on my desk. It is online as well.

Access to a dictionary and a thesaurus (now conveniently available online and through computer software) is essential. I prefer the hard-copy Webster's (I love the feel of books), yet would recommend the online version of the Oxford English Dictionary.

There are a number of helpful writing books (in no particular order or preference):

  • Turabian, K. L. (2007). A manual for writers of research papers, theses, and dissertations. Chicago: The University of Chicago Press.
  • Thyer, B. A. (1994). Successful publishing in scholarly journals. Thousand Oaks, CA: Sage.
  • Berger, A. A. (1993). Improving writing skills. Thousand Oaks, CA: Sage.
  • Silvia, P. J. (2007). How to write a lot. Washington, DC: American Psychological Association.
  • Zeiger, M. (1999). Essentials of writing biomedical research papers. New York: McGraw-Hill.

I will share William Safire's 17 lighthearted looks at grammar and good usage another day.


Last Friday, I had the opportunity to talk with a group of graduate students about evaluation as I have seen it (for almost 30 years now) and as I currently see it.

The previous day, I finished an in-depth, three-day professional development session on differences. Now, I would guess you are wondering what these two activities have in common and how they relate to evaluation. All three, the two activities and evaluation itself, are tied together through an individual's perspective. I was looking for a teachable moment and I found one.

A response often given by evaluators when asked a question about the merit and worth of something (program, process, product, policy, personnel, etc.) is, “It all depends.”

And you wonder, “Depends on what?”

The answer is:  PERSPECTIVE.

Your experiences place you in a unique position. Your viewpoint is influenced by those experiences, as are your attitudes, your behaviors, your biases, your understanding of differences, your approach to problem solving, and your view of inquiry. All of this is perspective. And when you make decisions about something, those experiences (i.e., your perspective) affect your decisions. Various dimensions of experience and birth (the diversity wheel lists the dimensions of difference) affect what choices you make, how you approach a problem, what questions you ask, and how you interpret a situation.

The graduate students came from different employment backgrounds and differed in age, gender, marital status, ethnicity, appearance, educational background, health status, income, geographic location, and probably other ways I could not tell from looking or listening. Their views of evaluation were different. They asked different questions. The answer to each was "It all depends." And even that answer is an evaluative activity, not unlike talking to graduate students, understanding perspective, or doing evaluation.

I was asked about the need for an evaluation plan to be reviewed by the institutional review board (IRB) office. As I paused to answer, the atrocities that have occurred and are occurring throughout the world registered once again with me: the Inquisition, the Crusades, Cortés, Auschwitz, the Nuremberg trials, Sudan, to name only a few. Now, although I know there is little or no evaluation in most of these situations, humans were abused in the guise of finding the truth. (I won't capitalize truth, although some would argue that Truth was the impetus for these acts.)

So what responsibility DO evaluators have for protecting individuals who participate in the inquiry we call evaluation? The American Evaluation Association has developed and endorsed a set of Guiding Principles for all evaluators. There are five principles: Systematic Inquiry, Competence, Integrity/Honesty, Respect for People, and Responsibilities for General and Public Welfare. An evaluator must perform the systematic inquiry competently and with integrity, respecting the individuals participating and recognizing the diversity of public interests and values. This isn't a mandated code; there are no evaluation police; an evaluator will not be sanctioned if these principles are not followed (the evaluator may not get repeat work, though). These guiding principles were established to "guide" the evaluator to do the best work possible within the limitations of the job.

The IRB is there to protect the participant first and foremost, then the investigator and the institution. So although there is not a direct congruence with the IRB principles of voluntary participation, confidentiality, and minimal risk, to me, evaluators who follow the guiding principles will be able to assure participants that they will be respected and that the inquiry will be conducted with integrity. No easy task...and a lot of work.

I think evaluators have a responsibility, embedded in the guiding principles, to assure individuals participating in evaluations that their participation is voluntary, that the information they provide will remain confidential, and that what is expected of them involves minimal risk. Securing IRB approval assures participants that this is so.

Although these are two different monitoring systems (one federal, one professional), I think it is important to meet both sets of expectations.