Having spent the last week reviewing two manuscripts for a journal editor, I've come to see clearly that writing is an evaluative activity.

How so?

The criteria for good writing are the 5 Cs: Clarity, Coherence, Conciseness, Correctness, and Consistency.

Evaluators write: they write survey questions, summaries of findings, reports, and journal manuscripts. If they do not employ the 5 Cs to communicate to a naive audience what is important, then the value (remember, the root of evaluation is value) of their writing is lost, often never to be reclaimed.

In a former life, I taught scientific/professional writing to medical students, residents, junior professors, and other graduate students. I found many sources that were useful and valuable to me. The conclusion I came to is that a scientific/professional (or non-fiction) writing course is an essential tool for an evaluator. So I set about collecting useful (and, yes, valuable) resources. I offer them here.

Probably the single resource that every evaluator needs to have on hand is Strunk and White's slim volume, "The Elements of Style." It is now in its 4th edition, though I still use the 3rd. Recently, a 50th anniversary edition was published that is a fancy version of the 4th edition. Amazon carries the 50th anniversary edition as well as the 4th edition; the 3rd edition is out of print.

You also need the style guide (APA, MLA, Biomedical Editors, Chicago) that is used by the journal to which you are submitting your manuscript. Choose one. Stick with it. I have the 6th edition of the APA guide on my desk. It is available online as well.

Access to a dictionary and a thesaurus (now conveniently available online and through computer software) is essential. I prefer the hard-copy Webster's (I love the feel of books), yet would recommend the online version of the Oxford English Dictionary.

There are a number of helpful writing books (in no particular order or preference):

  • Turabian, K. L. (2007). A manual for writers of research papers, theses, and dissertations. Chicago: University of Chicago Press.
  • Thyer, B. A. (1994). Successful publishing in scholarly journals. Thousand Oaks, CA: Sage.
  • Berger, A. A. (1993). Improving writing skills. Thousand Oaks, CA: Sage.
  • Silvia, P. J. (2007). How to write a lot. Washington DC: American Psychological Association.
  • Zeiger, M. (1999). Essentials of writing biomedical research papers. New York: McGraw-Hill.

I will share William Safire's 17 lighthearted looks at grammar and good usage another day.


Welcome back.  It is Tuesday.

Some folks have asked me, now that I've pointed out that all of you are evaluators, where I will take this column. That was food for thought…and although I've got a topic ready to go, I'm wondering if jumping right into the work of evaluation is the best place to go next. One place I did go was to update the "About" tab on this page…

Maybe thinking some more about what you evaluated today; maybe thinking about a bigger picture of evaluation; maybe just being glad that the sun is shining is enough (although the subfreezing temperatures remind me of Minnesota without the snow). The saying goes, "Minnesota has two seasons: green and white." Maybe Oregon has two seasons: dry and wet. That is an evaluative question, by the way. Hmmm…thinking about evaluation, having an evaluation question of the week sounds like a good idea. What's yours, small or large? I may not have an answer, but I will have an idea.

OK, now that I've dealt with the evaluative question of the day, I think it is time to move on to more substance, like "what exactly IS program evaluation?" Good question. If we are going to have this conversation, then we need to be using the same language.

First, let me address why the link to Wikipedia is on the far right in my Blogroll list. I've learned that Wikipedia is a readily available, general reference that gets folks started understanding a subject. It is NOT the definitive word on that subject. Wikipedia (see link on the right) describes program evaluation as "…a systematic method for collecting, analyzing, and using information to answer basic questions about projects, policies, and programs." Well…yes…except that Wikipedia seems to be defining evaluation when it includes "projects and policies." Program evaluation deals with programs. Wikipedia does have an entry for evaluation as well as an entry for program evaluation. Read both.

Evaluation can be applied to projects, policies, personnel, processes, performances, proposals, products AND programs. I won't talk much about personnel, performances, proposals, or products. Projects may be another word for programs; policies usually result in programs; and processes are often part of programs, so those three may come up from time to time.

Most of what this blog will address is program evaluation, because most of you (including me) have programs that need evaluating. When I talk about program evaluation, I am talking about "…determining the merit, worth, or value of (a program)." (Michael Scriven uses this definition in his book, Evaluation Thesaurus, 1991, Sage Publications.)

It is available through the publisher.

So, for me, what you need to know about program evaluation is this:

  • The root of evaluation is value (the OED lists the etymology as [a. Fr. évaluation, f. évaluer, f. é- = es- (: L. ex) out + value]).
  • Program evaluation IS systematic.
  • Program evaluation DOES collect, analyze, and utilize information.
  • Program evaluation ATTEMPTS to determine the merit, worth, or value of a program.
  • Program evaluation ANSWERS this question:

"What difference does this program make in the lives and well-being of (fill in the blank here: citizens of Oregon, my 4-H club, residents of the watershed, you get the idea)?"

NOTE: I talk about “lives and well-being” because most programs are delivered to individuals who will be CHANGED as a result of participating in the program, i.e., experience a difference.

For those of us in Extension, when we do evaluation we are trying to determine if we "improved something"; we are not trying to "prove" that what we did accomplished something. Peter Bloom always said, "Extension is out to improve something (attribution), not prove something (causation)." We are looking for attribution, not causation.

Many references exist that talk more about what program evaluation is. My favorite reference is by Jody Fitzpatrick, Jim Sanders, and Blaine Worthen and is called Program Evaluation: Alternative Approaches and Practical Guidelines (2004, Pearson Education).

It, too, is available through the publisher.