Harold Jarche says in his April 21 post, “What I’ve learned about blogging is that you have to do it for yourself. Most of my posts are just thoughts that I want to capture.”  What an interesting way to look at blogging.  Yes, there is content; yes, there is substance.  Most of all, there are captured thoughts.  Thoughts committed to “paper” before they fly away.  How many times have you said to yourself, “if only…” because you don’t remember what you were thinking, where you were going?  It may be a function of age; it may be a function of the times; it may be a function of other things as well (too little sleep, too much information, lack of focus).

When I blog on evaluation, I want to provide content that is meaningful.  I want to provide substance (as I understand it) in the field of evaluation.  Most of all, I want to capture what I’m thinking at the moment (like now).  Last week was a good example of capturing thoughts.  I wasn’t making up the rubric content; it is real.  All evaluation needs to have criteria against which the “program” is judged for merit and worth.  How else can you determine  the value of something?  So I ask you:  What criteria do you use in the moment you decide?  (and a true evaluator will say, “It depends…”)

A wise man (Elie Wiesel) said, “A man’s (sic) life, really, is not made up of years but of moments, all of which are fertile and unique.”  Even though he has not explicitly laid out his rubric, it is clear what gives moments merit and worth: they are “fertile and unique.”  An interesting way to look at life, eh?

 

Jarche gives us a 10-year update on his experience blogging.  He is asking a question I’ve been asking: what has changed, and what has he learned, in the past 10 years?  He talks about metrics (spammers and published posts).  I can do that.  He doesn’t talk about analytics (although I’m sure he could), and I don’t want to talk about analytics, either.  Some comments on my blog suggest that I look at length of time spent on a page; that seems like a reasonable metric.  What I really want to hear is what has changed (Jarche describes what has changed as perpetual beta).  Besides the constantly changing frontier of social media, I go back to the comment by Elie Wiesel: moments that are fertile and unique.  How many can you say you’ve had today?  One will make my day; one will get my gratitude.  Today I am grateful for being able to blog.

A rubric is a way to make criteria (or standards) explicit, and it does that in writing so that there can be no misunderstanding.  It is found in many evaluative activities, especially the assessment of classroom work.  (Misunderstanding is still possible because the English language is often not clear, something I won’t get into today; suffice it to say that a wise woman said words are important.  Keep that in mind when crafting a rubric.)

 

This week there were many events that required rubrics.  The rubrics may have been implicit; they certainly were not explicit.  Explicit rubrics were needed.

 

I’ll start with apologies for the political nature of today’s post.

Yesterday’s activity in the US Senate is an example of where a rubric would be valuable.  Gabby Giffords said it best:

Certainly, an implicit rubric for this event can be found in this statement:

Only it was not used.  When there are clear examples of inappropriate behavior, behavior that my daughters’ kindergarten teacher said was mean and not nice, a rubric exists.  Simple rubrics are understood by five-year-olds (was that behavior mean OR was that behavior nice?).  Obviously 46 senators could only hear the NRA; they didn’t hear that the behavior (school shootings) was mean.

Boston provided us with another example of the mean vs. nice rubric.  Bernstein got the concept of mean vs. nice.

Music is nice; violence is mean.

Helpers are nice; bullying is mean. 

There were lots of rubrics, however implicit, for that event.  The NY Times reported that helpers (my word) ran TOWARD those in need, not away from the site of the explosion (violence).  There were many helpers.  A rubric existed, however implicit.

I want to close with another example of a rubric: 

I’m no longer worked up, just determined, and for that I need a rubric.  This image may not give me the answer; it does, however, give me pause.

 

For more information on assessment and rubrics see: Walvoord, B. E. (2004).  Assessment clear and simple.  San Francisco: Jossey-Bass.


In a conversation with a colleague about the need for IRB review when what was being conducted was evaluation, not research, I was struck by two things:

  1. I needed to discuss the protections provided by IRB  (the next timely topic??) and
  2. the difference between evaluation and research needed to be made clear.

Leaving number 1 for another time, number 2 is the topic of the day.

A while back, AEA365 did a post on the difference between evaluation and research (some of which is included below) from a graduate student’s perspective.  Perhaps providing other resources would be valuable.

To have evaluation grouped with research is at worst a travesty; at best unfair.  Yes, evaluation uses research tools and techniques.  Yes, evaluation contributes to a larger body of knowledge (and in that sense seeks truth, albeit contextual).  Yes, evaluation needs to have institutional review board documentation.  So in many cases, people could be justified in saying evaluation and research are the same.

NOT.

Carol Weiss (1927-2013; she died in January) wrote extensively on this difference and makes the distinction clearly.  Weiss’s first edition of Evaluation Research was published in 1972.  She revised this volume in 1998 and issued it under the title Evaluation.  (Both have subtitles.)

She says that evaluation applies social science research methods, and she makes the case that it is the intent of the study that makes the difference between evaluation and research.  She lists the following differences (pp. 15-17, 2nd ed.):

  1. Utility;
  2. Program-driven questions;
  3. Judgmental quality;
  4. Action setting;
  5. Role Conflicts;
  6. Publication; and
  7. Allegiance.

 

(For those of you who are still skeptical, she also lists similarities.)  Understanding the difference between evaluation and research matters.  I recommend her books.

Gisele Tchamba, who wrote the AEA365 post, says the following:

  1. Know the difference.  I came to realize that practicing evaluation does not preclude doing pure research. On the contrary, the methods are interconnected but the aim is different (I think this mirrors Weiss’s concept of intent).
  2. The burden of explaining. Many people in academia vaguely know the meaning of evaluation. Those who think they do mistake evaluation for assessment in education. Whenever I meet with people whose understanding of evaluation is limited to educational assessment, I use Scriven’s definition and emphasize words like “value, merit, and worth”.
  3. Distinguishing between evaluation and social science research.  Theoretical and practical experiences are helpful ways to distinguish between the two disciplines. Extensive reading of evaluation literature helps to see the difference.

She also cites a Trochim definition that is worth keeping in mind, as it captures the various unique qualities of evaluation.  Carol Weiss mentioned them all in her list (above):

  •  “Evaluation is a profession that uses formal methodologies to provide useful empirical evidence about public entities (such as programs, products, performance) in decision making contexts that are inherently political and involve multiple often conflicting stakeholders, where resources are seldom sufficient, and where time-pressures are salient”.

Resources:

What do I know that they don’t know?
What do they know that I don’t know?
What do all of us need to know that few of us know?

These three questions have buzzed around my head for a while in various formats.

When I attend a conference, I wonder.

When I conduct a program, I wonder, again.

When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.

Thinking about these questions, I had these ideas:

  • I see the first question relating to capacity building;
  • The second question relating to engagement; and
  • The third question (which relates to the first two) relating to cultural competence.

After all, aren’t both of these (capacity building and engagement) about relating to a “foreign country” and a different culture?

How does all this relate to evaluation?  Read on…

Premise:  Evaluation is an everyday activity.  You evaluate every day, all the time; you call it making decisions.  Every time you make a decision, you are building capacity in your ability to evaluate.  Sure, some of those decisions may need to be revised.  Sure, some of those decisions may just yield “negative” results.  Even so, you are building capacity.  AND you share that knowledge: with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store.  That is building capacity.  Building capacity can be systematic, organized, sequential.  Sometimes formal, scheduled, deliberate.  It is sharing “What do I know that they don’t know?” (in the hope that they too will know it and use it).

Premise:  Everyone knows something.  In knowing something, evaluation happens, because people made decisions about what is important and what is not.  To really engage (not just outreach, which is much of what Extension does), one needs to “do as” the group that is being engaged.  To do anything else (“doing to” or “doing with”) is simply outreach, and little or no knowledge is exchanged.  That doesn’t mean knowledge isn’t distributed; Extension has been doing that for years.  It just means the assumption (and you know what assumptions do) is that only the expert can distribute knowledge.  Who is to say that the group (target audience, participants) aren’t expert in at least part of what is being communicated?  They probably are.  It is the idea that they know something that I don’t know (and I would benefit from knowing).

Premise:  Everything and everyone are connected.  Being prepared is the best way to learn something.  Being prepared by understanding culture (I’m not talking only about the intersection of race and gender; I’m talking about all the stereotypes you carry with you all the time) reinforces connections.  Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threats.  And that is an evaluative task.  Think about it.  I think it captures the “What do all of us need to know that few of us know?” question.


CAVEAT:  This may be too political for some readers.

Sometimes, there are ideas that appear in other blogs that may or may not be directly related to my work in evaluation.  Because I read them, I see evaluative relations and think they are important enough to pass along.  Today is one of those days.  I’ll try to connect the dots  between what I read and share here and evaluation.  (For those of you who are interested in the Connect the Dots, a major event day on climate change and weather on May 5, 2012, go here.)

First, Valerie Williams, in an AEA365 blog post on April 18, 2012, says, “…Many environmental education programs struggle with the question of whether environmental education is a means to an end (e.g. increased stewardship) or an end itself. This question has profound implications for how programs are evaluated, and specifically the measures used to determine program success.”

I think that many educational programs (whether environmentally focused or not) struggle with this question.  Is the program a means to an end or the end itself?  I am reminded of programs which are instituted for cost savings and then the program designers want that program evaluated.  Means or end?

Williams also offers comments about evaluability assessment, the evaluation task that helps evaluators decide whether to evaluate a new program, especially if that new program’s readiness for evaluation is in question.  (She provides resources if you are interested.)  She offers reasons for conducting an evaluability assessment.  Specifically:

  • Surfacing disagreements among stakeholders about the program theory, design and/or structure;
  • Highlighting the need for changes in program design; and
  • Clarifying the type of evaluation most helpful to the program.

Evaluability assessment is a topic for future discussion.

Second, a colleague offered the following CDC reference and says, “The purpose of this workbook is to help public health program managers, administrators, and evaluators develop an effective evaluation plan in the context of the planning process. It is intended to assist in developing an evaluation plan but is not intended to serve as a complete resource on how to implement program evaluation.”  I offer it here because I know that evaluation plans are often added after the program has been implemented.  Although its focus is public health programs, one source familiar with this work commented that there is enough in the workbook that can be applied to a variety of settings.  Check it out; the link is below.

 

Next, Nigerian novelist Chimamanda Ngozi Adichie is quoted as saying, “The single story creates stereotypes, and the problem with stereotypes is not that they are untrue, but that they are incomplete. They make one story become the only story.”

Given that

  • Extension uses story to evaluate a lot of programs; and
  • Story is used to convince legislators of Extension’s value; and
  • Story, if done right, is a powerful tool;

Then it behooves us all to remember this: are we using the story because it captures the effect, or because it is the only story?  If it is the only story, is it promoting a stereotype?  Adichie, though a novelist, may be an evaluator at heart.

Finally, there is this quote, also from an AEA365 post (Steve Mayer): “There are elements of Justice and Injustice everywhere – in society, in reform efforts, and in the evaluation of reform efforts. The choice of outcomes to be assessed is a political act. ‘Noticing progress’ probably takes us further than ‘measuring impact,’ always being mindful of who benefits.”

We often are stuck on “measuring impact”; after all, isn’t that what everyone wants to know?  If world peace is the ultimate impact, then what is the likelihood of measuring that?  I think that “noticing progress” (i.e., change) will take us much further because of the justice it can capture (or not, and that is telling).  And by capturing “noticing progress,” we can make explicit who benefits.

This runs long today.

 

My oldest daughter graduated from high school Monday.  Now she is facing the reality of life after high school: the emotional letdown, the lack of structure, the loss of focus.  I remember what it was like to commence…another word for beginning.  I think I was depressed for days.  The question becomes evaluative when one thinks of planning, which is what she has to do now.  In planning, she needs to think:  What excites me?  What are my passions?  How will I accomplish the what?  How will I connect again to the what?  How will I know I’m successful?

Ellen Taylor-Powell,  former Distinguished Evaluation Specialist at the University of Wisconsin Extension, talks about planning on the professional development website at UWEX.  (There are many other useful publications on this site…I urge you to check them out.)  This publication has four sections:  focusing the evaluation, collecting the information, using the information, and managing the evaluation.  I want to talk more about focusing the evaluation–because that is key when beginning, whether it is the next step in your life, the next program you want to implement, or the next report you want to write.

This section of the publication asks you to identify what you are going to evaluate, the purpose of the evaluation, who will use the evaluation and how, what questions you want to answer, and what information you need to answer those questions; to develop a timeline; and, finally, to identify what resources you will need.  I see this as puzzle assembly, one where you do not necessarily have a picture to guide you.  Not unlike a newly commenced graduate, finding a focus is putting together a puzzle; you won’t know what the picture is, or where you are going, until you focus and develop a plan.  For me, that means putting the puzzle together.  It means finding the what and the so what.  It is always the first place to commence.

Having spent the last week reviewing two manuscripts for a journal editor, I came to see clearly that writing is an evaluative activity.

How so?

The criteria for good writing are the 5 Cs: Clarity, Coherence, Conciseness, Correctness, and Consistency.

Evaluators write: survey questions, summaries of findings, reports, journal manuscripts. If they do not employ the 5 Cs to communicate to a naive audience what is important, then the value (remember, the root of evaluation is value) of their writing is lost, often never to be reclaimed.

In a former life, I taught scientific/professional writing to medical students, residents, junior professors, and other graduate students. I found many sources that were useful and valuable to me. The conclusion I came to is that a scientific/professional (or non-fiction) writing course is an essential tool for an evaluator. So I set about collecting useful (and, yes, valuable) resources. I offer them here.

Probably the single resource that every evaluator needs to have on hand is Strunk and White’s slim volume, “The Elements of Style.” It is now in its 4th edition; I still use the 3rd. Recently, a 50th anniversary edition was published that is a fancy version of the 4th edition. Amazon has the 50th anniversary edition as well as the 4th edition; the 3rd edition is out of print.

You also need the style guide (APA, MLA, Biomedical Editors, Chicago) that is used by the journal to which you are submitting your manuscript. Choose one. Stick with it. I have the 6th edition of the APA guide on my desk. It is online as well.

Access to a dictionary and a thesaurus (both now conveniently available online and through computer software) is essential. I prefer the hard-copy Webster’s (I love the feel of books), yet would recommend the online version of the Oxford English Dictionary.

There are a number of helpful writing books (in no particular order or preference):

  • Turabian, K. L. (2007). A manual for writers of research papers, theses, and dissertations. Chicago: The University of Chicago Press.
  • Thyer, B. A. (1994). Successful publishing in scholarly journals. Thousand Oaks, CA: Sage.
  • Berger, A. A. (1993). Improving writing skills. Thousand Oaks, CA: Sage.
  • Silvia, P. J. (2007). How to write a lot. Washington, DC: American Psychological Association.
  • Zeiger, M. (1999). Essentials of writing biomedical research papers. New York: McGraw-Hill.

I will share William Safire’s 17 lighthearted looks at grammar and good usage another day.


Merry Christmas is the greeting for the upcoming holiday.  Hanukkah ended December 18 (I hope yours was very happy; mine was).  Solstice was last night (and the sun returned today, a feat in Oregon in winter, so Solstice was truly blessed).

Kwanzaa won’t happen until Dec 26–and the greeting there is Habari Gani (Swahili for “What’s the news?”).

Now, how do I get an evaluation topic from that opening…hmmm…perhaps a gift…yes…a gift.

The gift I give y’all is this:

Think about your blessings.

Think about the richness of your life.

Think about those for whom you care.

And remember…even those thoughts are evaluative because you know how blessed you are; because you know how rich (we are not talking money here…) your life is; because you have people in your life for whom you care AND who care for you.

The light returns regardless of the tradition you follow, and that, too, is evaluative, because you can ask yourself whether the light is enough; and if it isn’t, you CAN figure out how to solve that problem.

Next week, I’ll suggest some New Year’s resolutions; evaluative, of course, with no self-deception.  You CAN do evaluation!