A colleague asked me yesterday about authenticating anecdotes – you know, those wonderful stories you gather about how what you’ve done has made a difference in someone’s life.

I volunteer on a non-profit board (two, actually), and the board members are always telling stories about how “X has happened” and how “Y was wonderful.”  Yet my evaluator self asks, “How do you know?”  This becomes a concern for organizations that do not have evaluation as part of their mission statement.  Even though many boards hold the Executive Director accountable, few make evaluation explicit.

Dick Krueger, who has written extensively about focus groups, also studies and writes about the use of stories in evaluation; much of what I will share with y’all today is from his work.

First, what is a story?  Creswell (2007, 2nd ed.) defines story as “…aspects that surface during an interview in which the participant describes a situation, usually with a beginning, a middle, and an end, so that the researcher can capture a complete idea and integrate it, intact, into the qualitative narrative.”  Krueger elaborates on that definition, saying that a story “…deals with an experience of an event, program, etc. that has a point or a purpose.”  Story differs from case study: a case study tries to understand a system, while a story deals with an individual event or experience that has a point.  Stories provide examples of core philosophies and of significant events.

There are several purposes for stories that can be considered evaluative.  These include depicting the culture, promoting core values, transmitting and reinforcing the current culture, providing instruction (another way to transmit culture), and motivating, inspiring, or encouraging people.  Stories can be of the following types:  hero stories, success stories, lesson-learned stories, core-value stories, cultural stories, and teaching stories.

So why tell a story?  Stories make information easier to remember, make it more believable, and tap into emotion.  For stories to be credible (that is, to provide authentication), an evaluator needs to establish criteria for them.  Krueger suggests five:

  • Authentic – is it truthful?  Is there truth in the story?  (Remember, “truth” depends on how you look at something.)
  • Verifiable – is there a trail of evidence back to the source?  Can you find this story again?
  • Confidential – is there a need to keep the story confidential?
  • Original intent – what is the basis for the story?  What motivated telling it?
  • Representation – what does the story represent?  Other people?  Other locations?  Other programs?

Once you have established criteria for the stories collected, you will need some way to capture them.  So develop a plan.  Stories need to be willingly shared, not coerced; documented and recorded; and collected in a positive situation.  Collecting stories is an example where the protections for humans in research must be considered.  Are the stories collected confidentially?  Does telling the stories result in little or no risk?  Are stories told voluntarily?
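
To make such a plan concrete, here is a minimal sketch of how the documentation might look – purely illustrative Python, assuming nothing beyond the criteria above.  Krueger offers no code; every name here is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class StoryRecord:
    """One collected story, plus the documentation needed to authenticate it.

    All field names are hypothetical; they simply mirror Krueger's five
    criteria and the human-protections questions above.
    """
    summary: str                  # the story itself, as told
    source: str                   # who told it (a coded ID if confidential)
    collected_on: date            # when it was captured
    authentic: bool = False       # is there truth in the story?
    verifiable: bool = False      # is there a trail of evidence to the source?
    confidential: bool = False    # must the teller's identity be protected?
    original_intent: str = ""     # what motivated telling the story?
    represents: list = field(default_factory=list)  # other people/places/programs
    shared_voluntarily: bool = False  # willingly shared, not coerced
    low_risk: bool = False            # telling it poses little or no risk

def authentication_gaps(story: StoryRecord) -> list:
    """Return the criteria this story does not yet satisfy."""
    gaps = []
    if not story.authentic:
        gaps.append("authenticity not established")
    if not story.verifiable:
        gaps.append("no trail of evidence back to the source")
    if not story.original_intent:
        gaps.append("original intent undocumented")
    if not story.shared_voluntarily:
        gaps.append("voluntary sharing not confirmed")
    if not story.low_risk:
        gaps.append("risk to the teller not assessed")
    return gaps
```

A story with remaining gaps goes back for more documentation before it appears in any report; the point is simply that each criterion becomes an explicit, checkable item rather than a feeling.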

Once the stories have been collected, analyzing and reporting them is the final step.  Without this, all the previous work was for naught.  This final step authenticates the story.  Creswell provides easily accessible guidance for analysis.

My oldest daughter graduated from high school Monday.  Now she is facing the reality of life after high school – the emotional letdown, the lack of structure, the loss of focus.  I remember what it was like to commence…another word for beginning.  I think I was depressed for days.  The question becomes evaluative when one thinks of planning, which is what she has to do now.  In planning, she needs to think:  What excites me?  What are my passions?  How will I accomplish the what?  How will I connect again to the what?  How will I know I’m successful?

Ellen Taylor-Powell, former Distinguished Evaluation Specialist at the University of Wisconsin Extension, discusses planning in a publication on the UWEX professional development website.  (There are many other useful publications on this site…I urge you to check them out.)  This publication has four sections:  focusing the evaluation, collecting the information, using the information, and managing the evaluation.  I want to talk more about focusing the evaluation – because that is key when beginning, whether it is the next step in your life, the next program you want to implement, or the next report you want to write.

This section of the publication asks you to identify what you are going to evaluate, the purpose of the evaluation, who will use the evaluation and how, what questions you want to answer, and what information you need to answer those questions; to develop a timeline; and, finally, to identify what resources you will need.  I see this as puzzle assembly – one where you do not necessarily have a picture to guide you.  Not unlike a newly commenced graduate finding a focus, you won’t know what the picture is, or where you are going, until you focus and develop a plan.  For me, that means putting the puzzle together.  It means finding the what and the so what.  It is always the first place to commence.
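
If it helps to see those pieces in one place, here is the same checklist as a small, purely illustrative structure.  The questions come from the publication’s first section; the Python names are my own invention, not Taylor-Powell’s.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationFocus:
    """The 'focusing the evaluation' pieces gathered in one place.

    Hypothetical structure: each field answers one of the questions
    the publication's first section asks you to work through.
    """
    what_to_evaluate: str = ""
    purpose: str = ""
    users_and_uses: dict = field(default_factory=dict)    # who will use it -> how
    questions: list = field(default_factory=list)         # what you want to answer
    information_needed: list = field(default_factory=list)  # what answers them
    timeline: str = ""
    resources: list = field(default_factory=list)

    def missing_pieces(self) -> list:
        """Name the puzzle pieces still blank."""
        return [name for name, value in vars(self).items() if not value]
```

Until missing_pieces() comes back empty, the puzzle has holes – which is exactly the point of focusing before you collect a single piece of data.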

One of the opportunities I have as a faculty member at OSU is to mentor students.  I get to do this in a variety of ways – sit on committees, provide independent studies, review preliminary proposals, listen…I find it very exciting to see the change and growth in students’ thinking and insights when I work with them.  I get some of my best ideas from them.  Like today’s post…

I just reviewed several chapters of student dissertation proposals.  These students had put a lot of thought and passion into their research questions.  To them, the inquiry was important; it could be the impetus for change.  Yet the quality of the writing often detracted from the quality of the question, the importance of the inquiry, the opportunity to make a difference.

How does this relate to evaluation?  For evaluations to make a difference, the findings must be used.  This means more than writing the report and handing it to the funder, the principal investigator, the program leader, or other stakeholders.  Too many reports have gathered dust on someone’s shelf because they were never used.  To be used, a report must be written so that it can be understood.  The report needs to be written for a naive audience, as though the reader knows nothing about the topic.

When I taught technical writing, I used the mnemonic of the 5Cs.  My experience is that if these concepts (all starting with the letter C) were employed, the report/paper/manuscript could be understood by any reader.

The report needs to be written:

  • Clearly
  • Coherently
  • Concisely
  • Correctly
  • Consistently

Clearly means not using jargon; using simple words; explaining technical words.

Coherently means having the sections of the report hang together; not having any (what I call) quantum leaps.

Concisely means using few words; avoiding long, meandering paragraphs; avoiding the overuse of prepositions (among other things).

Correctly means making sure that grammar and syntax are correct; checking subject/verb agreement; remembering that the word “data” is plural and takes a plural verb and plural articles.

Consistently means using the same word to describe the parts of your research; participants are participants all through the report, not subjects on page 5, respondents on page 11, and students on page 22.

This little mnemonic has helped many students write better papers; I know it can help many evaluators write better reports.

This is no easy task.  Writing is hard work; using the 5Cs makes it easier.