“Resilience = Not having all of your eggs in one basket.

Abundance = having enough eggs.”

Borrowed from, and appearing in, Harold Jarche’s blog post, Models, flows, and exposure, posted April 28, 2012.

 

In January, John Hagel blogged in  Edge Perspectives:  “If we are not enhancing flow, we will be marginalized, both in our personal and professional life. If we want to remain successful and reap the enormous rewards that can be generated from flows, we must continually seek to refine the designs of the systems that we spend time in to ensure that they are ever more effective in sustaining and amplifying flows.”

That is a powerful message.  Just how do we keep from being marginalized, especially when there is a shifting paradigm?  How does that relate to evaluation?  What exactly do we need to do to keep evaluation skills from being lost in the shift and from being marginalized?  Good questions.

The priest at the church I attend is retiring after 30 years of service.  This is a significant and unprecedented change (at least in my tenure there).  Before he left for summer school in Minnesota, he gave the governing board a pep talk that has relevance to evaluation.  He posited that we should not focus on what we need, but rather focus on the strengths and assets we currently have and build on them.  No easy task, to be sure.  And not the usual approach for an interim.  The usual approach is to ask what we want and what we need for this interim.  See the shifting paradigm?  I hope so.

Needs assessment often takes the same approach–what do you want; what do you need.  (Notice the use of the word “you” in this sentence; more on that later in another post.)  A well-intentioned evaluator recognizes that something is missing or lacking and conducts a needs assessment documenting that need/lack/deficit.  What would happen, do you think, if the evaluator documented what assets existed and developed a program to build that capacity?  Youth leadership development has been building programs to build assets for many years (see citations below).  The approach taken by youth development professionals is that there are certain skills, or assets, which, if strengthened, build resilience.  By building resilience, needs are mitigated; problems are solved or avoided; goals are met.

So what would happen if, when conducting a “needs” assessment, an evaluator actually conducted an asset assessment and developed programs to benefit the community by building capacity which strengthened assets and built resiliency?  Have you ever tried that approach?

By focusing on strengths and assets instead of weaknesses and liabilities, programs could be built that would benefit more than a vocal minority.  The greater whole could benefit.  Wouldn’t that be novel?  Wouldn’t that be great!

Citations:

1.  Benson, P. L. (1997).  All Kids Are Our Kids.  San Francisco: Jossey-Bass Publishers.

2.  Silbereisen, R. K., & Lerner, R. M. (2007).  Approaches to Positive Youth Development.  Los Angeles: Sage Publications.

 

A colleague asks, “What is the appropriate statistical analysis test when comparing the means of two groups?”

 

I’m assuming (yes, I know what assuming does) that parametric tests are appropriate for what the colleague is doing.  Parametric tests (e.g., t-test, ANOVA) are appropriate when the parameters of the population are known.  If that is the case (and non-parametric tests are not being considered), I need to clarify the assumptions underlying the use of parametric tests, which are more stringent than those of nonparametric tests.  Those assumptions are the following (a quick way to check the last two is sketched after the list):

The sample is

  1. randomized (either by assignment or selection).
  2. drawn from a population which has specified parameters.
  3. normally distributed.
  4. demonstrating  equality of variance in each variable.
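The first two assumptions are matters of design; the last two can be checked against the data.  Here is a minimal sketch (my own, with made-up numbers, not the colleague’s data), assuming each group’s scores sit in a plain Python list and using scipy.stats:

```python
# Checking normality and equality of variance before reaching for a
# parametric test.  The scores below are hypothetical placeholders.
from scipy import stats

group_a = [12.1, 14.3, 13.8, 15.0, 12.9, 14.7]
group_b = [11.4, 13.2, 12.5, 13.9, 12.0, 13.6]

# Shapiro-Wilk: a small p-value suggests the sample departs from normality.
print("Normality (A):", stats.shapiro(group_a))
print("Normality (B):", stats.shapiro(group_b))

# Levene: a small p-value suggests the groups do not have equal variances.
print("Equal variance:", stats.levene(group_a, group_b))
```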

If those assumptions are met, the pat answer is, “It all depends.”  (I know you have heard that before today.)

I will ask the following questions:

  1. Do you know the parameters (measures of central tendency and variability) for the data?
  2. Are they dependent or independent samples?
  3. Are they intact populations?

Once I know the answers to these questions I can suggest a test.

My current favorite statistics book, Statistics for People Who (Think They) Hate Statistics, by Neil J. Salkind (4th ed.), has a flow chart that helps by asking whether you are looking at differences between the sample and the population, or at relationships or differences between one or more groups.  The flow chart ends with the name of a statistical test.  The caveat is that you are working with a sample from a larger population that meets the above-stated assumptions.

What test you can use depends on how you answer the questions above.  If you do not know the parameters, you will NOT use a parametric test.  If you are using an intact population (and many Extension professionals use intact populations), you will NOT use inferential statistics, as you will not be inferring to anything bigger than what you have at hand.  If you have two groups and the groups are related (like a pre-post test or a post-pre test), you will use a parametric or non-parametric test for dependency.  If you have two groups and they are unrelated (like boys and girls), you will use a parametric or non-parametric test for independence.  If you have more than two groups, you will use yet a different test.
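To make that concrete, here is a minimal sketch (my own illustration, with hypothetical numbers) of how those answers map onto tests in scipy.stats; the assumptions_met flag stands in for the checks sketched earlier.

```python
from scipy import stats

# Hypothetical scores, purely for illustration.
pre  = [10, 12, 11, 14, 13, 12]   # pre-test scores
post = [13, 14, 13, 16, 15, 14]   # post-test scores, same people (dependent groups)
boys, girls = [15, 17, 16, 18, 14], [14, 16, 15, 17, 13]  # unrelated (independent) groups

assumptions_met = True  # set False if the normality/equal-variance checks fail

if assumptions_met:
    print("Paired t-test (dependent):", stats.ttest_rel(pre, post))
    print("Independent t-test:", stats.ttest_ind(boys, girls))
else:
    print("Wilcoxon signed-rank (dependent):", stats.wilcoxon(pre, post))
    print("Mann-Whitney U (independent):", stats.mannwhitneyu(boys, girls))

# More than two groups calls for a different test again,
# e.g., one-way ANOVA (stats.f_oneway) or Kruskal-Wallis (stats.kruskal).
```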

Extension professionals are rigorous in their content material; they need to be just as rigorous in their analysis of the data collected from the content material.  Understanding what analyses to use when is a good skill to have.

 

 

 

A colleague asked an interesting question, one that I am often asked as an evaluation specialist:  “without a control group is it possible to show that the intervention had anything to do with a skill increase?”  The answer to the question “Do I need a control group to do this evaluation?” is, “It all depends.”

It depends on what question you are asking.  Are you testing a hypothesis–a question posed in a null form of no difference?  Or answering an evaluative question–what difference was made?  The methodology you use depends on what question you are asking.  If you want to know how effective or efficient a program (aka intervention) is, you can determine that without a control group.  Campbell and Stanley, in their now well-read 1963 volume, Experimental and Quasi-Experimental Designs for Research, talk about quasi-experimental designs that do not use a control group.  Yes, there are threats to internal validity; yes, there are stronger designs; yes, the controls are not as rigorous as in a double-blind, cross-over design (considered the gold standard by some groups).  We are talking here about evaluation, people, NOT research.  We are not asking questions of efficacy (research); rather we want to know what difference is being made; we want to know the answer to “so what.”  Remember, the root of evaluation is value, not cause.

This is certainly a quandary–how to determine cause for the desired outcome.  John Mayne has recognized this quandary and has approached the question of attributing the outcome to the intervention in his use of contribution analysis.  In community-based work, like what Extension does, attributing cause is difficult at best.  Why–because there are factors which Extension cannot control and identifying a control group may not be ethical, appropriate, or feasible.  Use something else that is ethical, appropriate, and feasible (see Campbell and Stanley).

Using a logic model to guide your work helps to defend your premise of “If I have these resources, then I can do these activities with these participants; if I do these activities with these participants, then I expect (because the literature says so–the research has already been done) that the participants will learn these things; do these things; change these conditions.”  The likelihood of achieving world peace with your intervention is low at best; the likelihood of changing something (learning, practices, conditions)  if you have a defensible model (road map) is high.  Does that mean your program caused that change–probably not.  Can you take credit for the change; most definitely.

I regularly follow Harold Jarche’s blog.

Much of what he writes would not fall under the general topic of evaluation.  Yet his blog for February 18 does.  This blog is titled Why is learning and the sharing of information so important?

I see that as intimately related to evaluation, especially given Michael Quinn Patton’s focus on use.  The way I see it, something can’t be used effectively unless one learns about it.  Oh, I know you can use just about anything for anything–and I am reminded of the anecdote that when you have a hammer, everything looks like a nail, even if it isn’t.

That is not the kind of use I’m talking about.

I’m talking about rational, logical, systematic use based on thoughtful inquiry, critical thinking, and problem solving.  I’m talking about making a difference because you have learned something new and valuable (remember the root of evaluation?).  In his blog, Jarche cites the Governor General of Canada, David Johnston, and Johnston’s article recently published in the Globe and Mail, a Toronto newspaper.  What Johnston says makes sense.  Evaluators in this context are diplomats, making learning accessible and sharing knowledge.

Sharing knowledge is what statistics is all about.  If you think the field of statistics is boring, I urge you to check out the video called The Joy of Stats presented by Swedish scholar Hans Rosling.  I think you will have a whole new appreciation of statistics and the knowledge that can be conveyed.  If you find Hans Rosling compelling (or even if you don’t), I urge you to check out his TED Talks presentation.  It is an eye-opener.

I think he makes a compelling argument about learning and sharing information.  About making a difference.  That is what evaluation is all about.

 

 

I have a quandary.  Perhaps you have a solution.

I am the evaluator on a program where the funding agency wants clear, measurable, and specific outcomes.  (OK, you say) The funding agency program people were asked to answer the question, “What do you expect to happen as a result of the program?”

These folks responded with a programmatic equivalent of “world peace.”  I virtually rolled my eyes.  IMHO there was no way that this program would end in world peace.  Not even no hunger (a necessary precursor to world peace).  After I suggested that perhaps that goal was unattainable given the resources and activities intended, they came out of the fantasy world in which they were living and said, realistically, “We don’t know, exactly.”  I probed further.  The sites (several) were all different; the implementation processes (also several) were all different; the resources were all different (depending on site); and the list goes on.  Oh, and the program was to be rolled out soon in another site without an evaluation of the previous sites.  BUT THEY WANTED CLEAR, MEASURABLE, AND SPECIFIC OUTCOMES.

What would you do in this situation?  (I know what I proposed–there was lukewarm response.  I have an idea what would work–although the approach was not mainstream evaluation and these were mainstream folks.)  So I turn to you, Readers.  What would you do?  Let me know.  PLEASE.

 

Oh, and Happy Groundhog Day.  I understand there will be six more weeks of winter (there was serious frost this morning in Corvallis, OR).

 

 

 

Recently, I’ve been dealing with several different logic models which all use the box format.  You know, the one that Ellen Taylor-Powell advocated in her UWEX tutorial.  We are all familiar with this approach.  And we all know that this approach helps conceptualize a program, identify program theory, and identify possible outcomes (maybe even world peace).  Yet, there is much more that can be done with logic models than what is in the tutorial.  The tutorial starts us off with this diagram.

Inputs are what is invested; outputs are what is done; and outcomes are what results/happens.  And we assume (you KNOW what assumptions do, right?) that all the inputs lead to all the outputs, which lead to all the outcomes, because that is what the arrows show.  NOT.  One of the best approaches to logic modeling that I’ve seen and learned in the last few years is to make the inputs specific to the outputs and the outputs specific to the outcomes.  It IS possible that volunteers are NOT the input you need to have the outcome you desire (change in social conditions); or they may be.  OR volunteers will lead to an entirely different outcome–for example, only a change in knowledge, not in condition.  Connecting the resources specifically helps clarify for program people what is expected, what will be done, and with what resources.

Connecting those points with individual arrows and feedback loops (if appropriate) makes sense.
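To illustrate (a hypothetical example of my own, not from the tutorial), a logic model written as explicit, specific connections rather than three undifferentiated boxes might look like this:

```python
# A hypothetical logic model written as explicit connections.
# Program names and outcomes are made up for illustration only.
logic_model = {
    # each input -> the specific outputs it actually supports
    "inputs_to_outputs": {
        "volunteers":  ["after-school workshops"],
        "grant funds": ["after-school workshops", "parent newsletters"],
    },
    # each output -> the specific outcomes it is expected to produce
    "outputs_to_outcomes": {
        "after-school workshops": ["youth gain food-safety knowledge"],
        "parent newsletters":     ["parents change food-handling practices"],
    },
}

# Walking the explicit arrows makes gaps visible: an input that feeds no
# output, or an output that leads to no outcome, is a box that was filled
# in but never actually connected.
for output, outcomes in logic_model["outputs_to_outcomes"].items():
    print(output, "->", ", ".join(outcomes))
```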

Jonny Morell suggests that these relationships may be 1:1, 1:many, many:1, or many:many, and/or be classified by precedence (which he describes as A before B, A & B simultaneously, and agnostic with respect to procedure).  If these relationships exist, and I believe they do, then just filling boxes isn’t a good idea.  (If you want to check out his PowerPoint presentation at the AEA site, you will have to join AEA, because the presentation is in the non-public eLibrary available only to members.)  However, I was able to copy and include the slide to which I refer (with permission).



As you can see, it all depends.  Depends on the resources, the planned outputs, the desired outcomes.  Relationships are key.

And you thought logic models were simple.

 

I am reading the book, Eaarth, by Bill McKibben (a NY Times review is here).  He writes about making a difference in the world on which we live.  He provides numerous  examples that have all happened in the 21st century, none of them positive or encouraging. He makes the point that the place in which we live today is not, and never will be again, like the place in which we lived when most of us were born.  He talks about not saving the Earth for our grandchildren but rather how our parents needed to have done things to save the earth for them–that it is too late for the grandchildren.  Although this book is very discouraging, it got me thinking.

 

Isn’t making a difference what we as Extension professionals strive to do?

Don’t we, like McKibben, need criteria to determine what difference can/could/would be made and what it would look like?

And if we have those criteria well established, won’t we be able to make a difference, hopefully positive (think hand washing here)?  And like this graphic, won’t that difference be worth the effort we have put into the attempt?  Especially if we thoughtfully plan how to determine what that difference is?

 

We might not be able to recover (according to McKibben, we won’t) the Earth the way it was when most of us were born; I think we can still make a difference–a positive difference–in the lives of the people with whom we work.  That is an evaluative opportunity.

 

 

A colleague asks for advice on handling evaluation stories, so that they don’t get brushed aside as mere anecdotes.  She goes on to say of the AEA365 blog she read, “I read the steps to take (hot tips), but don’t know enough about evaluation, perhaps, to understand how to apply them.”  Her question raises an interesting topic.  Much of what Extension does can be captured in stories (i.e., qualitative data) rather than in numbers (i.e., quantitative data).  Dick Krueger, former Professor and Evaluation Leader (read: specialist) at the University of Minnesota, has done a lot of work in the area of using stories as evaluation.  Today’s post summarizes his work.

 

At the outset, Dick asks the following question:  What is the value of stories?  He provides these three answers:

  1. Stories make information easier to remember.
  2. Stories make information more believable.
  3. Stories can tap into emotions.

There are all types of stories.  The type we are interested in for evaluation purposes are organizational stories.  Organizational stories can do the following things for an organization:

  1. Depict culture
  2. Promote core values
  3. Transmit and reinforce the culture
  4. Provide instruction to employees
  5. Motivate, inspire, and encourage

He suggests six common types of organizational stories:

  1. Hero stories  (someone in the organization who has done something beyond the normal range of achievement)
  2. Success stories (highlight organizational successes)
  3. Lessons learned stories (what major mistakes and triumphs teach the organization)
  4. “How it works around here” stories (highlight core organizational values reflected in actual practice)
  5. “Sacred bundle” stories (a collection of stories that together depict the culture of an organization; core philosophies)
  6. Training and orientation stories (assist new employees in understanding how the organization works)

To use stories as evaluation, the evaluator needs to consider how stories might be used; that is, do they depict how people experience the program?  Do they help us understand program outcomes?  Do they give insights into program processes?

You (as evaluator) need to think about how the story fits into the evaluation design (think logic model; program planning).  Ask yourself these questions:  Should you use stories alone?  Should you use stories that lead into other forms of inquiry?  Should you use stories that augment/illustrate results from other forms of inquiry?

You need to establish criteria for stories.  Rigor can be applied to story even though the data are narrative.  Criteria include the following:   Is the story authentic–is it truthful?  Is the story verifiable–is there a trail of evidence back to the source of the story?  Is there a need to consider confidentiality?  What was the original intent–purpose behind the original telling?  And finally, what does the story represent–other people or locations?

You will need a plan for capturing the stories.  Ask yourself these questions:  Do you need help capturing the stories?  What strategy will you use for collecting the stories?  How will you ensure documentation and record keeping?  (Sequence the questions; write them down; note the type–set-up, conversational, etc.)  You will also need a plan for analyzing and reporting the stories, as you, the evaluator, are responsible for finding meaning.

 

A colleague asked me yesterday about authenticating anecdotes–you know, those wonderful stories you gather about how what you’ve done has made a difference in someone’s life?

 

I volunteer service to a non-profit board (two, actually) and the board members are always telling stories about how “X has happened” and how “Y was wonderful,” yet my evaluator self says, “How do you know?”  This becomes a concern for organizations which do not have evaluation as part of their mission statement.  Even though many boards hold the Executive Director accountable, few make evaluation explicit.

Dick Krueger, who has written about focus groups, also writes about and studies the use of stories in evaluation, and much of what I will share with y’all today is from his work.

First, what is a story?  Creswell (2007, 2nd ed.) defines story as “…aspects that surface during an interview in which the participant describes a situation, usually with a beginning, a middle, and an end, so that the researcher can capture a complete idea and integrate it, intact, into the qualitative narrative.”  Krueger elaborates on that definition by saying that a story “…deals with an experience of an event, program, etc. that has a point or a purpose.”  Story differs from case study in that a case study is a story that tries to understand a system, not an individual event or experience; a story deals with an experience that has a point.  Stories provide examples of core philosophies, of significant events.

There are several purposes for stories that can be considered evaluative.  These include depicting the culture, promoting core values, transmitting and reinforcing current culture, providing instruction (another way to transmit culture), and motivating, inspiring, and/or encouraging (people).  Stories can be of the following types:  hero stories, success stories, lesson-learned stories, core value  stories, cultural stories, and teaching stories.

So why tell a story?  Stories make information easier to remember, more believable, and tap into emotion.  For stories to be credible (provide authentication), an evaluator needs to establish criteria for stories.  Krueger suggests five different criteria:

  • Authentic–is it truthful?  Is there truth in the story?  (Remember “truth” depends on how you look at something.)
  • Verifiable–is there a trail of evidence back to the source?  Can you find this story again?
  • Confidential–is there a need to keep the story confidential?
  • Original intent–what is the basis for the story?  What motivated telling the story? and
  • Representation–what does the story represent?  other people?  other locations?  other programs?

Once you have established criteria for the stories collected, there will need to be some way to capture stories.  So develop a plan.  Stories need to be willingly shared, not coerced; documented and recorded; and collected in a positive situation.  Collecting stories is an example where the protections for  humans in research must be considered.  Are the stories collected confidentially?  Does telling the stories result in little or no risk?  Are stories told voluntarily?
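As a minimal sketch of that documentation and record keeping (the field names are my own hypothetical choices, not Krueger’s), each collected story could be logged against the criteria above so it can be verified later rather than dismissed as a mere anecdote:

```python
from dataclasses import dataclass

@dataclass
class StoryRecord:
    """One collected story, documented against the criteria above."""
    teller: str                  # name, or a code if confidentiality is needed
    date_collected: str
    setting: str                 # where and how the story was shared
    original_intent: str         # what motivated the telling
    evidence_trail: str          # verifiable: a trail back to the source
    represents: str              # what the story represents (people, places, programs)
    confidential: bool = False
    told_voluntarily: bool = False
    text: str = ""

story = StoryRecord(
    teller="Participant 07",
    date_collected="2012-02-10",
    setting="end-of-program focus group",
    original_intent="asked what difference the program made",
    evidence_trail="focus group recording FG-03, minute 12",
    represents="other parents at the same site",
    told_voluntarily=True,
    text="Before the workshops I never checked the fridge temperature...",
)
print(story)
```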

Once the stories have been collected, analyzing and reporting those stories is the final step.  Without this, all the previous work  was for naught.  This final step authenticates the story.  Creswell provides easily accessible guidance for analysis.

One of the opportunities I have as a faculty member at OSU is to mentor students.  I get to do this in a variety of ways–sit on committees, provide independent studies, review preliminary proposals, listen…I find it very exciting to see the change and growth in students’ thinking and insights when I work with them.  I get some of my best ideas from them.  Like today’s post…

I just reviewed several chapters of student dissertation proposals.  These students had put a lot of thought and passion into their research questions.  To them, the inquiry was important; it could be the impetus to change.  Yet, the quality of the writing often detracted from the quality of the question; the importance of the inquiry; the opportunity to make a difference.

How does this relate to evaluation?  For evaluations to make a difference, the findings must be used.  This does not mean merely writing the report and giving it to the funder, the principal investigator, the program leader, or other stakeholders.  Too many reports have gathered dust on someone’s shelf because they were not used.  In order to be used, a report must be written so that it can be understood.  The report needs to be written to a naive audience, as though the reader knows nothing about the topic.

When I taught technical writing, I used the mnemonic of the 5Cs.  My experience is that if these concepts (all starting with the letter C) were employed, the report/paper/manuscript would be understood by any reader.

The report needs to be written:

  • Clearly
  • Coherently
  • Concisely
  • Correctly
  • Consistently

Clearly means not using jargon; using simple words; explaining technical words.

Coherently means having the sections of the report hang together; not having any (what I call) quantum leaps.

Concisely means using few words; avoiding long, meandering paragraphs; avoiding the overuse of prepositions (among other things).

Correctly means making sure that grammar and syntax are correct; checking subject/verb agreement; remembering that the word “data” is a plural word and takes a plural verb and plural articles.

Consistently means using the same word to describe the parts of your research; participants are participants all through the report, not subjects on page 5, respondents on page 11, and students on page 22.

This little mnemonic has helped many students write better papers; I know it can help many evaluators write better reports.

This is no easy task.  Writing is hard work; using the 5Cs makes it easier.