Filed Under (program evaluation) by Molly on 20-03-2012

An important question that evaluators ask is, “What difference is this program making?”  Followed quickly by, “How do you know?”

Recently, I happened on a blog called {grow}, whose author, Mark Schaefer, had a post called, “Did this blog make a difference?”  Since this is a question I, as an evaluator, am always asking, I jumped on the page.  Mr. Schaefer is in marketing, and as a marketing expert he says the following: “You’re in marketing for one reason: Grow. Grow your company, reputation, customers, impact, profits. Grow yourself. This is a community that will help. It will stretch your mind, connect you to fascinating people, and provide some fun along the way.”  So I wondered how relevant this blog would be to me and to other evaluators, whether they blog or not.

Mr. Schaefer is taking stock of his blog–a good thing to do for a blog that has been running for a while.  He lists four innovations and asks the reader to “…be the judge if it made a difference in your life, your outlook, and your business.”  The four innovations are

  1. Paid contributing columnists.  He actually paid the folks who contributed to his blog; not something those of us in Extension can do.
  2. {growtoons}. Cartoons designed specifically for the blog that “…adds an element of fun and unique social media commentary.”  Hmmm…
  3. New perspectives. He showcased fresh deserving voices; some that he agreed with and some that he did not.  A possibility.
  4. Video. He did many video blogs and that gave him the opportunity to “…shine the light on some incredible people…”  He interviews folks and posts the short video.  Yet another possibility.

His approach seems really different from what I do.  Maybe it is the content; maybe it is the cohort; maybe it is something else.  Maybe there is something to be learned from what he does.  Maybe this blog is making a difference.  Only I don’t know.  So, I take a cue from Mr. Schaefer and ask you to judge if it has made a difference in what you do–then let me know.  I’ve embedded a link to a quick survey that will NOT link to you nor in any way identify you.  I will only be using the findings for program improvement.  Please let me know.  Click here to link to the survey.


Oh, and I won’t be posting next week–spring break and I’ll be gone.


Filed Under (Methodology) by Molly on 14-03-2012

A colleague asks, “What is the appropriate statistical analysis test when comparing the means of two groups?”


I’m assuming (yes, I know what assuming does) that parametric tests are appropriate for what the colleague is doing.  Parametric tests (e.g., t-test, ANOVA) are appropriate when the parameters of the population are known.  If that is the case (and non-parametric tests are not being considered), I need to clarify the assumptions underlying the use of parametric tests, which are more stringent than those of non-parametric tests.  Those assumptions are the following:

The sample is

  1. randomized (either by assignment or selection).
  2. drawn from a population which has specified parameters.
  3. normally distributed.
  4. demonstrating equality of variance across the groups being compared.
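
If you want to check the last two assumptions on your own data before reaching for a parametric test, the checks can be sketched in a few lines.  This is a minimal illustration, not part of the original post; it assumes SciPy is available, and the two samples below are made-up stand-ins for your own measurements.

```python
# Sketch: checking normality and equality of variance before choosing
# a parametric test.  The data here are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=10.0, scale=2.0, size=30)  # e.g., group 1 scores
b = rng.normal(loc=11.0, scale=2.0, size=30)  # e.g., group 2 scores

# Shapiro-Wilk tests the null hypothesis that a sample came from a
# normal distribution; a small p-value casts doubt on assumption 3.
_, p_norm_a = stats.shapiro(a)
_, p_norm_b = stats.shapiro(b)

# Levene's test checks equality of variances between the groups
# (assumption 4); a small p-value suggests unequal variances.
_, p_var = stats.levene(a, b)

print(f"normality p-values: {p_norm_a:.3f}, {p_norm_b:.3f}")
print(f"equal-variance p-value: {p_var:.3f}")
```

If either check fails badly, that is one signal to consider a non-parametric test instead.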

If those assumptions are met, the first part of the answer is, “It all depends.”  (I know you have heard that before today.)

I will ask the following questions:

  1. Do you know the parameters (measures of central tendency and variability) for the data?
  2. Are they dependent or independent samples?
  3. Are they intact populations?

Once I know the answers to these questions I can suggest a test.

My current favorite statistics book, Statistics for People Who (Think They) Hate Statistics, by Neil J. Salkind (4th ed.), has a flow chart that helps by asking whether you are looking at differences between the sample and the population, at relationships, or at differences between one or more groups.  The flow chart ends with the name of a statistical test.  The caveat is that you are working with a sample from a larger population that meets the above-stated assumptions.

How you answer the questions above also determines what test you can use.  If you do not know the parameters, you will NOT use a parametric test.  If you are using an intact population (and many Extension professionals use intact populations), you will NOT use inferential statistics, as you will not be inferring to anything bigger than what you have at hand.  If you have two groups and the groups are related (like a pre-post test or a post-pre test), you will use a parametric or non-parametric test for dependency.  If you have two groups and they are unrelated (like boys and girls), you will use a parametric or non-parametric test for independence.  If you have more than two groups, you will use yet another test.
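
As a concrete illustration (mine, not from the post), the related-versus-unrelated decision above maps directly onto test functions in SciPy.  The data below are invented: a pre/post pair measured on the same people, and two unrelated groups.

```python
# Sketch: parametric and non-parametric tests for related (dependent)
# and unrelated (independent) groups.  All data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(50, 10, size=25)
post = pre + rng.normal(3, 5, size=25)   # same people measured twice
boys = rng.normal(52, 10, size=20)
girls = rng.normal(55, 10, size=22)      # two unrelated groups

# Related groups (pre/post): parametric test for dependency ...
t_dep, p_dep = stats.ttest_rel(pre, post)
# ... or its non-parametric counterpart.
_, p_wilcoxon = stats.wilcoxon(pre, post)

# Unrelated groups (boys/girls): parametric test for independence ...
t_ind, p_ind = stats.ttest_ind(boys, girls)
# ... or its non-parametric counterpart.
_, p_mw = stats.mannwhitneyu(boys, girls)

# More than two groups would call for yet another test, e.g. one-way
# ANOVA (stats.f_oneway) or Kruskal-Wallis (stats.kruskal).
```

The choice between each parametric test and its non-parametric counterpart comes back to the assumptions listed earlier.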

Extension professionals are rigorous in their content material; they need to be just as rigorous in their analysis of the data collected from that content material.  Understanding what analyses to use when is a good skill to have.


A colleague asked an interesting question, one that I am often asked as an evaluation specialist: “Without a control group, is it possible to show that the intervention had anything to do with a skill increase?”  The answer to the question “Do I need a control group to do this evaluation?” is, “It all depends.”

It depends on what question you are asking.  Are you testing a hypothesis–a question posed in the null form of no difference?  Or answering an evaluative question–what difference was made?  The methodology you use depends on which question you are asking.  If you want to know how effective or efficient a program (aka intervention) is, you can determine that without a control group.  Campbell and Stanley, in their now well-read 1963 volume, Experimental and quasi-experimental designs for research, talk about quasi-experimental designs that do not use a control group.  Yes, there are threats to internal validity; yes, there are stronger designs; yes, the controls are not as rigorous as in a double-blind, cross-over design (considered the gold standard by some groups).  We are talking here about evaluation, people, NOT research.  We are not asking questions of efficacy (research); rather, we want to know what difference is being made; we want to know the answer to “so what?”  Remember, the root of evaluation is value, not cause.

This is certainly a quandary–how to determine cause for the desired outcome.  John Mayne has recognized this quandary and has approached the question of attributing the outcome to the intervention through his use of contribution analysis.  In community-based work, like what Extension does, attributing cause is difficult at best.  Why?  Because there are factors that Extension cannot control, and identifying a control group may not be ethical, appropriate, or feasible.  Use something else that is ethical, appropriate, and feasible (see Campbell and Stanley).

Using a logic model to guide your work helps to defend your premise of “If I have these resources, then I can do these activities with these participants; if I do these activities with these participants, then I expect (because the literature says so–the research has already been done) that the participants will learn these things, do these things, change these conditions.”  The likelihood of achieving world peace with your intervention is low at best; the likelihood of changing something (learning, practices, conditions) if you have a defensible model (road map) is high.  Does that mean your program caused that change?  Probably not.  Can you take credit for the change?  Most definitely.

Filed Under (program evaluation) by Molly on 02-03-2012

Last weekend, I was in Florida visiting my daughter at Eckerd College.  The College was offering an Environmental Film Festival, and I had the good fortune to see Green Fire, a film about Aldo Leopold and the land ethic.  I had seen it at OSU and was impressed because it was not all doom and gloom; rather, it celebrated Aldo Leopold as one of the three leading early conservationists (the other two being John Muir and Henry David Thoreau).  Dr. Curt Meine, who narrates the film and is a conservation biologist, was leading the discussion again; I had heard him at OSU.  Early at the showing, I was able to chat with him about the film and its effects.  I asked him how he knew he was being effective.  His response was to tell me about the new memberships in the Foundation, the number of showings, and the size of the audience seeing the film.  Appropriate responses to my question.  What I really wanted to know was how he knew he was making a difference.  That is a different question, one that talks about change.  Change is what programs like Green Fire are all about.  It is what Aldo Leopold was all about (read Sand County Almanac to understand Leopold’s position).


Change is what evaluation is all about.  But did I ask the right question?  How could I have phrased it differently to get at what change had occurred in the viewers of the film?  Did new memberships in the Foundation demonstrate change?  Knowing what question to ask is important for program planners as well as evaluators.  There are often multiple levels of questions that could be asked–individual, programmatic, organizational, regional, national, global.  Are they all equally important?  Do they provide a means for gathering pertinent data?  How are you going to use these data once you’ve gathered them?  How carefully do you think about the questions you ask when you craft your logic model?  When you draft a survey?  When you construct questions for focus groups?  Asking the right question will yield relevant answers.  It will show you what difference you’ve made in the lives of your target audience.


Oh, and if you haven’t seen the film, Green Fire, or read the book, Sand County Almanac–I highly recommend them.