Filed Under (Methodology, program evaluation) by Molly on 23-01-2012

Recently, I’ve been dealing with several different logic models which all use the box format: you know, the one that Ellen Taylor-Powell advocated in her UWEX tutorial.  We are all familiar with this approach, and we all know that it helps us conceptualize a program, identify program theory, and identify possible outcomes (maybe even world peace).  Yet there is much more that can be done with logic models that isn’t in the tutorial.  The tutorial starts us off with this diagram.

Inputs are what is invested; outputs are what is done; and outcomes are what results/happens.  And we assume (you KNOW what assumptions do, right?) that all the inputs lead to all outputs, which lead to all outcomes, because that is what the arrows show.  NOT.  One of the best approaches to logic modeling that I’ve seen and learned in the last few years is to make the inputs specific to the outputs and the outputs specific to the outcomes.  It IS possible that volunteers are NOT the input you need to have the outcome you desire (change in social conditions); or they may be. OR volunteers will lead to an entirely different outcome–for example, only change in knowledge, not condition. Connecting the resources specifically helps clarify for program people what is expected, given what will be done and with what resources.

Connecting those points with individual arrows and feedback loops (if appropriate) makes sense.

Jonny Morell suggests that these relationships may be 1:1, 1:many, many:1, many:many, and/or be classified by precedence (which he describes as A before B, A & B simultaneously, and agnostic with respect to procedure).  If these relationships exist, and I believe they do, then just filling boxes isn’t a good idea.  (If you want to check out his PowerPoint presentation at the AEA site, you will have to join AEA, because this presentation is in the non-public eLibrary, available only to members.  However, I was able to copy and include the slide to which I refer, with permission.)



As you can see, it all depends.  Depends on the resources, the planned outputs, the desired outcomes.  Relationships are key.
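One way to make those specific connections concrete is to record each arrow explicitly rather than assuming every box feeds every other box. Here is a minimal sketch in Python (the program elements are invented for illustration, and this is my own representation, not Morell’s notation): each input links only to the outputs it actually supports, and each output only to the outcomes it can plausibly produce, so relationships can be 1:1, 1:many, or many:many as needed.

```python
# A hypothetical logic model stored as a directed graph of explicit links.
# Elements (volunteers, workshops, etc.) are made up for illustration.
links = {
    # inputs -> outputs (one input can feed many outputs, and vice versa)
    "volunteers": ["workshops"],
    "grant funding": ["workshops", "newsletter"],
    # outputs -> outcomes (a feedback loop would just be another entry)
    "workshops": ["change in knowledge"],
    "newsletter": ["change in knowledge"],
    "change in knowledge": ["change in condition"],
}

def downstream(node, graph):
    """Everything a given resource or activity can eventually lead to."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Volunteers lead to workshops, hence (in this sketch) only to a change in
# knowledge and then condition -- not to every outcome in the model.
print(sorted(downstream("volunteers", links)))
```

Tracing the arrows this way makes the “it depends” visible: ask what a given input actually reaches, and you may find it never touches the outcome you care about.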

And you thought logic models were simple.

 

Filed Under (program evaluation) by Molly on 09-01-2012

For my new year’s post, I mentioned that AEA is running a series of blog posts in aea365 written by evaluators who blog.  Susan Kistler has compiled a schedule of who will be blogging in aea365 and when.  This link will take you to the full series and will be updated as new posts come online:

http://aea365.org/blog/?s=bloggers+series&submit=Go.  The result of Susan’s request is that evaluators who blog will post to aea365 one week a month, starting the last week in December.  January posts will run January 22-27; February posts will run February 12-17; March posts, the 18th-23rd; April posts, the 22nd-25th.

I’ve mentioned aea365 before.  I’ll mention it again.  You can subscribe either by email or RSS feed.  The blogs are archived.  They are not specific to any aspect of evaluation.  Sometimes they are interesting and helpful; sometimes not.  The variety is rich; the effort tremendous; and the resources useful.  Check it out.

Filed Under (program evaluation) by Molly on 09-01-2012

A colleague made a point last week that I want to bring to your attention.  The comment made it clear that when planning a program, it is important to think about how to determine what difference the program is making at the beginning of the program, not at the end.

Over the last two years, I’ve alluded to the fact that retrofitting evaluation, while possible, is not ideal.  Granted, sometimes programs are already in place and it is important to report the difference the program made, so evaluation needs to be retrofitted.  Sometimes programs have been in place a long time and need to show long term outcomes (even if they are called impacts).  In cases like that, yes, evaluation needs to be retrofitted.  What this colleague was talking about was a NEW program; one that has never been presented before.

There are lots of ways to get the answer to the question, “What difference is this program making?”  We are not going to talk about methods today, though.  We are going to talk about programs and how programs relate to evaluation.

When I start to talk about evaluation with a faculty member, I ask what they expect to happen.  If they understand the program theory, they can describe what outcome is expected.  This is when I pull out the model below.

This model shows the logical linkage between what is expected (outcomes) and what was done, to whom (outputs), with what resources (inputs), if you follow the arrow right to left.  If, however, you follow the arrow left to right, you see what resources you need to conduct what activities, with whom, to expect what outcomes.  Each box (inputs, outputs, outcomes) has an evaluative activity that accompanies it.

In the situation, a needs assessment is the evaluative activity: you are evaluating how to determine what needs to be changed between what is and what should be.  In the resources, you can do a variety of activities; specifically, you can determine whether you had enough.  You can also do a cost analysis (there are several kinds), or a process evaluation.  In outputs, you can determine whether you did what you said you would do, in the time you said you would do it, and with the target audience.  I have always called this a progress evaluation.  In outcomes, you actually determine what difference the program made in the lives of the target audience–for teaching purposes, I have called this a product evaluation.  Here, you want to know whether what they know is different, whether what they do is different, and whether the conditions in which they work, live, and play are different.  You do that by thinking first about what the program will do.
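The pairing described above can be laid out as a simple lookup table.  The terms are the ones used in this post; the structure itself is just my own sketch of how each part of the model gets its own evaluative activity:

```python
# Each stage of the logic model paired with its evaluative activity,
# using the labels from the post (progress/product are the teaching names).
evaluative_activity = {
    "situation": "needs assessment (what is vs. what should be)",
    "inputs": "resource review, cost analysis, process evaluation",
    "outputs": "progress evaluation (did you do what you said, "
               "on time, with the target audience?)",
    "outcomes": "product evaluation (changes in knowledge, "
                "behavior, and conditions)",
}

for stage, activity in evaluative_activity.items():
    print(f"{stage}: {activity}")
```

The point of writing it down this way is that no stage is left without an evaluative activity, which is exactly why evaluation belongs in the planning, not the retrofit.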

 

Now this is all very well and good–if you have some idea about what the specific and  measurable outcomes are.  Sometimes you won’t know this because the program has never been done before in quite the way you are doing it OR because the program is developing as you provide it.  (I’m sure there is a third reason–there always is–only I can’t think of one as I type.)

This is why planning evaluation when you are planning the program is important.