Filed Under (program evaluation, program planning) by Molly on 20-10-2016

Evaluation is political. I am reminded of that fact when I least expect it.

In yesterday’s AEA 365 post, I was reminded that social justice and political activity may be (probably are) linked, sharing many common traits.

In that post the author lists some of the principles she used recently:

  1. Evaluation is a political activity.
  2. Knowledge is culturally, socially, and temporally contingent.
  3. Knowledge should be a resource of and for the people who create, hold, and share it.
  4. There are multiple ways of knowing (and some ways are privileged over others).

Evaluation is a trans-discipline, drawing from many other ways of thinking. We know that politics (or anything political) is socially constructed. We know that ‘doing to’ is inadequate because ‘doing with’ and ‘doing as’ are ways of sharing knowledge. (I would strive for ‘doing as’.) We also know that there are multiple ways of knowing.

(See Belenky, Clinchy, Goldberger, and Tarule, Basic Books, 1986 as one.)

OR

(See Gilligan, Harvard University Press, 1982, among others.)

How do evaluation, social justice, and politics relate?

What if you do not bring representation of the participant groups to the table?

What if they are not asked to be at the table, or asked for their opinion?

What if you do not ask the questions that need to be asked of that group?

To whom ARE your questions being addressed?

Is that equitable?

Being equitable is one aspect of social justice. There are others.

Evaluation needs to be equitable.

I will be in Atlanta next week at the American Evaluation Association conference.

Maybe I’ll see you there!

my two cents.

molly.
Sheila Robinson has an interesting post which she titled “Outputs are for programs. Outcomes are for people.”  Sounds like a logic model to me.

Evaluating something (a strategic plan, an administrative model, a range management program) can be problematic, especially if all you do is count. So: “Do you want to count?” OR “Do you want to determine what difference you made?” I think it all relates to outputs and outcomes.

Logic model

The model below explains the difference between outputs and outcomes.

(I tried to find a link on the University of Wisconsin website and UNFORTUNATELY it is no longer there…go figure. Thanks to Sheila, I found this link, which talks about outputs and outcomes.) I think this model makes clear the difference between Outputs (activities and participation) and Outcomes-Impact (learning, behavior, and conditions).
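The Outputs-versus-Outcomes-Impact distinction in that model can be sketched as a small lookup structure. This is my own minimal illustration, not part of the UWEX materials, and the example measures are hypothetical:

```python
# My own sketch of the logic-model categories discussed above:
# outputs cover activities and participation; outcomes-impact covers
# learning, behavior, and conditions. The measures are made-up examples.
LOGIC_MODEL = {
    "outputs": {
        "activities": ["workshops delivered", "newsletters sent"],
        "participation": ["number of attendees"],
    },
    "outcomes": {
        "learning": ["gain in topic knowledge"],
        "behavior": ["adoption of recommended practices"],
        "conditions": ["improved community conditions"],
    },
}

def classify(measure: str) -> str:
    """Return 'output' or 'outcome' for a named measure, else 'unknown'."""
    for level, categories in LOGIC_MODEL.items():
        for items in categories.values():
            if measure in items:
                return level.rstrip("s")  # 'outputs' -> 'output'
    return "unknown"
```

The point of the structure is the one Sheila makes: counting attendees lands on the outputs side, while a change in what people know or do lands on the outcomes side.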

Filed Under (program evaluation, program planning) by Molly on 25-02-2016

“It always seems impossible until it’s done.” ~Nelson Mandela

How many times have you shaken your head in wonder? in confusion? in disbelief?

Regularly throughout your life, perhaps. (Now if you are a wonder kid, you probably have just ignored the impossible and moved on to something else.) Most of us will have been in awe, uncertainty, incredulity. Most of us will always look at that which seems impossible and then be amazed when it is done. (Mandela was such a remarkable man who had such amazing insights.)

Filed Under (program evaluation) by Molly on 23-10-2015

My friend and colleague, Patricia Rogers, says of cognitive bias, “It would be good to think through these in terms of systematic evaluation approaches and the extent to which they address these.” This was in response to the article here. The article says that the human brain is capable of 10 to the 16th power (a big number) processes per second. Despite being faster than a speeding bullet, etc., the human brain has “annoying glitches (that) cause us to make questionable decisions and reach erroneous conclusions.”

Bias is something that evaluators deal with all the time. There is desired response bias, non-response bias, recency and immediacy bias, measurement bias, and…need I say more? Aren’t evaluation and evaluators supposed to be “objective”? Aren’t we supposed to behave in an ethical manner? To have dealt with potential bias and conflicts of interest? That is where cognitive biases appear. And you might not know it at all.
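One of the biases named above, non-response bias, lends itself to a quick numeric sketch. This is my own toy illustration, not something from the article, and the group names and shares are made up; it shows post-stratification weighting, one common way of pulling respondent group shares back toward the invited population:

```python
# Toy sketch (not from the post) of one response to non-response bias:
# post-stratification weights, computed as population share / respondent
# share for each group. Groups and shares below are hypothetical.
def poststrat_weights(pop_shares: dict, resp_shares: dict) -> dict:
    """Weight each group so weighted respondents match the population mix."""
    return {g: pop_shares[g] / resp_shares[g] for g in pop_shares}

# Hypothetical shares: rural invitees were 40% of the population,
# but 60% of those who responded.
weights = poststrat_weights(
    {"rural": 0.4, "urban": 0.6},
    {"rural": 0.6, "urban": 0.4},
)
# Rural responses are down-weighted (about 0.67), urban up-weighted (1.5).
```

Weighting does not remove the bias if non-respondents differ in unmeasured ways, which is exactly the kind of glitch the article warns about.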

Filed Under (Methodology, program evaluation) by Molly on 05-09-2014

I just finished a chapter on needs assessment in the public sector–you know, that part of the work environment that provides a service to the community and receives part of its funding from the county/state/federal governments. Most of you know I’ve been an academic for at least 31 years, maybe more (depending on when you start the clock). In that time I’ve worked as an internal evaluator, a program planner, and a classroom teacher. Most of what I’ve done has an evaluative component to it. (I actually earned my doctorate in program evaluation when most people in evaluation came from other disciplines.) During that time I’ve worked on many programs/projects in a variety of situations (individual classroom, community, state, and country). I find it really puzzling that evaluators will take on an evaluation without having a firm foundation on which to base it. (I know I have done this; I can offer all manner of excuses, just not here.)

If I had been invited to participate in the evaluation at the beginning of the program, at the conceptualization stage, I would have asked if a needs assessment had been done and what was the outcome of that assessment. Was there really a lack (i.e., a need); or was this “need” contrived to do something else (bring in grant money, further a career, make a stakeholder happy, etc.)?

Filed Under (Methodology, program evaluation) by Molly on 27-06-2014

A colleague asked, “How do you design an evaluation that can identify unintended consequences?” This was based on a statement about methodologies that “only measure the extent to which intended results have been achieved and are not able to capture unintended outcomes” (see AEA365). (The cartoon is attributed to Rob Cottingham.)

Really good question. Unintended consequences are just that–outcomes which are not what you think will happen with the program you are implementing. This is where program theory comes into play. When you model the program, you think of what you want to happen. What you want to happen is usually supported by the literature, not your gut (intuition may be useful for the unintended, however). A logic model lists as outcomes the “intended” outcomes (consequences). So you run your program and you get something else–not necessarily bad, just not what you expected; the outcome is unintended.

Program theory can advise you that other outcomes could happen. How do you design your evaluation so that you can capture those? Mazmanian, in his 1998 study on intention to change, had an unintended outcome, one that has applications to any adult learning experience (1). So what method do you use to get at these? A general, open-ended question? Perhaps. Many (most?) people won’t respond to open-ended questions–it takes too much time. OK. I can live with that. So what do you do instead? What does the literature say could happen, even if you didn’t design the program for that outcome? Ask that question, along with the questions about what you expect to happen.

How would you represent this in your logic model–by the ubiquitous “other”? Perhaps. It is certainly easy that way. Again, look at program theory. What does it say? Then use what is said there. Or use “other”–but then you are back to open-ended questions and run the risk of not getting a response. And if you only model “other,” do you really know what that “other” is?

I know that I won’t be able to get to world peace, so I look for what I can evaluate, and since I doubt I’ll have enough money to actually go and observe behaviors (certainly the ideal), I have to ask a question. In your question asking, you want a response, right? Then ask the specific question. Ask it in a way that elicits program influence: How confident is the respondent that X happened? How confident is the respondent that they can do X? How confident is the respondent that this outcome could have happened? You could ask if X happened (yes/no) and then ask the confidence questions (confidence questions are also known as self-efficacy questions). Bandura will be proud. See Bandura’s work on social cognitive theory, social learning theory, and self-efficacy.
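The paired pattern described above (a yes/no item on whether X happened, followed by a confidence item) can be sketched in a few lines. The field names, the 1–5 scale, and the toy responses are my assumptions for illustration, not an actual instrument:

```python
# Sketch of the paired question pattern above: a yes/no item
# ("did X happen?") plus a 1-5 confidence (self-efficacy) item.
# The response data are hypothetical.
from statistics import mean

responses = [
    {"x_happened": True, "confidence": 4},
    {"x_happened": True, "confidence": 5},
    {"x_happened": False, "confidence": 2},
]

def summarize(rows):
    """Share reporting X, and mean confidence among those who reported it."""
    yes = [r for r in rows if r["x_happened"]]
    return {
        "share_yes": len(yes) / len(rows),
        "mean_confidence_if_yes": mean(r["confidence"] for r in yes),
    }
```

The point of the pairing is that the yes/no item alone tells you whether the outcome occurred, while the confidence item gets at the self-efficacy dimension the post mentions.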

my two cents.

molly.

1. Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.

Filed Under (program evaluation) by Molly on 13-02-2013

One of the outcomes of learning about evaluation is informational literacy.

Think about it.  How does what is happening in the world affect your program?  Your outcomes?  Your goals?

When was the last time you applied that peripheral knowledge to what you are doing?  Informational literacy is being aware of what is happening in the world.  Knowing this information, even peripherally, adds to your evaluation capacity.

Now, this is not advocating that you need to read the NY Times daily (although I’m sure they would really like to increase their readership); rather it is advocating that you recognize that none of your programs (whether little p or big P) occur in isolation. What your participants know affects how the program is implemented.  What you know affects how the programs are planned.  That knowledge also affects the data collection, data analysis, and reporting.  This is especially true for programs developed and delivered in the community, as are Extension programs.

Let me give you a real life example.  I returned from Tucson, AZ and the capstone event for an evaluation capacity program I was leading.  The event was an outstanding success–not only did it identify what was learned and what needed to be learned, it also demonstrated the value of peer learning.  I was psyched.  I was energized.

I was in an automobile accident 24 hours after returning home.  (The car was totaled–I no longer have a car; my youngest daughter and I experienced no serious injuries.)  The accident was reported in the local paper the following day.  Several people saw the announcement; those same several people expressed their concern; some of those several people asked how they could help.  Now this is a very small local event that had a serious effect on me and my work.  (If I hadn’t had last week’s post already written, I don’t know if I could have written it.)  Solving simple problems takes twice as long (at least).

This informational literacy influenced those around me.  Their knowing changed their behavior toward me.  Think of what September 11, 2001 did to people’s behavior; think about what the Pope’s RESIGNATION is doing to people’s behavior.  Informational literacy.  It is all evaluative.  Think about it.

Graphic URL: http://www.otterbein.edu/resources/library/information_literacy/index.htm

Filed Under (program evaluation) by Molly on 29-08-2012

The topic of complexity has appeared several times over the last few weeks.  Brian Pittman wrote about it in an AEA365; Charles Gasper used it as a topic for his most recent blog.  Much food for thought, especially as it relates to the work evaluators do.

Simultaneously, Harold Jarche talks about connections.  To me, connections and complexity are two sides of the same coin.  Something which is complex typically has multiple parts.  Something which has multiple parts is connected to the other parts.  Certainly the work done by evaluators has multiple parts; certainly those parts are connected to each other.  The challenge we face is logically defending those connections and, in doing so, making explicit the parts.  Sound easy?  It’s not.

That’s why I stress modeling your project before you implement it.  If the project is modeled, often the model leads you to discover that what you thought would happen because of what you do, won’t.  You have time to fix the model, fix the program, and fix the evaluation protocol.  If your model is defensible and logical, you still may find out that the program doesn’t get you where you want to go.  Jonny Morell writes about this in his book, Evaluation in the Face of Uncertainty.  There are worse things than having to fix the program or the evaluation protocol before implementation.  Keep in mind that connections are key; complexity is everywhere.  Perhaps you’ll have an Aha! moment.

I’ll be on holiday and there will not be a post next week.  Last week was an odd week–an example of complexity and connections leading to unanticipated outcomes.

Filed Under (program evaluation) by Molly on 09-11-2011

Ellen Taylor-Powell, UWEX Evaluation Specialist Emeritus, presented via webinar from Rome to the WECT (say west) cohorts today.  She talked about program planning and logic modeling.  The logic model format that Ellen developed was picked up by USDA, now NIFA, and disseminated across Extension.  That dissemination had an amazing effect on Extension, so much so that most Extension faculty know the format and can use it for their programs.

Ellen went further today than those resources located through hyperlinks on the UWEX website.  She cited the work by Sue Funnell and Patricia J. Rogers, Purposeful Program Theory: Effective Use of Theories of Change and Logic Models.  It was published in March 2011.  Here is what the publisher (Jossey-Bass, an imprint of Wiley) says:

Between good intentions and great results lies a program theory—not just a list of tasks but a vision of what needs to happen, and how. Now widely used in government and not-for-profit organizations, program theory provides a coherent picture of how change occurs and how to improve performance. Purposeful Program Theory shows how to develop, represent, and use program theory thoughtfully and strategically to suit your particular situation, drawing on the fifty-year history of program theory and the authors’ experiences over more than twenty-five years.

Two reviewers whom I have mentioned before, Michael Quinn Patton and E. Jane Davidson, say the following:

“From needs assessment to intervention design, from implementation to outcomes evaluation, from policy formulation to policy execution and evaluation, program theory is paramount. But until now no book has examined these multiple uses of program theory in a comprehensive, understandable, and integrated way. This promises to be a breakthrough book, valuable to practitioners, program designers, evaluators, policy analysts, funders, and scholars who care about understanding why an intervention works or doesn’t work.” —Michael Quinn Patton, author, Utilization-Focused Evaluation

“Finally, the definitive guide to evaluation using program theory! Far from the narrow ‘one true way’ approaches to program theory, this book provides numerous practical options for applying program theory to fulfill different purposes and constraints, and guides the reader through the sound critical thinking required to select from among the options. The tour de force of the history and use of program theory is a truly global view, with examples from around the world and across the full range of content domains. A must-have for any serious evaluator.” —E. Jane Davidson, PhD, Real Evaluation Ltd.

Jane is the author of the book Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation, published by Sage.  This book “…provides a step-by-step guide for doing a real evaluation.  It focuses on the main kinds of ‘big picture’ questions that evaluators usually need to answer, and how the nature of such questions is linked to evaluation methodology choices.”  And although Ellen didn’t specifically mention this book, it is a worthwhile resource for nascent evaluators.

Two other resources were mentioned today.  One was Jonny Morell’s book, Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable, published by Guilford Press.  Ellen also mentioned John Mayne and his work in contribution analysis.  A quick web search provided this reference:  Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. ILAC Brief No. 16. Rome, Italy: Institutional Learning and Change (ILAC) Initiative.  I’ll talk more about contribution analysis next week in TIMELY TOPICS.

If those of you who listened to Ellen remember other sources that she mentioned, let me know and I’ll put them here next week.

Filed Under (program evaluation) by Molly on 19-04-2011

Hi everyone–it is the third week in April and time for a TIMELY TOPIC!  (I was out of town last week.)

Recently, I was asked: Why should I plan my evaluation strategy in the program planning stage? Isn’t it good enough to just ask participants if they are satisfied with the program?

Good question.  This is the usual scenario:  You have something to say to your community.  The topic has research support and is timely.  You think it would make a really good new program (or a revision of a current program).  So you plan the program. 

Do you plan the evaluation at the same time? The keyed response is YES.  The usual response is something like, “Are you kidding?”  No, not kidding.  When you plan your program is the time to plan your evaluation.

Unfortunately, my experience is that many (most) faculty, when planning or revising a program, fail to think about evaluating that program at the planning stage.  Yet it is at the planning stage that you can clearly and effectively identify what you think will happen and what will indicate that your program has made a difference. Remember, the evaluative question isn’t, “Did the participants like the program?”  The evaluative question is, “What difference did my program make in the lives of the participants–and, if possible, in the economic, environmental, and social conditions in which they live?” That is the question you need to ask yourself when you plan your program.  It also happens to be the evaluative question for the long term outcomes in a logic model.

If you ask this question before you implement your program, you may find that you cannot gather data to answer it.  This allows you to look at what change (or changes) you can measure.  Can you measure changes in behavior?  This answers the question, “What difference did this program make in the way the participants act in the context presented in the program?” Or perhaps,  “What change occurred in what the participants know about the program topic?”  These are the evaluative questions for the intermediate and short term outcomes in a logic model.  (As an aside, there are evaluative questions that can be asked at every stage of a logic model.)
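The stage-to-question mapping above can be written down as a simple lookup table to use at the planning stage. The stage labels are the standard short/intermediate/long-term outcome levels; the question wording follows this post:

```python
# Evaluative questions per logic-model outcome level, worded as in the post.
EVALUATIVE_QUESTIONS = {
    "short_term": "What change occurred in what the participants know "
                  "about the program topic?",
    "intermediate": "What difference did this program make in the way the "
                    "participants act in the context presented in the program?",
    "long_term": "What difference did my program make in the lives of the "
                 "participants?",
}

def question_for(stage: str) -> str:
    """Look up the evaluative question to draft at the planning stage."""
    return EVALUATIVE_QUESTIONS[stage]
```

Drafting these questions while planning the program, rather than retrofitting them later, is exactly the practice the post argues for.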

By thinking about and planning for evaluation at the PROGRAM PLANNING STAGE, you avoid an evaluation that gives you data that cannot be used to support your program.  A program you can defend with good evaluation data is a program that has staying power.  You also avoid having to retrofit your evaluation to your program.  Retrofits, though often possible, may miss important data that could only be gathered by thinking of your outcomes ahead of the implementation.

Years ago (back when we beat on hollow logs), evaluations typically asked questions that measured participant satisfaction.  You probably still want to know if participants are satisfied with your program.  Satisfaction questionnaires may be necessary; they are no longer sufficient.  They do not answer the evaluative question, “What difference did this program make?”