Intention to change

I’ve talked about intention to change and how stating that intention out loud and to others makes a difference. This piece of advice is showing up in some unexpected places, including here. If you state your goal, there is a higher likelihood that you will be successful. That makes sense. If you confess publicly (or even to a priest), you are more likely to do the penance or make the change. What I find interesting is how much this is an evaluation question. What difference did the intervention make? How does that difference relate to the merit, worth, and value of the program?

Lent started March 5. That is 40 days of discipline–giving up or taking on. That is a program. What difference will it make? Can you go 40 days without chocolate?

New Topic:

I got my last comment in November 2013. I miss comments. Sure, most of them were of the “check out this other web site” variety. Still, there were some substantive comments; I’ve read those and archived them. My IT person doesn’t know what prompted this sudden stop. Perhaps Google changed its search engine optimization algorithm and my key words no longer rank near the top. So I don’t know if what I write is meaningful, is worthwhile, or is resonating with you, the reader, in any way. I have been blogging now for over four years…this is no easy task. Comments and/or questions would be helpful and give me some direction.

New Topic:

Chris Lysy cartoons in his blog. This week he blogged about logic models. He only included logic models that are drawn with boxes. What if the logic model is circular? How would it be different? Can it still lead to outcomes? Non-linear thinkers/cultures would say so. How would you draw it? Given that mind mapping may also be a model, how do they relate?

Have a nice weekend. The sun is shining again!

 

The US elections are over; the analysis is mostly done; the issues are still issues. Welcome, the next four years. As Dickens wrote, “It was the best of times, it was the worst of times.” Which? you ask–it all depends, and that is the evaluative question of the day.

So what do you need to know now?  You need to help someone answer the question, Is it effective?  OR (maybe) Did it make a difference?

The Canadian Evaluation Society, the Canadian counterpart to the American Evaluation Association, has put together a series (six so far) of pamphlets for new evaluators.  This week, I’ve decided to go back to the beginning and promote evaluation as a profession.

Gene Shackman (no picture could be found) originally organized these brief pieces and is willing to share them.  Gene is an applied sociologist and director of the Global Social Change Research Project.  His first contribution was in December 2010; the most current, November 2012.

Hope these help.

Although this was the fourth CES post (in July 2011), I believe it is something that evaluators (and those who woke up and found out they were evaluators) need before any of the other booklets. Even though there will probably be strange and unfamiliar words in the booklet, it provides a foundation. Every evaluator will know some of these words; some will be new; some will be context specific. Every evaluator needs to have a comprehensive glossary of terminology. The glossary was compiled originally by the International Development Evaluation Association. It is available for download in English, French, and Arabic and is 65 pages.

CES is also posting a series (five as of this post) that Gene Shackman put together.  The first booklet, posted by CES in December 2010, is called “What is program evaluation?” and is a 17-page introduction to program evaluation.  Shackman tells us that “this guide is available as a set of smaller pamphlets…” here.

In January, 2011, CES published the second of these booklets.  Evaluation questions addresses the key questions about program evaluation and is three pages long.

CES posted the third booklet in April 2011.  It is called “What methods to use” and can be found here.  Shackman briefly discusses the benefits and limitations of qualitative and quantitative methods, the two main categories of methods for answering evaluation questions.  A third approach that has gained credibility is mixed methods.

The next booklet, posted by CES in October 2012, is on surveys.  It “…explains what they are, what they are usually used for, and what typical questions are asked… as well as the pros and cons of different sampling methods.”

The most recent booklet just posted (November, 2012) is about qualitative methods such as focus groups and interviews.

One characteristic of these five booklets is the additional resources that Shackman lists for each of the topics.  I have my favorites (and I’ve mentioned them from time to time); those new to the field need to develop their own favorite sources.

What is important is that you embrace the options…this is only one way to look at evaluation.


What is the difference between need to know and nice to know?  How does this affect evaluation?  I got a post this week from a blog I follow (Kirkpatrick) that asks how much data a trainer really needs.  (Remember that Don Kirkpatrick developed and established an evaluation model for professional training back in 1954 that still holds today.)

Most Extension faculty don’t do training programs per se, although there are training elements in Extension programs.  Extension faculty are typically looking for program impacts in their program evaluations.  Program improvement evaluations, although necessary, are not sufficient.  Yes, they provide important information to the program planner; they don’t necessarily give you information about how effective your program has been (i.e., outcome information). (You will note that I will use the term “impacts” interchangeably with “outcomes” because most Extension faculty parrot the language of reporting impacts.)

OK.  So how much data do you really need?  How do you determine what is nice to have and what is necessary (need) to have?  How do you know?

  1. Look at your logic model.  Do you have questions that reflect what you expect to have happen as a result of your program?
  2. Review your goals.  Review your stated goals, not the goals you think will happen because you “know you have a good program”.
  3. Ask yourself, How will I USE these data?  If the data will not be used to defend your program, you don’t need them.
  4. Does the question describe your target audience?  Although it does not demonstrate impact, knowing what your target audience looks like is important.  Journal articles and professional presentations want to know this.
  5. Finally, ask yourself, Do I really need to know the answer to this question, or will it burden the participant?  If it is a burden, your participants will tend not to answer, and then you have a low response rate; not something you want.  (A rough way to apply this checklist is sketched below.)
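If it helps to see that checklist in action, here is a minimal sketch in Python of screening draft survey items so that only the need-to-know ones survive. It is entirely hypothetical–the question texts, the flags, and the keep() helper are mine, not Kirkpatrick’s–but it shows the idea that every item you keep should be tied to your logic model, your stated goals, and an actual use.

```python
# Hypothetical sketch: screening draft survey questions against the
# "need to know vs. nice to know" checklist above.
from dataclasses import dataclass

@dataclass
class DraftQuestion:
    text: str
    reflects_logic_model: bool   # tied to an expected result of the program?
    tied_to_stated_goal: bool    # stated goal, not a hoped-for one
    planned_use: str             # how the answer will be used ("" = no use)
    describes_audience: bool     # demographic/context item
    burdensome: bool             # likely to depress the response rate

def keep(q: DraftQuestion) -> bool:
    """Keep a question only if it earns its place on the instrument."""
    if q.burdensome:
        return False
    if q.describes_audience:     # useful for reporting, even if not impact
        return True
    return (q.reflects_logic_model
            and q.tied_to_stated_goal
            and bool(q.planned_use))

draft = [
    DraftQuestion("In the next six months, do you intend to try any skill "
                  "you learned today? Which one?", True, True,
                  "evidence of intention to change", False, False),
    DraftQuestion("What is your favorite color?", False, False, "", False, True),
]

for q in draft:
    print(("KEEP: " if keep(q) else "DROP: ") + q.text)
```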

Kirkpatrick also advises avoiding redundant questions; that is, the same question asked in several ways that give you the same answer, or questions written in both positive and negative forms.  The one question I always include, because it gives me a way to determine how my program is making a difference, is a question on intention that includes a time frame.  For example, “In the next six months, do you intend to try any of the skills you learned today?  If so, which one?”  Mazmanian has identified stated intention to change as the best predictor of behavior change (a measure of making a difference).  Telling someone else makes the participant accountable.  That seems to make the difference.

 

Reference:

Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8).

 

P.S.  No blog next week; away on business.


“Creativity is not an escape from disciplined thinking. It is an escape with disciplined thinking.” – Jerry Hirschberg – via @BarbaraOrmsby

The above quote was in the September 7 post of Harold Jarche’s blog.  I think it has relevance to the work we do as evaluators.  Certainly, there is a creative part to evaluation; certainly there is a disciplined thinking part to evaluation.  Remembering that is sometimes a challenge.

So where in the process do we see creativity and where do we see disciplined thinking?

When evaluators construct a logic model, you see creativity; you also see disciplined thinking.

When evaluators develop an implementation plan, you see creativity; you also see disciplined thinking.

When evaluators develop a methodology and a method, you see creativity; you also see disciplined thinking.

When evaluators present the findings for use, you see creativity; you also see disciplined thinking.

So the next time you say “give me a survey for this program,” think: Is a survey the best approach to determining whether this program is effective? Will it really answer my questions?

Creativity and disciplined thinking are companions in evaluation.

 

The topic of complexity has appeared several times over the last few weeks.  Brian Pittman wrote about it in an AEA365 post; Charles Gasper used it as the topic of his most recent blog post.  Much food for thought, especially as it relates to the work evaluators do.

Simultaneously, Harold Jarche talks about connections.  To me, connections and complexity are two sides of the same coin.  Something that is complex typically has multiple parts.  Something that has multiple parts has parts that are connected to one another.  Certainly the work done by evaluators has multiple parts; certainly those parts are connected to each other.  The challenge we face is logically defending those connections and, in doing so, making the parts explicit.  Sound easy?  It’s not.

 

That’s why I stress modeling your project before you implement it.  If the project is modeled, the model often leads you to discover that what you thought would happen because of what you do, won’t.  You have time to fix the model, fix the program, and fix the evaluation protocol.  If your model is defensible and logical, you may still find out that the program doesn’t get you where you want to go.  Jonny Morell writes about this in his book, Evaluation in the face of uncertainty.  There are worse things than having to fix the program or fix the evaluation protocol before implementation.  Keep in mind that connections are key; complexity is everywhere.  Perhaps you’ll have an Aha! moment.

 

I’ll be on holiday and there will not be a post next week.  Last week was an odd week–an example of complexity and connections leading to unanticipated outcomes.

 

A colleague asked an interesting question, one that I am often asked as an evaluation specialist:  “without a control group is it possible to show that the intervention had anything to do with a skill increase?”  The answer to the question “Do I need a control group to do this evaluation?” is, “It all depends.”

It depends on what question you are asking.  Are you testing a hypothesis–a question posed in a null form of no difference?  Or answering an evaluative question–what difference was made?  The methodology you use depends on what question you are asking.  If you want to know how effective or efficient a program (aka intervention) is, you can determine that without a control group.  Campbell and Stanley, in their now well-read 1963 volume, Experimental and quasi-experimental designs for research, talk about quasi-experimental designs that do not use a control group.  Yes, there are threats to internal validity; yes, there are stronger designs; yes, the controls are not as rigorous as in a double-blind, cross-over design (considered the gold standard by some groups).  We are talking here about evaluation, people, NOT research.  We are not asking questions of efficacy (research); rather, we want to know what difference is being made; we want to know the answer to “so what.”  Remember, the root of evaluation is value, not cause.
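To make the quasi-experimental point concrete, here is a minimal sketch of a one-group pretest-posttest comparison, one of the Campbell and Stanley designs that uses no control group. The scores are invented, and the analysis (a paired t-test via SciPy) is my choice for illustration, not a prescription; the internal validity threats mentioned above still apply.

```python
# Hypothetical sketch: one-group pretest-posttest design (no control group).
# Scores are invented for illustration only.
import numpy as np
from scipy import stats

pre = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])   # skill scores before the program
post = np.array([5, 6, 4, 6, 5, 5, 4, 4, 6, 5])  # same participants afterward

change = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)      # paired comparison

print(f"mean change: {change.mean():.2f} points")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
# A meaningful change supports a claim of contribution, not proof of cause.
```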

This is certainly a quandary–how to determine cause for the desired outcome.  John Mayne has recognized this quandary and has approached the question of attributing the outcome to the intervention through his use of contribution analysis.  In community-based work, like what Extension does, attributing cause is difficult at best.  Why?  Because there are factors that Extension cannot control, and identifying a control group may not be ethical, appropriate, or feasible.  Use something else that is ethical, appropriate, and feasible (see Campbell and Stanley).

Using a logic model to guide your work helps to defend your premise of “If I have these resources, then I can do these activities with these participants; if I do these activities with these participants, then I expect (because the literature says so–the research has already been done) that the participants will learn these things, do these things, change these conditions.”  The likelihood of achieving world peace with your intervention is low at best; the likelihood of changing something (learning, practices, conditions) if you have a defensible model (road map) is high.  Does that mean your program caused that change?  Probably not.  Can you take credit for the change?  Most definitely.

Ellen Taylor-Powell, UWEX Evaluation Specialist Emeritus, presented via webinar from Rome to the WECT (say west) cohorts today.  She talked about program planning and logic modeling.  The logic model format that Ellen developed was picked up by USDA, now NIFA, and disseminated across Extension.  That dissemination had an amazing effect on Extension, so much so that most Extension faculty know the format and can use it for their programs.

 

Ellen went further today than the resources located through hyperlinks on the UWEX website.  She cited the work by Sue Funnell and Patricia J. Rogers, Purposeful program theory: Effective use of theories of change and logic models.  It was published in March 2011.  Here is what the publisher (Jossey-Bass, an imprint of Wiley) says:

Between good intentions and great results lies a program theory—not just a list of tasks but a vision of what needs to happen, and how. Now widely used in government and not-for-profit organizations, program theory provides a coherent picture of how change occurs and how to improve performance. Purposeful Program Theory shows how to develop, represent, and use program theory thoughtfully and strategically to suit your particular situation, drawing on the fifty-year history of program theory and the authors’ experiences over more than twenty-five years.

Two reviewers whom I have mentioned before, Michael Quinn Patton and E. Jane Davidson, say the following:

“From needs assessment to intervention design, from implementation to outcomes evaluation, from policy formulation to policy execution and evaluation, program theory is paramount. But until now no book has examined these multiple uses of program theory in a comprehensive, understandable, and integrated way. This promises to be a breakthrough book, valuable to practitioners, program designers, evaluators, policy analysts, funders, and scholars who care about understanding why an intervention works or doesn’t work.” —Michael Quinn Patton, author, Utilization-Focused Evaluation

“Finally, the definitive guide to evaluation using program theory! Far from the narrow ‘one true way’ approaches to program theory, this book provides numerous practical options for applying program theory to fulfill different purposes and constraints, and guides the reader through the sound critical thinking required to select from among the options. The tour de force of the history and use of program theory is a truly global view, with examples from around the world and across the full range of content domains. A must-have for any serious evaluator.” —E. Jane Davidson, PhD, Real Evaluation Ltd.

Jane is the author of the book Evaluation Methodology Basics: The nuts and bolts of sound evaluation, published by Sage.  This book “…provides a step-by-step guide for doing a real evaluation.  It focuses on the main kinds of ‘big picture’ questions that evaluators usually need to answer, and how the nature of such questions is linked to evaluation methodology choices.”  And although Ellen didn’t specifically mention this book, it is a worthwhile resource for nascent evaluators.

Two other resources were mentioned today.  One is Jonny Morell’s book, Evaluation in the face of uncertainty: Anticipating surprise and responding to the inevitable, published by Guilford Press.  Ellen also mentioned John Mayne and his work in contribution analysis.  A quick web search provided this reference:  Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. ILAC Brief No. 16. Rome, Italy: Institutional Learning and Change (ILAC) Initiative.  I’ll talk more about contribution analysis next week in TIMELY TOPICS.

 

If those of you who listened to Ellen remember other sources that she mentioned, let me know and I’ll put them here next week.

 

A colleague asked me what I considered an output in a statewide program we were discussing.  This is a really good example of assumptions and how they can blindside an individual–in this case, me.  Once I (figuratively) picked myself up, I proceeded to explain how this terminology applied to the program under discussion.  Once the meeting concluded, I realized that perhaps a bit of a refresher was in order.  Even the most seasoned evaluators can benefit from a reminder every so often.

 

So OK–inputs, outputs, outcomes.

As I’ve mentioned before, Ellen Taylor-Powell, former UWEX Evaluation Specialist, has a marvelous tutorial on logic modeling.  I recommend you go there for your own refresher.  What I offer you here is a brief (very) overview of these terms.

Logic models, whether linear or circular, are composed of various focus points.  Those focus points include (in addition to those mentioned in the title of this post) the situation, assumptions, and external factors.  Simply put, the situation is what is going on–the priorities, the needs, the problems that led to the program you are conducting–that is, program with a small p (we can talk about sub and supra models later).

Inputs are those resources you need to conduct the program.  Typically, they are lumped into personnel, time, money, venue, and equipment.  Personnel covers staff, volunteers, partners, and any other stakeholder.  Time is not just your time; it also includes the time needed for implementation, evaluation, analysis, and reporting.  Money speaks for itself.  Venue is where the program will be held.  Equipment is the stuff you will need–technology, materials, gear, etc.

Outputs are often classified into two parts–first, the participants (or target audience), and second, the activities that are conducted.  Typically (although not always), those activities are counted and are called bean counts.  In the example that started this post, we would be counting the number of students who graduated high school; the number of students who matriculated to college (either 2 or 4 year); the number of students who transferred from 2 year to 4 year colleges; the number of students who completed college in 2 or 4 years; etc.  This bean count could also be the number of classes offered; the number of brochures distributed; the number of participants in the class; the number of (fill in the blank).  Outputs are necessary but not sufficient to determine whether a program is being effective.  The field of evaluation started with determining bean counts and satisfactions.

Outcomes can be categorized as short term, medium/intermediate term, or long term.  Long term outcomes are often called impacts.  (There are those in the field who would classify impacts as something separate from an outcome–a discussion for another day.)  Whatever you choose to call the effects of your program, be consistent–don’t use the terms interchangeably; it confuses the reader.  What you are looking for as an outcome is change–in learning; in behavior; in conditions.  This change is measured in the target audience–individuals, groups, communities, etc.
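For readers who like to see the pieces laid out explicitly, here is a minimal sketch of these focus points as a simple data structure.  The example program, the counts, and the field names are all hypothetical; the point is only that inputs, outputs (participants plus the counted activities), and the three levels of outcomes occupy separate slots, and that change lives under outcomes, not outputs.

```python
# Hypothetical sketch: a logic model's focus points as a simple data structure.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    situation: str                     # needs/priorities that led to the program
    inputs: list[str]                  # personnel, time, money, venue, equipment
    output_participants: list[str]     # who was reached
    output_activities: dict[str, int]  # "bean counts" of what was done
    outcomes_short: list[str]          # changes in learning
    outcomes_medium: list[str]         # changes in behavior/practice
    outcomes_long: list[str]           # changes in conditions (impacts)
    assumptions: list[str] = field(default_factory=list)
    external_factors: list[str] = field(default_factory=list)

# An invented college-access program, echoing the counts mentioned above.
example = LogicModel(
    situation="Low college-going rate in the county",
    inputs=["2 staff FTE", "volunteer mentors", "grant funds", "meeting space"],
    output_participants=["high school juniors and seniors"],
    output_activities={"classes offered": 12, "students served": 180},
    outcomes_short=["students can name the college application steps"],
    outcomes_medium=["students submit college applications"],
    outcomes_long=["more students graduate high school and matriculate"],
)

# Bean counts alone (outputs) do not show effectiveness; outcomes do.
print(example.output_activities, "->", example.outcomes_long)
```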

I’ll talk about assumptions and external factors another day.  Have a wonderful holiday weekend…the last vestiges of summer–think tomatoes, corn-on-the-cob, state fair, and a tall cool drink.

Hi everyone–it is the third week in April and time for a TIMELY TOPIC!  (I was out of town last week.)

Recently, I was asked: Why should I plan my evaluation strategy in the program planning stage? Isn’t it good enough to just ask participants if they are satisfied with the program?

Good question.  This is the usual scenario:  You have something to say to your community.  The topic has research support and is timely.  You think it would make a really good new program (or a revision of a current program).  So you plan the program. 

Do you plan the evaluation at the same time? The keyed response is YES.  The usual response is something like, “Are you kidding?”  No, not kidding.  When you plan your program is the time to plan your evaluation.

Unfortunately, my experience is that many (most) faculty, when planning or revising a program, fail to think about evaluating that program at the planning stage.  Yet it is at the planning stage that you can clearly and effectively identify what you think will happen and what will indicate that your program has made a difference.  Remember, the evaluative question isn’t, “Did the participants like the program?”  The evaluative question is, “What difference did my program make in the lives of the participants–and, if possible, in the economic, environmental, and social conditions in which they live?”  That is the question you need to ask yourself when you plan your program.  It also happens to be the evaluative question for the long-term outcomes in a logic model.

If you ask this question before you implement your program, you may find that you cannot gather data to answer it.  This allows you to look at what change (or changes) you can measure.  Can you measure changes in behavior?  This answers the question, “What difference did this program make in the way the participants act in the context presented in the program?”  Or perhaps, “What change occurred in what the participants know about the program topic?”  These are the evaluative questions for the short- and intermediate-term outcomes in a logic model.  (As an aside, there are evaluative questions that can be asked at every stage of a logic model.)

By thinking about and planning for evaluation at the PROGRAM PLANNING STAGE, you avoid an evaluation that gives you data that cannot be used to support your program.  A program you can defend with good evaluation data is a program that has staying power.  You also avoid having to retrofit your evaluation to your program.  Retrofits, though often possible, may miss important data that could only be gathered by thinking of your outcomes ahead of implementation.

Years ago (back when we beat on hollow logs), evaluations typically asked questions that measured participant satisfaction.  You probably still want to know if participants are satisfied with your program.  Satisfaction questionnaires may be necessary; they are no longer sufficient.  They do not answer the evaluative question, “What difference did this program make?”

My wishes to you: Blessed Solstice. Merry Christmas. Happy Kwanzaa. And the very best wishes for the New Year!

A short post today.

Ellen Taylor-Powell, my counterpart at University of Wisconsin Extension, has posted the following to the Extension Education Evaluation TIG listserv.  I think it is important enough to share here.

When you download this PDF to save a copy, think of where your values come into the model, where others’ values can affect the program, and how you can modify the model to balance those values.

Ellen says:  “I just wanted to let everyone know that the online logic model course, ‘Enhancing Program Performance with Logic Models,’ has been produced as a PDF in response to requests from folks without easy or affordable internet access or with different learning needs.  The PDF version (216 pages, 3.35MB) is available at:

http://www.uwex.edu/ces/pdande/evaluation/pdf/lmcourseall.pdf

Please note that no revisions or updates have been made to the original 2003 online course.

Happy Holidays!

Ellen”