Like many people, I find change hard. In fact, I really don’t like change. I think this is the result of a high school experience; one-third of my classmates left each year. (I was a military offspring; we changed assignments every three years.)

Yet, in today’s world change is probably the only constant. Does that make it fun? Not necessarily. Does that make it easy? Nope. Does that make it necessary? Yep.

Evaluators deal with change regularly. New programs are required; those must be evaluated. Old programs are revised; those must be evaluated. New approaches are developed and presented to the field. (When I first became an evaluator, there wasn’t a systems approach to evaluation; there wasn’t developmental evaluation; I could continue.) New technologies are available and must be used even if the old one wasn’t broken (even for those of us who are techno-peasants).

I just finished a major qualitative evaluation that involved real-time virtual focus groups. When I researched this topic (virtual focus groups), I found a lot of information about non-synchronous focus groups, focus groups using conferencing software, even synchronous focus groups without pictures. I didn’t find anything about using real-time synchronous virtual focus groups. Unfortunately, we didn’t have much money, even though there are services available.

It has been about five years since I started this blog (more or less–my anniversary is actually in early December).

Because I am an evaluator, I have asked several times whether this blog is making a difference. And those posts, the ones in which I ask “is this blog making a difference,” are the ones which get the most comments.  Now, truly, most comments are either about marketing some product, inviting me to view another blog, mirroring comments made previously, or written in a language which I cannot read (even with an online translator). Yet, there must be something about “making a difference” that engages viewers and then prompts them to make a comment.

Today, I read a comment.

What? So what? Now what?

Sounds like an evaluation problem.

King and Stevahn (in press) tell us the first query requires thoughtful observation of a situation; the second query, a discussion of possible options and the implications of those options; and the third query calls for the creation of a list of potential next steps.

Yet these are the key words for “adaptive action.” (If you haven’t looked at the web site, I suggest you do.) One quote that is reflective of adaptive action is, “Adaptive Action reveals how we can be proactive in managing today and influencing tomorrow.” (David W. Jamieson, University of St. Thomas). Adaptive action can help you:

  • Understand the sources of uncertainty in your chaotic world
  • Explore opportunities for action and their implications as they occur
  • Learn a simple process that cuts through complexity
  • Transform the work of individuals, teams, organizations and communities
  • Take on any challenge—as large as a strategic plan or as small as a messy meeting
  • Take action to improve productivity, collaboration and sustainability

Evaluation is a proactive (usually) activity (oh, I know that sometimes evaluation is flying by the seat of your pants and is totally reactive). People are now recognizing that evaluation will benefit them, their programs, and their organizations and that it isn’t personal (although that fear is still out there).

Although the site is directed towards leadership in organizations, the key questions are evaluative. You can’t determine “what” without evidence (data); you can’t determine “so what” unless you have a plan (logic model), and you can’t think about “now what” unless you have an outcome that you can move toward. These questions are evaluative in contemporary times because there are no simple problems any more. (Panarchy approaches similar situations using a similar model, the adaptive cycle.) Complex situations are facing program people and evaluators all the time. Using adaptive action may help. Panarchy may help (the book is called Panarchy, by Gunderson and Holling).

Just think of adaptive action as another model of evaluation.

my two cents

molly.

A colleague asked, “How do you design an evaluation that can identify unintended consequences?” This was based on a statement about methodologies that “only measure the extent to which intended results have been achieved and are not able to capture unintended outcomes” (see AEA365). (The cartoon is attributed to Rob Cottingham.)

Really good question. Unintended consequences are just that–outcomes which are not what you think will happen with the program you are implementing. This is where program theory comes into play. When you model the program, you think of what you want to happen. What you want to happen is usually supported by the literature, not your gut (intuition may be useful for the unintended, however). A logic model lists as outcomes the “intended” outcomes (consequences). So you run your program and you get something else, not necessarily bad, just not what you expected; the outcome is unintended.

Program theory can advise you that other outcomes could happen. How do you design your evaluation so that you can capture those? Mazmanian, in his 1998 study on intention to change, had an unintended outcome, one that has applications to any adult learning experience (1). So what method do you use to get at these? A general question, open-ended? Perhaps. Many (most?) people won’t respond to open-ended questions–takes too much time. OK. I can live with that. So what do you do instead? What does the literature say could happen? Even if you didn’t design the program for that outcome, ask that question, along with the questions about what you expect to happen.

How would you represent this in your logic model–by the ubiquitous “other”? Perhaps. Certainly easy that way. Again, look at program theory. What does it say? Then use what is said there. Or use “other”–then you are getting back to the open-ended questions and run the risk of not getting a response. If you only model “other,” do you really know what that “other” is?

I know that I won’t be able to get to world peace, so I look for what I can evaluate, and since I doubt I’ll have enough money to actually go and observe behaviors (certainly the ideal), I have to ask a question. In your question asking, you want a response, right? Then ask the specific question. Ask it in a way that elicits program influence–how confident is the respondent that X happened? How confident is the respondent that they can do X? How confident is the respondent that this outcome could have happened? You could ask if X happened (yes/no) and then ask the confidence questions (confidence questions are also known as self-efficacy). Bandura will be proud. (See Bandura’s work on social cognitive theory, social learning theory, and self-efficacy for discussions of self-efficacy and social learning.)
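If it helps to see that pairing written out, here is a minimal sketch in Python. It is purely illustrative: the outcomes, item wording, and the 5-point confidence scale are hypothetical examples of my own, not items from any validated instrument.

```python
# A minimal, hypothetical sketch: pair each outcome (intended or
# literature-suggested) with a yes/no item plus confidence
# (self-efficacy) follow-ups. Wording and scale are invented examples.

from dataclasses import dataclass, field
from typing import List


@dataclass
class OutcomeItem:
    outcome: str                               # the "X" named in the program theory
    happened: str = ""
    confidence_items: List[str] = field(default_factory=list)

    def __post_init__(self):
        scale = "(1 = not at all confident ... 5 = completely confident)"
        self.happened = f"Did {self.outcome} happen as a result of the program? (yes/no)"
        self.confidence_items = [
            f"How confident are you that {self.outcome} happened? {scale}",
            f"How confident are you that you could make {self.outcome} happen yourself? {scale}",
        ]


# Intended outcomes come straight from the logic model; the others are
# outcomes the literature says could happen even though the program
# wasn't designed for them.
intended = [OutcomeItem("increased use of soil testing")]
literature_suggested = [OutcomeItem("sharing results with neighboring growers")]

for item in intended + literature_suggested:
    print(item.happened)
    for question in item.confidence_items:
        print("  " + question)
```

The point of the sketch is simply that every outcome, expected or merely possible, gets both a factual item and a confidence item, so the unintended ones are not left to an open-ended catch-all.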

my two cents

molly.

1. Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.

Once again, it is the whole ‘balance’ thing…(we) live in ordinary life and that ordinary life is really the only life we have…I’ll take it. It has some great moments…

 

These wise words come from the insights of Buddy Stallings, Episcopal priest in charge of a large parish in a large city in the US.  True, I took them out of context; the important thing is that they resonated with me from an evaluation perspective.

Too often, faculty and colleagues come to me and wonder what the impact is of this or that program.  I wonder, What do they mean?  What do they want to know? Are they only using words they have heard–the buzz words?  I ponder how this fits into their ordinary life. Or are they outside their ordinary life, pretending in a foreign country?

A faculty member at Oregon State University equated history to a foreign country.  I was put in mind that evaluation is a foreign country to many (most) people, even though everyone evaluates every day, whether they know it or not.  Individuals visit that country because they are required to visit; to gather information; to report what they discovered.  They do this without any special preparation.  Visiting a foreign country entails preparation (at least it does for me).  A study of customs, mores, foods, language, behavior, tools (I’m sure I’m missing something important in this list) is needed; not just necessary, mandatory.  Because although the foreign country may be exotic and unique and novel to you, it is ordinary life for everyone who lives there.  The same is true for evaluation.  There are customs; students are socialized to think and act in a certain way.  Mores are constantly being called into question; language, behaviors, tools, which are not known to you in your ordinary life, present themselves. You are constantly presented with opportunities to be outside your ordinary life.  Yet, I wonder, what are you missing by not seeing the ordinary; by pretending that it is extraordinary?  By not doing the preparation to make evaluation part of your ordinary life, something you do without thinking?

So I ask you, What preparation have you done to visit this foreign country called EVALUATION?  What are you currently doing to increase your understanding of this country?  How does this visit change your ordinary life or can you get those great moments by recognizing that this is truly the only life you have?   So I ask you, What are you really asking when you ask, What are the impacts?

 

All of this has significant implications for capacity building.

I came across this quote from Viktor Frankl today (thanks to a colleague):

“…everything can be taken from a man (sic) but one thing: the last of the human freedoms – to choose one’s attitude in any given set of circumstances, to choose one’s own way.” Viktor Frankl (Man’s Search for Meaning – p.104)

I realized that,  especially at this time of year, attitude is everything–good, bad, indifferent–the choice is always yours.

How we choose to approach anything depends upon our previous experiences–what I call personal and situational bias.  Sadler* has three classifications for these biases.  He calls them value inertias (unwanted distorting influences which reflect background experience), ethical compromises (actions for which one is personally culpable), and cognitive limitations (not knowing, for whatever reason).

When we approach an evaluation, our attitude leads the way.  If we are reluctant, if we are resistant, if we are excited, if we are uncertain, all these approaches reflect where we’ve been, what we’ve seen, what we have learned, what we have done (or not).  We can make a choice how to proceed.

The American Evaluation Association (AEA) has long had a history of supporting difference.  That value is embedded in the guiding principles.  The two principles which address supporting differences are

  • Respect for People:  Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
  • Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

AEA also has developed a Cultural Competence statement.  In it, AEA affirms that “A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation.”

Both of these documents provide a foundation for the work we do as evaluators as well as relating to our personal and situational bias. Considering them as we enter into the choice we make about attitude will help minimize the biases we bring to our evaluation work.  The evaluative question from all this–When have your personal and situational biases interfered with your work in evaluation?

Attitude is always there–and it can change.  It is your choice.


Sadler, D. R. (1981). Intuitive data processing as a potential source of bias in naturalistic evaluations. Educational Evaluation and Policy Analysis, 3, 25-31.

Hopefully, the technical difficulties with images are no longer a problem and I will be able to post the answers to the history quiz and the post I had hoped to post last week.  So, as promised, here are the answers to the quiz I posted the week of July 5.  The keyed responses are in BOLD.

1.  Michael Quinn Patton, author of Utilization-Focused Evaluation, the new book Developmental Evaluation, and the classic Qualitative Evaluation and Research Methods.

2.   Michael Scriven is best known for his concept of formative and summative evaluation. He has also advocated that evaluation is a transdiscipline.  He is the author of the Evaluation Thesaurus.

3. Hallie Preskill is the co-author (with Darlene Russ-Eft) of Evaluation Capacity Building

4. Robert E. Stake has advanced work in case study and is the author of the books Multiple Case Study Analysis and The Art of Case Study Research.

5. David M. Fetterman is best known for his advocacy of empowerment evaluation and the book of that name, Foundations of Empowerment Evaluation.

6. Daniel Stufflebeam developed the CIPP (context input process product) model, which is discussed in the book Evaluation Models.

7. James W. Altschuld is the go-to person for needs assessment.  He is the editor of the Needs Assessment Kit (or everything you wanted to know about needs assessment and didn’t know where to find the answer).  He is also the co-author, with Belle Ruth Witkin, of two needs assessment books.

8. Jennifer C. Greene is the current President of the American Evaluation Association and the author of a book on mixed methods.

9. Ernest R. House is a leader in the work of evaluation policy and is the author of an evaluation novel, Regression to the Mean.

10. Lee J. Cronbach is a pioneer in education evaluation and the reform of that practice.  He co-authored with several associates the book Toward Reform of Program Evaluation.

11.  Ellen Taylor-Powell, the former Evaluation Specialist at the University of Wisconsin-Extension, is credited with developing the logic model later adopted by the USDA for use by the Extension Service.

12. Yvonna Lincoln, with her husband Egon Guba (see below), co-authored the book Naturalistic Inquiry. She is currently the co-editor (with Norman K. Denzin) of the Handbook of Qualitative Research.

13.   Egon Guba, with his wife Yvonna Lincoln, is the co-author of Fourth Generation Evaluation.

14. Blaine Worthen has championed certification for evaluators.  He, with Jody L. Fitzpatrick and James R. Sanders, co-authored Program Evaluation: Alternative Approaches and Practical Guidelines.

15.  Thomas A. Schwandt, a philosopher at heart who started as an auditor, has written extensively on evaluation ethics. He is also the co-author (with Edward S. Halpern) of Linking Auditing and Metaevaluation.

16.   Peter H. Rossi, co-author with Howard E. Freeman and Mark W. Lipsey of Evaluation: A Systematic Approach, is a pioneer in evaluation research.

17. W. James Popham is a leader in educational evaluation and authored the volume Educational Evaluation.

18. Jason Millman was a pioneer of teacher evaluation and author of the Handbook of Teacher Evaluation.

19.  William R. Shadish co-authored (with Laura C. Leviton and Thomas Cook) Foundations of Program Evaluation: Theories of Practice. His work in theories of evaluation practice earned him the Paul F. Lazarsfeld Award for Evaluation Theory from the American Evaluation Association in 1994.

20.   Laura C. Leviton (co-author, with Will Shadish and Tom Cook–see above, of Foundations of Program Evaluation: Theories of Practice) has pioneered work in participatory evaluation.


Although I’ve only listed 20 leaders, movers and shakers, in the evaluation field, there are others who also deserve mention:  John Owen, Deb Rog, Mark Lipsey, Mel Mark, Jonathan Morell, Midge Smith, Lois-Ellin Datta, Patricia Rogers, Sue Funnell, Jean King, Laurie Stevahn, John McLaughlin, Michael Morris, Nick Smith, Don Dillman, Karen Kirkhart, among others.

If you want to meet the movers and shakers, I suggest you attend the American Evaluation Association annual meeting.  In 2011, it will be held in Anaheim, CA, November 2 – 5; professional development sessions are being offered October 31, November 1 and 2, and also November 6.  More conference information can be found on the AEA website.


What are standard evaluation tools?  What knowledge do you need to conduct an evaluation effectively and efficiently?  For this post and the next two, I’m going to talk about just that.

This post is about planning programs. 

The next one will be about implementing, monitoring, and delivering the evaluation of that program.

The third one will be about utilizing the findings of that program evaluation.

Today–program planning.  How does program planning relate to program evaluation?

A lot of hours go into planning a program.  Questions that need to be answered include, among others:

  • What expertise is needed?
  • What is the content focus?
  • What venue will be utilized?
  • Who is the target audience?
  • How many can you accommodate?
  • What will you charge?
  • And the list of questions goes on…talk to any event planner–they will tell you, planning a program is difficult.

Although you might think that these are planning questions, they are also evaluation questions.  They point the program planner to the outcome of the program in the context in which the program is planned.  Yet evaluation is often left out of that planning.  It is one detail that gets lost in all the rest–until the end.  Unfortunately, retrofitting an evaluation after the program has already run often results in spurious data, leading to specious results and unusable findings, and ultimately a program that can’t be replicated.  What’s an educator to do?

The tools that help in program planning are ones you have seen and probably used before:  logic models, theories of change, and evaluation proposals.

Logic models have already been the topic of this blog.  Theories of change have been mentioned.  Evaluation proposals are a new topic.  More and more, funding agencies want an evaluation plan.  Some provide a template–often a modified logic model; some ask specifically for a program-specific logic model.  Detailing how your program will bring about change and what change is expected is all part of an evaluation proposal.  A review of logic models, theories of change, and the program theory related to your proposed program will help you write an evaluation proposal.

Keep in mind that you may be writing for a naive audience, an audience who isn’t as knowledgeable as you in your subject matter OR in the evaluation process.  A simple evaluation proposal will go a long way to getting and keeping all stakeholders on the same page.
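If it helps to see the structure written out, here is a minimal sketch of a program-specific logic model expressed as plain data (in Python). The program, inputs, activities, and outcomes are invented examples of my own, not a template from any funding agency.

```python
# A minimal, hypothetical sketch of a program-specific logic model as
# plain data. Every entry below is an invented example.

logic_model = {
    "inputs":     ["2 educators", "curriculum", "grant funds"],
    "activities": ["6 hands-on workshops", "follow-up site visits"],
    "outputs":    ["120 participants trained", "materials distributed"],
    "outcomes": {
        "short_term":  ["increased knowledge of the practice"],
        "medium_term": ["adoption of the practice by participants"],
        "long_term":   ["improved condition the program ultimately targets"],
    },
    # Room for outcomes the program theory or the literature says could
    # occur even though the program wasn't designed for them.
    "possible_unintended": ["participants form an ongoing peer network"],
}

# An evaluation proposal then states, for each expected outcome, what
# evidence will show the change and when it will be collected.
for level, outcomes in logic_model["outcomes"].items():
    for outcome in outcomes:
        print(f"{level}: {outcome} -> specify evidence source and timing")
```

Writing the model out this plainly, even on one page, is often enough to keep a naive audience and the program team working from the same expectations.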

Hi! It’s Tuesday, again.

I was thinking–if evaluation is an everyday activity, why does it FEEL so monumental–you know–overwhelming, daunting, even aversive?

I can think of several reasons for that feeling:

  • You don’t know how.
  • You don’t want to (do evaluation).
  • You have too much else to do.
  • You don’t like to (do evaluation).
  • Evaluation  isn’t important.
  • Evaluation limits your passion for your program.

All those are good reasons. Yet, in today’s world you have to show your programs are making a difference. You have to provide evidence of impact. To do that (show impact, making a difference) you must evaluate your program.

How do you make your evaluation manageable? How do you make it an everyday activity? Here are several ways.

Utilization-Focused Evaluation

  • Set boundaries around what you evaluate.
  • Limit the questions to ones you must know. Michael Patton says only collect data you are going to use, then use it. (To read more about evaluation and use,  see Patton’s book, Utilization-Focused Evaluation).
  • Evaluate key programs, not every program you conduct.
  • Identify where your passion lies and focus your evaluation efforts there.
  • Start small. You probably won’t be able to demonstrate that your program ensured  world peace; you will be able to know that your target audience has made an important change in the desired direction.

We can talk more about the how, later. Now it is enough to know that evaluation isn’t as monumental as you thought.