Jul
30

What? So what? Now what?

Sounds like an evaluation problem.

King and Stevahn (in press) tell us the first query requires thoughtful observation of a situation; the second, a discussion of possible options and the implications of those options; and the third, the creation of a list of potential next steps.

Yet these are the key words for “adaptive action.” (If you haven’t looked at the web site, I suggest you do.) One quote that is reflective of adaptive action is, “Adaptive Action reveals how we can be proactive in managing today and influencing tomorrow” (David W. Jamieson, University of St. Thomas). Adaptive action can help you

  • Understand the sources of uncertainty in your chaotic world
  • Explore opportunities for action and their implications as they occur
  • Learn a simple process that cuts through complexity
  • Transform the work of individuals, teams, organizations and communities
  • Take on any challenge—as large as a strategic plan or small as a messy meeting
  • Take action to improve productivity, collaboration and sustainability

Evaluation is (usually) a proactive activity (oh, I know that sometimes evaluation is flying by the seat of your pants and is totally reactive). People are now recognizing that evaluation will benefit them, their programs, and their organizations, and that it isn’t personal (although that fear is still out there).

Although the site is directed toward leadership in organizations, the key questions are evaluative. You can’t determine “what” without evidence (data); you can’t determine “so what” unless you have a plan (logic model); and you can’t think about “now what” unless you have an outcome that you can move toward. These questions are evaluative in contemporary times because there are no simple problems any more. (Panarchy approaches similar situations using a similar model, the adaptive cycle.) Complex situations are facing program people and evaluators all the time. Using adaptive action may help. Panarchy may help (the book is called Panarchy, by Gunderson and Holling).

Just think of adaptive action as another model of evaluation.

my two cents

molly.

Feb
10
Filed Under (program evaluation) by Molly on 10-02-2014

Warning:  This post may contain information that is controversial.

Schools (local public schools) were closed (still are).

The University (which never closes) was closed for four days (now open).

The snow kept falling and falling and falling.  (Thank you Sandra Thiesen for the photo of the snow in Corvallis, February 2014.)

Eighteen inches.  Then freezing rain.  It is a mess (although as I write this, the sun is shining, and it is 39F and supposed to get to 45F by this afternoon).

This is a complex messy system (thank you Dave Bella).  It isn’t getting better.  This is the second snowstorm Corvallis has experienced in as many months, with increasing amounts.

It rains in the valley in Oregon; IT DOES NOT SNOW.

Another example of a complex messy system is what is happening in the UK.

These are examples of extreme events; examples of climate chaos.

Evaluating complex messy systems is not easy.  There are many parts.  If you hold one part constant, what happens to the others?  If you don’t hold one part constant, what happens to the rest of the system?  Systems thinking and systems evaluation have come of age with the 21st century; there were always people who viewed the world as a system, one part linked to another, indivisible.  Systems theory dates back to at least von Bertalanffy, who developed general systems theory and published the book of the same name in 1968 (ISBN 0-8076-0453-4).

One way to view systems is in the Wikipedia diagram “Systems thinking about the society.”

Evaluating systems is complicated and complex.

Bob Williams, along with Iraj Imam, edited the volume Systems Concepts in Evaluation (2007) and, along with Richard Hummelbrunner, wrote the volume Systems Concepts in Action: A Practitioner’s Toolkit (2010).  He is a leader in systems and evaluation.

These two books relate to my political statement at the beginning and complex messy systems.  According to Amazon, the second book “explores the application of systems ideas to investigate, evaluate, and intervene in complex and messy situations”.

If you think your program works in isolation, think again.  If you think your program doesn’t influence other programs, individuals, stakeholders, think again.  You work in a complex messy system. Because you work in a complex messy system, you might want to simplify the situation (I know I do); only you can’t.  You have to work within the system.

Might be worthwhile to get von Bertalanffy’s book; might be worthwhile to get Williams’s books; might be worthwhile to get a copy of Gunderson and Holling’s book, Panarchy: Understanding Transformations in Systems of Humans and Nature.

After all, nature is a complex messy system.

Jul
02
Filed Under (program evaluation) by Molly on 02-07-2013

This Thursday, the U.S. celebrates THE national holiday.  I am reminded of all that comprises that holiday.  No, not barbecue and parades; fireworks and leisure.  Rather, all the work that has gone on to assure that we as citizens CAN celebrate this independence day.  The founding fathers (and yes, they were old [or not so old] white men) took great risks to stand up for what they believed.  They did what I advocate: determined (through a variety of methods) the merit/worth/value of the program, and took a stand.  To me, it is a great example of evaluation as an everyday activity.  We now live under that banner of the freedoms for which they stood.

Oh, we may not agree with everything that has come down the pike over the years; some of us are quite vocal about the loss of freedoms because of events that have happened through no real fault of our own.  We just happened to be citizens of the U.S.  Could we have gotten to this place where we have the freedoms, obligations, responsibilities, and limitations without folks leading us?  I doubt it.  Anarchy is rarely, if ever, fruitful.  Because we believe in leaders (even if we don’t agree with who is leading), we have to recognize that as citizens we are interdependent; we can’t do it alone (little red hen notwithstanding).  Yes, the U.S. is known for the strength that is fostered in the individual (independence).  Yet, if we really look at what a day looks like, we depend on so many others for all that we do, see, hear, smell, feel, taste.  We need to take a moment and thank our farmer, our leaders, our children (if we have them, as they will be tomorrow’s leaders), our parents (if we are so lucky to still have parents), and our neighbors for being part of our lives.  For fostering the interdependence that makes the U.S. unique.  Evaluation is an everyday activity; when was the last time you recognized that you can’t do anything alone?

Happy Fourth of July–enjoy your blueberry pie!

Sep
28
Filed Under (Data Analysis, program evaluation) by Molly on 28-09-2012

What is the difference between need to know and nice to know?  How does this affect evaluation?  I got a post this week on a blog I follow (Kirkpatrick) that talks about how much data a trainer really needs.  (Remember that Don Kirkpatrick developed and established an evaluation model for professional training back in 1954 that still holds today.)

Most Extension faculty don’t do training programs per se, although there are training elements in Extension programs.  Extension faculty are typically looking for program impacts in their program evaluations.  Program improvement evaluations, although necessary, are not sufficient.  Yes, they provide important information to the program planner; they don’t necessarily give you information about how effective your program has been (i.e., outcome information). (You will note that I will use the term “impacts” interchangeably with “outcomes” because most Extension faculty parrot the language of reporting impacts.)

OK.  So how much data do you really need?  How do you determine what is nice to have and what is necessary (need) to have?  How do you know?

  1. Look at your logic model.  Do you have questions that reflect what you expect to have happen as a result of your program?
  2. Review your goals.  Review your stated goals, not the goals you think will happen because you “know you have a good program”.
  3. Ask yourself, How will I USE these data?  If the data will not be used to defend your program, you don’t need them.
  4. Does the question describe your target audience?  Although not demonstrating impact, knowing what your target audience looks like is important.  Reviewers of journal articles and professional presentations want to know this.
  5. Finally, ask yourself, Do I really need to know the answer to this question, or will it burden the participant?  If it is a burden, your participants will tend not to answer, and then you have a low response rate; not something you want.
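
To make the checklist concrete, here is a small sketch (the question texts and the flags are hypothetical, my own illustration rather than Kirkpatrick’s) that keeps only the questions worth asking:

```python
# Each candidate survey question is screened against the checklist above:
# does it reflect the logic model/goals, will the data actually be USED,
# does it describe the target audience, and would it burden the participant?
candidate_questions = [
    {"text": "Did you adopt any practice from the workshop?",
     "maps_to_goals": True, "will_be_used": True,
     "describes_audience": False, "burdensome": False},
    {"text": "List every food you ate last month.",
     "maps_to_goals": False, "will_be_used": False,
     "describes_audience": False, "burdensome": True},
    {"text": "What county do you farm in?",
     "maps_to_goals": False, "will_be_used": True,
     "describes_audience": True, "burdensome": False},
]

def need_to_know(q):
    # Keep a question only if its data will be used, AND it either serves
    # the stated goals or describes the audience, AND it is not a burden.
    return (q["will_be_used"]
            and (q["maps_to_goals"] or q["describes_audience"])
            and not q["burdensome"])

keepers = [q["text"] for q in candidate_questions if need_to_know(q)]
print(keepers)
```

The burdensome “nice to know” item drops out; the impact question and the audience-description question survive.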

Kirkpatrick also advises avoiding redundant questions: questions asked in a number of ways that give you the same answer, or questions written in both positive and negative forms.  The other question that I always include, because it gives me a way to determine how my program is making a difference, is a question on intention that includes a time frame.  For example, “In the next six months do you intend to try any of the skills you learned today?  If so, which one?”  Mazmanian has identified that the best predictor of behavior change (a measure of making a difference) is stated intention to change.  Telling someone else makes the participant accountable.  That seems to make the difference.
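
As an illustration (the responses and skill labels below are hypothetical, not from Kirkpatrick or Mazmanian), tallying answers to that intention question might look like this:

```python
from collections import Counter

# Hypothetical responses to: "In the next six months, do you intend to
# try any of the skills you learned today? If so, which one?"
responses = [
    ("yes", "composting"),
    ("yes", "soil testing"),
    ("no", None),
    ("yes", "composting"),
    ("no", None),
]

# Per Mazmanian et al., stated intention to change is the best predictor
# of actual behavior change, so the intention rate is worth reporting.
intenders = [skill for answer, skill in responses if answer == "yes"]
intention_rate = len(intenders) / len(responses)
skill_counts = Counter(intenders)

print(f"Stated intention rate: {intention_rate:.0%}")
print(skill_counts.most_common())
```

The per-skill counts also tell the program planner which skill participants found most worth committing to.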

 

Reference:

Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998).  Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change.  Academic Medicine, 73(8).

 

P.S.  No blog next week; away on business.


Oct
26
Filed Under (program evaluation) by Molly on 26-10-2011

I’ve long suspected I wasn’t alone in the recognition that the term impact is used inappropriately in most evaluation. 

Terry Smutlyo sings a song about impact during an outcome mapping seminar he conducted.  Terry Smutlyo is the Director of Evaluation at the International Development Research Centre, Ottawa, Canada.  He ought to know a few things about evaluation terminology.  He has two versions of this song, Impact Blues, on YouTube; his comments speak to this issue.  Check it out.

 

Just a gentle reminder to use your words carefully.  Make sure everyone knows what you mean and that everyone at the table agrees with the meaning you use.

 

This week the post is  short.  Terry says it best.

Next week I’ll be at the American Evaluation Association annual meeting in Anaheim, CA, so no post.  No Disneyland visit either…sigh

 

 

Oct
12
Filed Under (Data Analysis, Methodology, program evaluation) by Molly on 12-10-2011

A colleague asks for advice on handling evaluation stories, so that they don’t get brushed aside as mere anecdotes.  She goes on to say of the AEA365 blog she read, “I read the steps to take (hot tips), but don’t know enough about evaluation, perhaps, to understand how to apply them.”  Her question raises an interesting topic.  Much of what Extension does can be captured in stories (i.e., qualitative data) rather than in numbers (i.e., quantitative data).  Dick Krueger, former Professor and Evaluation Leader (read: specialist) at the University of Minnesota, has done a lot of work in the area of using stories as evaluation.  Today’s post summarizes his work.

 

At the outset, Dick asks the following question:  What is the value of stories?  He provides these three answers:

  1. Stories make information easier to remember.
  2. Stories make information more believable.
  3. Stories can tap into emotions.

There are all types of stories.  The type we are interested in for evaluation purposes are organizational stories.  Organizational stories can do the following things for an organization:

  1. Depict culture
  2. Promote core values
  3. Transmit and reinforce the culture
  4. Provide instruction to employees
  5. Motivate, inspire, and encourage

He suggests six common types of organizational stories:

  1. Hero stories  (someone in the organization who has done something beyond the normal range of achievement)
  2. Success stories (highlight organizational successes)
  3. Lessons learned stories (what major mistakes and triumphs teach the organization)
  4. “How it works around here” stories (highlight core organizational values reflected in actual practice)
  5. “Sacred bundle” stories (a collection of stories that together depict the culture of an organization; core philosophies)
  6. Training and orientation stories (assist new employees in understanding how the organization works)

To use stories as evaluation, the evaluator needs to consider how stories might be used; that is, do they depict how people experience the program?  Do they convey program outcomes?  Do they give insights into program processes?

You (as evaluator) need to think about how the story fits into the evaluation design (think logic model; program planning).  Ask yourself these questions:  Should you use stories alone?  Should you use stories that lead into other forms of inquiry?  Should you use stories that augment/illustrate results from other forms of inquiry?

You need to establish criteria for stories.  Rigor can be applied to stories even though the data are narrative.  Criteria include the following:  Is the story authentic–is it truthful?  Is the story verifiable–is there a trail of evidence back to the source of the story?  Is there a need to consider confidentiality?  What was the original intent–the purpose behind the original telling?  And finally, what does the story represent–other people or locations?

You will need a plan for capturing the stories.  Ask yourself these questions:  Do you need help capturing the stories?  What strategy will you use for collecting the stories?  How will you ensure documentation and record keeping?  (Sequence the questions; write them down; note the type–set-up, conversational, etc.)  You will also need a plan for analyzing and reporting the stories, as you, the evaluator, are responsible for finding meaning.
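
One way to operationalize the criteria and the record-keeping plan above (a sketch only; the field names are my own, not Krueger’s) is a simple record kept for each collected story:

```python
from dataclasses import dataclass

@dataclass
class StoryRecord:
    """Documentation for one collected evaluation story."""
    teller: str                 # source of the story (the evidence trail)
    collected_by: str           # who captured the story
    text: str                   # the story itself
    original_intent: str        # purpose behind the original telling
    authentic: bool = False     # is the story truthful?
    verifiable: bool = False    # is there a trail of evidence to the source?
    confidential: bool = False  # does the teller need protection?
    represents: str = ""        # other people or locations it speaks for

    def meets_criteria(self) -> bool:
        # To count as evidence rather than anecdote, a story should be
        # both authentic and verifiable.
        return self.authentic and self.verifiable

story = StoryRecord(
    teller="program participant",
    collected_by="evaluator",
    text="After the workshop I started keeping farm records...",
    original_intent="shared during a follow-up interview",
    authentic=True,
    verifiable=True,
)
print(story.meets_criteria())
```

Filling in a record like this at collection time is what turns a pile of anecdotes into data you can defend.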

 

Sep
22
Filed Under (program evaluation) by Molly on 22-09-2011

I was talking with a colleague about evaluation capacity building (see last week’s post) and the question was raised about thinking like an evaluator.  Got me thinking about the socialization of professions and what has to happen to build a critical mass of like-minded people.

Certainly, preparatory programs in academia, conducted by experts (people who have worked in the field a long time, or at least longer than you), start the process.  Professional development helps–you know, attending meetings where evaluators meet (like the upcoming AEA conference, U.S. regional affiliates [there are many, and they have conferences and meetings, too], and international organizations [increasing in number, which also host conferences and professional development sessions]–let me know if you want to know more about these opportunities).  Reading new and timely literature on evaluation provides insights into the language.  AND looking for the evaluative questions in everyday activities.  Questions such as:  What criteria?  What standards?  Which values?  What worth?  Which decisions?

The socialization of evaluators happens because people who are interested in being evaluators look for the evaluation questions in everything they do.  Sometimes, looking for the evaluative question is easy and second nature–like choosing a can of corn at the grocery store; sometimes it is hard and demands collaboration–like deciding on the effectiveness of an educational program.

My recommendation is to start with easy things–corn, chocolate chip cookies, wine, tomatoes; move to harder things with more variables–what to wear when and where, or whether to include one group or another.  The choices you make will all depend upon what criteria are set, what standards have been agreed upon, and what value you place on the outcome or what decision you make.

The socialization process is like a puzzle, something that takes a while to complete, something that is different for everyone, yet ultimately the same.  The socialization is not unlike evaluation…pieces fitting together–criteria, standards, values, decisions.  Asking the evaluative questions  is an ongoing fluid process…it will become second nature with practice.

Feb
23
Filed Under (Methodology) by Molly on 23-02-2011

Although I have been learning about and doing evaluation for a long time, this week I’ve been searching for a topic to talk about.  A student recently asked me about the politics of evaluation–there is a lot that can be said on that topic, which I will save for another day.  Another student asked me about when to do an impact study and how to bound that study.  Certainly a good topic, too, though one that can wait for another post.  Something I read in another blog got me thinking about today’s post.  So, today I want to talk about gathering demographics.

Last week, I mentioned in my TIMELY TOPIC post the AEA Guiding Principles.  Those Principles, along with the Program Evaluation Standards, make significant contributions in assisting evaluators in making ethical decisions.  Evaluators make ethical decisions with every evaluation.  They are guided by these professional standards of conduct.  There are five Guiding Principles and five Evaluation Standards.  And although these are not prescriptive, they go a long way toward ensuring ethical evaluations.  That is a long introduction to gathering demographics.

The guiding principle Integrity/Honesty states that “Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process.”  When we look at the entire evaluation process, as evaluators, we must strive constantly to maintain both personal and professional integrity in our decision making.  One decision we must make involves deciding what we need/want to know about our respondents.  As I’ve mentioned before, knowing what your sample looks like is important to reviewers, readers, and other stakeholders.  Yet, if we gather these data in a manner that is intrusive, are we being ethical?

Joe Heimlich, in a recent AEA365 post, says that asking demographic questions “…all carry with them ethical questions about use, need, confidentiality…”  He goes on to say that there are “…two major conditions shaping the decision to include – or to omit intentionally – questions on sexual or gender identity…”:

  1. When such data would further our understanding of the effect or the impact of a program, treatment, or event.
  2. When asking for such data would benefit the individual and/or their engagement in the evaluation process.

The first point relates to gender role issues–for example are gay men more like or more different from other gender categories?  And what gender categories did you include in your survey?  The second point relates to allowing an individual’s voice to be heard clearly and completely and have categories on our forms reflect their full participation in the evaluation.  For example, does marital status ask for domestic partnerships as well as traditional categories and are all those traditional categories necessary to hear your participants?

The next time you develop a questionnaire that includes demographic questions, take a second look at the wording–in an ethical manner.

Feb
10
Filed Under (program evaluation) by Molly on 10-02-2011

Three weeks ago, I promised you a series of posts on related topics–Program planning, Evaluation implementation, monitoring and delivering, and Evaluation utilization.  This is the third one–using the findings of evaluation.

Michael Patton’s book Utilization-Focused Evaluation is my reference.

I’ll try to condense the 400+ page book down to 500+ words for today’s post.  Fortunately, I have the Reader’s Digest version as well (look for Chapter 23 [Utilization-Focused Evaluation] in the following citation: Stufflebeam, D. L., Madaus, G. F., & Kellaghan, T. (2000). Evaluation models: Viewpoints on educational and human services evaluation (2nd ed.). Boston, MA: Kluwer Academic Publishers).  Patton’s chapter is a good summary–still, it is 14 pages.

To start, it is important to understand exactly how the word “evaluation” is used in the context of utilization.  In the Stufflebeam, Madaus, & Kellaghan publication cited above, Patton (2000, p. 426) describes evaluation as “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness and/or inform decisions about future programming.  Utilization-focused evaluation (as opposed to program evaluation in general) is evaluation done for and with specific intended primary users for specific, intended uses (emphasis added).”

There are four different types of use–instrumental, conceptual, persuasive, and process. The interest of potential stakeholders cannot be served well unless the stakeholder(s) whose interests are being served is made explicit.

To understand the types of use,  I will quote from a document titled, “Non-formal Educator Use of Evaluation Findings: Factors of Influence” by Sarah Baughman.

“Instrumental use occurs when decision makers use the findings to change or modify the program in some way (Fleisher & Christie, 2009; McCormick, 1997; Shulha & Cousins, 1997). The information gathered is used in a direct, concrete way or applied to a specific decision (McCormick, 1997).

Conceptual use occurs when the evaluation findings help the program staff or key stakeholders understand the program in a new way (Fleisher & Christie, 2009).

Persuasive use has also been called political use and is not always viewed as a positive type of use (McCormick, 1997). Examples of negative persuasive use include using evaluation results to justify or legitimize a decision that is already made or to prove to stakeholders or other administrative decision makers that the organization values accountability (Fleisher & Christie, 2009). It is sometimes considered a political use of findings with no intention to take the actual findings or the evaluation process seriously (Patton, 2008). Recently persuasive use has not been viewed as negatively as it once was.

Process use is “the cognitive, behavioral, program, and organizational changes resulting, either directly or indirectly, from engagement in the evaluation process and learning to think evaluatively” (Patton, 2008, p. 109). Process use results not from the evaluation findings but from the evaluation activities or process.”

Before beginning the evaluation, the question, “Who is the primary intended user of the evaluation?” must not only be asked; it also must be answered.  What stakeholders need to be at the table?  Those are the people who have a stake in the evaluation findings, and those stakeholders may be different for each evaluation.  They are probably the primary intended users who will determine the evaluation’s use.

Citations mentioned in the Baughman quotation include:

  • Fleischer, D. N. & Christie, C. A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30(2), 158-175.
  • McCormick, E. R. (1997). Factors influencing the use of evaluation results. Dissertation Abstracts International: Section A: The Humanities and Social Sciences, 58, 4187 (UMI 9815051).
  • Shulha, L. M. & Cousins, J. B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195-208.
  • Patton, M. Q. (2008). Utilization Focused Evaluation (4th ed.). Thousand Oaks: Sage Publications.
Jan
11
Filed Under (program evaluation) by Molly on 11-01-2011

A faculty member asked me to provide evaluation support for a grant application.  Without hesitation, I agreed.

I went to the funder’s web site to review what was expected for an evaluation plan.  What was provided was a statement about why evaluation is important.

Although I agree with what is said in that discussion, I think we have a responsibility to go further.  Here is what I know.

Extension professionals evaluate programs because there needs to be some evidence that the inputs for the program–time, money, personnel, materials, facilities, etc.–are being used advantageously and effectively.  Yet, there is more to the question “Why evaluate?” than accountability.  (Michael Patton talks about the various uses to which evaluation findings can be put–see his book on utilization-focused evaluation.)  Programs are evaluated to determine if people are satisfied, if their expectations were met, and whether the program was effective in changing something.

This is what I think.  None of what is stated above addresses the  “so what” part of “why evaluate”.  I think that answering this question (or attempting to) is a compelling reason to justify the effort of evaluating.  It is all very well and good to change people’s knowledge of a topic; it is all very well and good to change people’s behavior related to that topic; and it is all very well and good to have people intend to change (after all, stated intention to change is the best predictor of actual change).  Yet, it isn’t enough.  Being able to answer the “so what” question gives you more information.   And doing that–asking and answering the “so what” question–makes evaluation an everyday activity.   And, who knows.  It may even result in world peace.