Last week, the National Outreach Scholarship Conference was held on the Michigan State University campus.  There was an impressive array of speakers and presentations.  I had the luxury of attending Michael Quinn Patton's session on Utilization-Focused Evaluation.  And although the new edition of the book is 600+ pages, Michael distilled the essentials down.  He also announced a new book (only 400+ pages) called Essentials of Utilization-Focused Evaluation.  This volume is geared to practitioners as opposed to the classroom or the academic.


One take-away message for me was this:  "Context changes the focus of 'use.'"  So if you have a context in which reports are only for accounting purposes, the report will look very different from a context in which reports are for detailing the difference being made.  Now, this sounds very intuitive.  Like, DUH, Molly, tell me something I don't know.  Yet this is so important because you, as the evaluator, have the responsibility and the obligation to prepare stakeholders to use data in OTHER ways than as a reporting activity.  That responsibility and obligation is tied to the Program Evaluation Standards.  The Joint Committee revised the standards after soliciting feedback from multiple sources.  This third edition addresses the now five standards with numerous examples and discussion.  These standards are:

  1. Utility
  2. Feasibility
  3. Propriety
  4. Accuracy
  5. Accountability

Apparently, there was considerable discussion, as the volume was being compiled, about whether Accountability needed to be first.  Think about it, folks.  If Accountability were first, then evaluations would build on "the responsible use of resources to produce value."  Implementation, improvement, worth, and costs would drive evaluation.  By placing Utility first, evaluators have the responsibility and obligation to base judgments "…on the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs…to examine the variety of possible uses for evaluation processes, findings, and products."

This certainly validates use as defined in Utilization-Focused Evaluation.  Take Michael's workshop: the American Evaluation Association is offering it at its annual meeting in Anaheim, CA, on Wednesday, November 2.  Go to eval.org and click on Evaluation Conference.  If you can't join the workshop, read the book (either one).  It is well worth it.

Last week I spoke about thinking like an evaluator by identifying the evaluative questions that you face daily.  They are endless.  Yet doing this is hard, like any new behavior.  Remember when you first learned to ride a bicycle?  You had to practice before you got your balance.  You had to practice a lot.  The same is true for identifying the evaluative questions you face daily.

So you practice, maybe.  You try to think evaluatively.  Something happens along the way; or perhaps you don't even get to thinking about those evaluative questions.  That something that interferes with thinking or doing is resistance.  Resistance is a Freudian concept that means you directly or indirectly refuse to change your behavior.  You don't look for evaluative questions.  You don't articulate the criteria for value.  Resistance usually comes with anxiety about a new and strange situation.  A lot of folks are anxious about evaluation–they personalize the process.  And unless it is personnel evaluation, it is never about you.  It is all about the program and the participants in that program.

What is interesting (to me at least) is that there is resistance at many different levels–the evaluator, the participant, the stakeholder (which may include the other two levels as well).  Resistance may be active or passive.  Resistance may be overt or covert.  I've often viewed resistance as a 2×2 diagram.  The rows are active or passive; the columns are overt or covert.  So combining labels, resistance can be active overt, active covert, passive overt, or passive covert.  Now I know this is an artificial and socially constructed idea and may be totally erroneous.  Still, this approach helps me make sense of what I see when I go to meetings to help a content team develop their program and try to introduce (or not) evaluation in the process.  I imagine you have seen examples of these types of resistance–maybe you've even demonstrated them.  If so, then you are in good company–most people have demonstrated all of these types of resistance.

I bring up the topic of resistance now for two reasons.

1) Because I've just started a 17-month-long evaluation capacity building program with 38 participants.  Some of those participants were there because they were told to be there, and they let me know their feelings about participating–what kind of resistance could they demonstrate?  Some of those participants are there because they are curious and want to know–what kind of resistance could that be?  Some of the participants just sat there–what kind of resistance could that be?  Some of the participants did anything else while sitting in the program–what kind of resistance could that be?

2) Because I will be delivering a paper on resistance and evaluation at the annual American Evaluation Association meeting in November.  This is helping me organize my thoughts.

I would welcome your thoughts on this complex topic.

I was talking with a colleague about evaluation capacity building (see last week's post) and the question was raised about thinking like an evaluator.  That got me thinking about the socialization of professions and what has to happen to build a critical mass of like-minded people.

Certainly, preparatory programs in academia conducted by experts–people who have worked in the field a long time, or at least longer than you–start the process.  Professional development helps–you know, attending meetings where evaluators meet (like the upcoming AEA conference, U.S. regional affiliates [there are many, and they have conferences and meetings, too], and international organizations [increasing in number, which also host conferences and professional development sessions]–let me know if you want to know more about these opportunities).  Reading new and timely literature on evaluation provides insights into the language.  AND looking at the evaluative questions in everyday activities.  Questions such as:  What criteria?  What standards?  Which values?  What worth?  Which decisions?

The socialization of evaluators happens because people who are interested in being evaluators look for the evaluation questions in everything they do.  Sometimes, looking for the evaluative question is easy and second nature–like choosing a can of corn at the grocery store; sometimes it is hard and demands collaboration–like deciding on the effectiveness of an educational program.

My recommendation is to start with easy things–corn, chocolate chip cookies, wine, tomatoes; then move to harder things with more variables–what to wear when and where, or whether to include one group or another.  The choices you make will all depend upon what criteria are set, what standards have been agreed upon, and what value you place on the outcome or what decision you make.

The socialization process is like a puzzle, something that takes a while to complete, something that is different for everyone, yet ultimately the same.  The socialization is not unlike evaluation…pieces fitting together–criteria, standards, values, decisions.  Asking the evaluative questions is an ongoing, fluid process…it will become second nature with practice.

Last week, a colleague and I led two 20-person cohorts in a two-day evaluation capacity building event.  This activity was the launch (without the benefit of champagne) of a 17-month-long experience in which the participants will learn new evaluation skills and then be able to serve as resources for their colleagues in their states.  This training is the brainchild of the Extension Western Region Program Leaders group.  They believe that this approach will be economical and provide significant substantive information about evaluation to the participants.

What Jim and I did last week was work to provide, we hope, a common introduction to evaluation.  The event was not meant to disseminate the vast array of evaluation information.  We wanted everyone to have a similar starting place.  It was not a train-the-trainer event, so common in Extension.  The participants were at different places in their experience and understanding of program evaluation–some were seasoned, long-time Extension faculty, some were mid-career, some were brand new to Extension and the use of evaluation.  All were Extension faculty from western states.  And although evaluation can involve programs, policies, personnel, products, performance, processes, etc., these two days focused on program evaluation.


It occurred to me that it would be useful to talk about what evaluation capacity building (ECB) is and what resources are available to build capacity.  Perhaps the best place to start is with the Preskill and Russ-Eft book, Building Evaluation Capacity.

This volume is filled with summaries of evaluation points, with activities to reinforce those points.  Although it is a comprehensive resource, it covers key points briefly, and there are other resources that are valuable for understanding the field of capacity building.  For example, Don Compton and his colleagues Michael Baizerman and Stacey Stockdill edited a New Directions for Evaluation volume (No. 93) that addresses the art, craft, and science of ECB.  ECB is often viewed as a context-dependent system of processes and practices that help instill quality evaluation skills in an organization and its members.  The long-term outcome of any ECB effort is the ability to conduct a rigorous evaluation as part of routine practice.  That is our long-term goal–conducting rigorous evaluations as a part of routine practice.


Although the list is not exhaustive, below are some ECB resources and some general evaluation resources (some of my favorites, to be sure).


ECB resources:

Preskill, H., & Russ-Eft, D. (2005).  Building evaluation capacity.  Thousand Oaks, CA: Sage.

Compton, D. W., Baizerman, M., & Stockdill, S. H. (Eds.).  (2002).  The art, craft, and science of evaluation capacity building.  New Directions for Evaluation, No. 93.  San Francisco: Jossey-Bass.

Preskill, H., & Boyle, S. (2008).  A multidisciplinary model of evaluation capacity building.  American Journal of Evaluation, 29(4), 443-459.

General evaluation resources:

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011).  Program evaluation: Alternative approaches and practical guidelines (4th ed.).  Boston, MA: Pearson.

Scriven, M. (1991).  Evaluation thesaurus (4th ed.).  Newbury Park, CA: Sage.

Patton, M. Q. (2008).  Utilization-focused evaluation (4th ed.).  Thousand Oaks, CA: Sage.

Patton, M. Q. (2012).  Essentials of utilization-focused evaluation.  Thousand Oaks, CA: Sage.

A colleague asked me what I considered an output in a statewide program we were discussing.  This is a really good example of assumptions and how they can blindside an individual–in this case, me.  Once I (figuratively) picked myself up, I proceeded to explain how this terminology applied to the program under discussion.  Once the meeting concluded, I realized that perhaps a bit of a refresher was in order.  Even the most seasoned evaluators can benefit from a reminder every so often.


So OK–inputs, outputs, outcomes.

As I've mentioned before, Ellen Taylor-Powell, former UWEX evaluation specialist, has a marvelous tutorial on logic modeling.  I recommend you go there for your own refresher.  What I offer you here is a (very) brief overview of these terms.

Logic models, whether linear or circular, are composed of various focus points.  Those focus points include (in addition to those mentioned in the title of this post) the situation, assumptions, and external factors.  Simply put, the situation is what is going on–the priorities, the needs, the problems that led to the program you are conducting–that is, program with a small p (we can talk about sub and supra models later).

Inputs are those resources you need to conduct the program.  Typically, they are lumped into personnel, time, money, venue, and equipment.  Personnel covers staff, volunteers, partners, any stakeholder.  Time is not just your time–it is also the time needed for implementation, evaluation, analysis, and reporting.  Money speaks for itself.  Venue is where the program will be held.  Equipment is what stuff you will need–technology, materials, gear, etc.

Outputs are often classified into two parts–first, the participants (or target audience), and second, the activities that are conducted.  Typically (although not always), those activities are counted and are called bean counts.  In the example that started this post, we would be counting the number of students who graduated from high school; the number of students who matriculated to college (either 2 or 4 year); the number of students who transferred from 2-year to 4-year colleges; the number of students who completed college in 2 or 4 years; etc.  This bean count could also be the number of classes offered; the number of brochures distributed; the number of participants in the class; the number of (fill in the blank).  Outputs are necessary but not sufficient to determine whether a program is effective.  The field of evaluation started with determining bean counts and satisfactions.

Outcomes can be categorized as short-term, medium/intermediate-term, or long-term.  Long-term outcomes are often called impacts.  (There are those in the field who would classify impacts as something separate from an outcome–a discussion for another day.)  Whatever you choose to call the effects of your program, be consistent–don't use the terms interchangeably; it confuses the reader.  What you are looking for as an outcome is change–in learning, in behavior, in conditions.  This change is measured in the target audience–individuals, groups, communities, etc.
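
If it helps to see how these pieces hang together, here is a minimal sketch that lays out a logic model as a simple data structure.  Everything in it (the field names, the college-going example, the entries) is hypothetical and purely illustrative; it is not drawn from any actual Extension program.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LogicModel:
    """Bare-bones logic model: situation, inputs, outputs, outcomes.
    Assumptions and external factors are left out, just as in the post."""
    situation: str                   # the needs/problems that led to the program
    inputs: List[str]                # personnel, time, money, venue, equipment
    output_participants: List[str]   # who is reached (target audience)
    output_activities: List[str]     # what is delivered/counted ("bean counts")
    short_term_outcomes: List[str]   # change in learning
    medium_term_outcomes: List[str]  # change in behavior
    long_term_outcomes: List[str]    # change in conditions (often called impacts)

# Hypothetical example, loosely echoing the college-going program in this post.
example = LogicModel(
    situation="Low college matriculation rates among local high school students",
    inputs=["staff and volunteers", "time for delivery and evaluation",
            "grant funding", "classroom venue", "course materials"],
    output_participants=["high school students", "their families"],
    output_activities=["number of classes offered",
                       "number of students who graduated from high school",
                       "number who matriculated to 2- or 4-year colleges"],
    short_term_outcomes=["increased knowledge of application and financial aid steps"],
    medium_term_outcomes=["students complete and submit college applications"],
    long_term_outcomes=["higher college completion rates in the community"],
)

print(example.situation)
```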

I'll talk about assumptions and external factors another day.  Have a wonderful holiday weekend…the last vestiges of summer–think tomatoes, corn-on-the-cob, state fair, and a tall cool drink.


I started this post the third week in July.  Technical difficulties prevented me from completing the post.  Hopefully, those difficulties are now in the past.

A colleague asked me what we can do when we can't measure actual behavior change in our evaluations.  Most evaluations can capture knowledge change (short-term outcomes); some evaluations can capture behavior change (intermediate or medium-term outcomes); very few can capture condition change (long-term outcomes, often called impacts–though not by me).  I thought about that.  Intention to change behavior can be measured.  Confidence (self-efficacy) to change behavior can be measured.  For me, all evaluations need to address those two points.

Paul Mazmanian, Associate Dean for Continuing Professional Development and Evaluation Studies at Virginia Commonwealth University, has studied changing practice patterns for several years.  One study, conducted in 1998, reported that “…physicians in both study and control groups were significantly more likely to change (47% vs. 7% p< .001) if they indicated intent to change immediately following the lecture” (Academic Medicine. 1998; 73:882-886).   Mazmanian and his co-authors say in their conclusions that “successful change in practice may depend less on clinical and barriers information than on other factors that influence physicians’ performance.  To further develop the commitment-to-change strategy in measuring effects of planned change, it is important to isolate and learn the powers of individual components of the strategy as well as their collective influence on physicians’ clinical behavior.”


What are the implications for Extension and other complex organizations?  It makes sense to extrapolate from this information from the continuing medical education literature.  Physicians are adults; most of Extension's audience is adults.  If behavior change is highly predictable from stated intention to change "immediately following the lecture" (i.e., the continuing education program), then soliciting stated intention to change from participants in Extension programs immediately following program delivery would increase the likelihood of behavior change.  One of the outcomes Extension wants to see is change in behavior (a medium-term outcome).  Measuring those behavior changes directly (through observation or some other method) is often outside the resources available.  Measuring those intended behavior changes is within the scope of Extension resources.  Using a time frame (such as 6 months) helps bound the anticipated behavior change.  In addition, intention to change can be coupled with confidence to implement the behavior change to provide the evaluator with information about the effect of the program.  The desired effect is high confidence to change and willingness to implement the change within the specified time frame.  If Extension professionals find that result, then it would be safe to say that the program is successful.
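
As a rough illustration of how that could be tallied, here is a minimal sketch that summarizes hypothetical end-of-program responses, where each participant rates intention to change and confidence to change on a 1-to-5 scale and names a time frame.  The items, the scale, and the "4 or higher" threshold are assumptions made for illustration only; they do not come from the Mazmanian studies or from any Extension instrument.

```python
# Minimal sketch: summarizing stated intention and confidence to change
# from a hypothetical end-of-program survey. The items, the 1-5 scale, and
# the "high" threshold of 4+ are illustrative assumptions, not a standard.

from dataclasses import dataclass
from typing import List

@dataclass
class Response:
    intention: int         # 1 (no intention to change) to 5 (definitely will change)
    confidence: int        # 1 (not at all confident) to 5 (very confident)
    timeframe_months: int  # self-chosen window for making the change

def summarize(responses: List[Response], threshold: int = 4, window: int = 6) -> dict:
    """Share of participants reporting high intention AND high confidence
    to change within the specified time frame (e.g., 6 months)."""
    eligible = [r for r in responses if r.timeframe_months <= window]
    high = [r for r in eligible
            if r.intention >= threshold and r.confidence >= threshold]
    return {
        "n": len(responses),
        "within_window": len(eligible),
        "high_intention_and_confidence": len(high),
        "proportion": len(high) / len(responses) if responses else 0.0,
    }

# Hypothetical data from three participants
sample = [Response(5, 4, 6), Response(3, 2, 12), Response(4, 5, 3)]
print(summarize(sample))  # 2 of 3 report high intention and confidence within 6 months
```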

REFERENCES

1.  Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P.  (1998).  Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change.  Academic Medicine, 73(8), 882-886.

2.  Mazmanian, P. E., & Mazmanian, P. M.  (1999).  Commitment to change: Theoretical foundations, methods, and outcomes.  The Journal of Continuing Education in the Health Professions, 19, 200-207.

3.  Mazmanian, P. E., Johnson, R. E., Zhang, A., Boothby, J., & Yeatts, E. J.  (2001).  Effects of a signature on rates of change: A randomized controlled trial involving continuing medical education and the commitment-to-change model.  Academic Medicine, 76(6), 642-646.


Hopefully, the technical difficulties with images are no longer a problem and I will be able to post the answers to the history quiz as well as the post I had hoped to publish last week.  So, as promised, here are the answers to the quiz I posted the week of July 5.  The keyed responses are in BOLD.

1. Michael Quinn Patton, author of Utilization-Focused Evaluation, the new book Developmental Evaluation, and the classic Qualitative Evaluation and Research Methods.

2. Michael Scriven is best known for his concept of formative and summative evaluation.  He has also advocated that evaluation is a transdiscipline.  He is the author of the Evaluation Thesaurus.

3. Hallie Preskill is the co-author (with Darlene Russ-Eft) of Evaluation Capacity Building.

4. Robert E. Stake has advanced work in case study and is the author of the books Multiple Case Study Analysis and The Art of Case Study Research.

5. David M. Fetterman is best known for his advocacy of empowerment evaluation and the book of that name, Foundations of Empowerment Evaluation.

6. Daniel Stufflebeam developed the CIPP (context, input, process, product) model, which is discussed in the book Evaluation Models.

7. James W. Altschuld is the go-to person for needs assessment.  He is the editor of the Needs Assessment Kit (or everything you wanted to know about needs assessment and didn't know where to find the answer).  He is also the co-author, with Belle Ruth Witkin, of two needs assessment books.

8. Jennifer C. Greene is the current President of the American Evaluation Association and the author of a book on mixed methods.

9. Ernest R. House is a leader in the work of evaluation policy and is the author of an evaluation novel, Regression to the Mean.

10. Lee J. Cronbach is a pioneer in education evaluation and the reform of that practice.  He co-authored with several associates the book Toward Reform of Program Evaluation.

11. Ellen Taylor-Powell, the former evaluation specialist at the University of Wisconsin-Extension, is credited with developing the logic model later adopted by the USDA for use by the Extension Service.  To go to the UWEX site, click on the words "logic model".

12. Yvonna Lincoln, with her husband Egon Guba (see below), co-authored the book Naturalistic Inquiry.  She is currently the co-editor (with Norman K. Denzin) of the Handbook of Qualitative Research.

13. Egon Guba, with his wife Yvonna Lincoln, is the co-author of Fourth Generation Evaluation.

14. Blaine Worthen has championed certification for evaluators.  He, with Jody L. Fitzpatrick and James R. Sanders, co-authored Program Evaluation: Alternative Approaches and Practical Guidelines.

15.  Thomas A. Schwandt, a philosopher at heart who started as an auditor, has written extensively on evaluation ethics. He is also the co-author (with Edward S. Halpern) of Linking Auditing and Metaevaluation.

16. Peter H. Rossi, co-author (with Howard E. Freeman and Mark W. Lipsey) of Evaluation: A Systematic Approach, is a pioneer in evaluation research.

17. W. James Popham, a leader in educational evaluation, authored the volume Educational Evaluation.

18. Jason Millman was a pioneer of teacher evaluation and author of the Handbook of Teacher Evaluation.

19. William R. Shadish co-authored (with Laura C. Leviton and Thomas Cook) Foundations of Program Evaluation: Theories of Practice.  His work in theories of evaluation practice earned him the Paul F. Lazarsfeld Award for Evaluation Theory from the American Evaluation Association in 1994.

20. Laura C. Leviton (co-author, with Will Shadish and Tom Cook, of Foundations of Program Evaluation: Theories of Practice–see above) has pioneered work in participatory evaluation.


Although I've listed only 20 leaders, movers and shakers, in the evaluation field, there are others who also deserve mention:  John Owen, Deb Rog, Mark Lipsey, Mel Mark, Jonathan Morell, Midge Smith, Lois-Ellin Datta, Patricia Rogers, Sue Funnell, Jean King, Laurie Stevahn, John McLaughlin, Michael Morris, Nick Smith, Don Dillman, and Karen Kirkhart, among others.

If you want to meet the movers and shakers, I suggest you attend the American Evaluation Association annual meeting.  In 2011, it will be held in Anaheim, CA, November 2-5; professional development sessions are being offered October 31, November 1 and 2, and also November 6.  More conference information can be found here.


For the last three weeks, since I posted the history matching game, I've not been able to post with images.  Every time I go to save the draft, the post vanishes.  I'm working with the IT folks.  They haven't given me any alternatives.  I am posting this today without images to let you know that I am still here, that I still have thoughts, and that I will post something of substance again soon.  Please be patient.  Thank you.

Those of you who read this blog know a little about evaluation.  Perhaps you’d like to know more?  Perhaps not…

I think it would be valuable to know who was instrumental in developing the profession to the point it is today; hence, a little history.  This will be fun even for those of you who don't like history.  It will be a matching game.  Some of these folks have been mentioned in previous posts.  I'll post the keyed responses next week.

Directions:  Match the name with the evaluation contribution.  I've included photos so you know who is who and can put a face with a name and a contribution.

[Photos of the 20 evaluators, numbered 1-20, appeared here.]


A.  Michael Scriven                1.  Empowerment Evaluation

B.  Michael Quinn Patton     2.  Mixed Methods

C.  Blaine Worthen                 3.  Naturalistic Inquiry

D.  David Fetterman              4.  CIPP

E.  Thomas Schwandt            5. Formative/Summative

F.  Jennifer Greene                  6. Needs Assessment

G.  James W. Altschuld          7.  Developmental Evaluation

H.  Ernie House                          8.  Case study

I.   Yvonna Lincoln                    9.  Fourth Generation Evaluation

J.  Egon Guba                            10. Evaluation Capacity Building

K.  Lee J. Cronbach                   11.  Evaluation Research

L.  W. James Popham               12.  Teacher Evaluation

M.  Peter H. Rossi                       13.  Logic Models

N.  Hallie Preskill                       14.  Educational Evaluation

O.  Ellen Taylor-Powell            15.  Foundations of Program Evaluation

P.  Robert Stake                           16. Toward Reform of Program Evaluation

Q.  Dan Stufflebeam                  17. Participatory Evaluation

R.  Jason Millman                      18. Evaluation and Policy

S.  Will Shadish                           19. Evaluation and epistemology

T.  Laura Leviton                        20. Evaluation Certification


There are others, more recent, who have made contributions.  This list represents the folks who did seminal work that built the profession; it also includes some more recent thinkers.  Have fun.

Independence is an evaluative question.

Think about it while you enjoy the holiday.

Were the folks who fought the Revolutionary War truly revolutionaries?  Or were they terrorists?

Was King George a despot or just a micromanager?

My favorite is this:  Was the War Between the States the last battle of the War of/for Independence?

I'm sure there are other evaluative questions.  Got a question that is evaluative?  Let me know.