Last week I spoke about thinking like an evaluator by identifying the evaluative questions that you face daily.  They are endless… Yet doing this is hard, like any new behavior.  Remember when you first learned to ride a bicycle?  You had to practice before you got your balance.  You had to practice a lot.  The same is true for identifying the evaluative questions you face daily.

So you practice, maybe.  You try to think evaluatively.  Something happens along the way; or perhaps you don’t even get to thinking about those evaluative questions.  That something that interferes with thinking or doing is resistance.  Resistance is a Freudian concept meaning that you directly or indirectly refuse to change your behavior.  You don’t look for evaluative questions.  You don’t articulate the criteria for value.  Resistance usually occurs with anxiety about a new and strange situation.  A lot of folks are anxious about evaluation–they personalize the process.  And unless it is personnel evaluation, it is never about you.  It is all about the program and the participants in that program.

What is interesting (to me at least) is that there is resistance at many different levels–the evaluator, the participant, the stakeholder (which may include the other two levels as well).  Resistance may be active or passive.  Resistance may be overt or covert.  I’ve often viewed resistance as a 2×2 matrix: the rows are active and passive; the columns are overt and covert.  So, combining labels, resistance can be active overt, active covert, passive overt, or passive covert.  Now I know this is an artificial and socially constructed idea and may be totally erroneous.  Still, this approach helps me make sense of what I see when I go to meetings to help a content team develop their program and try to introduce (or not) evaluation in the process.  I imagine you have seen examples of these types of resistance–maybe you’ve even demonstrated them.  If so, then you are in good company–most people have demonstrated all of these types of resistance.

I bring up the topic of resistance now for two reasons.

1) Because I’ve just started a 17-month-long evaluation capacity building program with 38 participants.  Some of those participants were there because they were told to be there, and they let me know their feelings about participating–what kind of resistance could they demonstrate?  Some of the participants were there because they are curious and want to know–what kind of resistance could that be?  Some of the participants just sat there–what kind of resistance could that be?  Some of the participants did anything else while sitting in the program–what kind of resistance could that be? and

2) Because I will be delivering a paper on resistance and evaluation at the annual American Evaluation Association meeting in November.  This is helping me organize my thoughts.

I would welcome your thoughts on this complex topic.

Hopefully, the technical difficulties with images are no longer a problem, and I will be able to post the answers to the history quiz as well as the post I had hoped to share last week.  So, as promised, here are the answers to the quiz I posted the week of July 5.  The keyed responses are in BOLD.

1. Michael Quinn Patton, author of Utilization-Focused Evaluation, the new book Developmental Evaluation, and the classic Qualitative Evaluation and Research Methods.

2. Michael Scriven is best known for his concept of formative and summative evaluation.  He has also advocated that evaluation is a transdiscipline.  He is the author of the Evaluation Thesaurus.

3. Hallie Preskill is the co-author (with Darlene Russ-Eft) of Evaluation Capacity Building.

4. Robert E. Stake has advanced work in case study and is the author of the books Multiple Case Study Analysis and The Art of Case Study Research.

5. David M. Fetterman is best known for his advocacy of empowerment evaluation and the book of that name, Foundations of Empowerment Evaluation.

6. Daniel Stufflebeam developed the CIPP (context, input, process, product) model, which is discussed in the book Evaluation Models.

7. James W. Altschuld is the go-to person for needs assessment.  He is the editor of the Needs Assessment Kit (or everything you wanted to know about needs assessment and didn’t know where to find the answer).  He is also the co-author, with Belle Ruth Witkin, of two needs assessment books.

8. Jennifer C. Greene, the current President of the American Evaluation Association and the author of a book on mixed methods.

9. Ernest R. House is a leader in the work of evaluation policy and is the author of an evaluation novel, Regression to the Mean.

10. Lee J. Cronbach is a pioneer in education evaluation and the reform of that practice.  He co-authored with several associates the book, Toward Reform of Program Evaluation .

11. Ellen Taylor-Powell is the former Evaluation Specialist at the University of Wisconsin Extension Service and is credited with developing the logic model later adopted by the USDA for use by the Extension Service.  To go to the UWEX site, click on the words “logic model.”

12. Yvonna Lincoln, with her husband Egon Guba (see below), co-authored the book Naturalistic Inquiry.  She is currently the co-editor (with Norman K. Denzin) of the Handbook of Qualitative Research.

13. Egon Guba, with his wife Yvonna Lincoln, is the co-author of Fourth Generation Evaluation.

14. Blaine Worthen has championed certification for evaluators.  With Jody L. Fitzpatrick and James R. Sanders, he co-authored Program Evaluation: Alternative Approaches and Practical Guidelines.

15.  Thomas A. Schwandt, a philosopher at heart who started as an auditor, has written extensively on evaluation ethics. He is also the co-author (with Edward S. Halpern) of Linking Auditing and Metaevaluation.

16. Peter H. Rossi, co-author (with Howard E. Freeman and Mark W. Lipsey) of Evaluation: A Systematic Approach, is a pioneer in evaluation research.

17. W. James Popham, a leader in educational evaluation, authored the volume Educational Evaluation.

18. Jason Millman was a pioneer of teacher evaluation and the author of the Handbook of Teacher Evaluation.

19. William R. Shadish co-authored (with Thomas Cook and Laura C. Leviton) Foundations of Program Evaluation: Theories of Practice.  His work in theories of evaluation practice earned him the Paul F. Lazarsfeld Award for Evaluation Theory from the American Evaluation Association in 1994.

20. Laura C. Leviton, co-author (with Will Shadish and Tom Cook–see above) of Foundations of Program Evaluation: Theories of Practice, has pioneered work in participatory evaluation.

Although I’ve only listed 20 leaders, movers, and shakers in the evaluation field, there are others who also deserve mention: John Owen, Deb Rog, Mark Lipsey, Mel Mark, Jonathan Morell, Midge Smith, Lois-Ellin Datta, Patricia Rogers, Sue Funnell, Jean King, Laurie Stevahn, John McLaughlin, Michael Morris, Nick Smith, Don Dillman, and Karen Kirkhart, among others.

If you want to meet the movers and shakers, I suggest you attend the American Evaluation Association annual meeting.  In 2011, it will be held in Anaheim CA, November 2 – 5; professional development sessions are being offered October 31, November 1 and 2, and also November 6.  More conference information can be found here.

We recently held Professional Development Days for the Division of Outreach and Engagement.  This is an annual opportunity for faculty and staff in the Division to build capacity in a variety of topics.  The question this training posed was evaluative:

How do we provide meaningful feedback?

Evaluating a conference or a multi-day, multi-session training is no easy task.  Gathering meaningful data is a challenge.  What can you do?  Before you hold the conference (I’m using the word conference to mean any multi-day, multi-session training), decide on the following:

  • Are you going to evaluate the conference?
  • What is the focus of the evaluation?
  • How are you going to use the results?

The answer to the first question is easy: YES.  If the conference is an annual event (or a regular event), you will want participants’ feedback on their experience, so yes, you will evaluate the conference.  Look at Penn State Tip Sheet 16 for some suggestions.  (If this is a one-time event, you may not; though as an evaluator, I wouldn’t recommend ignoring evaluation.)

The second question is more critical.  I’ve mentioned in previous blogs the need to prioritize your evaluation.  Evaluating a conference can be all consuming and result in useless data UNLESS the evaluation is FOCUSED.  Sit down with the planners and ask them what they expect to happen as a result of the conference.  Ask them if there is one particular aspect of the conference that is new this year.  Ask them if feedback in previous years has given them any ideas about what is important to evaluate this year.

This year, the planners wanted to provide specific feedback to the instructors, who had asked for feedback in previous years.  This is problematic if evaluative activities for individual sessions are not planned before the conference.  Nancy Ellen Kiernan, a colleague at Penn State, suggests a qualitative approach called a Listening Post, which elicits feedback from participants at the time of the conference.  The method involves volunteers who attended the sessions and may require more people than a survey.  To use the Listening Post, you must plan ahead of time to gather these data.  Otherwise, you will need to do a survey after the conference is over, and that raises other problems.

The third question is also very important.  If the results are just given to the supervisor, the likelihood of their being used by individuals for session improvement or by organizers for overall change is slim.  Making the data usable for instructors means summarizing the data in a meaningful way, often visually.  There are several ways to present survey data visually, including graphs, tables, and charts (more on that another time).  Words often get lost, especially if words dominate the report.
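If the session ratings end up in a spreadsheet, a few lines of code can do the summarizing.  Below is a minimal sketch in Python (using pandas and matplotlib), assuming a hypothetical export named session_feedback.csv with columns “session” and “rating” on a 1-5 scale; substitute whatever your survey tool actually produces.

```python
# Minimal sketch: average rating per session as a horizontal bar chart.
# Assumes a hypothetical CSV with "session" and "rating" (1-5) columns;
# adjust the file and column names to match your own survey export.
import pandas as pd
import matplotlib.pyplot as plt

responses = pd.read_csv("session_feedback.csv")

# Mean rating and response count for each session, lowest-rated first.
summary = (responses.groupby("session")["rating"]
           .agg(["mean", "count"])
           .sort_values("mean"))

ax = summary["mean"].plot(kind="barh", xlim=(1, 5), legend=False)
ax.set_xlabel("Mean rating (1 = poor, 5 = excellent)")
ax.set_ylabel("")
ax.set_title("Participant ratings by session")

# Note how many responses each mean is based on.
for i, (mean, count) in enumerate(summary.itertuples(index=False)):
    ax.annotate(f"n={count}", (mean, i), xytext=(3, 0),
                textcoords="offset points", va="center")

plt.tight_layout()
plt.savefig("session_ratings.png", dpi=150)
```

A sorted bar chart with the number of responses marked on each bar gives instructors a quick read on where their session stands, without wading through pages of numbers.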

There is a lot of information in the training and development literature that might also be helpful.  Kirkpatrick has done a lot of work in this area; I’ve mentioned that work in previous blogs.

There is no one best way to gather feedback from conference participants.  My advice:  KISS–keep it simple and straightforward.

Three weeks ago, I promised you a series of posts on related topics: program planning; evaluation implementation, monitoring, and delivery; and evaluation utilization.  This is the third one–using the findings of evaluation.

Michael Patton’s book Utilization-Focused Evaluation is my reference.

I’ll try to condense the 400+ page book down to 500+ words for today’s post.  Fortunately, I have the Reader’s Digest version as well (look for Chapter 23 [Utilization-Focused Evaluation] in the following citation: Stufflebeam, D. L., Madaus, G. F., & Kellaghan, T. (2000). Evaluation models: Viewpoints on educational and human services evaluation (2nd ed.). Boston, MA: Kluwer Academic Publishers).  Patton’s chapter is a good summary–still, it is 14 pages.

To start, it is important to understand exactly how the word “evaluation” is used in the context of utilization.  In the Stufflebeam, Madaus, & Kellaghan publication cited above, Patton (2000, p. 426) describes evaluation as “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness and/or inform decisions about future programming.  Utilization-focused evaluation (as opposed to program evaluation in general) is evaluation done for and with specific intended primary users for specific, intended uses” (emphasis added).

There are four different types of use–instrumental, conceptual, persuasive, and process.  The interests of potential stakeholders cannot be served well unless the stakeholders whose interests are being served are made explicit.

To understand the types of use, I will quote from a document titled “Non-formal Educator Use of Evaluation Findings: Factors of Influence” by Sarah Baughman.

“Instrumental use occurs when decision makers use the findings to change or modify the program in some way (Fleisher & Christie, 2009; McCormick, 1997; Shulha & Cousins, 1997). The information gathered is used in a direct, concrete way or applied to a specific decision (McCormick, 1997).

Conceptual use occurs when the evaluation findings help the program staff or key stakeholders understand the program in a new way (Fleisher & Christie, 2009).

Persuasive use has also been called political use and is not always viewed as a positive type of use (McCormick, 1997). Examples of negative persuasive use include using evaluation results to justify or legitimize a decision that is already made or to prove to stakeholders or other administrative decision makers that the organization values accountability (Fleisher & Christie, 2009). It is sometimes considered a political use of findings with no intention to take the actual findings or the evaluation process seriously (Patton, 2008). Recently persuasive use has not been viewed as negatively as it once was.

Process use is “the cognitive, behavioral, program, and organizational changes resulting, either directly or indirectly, from engagement in the evaluation process and learning to think evaluatively” (Patton, 2008, p. 109). Process use results not from the evaluation findings but from the evaluation activities or process.”

Before beginning the evaluation, the question “Who is the primary intended user of the evaluation?” must not only be asked; it must also be answered.  What stakeholders need to be at the table?  Those are the people who have a stake in the evaluation findings, and those stakeholders may be different for each evaluation.  They are probably the primary intended users who will determine the evaluation’s use.

Citations mentioned in the Baughman quotation include:

  • Fleischer, D. N., & Christie, C. A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30(2), 158-175.
  • McCormick, E. R. (1997). Factors influencing the use of evaluation results. Dissertation Abstracts International: Section A: The Humanities and Social Sciences, 58, 4187 (UMI 9815051).
  • Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195-208.
  • Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage Publications.

My older daughter (I have two: Morgan, the older, and Mersedes, the younger) suggested I talk about the evaluative activities around the holidays…hmmm.

Since I’m experiencing serious writer’s block this week, I thought I’d revisit evaluation as an everyday activity, with a holiday twist.

Keep in mind that the root of evaluation, which comes to us from the French after the Latin, is value (the Oxford English Dictionary online lists the etymology as [a. Fr. évaluation, f. évaluer, f. é- = es- (: L. ex) out + value VALUE]).


Perhaps this is a good time to mention that the theme for Evaluation 2011 put forth by incoming AEA President, Jennifer Greene, is Values and Valuing in Evaluation.  I want to quote from her invitation letter, “…evaluation is inherently imbued with values.  Our work as evaluators intrinsically involves the process of valuing, as our charge is to make judgments (emphasis original) about the “goodness” or the quality, merit, or worth of a program.”

Let us consider the holidays “a program”. The Winter Holiday season starts (at least in the US and the northern hemisphere) with the  Thanksgiving holiday followed shortly thereafter by the first Sunday in Advent.  Typically this period of time includes at least the  following holidays:  St. Nicholas Day, Hanukkah, Winter Solstice, Christmas, Kwanzaa, Boxing Day, New Year’s, and Epiphany (I’m sure there are ones I didn’t list that are relevant).  This list typically takes us through January 6.  (I’m getting to the value part–stay with me…)

When I was a child, I remember the eager expectation of anticipating Christmas–none of the other holidays were even on my radar screen.  (For those of you who know me, you know how long ago that was…)  Then, with great expectation (thank you, Charles), I would go to bed and, as patiently as possible, await the moment when my father would turn on the tree lights, signaling that we children could descend to the living room.  Then poof!  That was Christmas.  In 10 minutes it was done.  The emotional bath I always took greatly diminished the value of this all-important holiday.

Vowing that my children would grow up without the emotional bath of great expectations and dashed hopes, I chose to celebrate the season.  In doing so, I found value in the waiting of Advent, the magic of Hanukkah, the sharing of Kwanzaa, the mystery of Christmas, and the traditions that come with all of these holidays.  There are other traditions that we revisit yearly, and we find delight in remembering what the Winter Holiday traditions are and mean, remembering the foods we eat and the times we’ve shared.  From all this we find value in our program.  Do I still experience the emotional bath of childhood during this Holiday Season?  Not any more–and my children tell me that they like spreading the holidays out over the six-week period.

I think this is the time of the year when we can take a second look at our programs (whether they are the holidays, youth development, watershed stewardship, nutrition education, or something else) and look for value in our programs–the part of the program that matters.  Evaluation is the work of capturing that value.  How we do that is what evaluation is all about.

I’ve been writing for almost a year, 50-some columns.  This week, before the Thanksgiving holiday, I want to share evaluation resources I’ve found useful and for which I am thankful.  There are probably others with which I am not familiar; these are the ones I know.

My colleagues Ellen Taylor-Powell, at UWEX, the University of Wisconsin Extension Service, and Nancy Ellen Kiernan, at the Penn State Extension Service, both have resources that are very useful, easily accessed, and clearly written.  Ellen’s can be found at the Quick Tips site and Nancy Ellen’s at her Tipsheets index.  Both Nancy Ellen and Ellen have other links that may be useful as well.  Access their sites through the links above.

Last week, I mentioned the American Evaluation Association.  One of the important structures in AEA is the Topical Interest Groups (or TIGs).  Extension has a TIG, Extension Education Evaluation, which helps organize Extension professionals who are interested or involved in evaluation.  There is a wealth of information on the AEA web site: information about the evaluation profession, access to the AEA eLibrary, and links to AEA on Facebook, Twitter, and LinkedIn.  You do NOT have to be a member to subscribe to the blog AEA365, which, as the name suggests, is posted daily by different evaluators.  Susan Kistler, AEA’s executive director, posts every Saturday.  The November 20 post talks about the eLibrary–check it out.

Many states and regions have local AEA affiliates.  For example, OPEN, the Oregon Program Evaluators Network, serves southern Washington and Oregon.  It has an all-volunteer staff who live mostly in Portland and Vancouver, WA.  The AEA site lists over 20 affiliates across the country, many with their own websites.  Those websites have information about connecting with local evaluators.

In addition to these valuable resources, National eXtension (say e-eXtension) has developed a community of practice devoted to evaluation; Mike Lambur, eXtension Evaluation and Research Leader, can be reached at mike.lambur@extension.org.  According to the web site, National eXtension “…is an interactive learning environment delivering the best, most researched knowledge from the smartest land-grant university minds across America. eXtension connects knowledge consumers with knowledge providers—experts like you who know their subject matter inside out.”

Happy Thanksgiving.  Be safe.

Recently, I attended the American Evaluation Association (AEA) annual conference in San Antonio, TX.  And although this is a stock photo, the weather (until Sunday) was much as it appears in the photo.  The Alamo was crowded–curious adults, tired children, friendly dogs, etc.  What I learned was that San Antonio is the only site in the US where there are five Spanish missions within 10 miles of each other.  Starting with the Alamo (the formal name is San Antonio de Valero) and going south out of San Antonio, the visitor will experience Missions Concepcion, San Juan, San Jose, and Espada, all of which will, at some point in the future, be on the Mission River Walk (as opposed to the Museum River Walk).  The missions (except the Alamo) are National Historic Sites.  For those of you who have the National Park Service Passport, site stamps are available.

AEA is the professional home for evaluators.  The AEA has approximately 6,000 members, and about 2,500 of them attended the conference, called Evaluation 2010.  This year’s president, Leslie Cooksy, identified “Evaluation Quality” as the theme for the conference.  Leslie says in her welcome letter, “Evaluation quality is an umbrella theme, with room underneath for all kinds of ideas–quality from the perspective of different evaluation approaches, the role of certification in quality assurance, metaevaluation and the standards used to judge quality…”  Listening to the plenary sessions, attending the concurrent sessions, and networking with longtime colleagues, I got to hear many different perspectives on quality.

In the closing plenary, Hallie Preskill, 2007 AEA president, was asked to comment on the themes she heard throughout the conference.  She used mind mapping (a systems tool) to quickly and (I think) effectively organize the value of AEA.  She listed seven main themes:

  1. Truth
  2. Perspectives
  3. Context
  4. Design and methods
  5. Representation
  6. Intersections
  7. Relationships

Although she lists context as a separate theme, I wonder if evaluation quality is really contextual first and then these other things.

Hallie listed subthemes under each of these topics:

  1. What is (truth)?  Whose (truth)?  How much data is enough?
  2. Whose (perspectives)?  Cultural (perspectives).
  3. Cultural (context). Location (context).  Systems (context).
  4. Multiple and mixed (methods).  Multiple case studies.  Stories.  Credible.
  5. Diverse (representation).  Stakeholder (representation).
  6. Linking (intersections).  Interdisciplinary (intersections).
  7. (Relationships) help make meaning.  (Relationships) facilitate quality.   (Relationships) support use.  (Relationships) keep evaluation alive.

Being a member of AEA is all this and more.  Membership is affordable ($80.00 regular; $60.00 for joint membership with the Canadian Evaluation Society; and $30.00 for full-time students).  The benefits are worth that and more.  The conference brings together evaluators from all over.  AEA is quality.

A good friend of mine asked me today if I knew of any attributes (which I interpreted to be criteria) of qualitative data (NOT qualitative research).  My friend likened the quest for attributes for qualitative data to the psychometric properties of a measurement instrument–validity and reliability–that could be applied to the data derived from those instruments.

Good question.  How does this relate to program evaluation, you may ask.  That question takes us to an understanding of paradigm.

A paradigm (according to Scriven in the Evaluation Thesaurus) is a general concept or model for a discipline that may be influential in shaping the development of that discipline.  Paradigms do not (again according to Scriven) define truth; rather, they define prima facie truth (i.e., truth on first appearance), which is not the same as truth.  Scriven goes on to say, “…eventually, paradigms are rejected as too far from reality and they are always governed by that possibility [i.e., that they will be rejected]” (p. 253).

So why is it important to understand paradigms?  They frame the inquiry.  And evaluators are asking a question; that is, they are inquiring.

How inquiry is framed is based on the components of paradigm:

  • ontology–what is the nature of reality?
  • epistemology–what is the relationship between the known and the knower?
  • methodology–what is done to gain knowledge of reality, i.e., the world?

These beliefs shape how the evaluator sees the world and then guide the evaluator in the use of data, whether those data are derived from records, observations, and interviews (i.e., qualitative data) or from measurements, scales, and instruments (i.e., quantitative data).  Each paradigm guides the questions asked and the interpretations brought to the answers to those questions.  That is its importance to evaluation.

Denzin and Lincoln (2005), in the third edition of the Handbook of Qualitative Research, list what they call interpretive paradigms.  These are described in Chapters 8 – 14 of that volume.  The paradigms are:

  1. Positivist/post positivist
  2. Constructivist
  3. Feminist
  4. Ethnic
  5. Marxist
  6. Cultural studies
  7. Queer theory

They indicate that each of these paradigms has criteria, a form of theory, and a specific type of narration or report.  If paradigms have criteria, then it makes sense to me that the data derived in the inquiry framed by those paradigms would have criteria.  Certainly, the psychometric properties of validity and reliability (stemming from the positivist paradigm) relate to data, usually quantitative.  It would make sense to me that the parallel, though different, concepts in the constructivist paradigm–trustworthiness and credibility–would apply to data derived from that paradigm, which are often qualitative.

If that is the case, then evaluators need to be at least knowledgeable about paradigms.

Spring break has started.

The sun is shining.

The sky is blue.

Daphne is heady.

All of this is evaluative.

Will be on holiday next week.  Enjoy!

Welcome back.  It is Tuesday.

Some folks have asked me–now that I’ve pointed out that all of you are evaluators–where will I take this column.  That was food for thought…and although I’ve got a topic ready to go, I’m wondering if jumping off into working evaluation is the best place to go next.  One place I did go is to update the “About” tab on this page…

Maybe thinking some more about what you evaluated today; maybe thinking about a bigger picture of evaluation; maybe just being glad that the sun is shining is enough (although the subfreezing temperatures remind me of Minnesota without the snow).  The saying goes, “Minnesota has two seasons–green and white.”  Maybe Oregon has two seasons–dry and wet.  That is an evaluative question, by the way.  Hmmm…thinking about evaluation, having an evaluation question of the week sounds like a good idea.  What’s yours—small or large?  I may not have an answer, but I will have an idea.

Ok–so now that I’ve dealt with the evaluative question of the day–I think it is time to go to more substance, like “what exactly IS program evaluation?”  Good question–if we are going to have this conversation, then we need to be using the same language.

First, let me address why the link to Wikipedia is on the far right in my Blogroll list.  I’ve learned Wikipedia is a readily available, general reference that gets folks started understanding a subject.  It is NOT the definitive word on that subject.  Wikipedia (see link on the right) describes program evaluation as “…a systematic method for collecting, analyzing, and using information to answer basic questions about projects, policies, and programs.”  Well…yes…except that Wikipedia seems to be defining evaluation when it includes “projects and policies.”  Program evaluation deals with programs.  Wikipedia does have an entry for evaluation as well as an entry for program evaluation.  Read both.

Evaluation can be applied to projects, policies, personnel, processes, performances, proposals, products AND programs. I won’t talk much about personnel, performances, proposals, or products. Projects may be another word for program; policies usually result in programs; and processes are often part of programs, so they may be talked about sometimes.

Most of what this blog will address is program evaluation, because most of you (including me) have programs that need evaluating.  When I talk about program evaluation I am talking about “…determining the merit, worth, or value of (a program).”  (Michael Scriven uses this definition in his book, Evaluation Thesaurus, 1991, Sage Publications.)

It is available at the following site (or through the publisher).

So for me and what you need to know about program evaluation is this:

  • The root of evaluation is value (the OED lists the etymology as [a. Fr. évaluation, f. évaluer, f. é- = es- (: L. ex) out + value])
  • Program evaluation IS systematic.
  • Program evaluation DOES collect, analyze, and utilize information.
  • Program evaluation ATTEMPTS to determine the merit, worth, or value of a program.
  • Program evaluation ANSWERS this question:

“What difference does this program make in the lives and well-being of (fill in the blank here—citizens of Oregon, my 4-H club, residents of the watershed, you get the idea)?”

NOTE: I talk about “lives and well-being” because most programs are delivered to individuals who will be CHANGED as a result of participating in the program, i.e., experience a difference.

For those of us in Extension, when we do evaluation we are trying to determine if we “improved something;” we are not trying to “prove” that what we did accomplished something.  Peter Bloom always said, “Extension is out to improve something (attribution), not prove something (causation).”  We are looking for attribution not causation.

Many references exist that talk more about what program evaluation is.  My favorite reference, by Jody Fitzpatrick, Jim Sanders, and Blaine Worthen, is called Program Evaluation: Alternative Approaches and Practical Guidelines (2004, Pearson Education).

It is available at the following site (or through the publisher).