Last week, a colleague and I led two 20-person cohorts in a two-day evaluation capacity building event.  This activity was the launch (without the benefit of champagne) of a 17-month experience in which the participants will learn new evaluation skills and then serve as resources for their colleagues in their states.  This training is the brainchild of the Extension Western Region Program Leaders group.  They believe this approach will be economical and will provide participants with significant, substantive information about evaluation.

What Jim and I did last week was work to, hopefully, provide a common introduction to evaluation.  The event was not meant to disseminate the vast array of evaluation information; we wanted everyone to have a similar starting place.  Nor was it a train-the-trainer event, so common in Extension.  The participants were at different places in their experience and understanding of program evaluation: some were seasoned, long-time Extension faculty, some were mid-career, and some were brand new to Extension and the use of evaluation.  All were Extension faculty from western states.  And although evaluation can involve programs, policies, personnel, products, performance, processes, and more, these two days focused on program evaluation.


It occurred to me that it would be useful to talk about what evaluation capacity building (ECB) is and what resources are available to build that capacity.  Perhaps the best place to start is with the Preskill and Russ-Eft book on the topic, Building Evaluation Capacity.

This volume is filled with summaries of key evaluation points and activities to reinforce them.  Although it is a comprehensive resource, it covers those points briefly, and there are other resources that are valuable for understanding the field of capacity building.  For example, Don Compton and his colleagues Michael Baizerman and Stacey Stockdill edited a New Directions for Evaluation volume (No. 93) that addresses the art, craft, and science of ECB.  ECB is often viewed as a context-dependent system of processes and practices that helps instill quality evaluation skills in an organization and its members.  The long-term outcome of any ECB effort is the ability to conduct rigorous evaluation as part of routine practice.  That is our long-term goal: conducting rigorous evaluations as a part of routine practice.


Although not exhaustive, the list below includes some ECB resources and some general evaluation resources (some of my favorites, to be sure).


ECB resources:

Preskill, H., & Russ-Eft, D. (2005). Building Evaluation Capacity. Thousand Oaks, CA: Sage.

Compton, D. W., Baizerman, M., & Stockdill, S. H. (Eds.). (2002). The art, craft, and science of evaluation capacity building. New Directions for Evaluation, No. 93. San Francisco: Jossey-Bass.

Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443-459.

General evaluation resources:

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

Scriven, M. (1991). Evaluation Thesaurus (4th ed.). Newbury Park, CA: Sage.

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (2012). Essentials of utilization-focused evaluation. Thousand Oaks, CA: Sage.

A colleague asked me what I considered an output in a statewide program we were discussing.  This is a really good example of how assumptions can blindside an individual–in this case, me.  Once I (figuratively) picked myself up, I explained how this terminology applied to the program under discussion.  Once the meeting concluded, I realized that perhaps a bit of a refresher was in order.  Even the most seasoned evaluators can benefit from a reminder every so often.


So OK–inputs, outputs, outcomes.

As I’ve mentioned before, Ellen Taylor-Powell, former UWEX evaluation specialist, has a marvelous tutorial on logic modeling.  I recommend you go there for your own refresher.  What I offer here is a (very) brief overview of these terms.

Logic models, whether linear or circular, are composed of various focus points.  Those focus points include (in addition to those mentioned in the title of this post) the situation, assumptions, and external factors.  Simply put, the situation is what is going on–the priorities, the needs, and the problems that led to the program you are conducting–that is, program with a small p (we can talk about sub- and supra-models later).

Inputs are the resources you need to conduct the program.  Typically, they are lumped into personnel, time, money, venue, and equipment.  Personnel covers staff, volunteers, partners, and any other stakeholders.  Time is not just your time; it also includes the time needed for implementation, evaluation, analysis, and reporting.  Money speaks for itself.  Venue is where the program will be held.  Equipment is the stuff you will need: technology, materials, gear, and so on.

Outputs are often classified into two parts: first, the participants (or target audience), and second, the activities that are conducted.  Typically (although not always), those activities are counted; these are the bean counts.  In the example that started this post, we would be counting the number of students who graduated from high school; the number who matriculated to college (either two- or four-year); the number who transferred from two-year to four-year colleges; the number who completed college in two or four years; and so on.  The bean count could also be the number of classes offered, the number of brochures distributed, the number of participants in the class, or the number of (fill in the blank).  Outputs are necessary but not sufficient for determining whether a program is effective.  The field of evaluation started with bean counts and satisfaction measures.

Outcomes can be categorized as short term, medium/intermediate term, or long term.  Long-term outcomes are often called impacts.  (There are those in the field who would classify impacts as something separate from outcomes–a discussion for another day.)  Whatever you choose to call the effects of your program, be consistent; don’t use the terms interchangeably, because it confuses the reader.  What you are looking for in an outcome is change–in learning, in behavior, in conditions.  This change is measured in the target audience, whether individuals, groups, or communities.
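For readers who like to see the pieces laid out side by side, here is a minimal sketch of a logic model as a plain data structure.  The program (a hypothetical nutrition-education effort) and every entry in it are invented for illustration only; your own situation, inputs, outputs, and outcomes will look different.

```python
# A minimal sketch of a logic model laid out as a plain data structure.
# The program and all entries are hypothetical, for illustration only.
logic_model = {
    "situation": "Families in the county report low vegetable consumption",
    "inputs": {
        "personnel": ["educator", "volunteers", "partner agency staff"],
        "time": "10 weeks of delivery plus evaluation, analysis, reporting",
        "money": "county program budget",
        "venue": "community center",
        "equipment": ["curriculum materials", "projector"],
    },
    "outputs": {
        "participants": "number of adults enrolled",               # bean counts
        "activities": ["classes offered", "brochures distributed"],
    },
    "outcomes": {
        "short_term": "change in knowledge of vegetable preparation",
        "medium_term": "change in weekly servings of vegetables (behavior)",
        "long_term": "change in household diet quality (condition/impact)",
    },
}

# Each outcome level points to a different kind of change to measure.
for level, outcome in logic_model["outcomes"].items():
    print(f"{level}: {outcome}")
```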

I’ll talk about assumptions and external factors another day.  Have a wonderful holiday weekend…the last vestiges of summer–think tomatoes, corn on the cob, state fair, and a tall cool drink.


I started this post the third week in July.  Technical difficulties prevented me from completing the post.  Hopefully, those difficulties are now in the past.

A colleague asked me what we can do when we can’t measure actual behavior change in our evaluations.  Most evaluations can capture knowledge change (short-term outcomes); some can capture behavior change (intermediate or medium-term outcomes); very few can capture condition change (long-term outcomes, often called impacts–though not by me).  I thought about that.  Intention to change behavior can be measured.  Confidence (self-efficacy) to change behavior can be measured.  For me, all evaluations need to address those two points.

Paul Mazmanian, Associate Dean for Continuing Professional Development and Evaluation Studies at Virginia Commonwealth University, has studied changing practice patterns for several years.  One study, conducted in 1998, reported that “…physicians in both study and control groups were significantly more likely to change (47% vs. 7% p< .001) if they indicated intent to change immediately following the lecture” (Academic Medicine. 1998; 73:882-886).   Mazmanian and his co-authors say in their conclusions that “successful change in practice may depend less on clinical and barriers information than on other factors that influence physicians’ performance.  To further develop the commitment-to-change strategy in measuring effects of planned change, it is important to isolate and learn the powers of individual components of the strategy as well as their collective influence on physicians’ clinical behavior.”
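To make that kind of comparison concrete, here is a minimal sketch with made-up group sizes (not the study’s actual data) that roughly mirror the reported percentages.  The chi-square test via scipy is simply one reasonable way to compare the two proportions; nothing here comes from Mazmanian’s analysis itself.

```python
# Illustrative only: hypothetical counts, not the study's data.
# Compare the share of participants who later changed practice between those
# who did and did not state an intention to change right after the program.
from scipy.stats import chi2_contingency

#                   changed  did_not_change
stated_intent    = [47, 53]   # hypothetical: 47 of 100 changed
no_stated_intent = [7, 93]    # hypothetical: 7 of 100 changed

chi2, p_value, dof, expected = chi2_contingency([stated_intent, no_stated_intent])
print(f"chi-square = {chi2:.1f}, p = {p_value:.4f}")
# A small p-value here would echo the pattern reported in the literature:
# stating an intention to change is associated with a higher rate of change.
```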


What are the implications for Extension and other complex organizations?  It makes sense to extrapolate from the continuing medical education literature: physicians are adults, and most of Extension’s audience is adults.  If physicians who state an intention to change “immediately following the lecture” (i.e., the continuing education program) are far more likely to actually change, then soliciting stated intention to change from participants immediately following an Extension program should likewise increase the likelihood of behavior change.  One of the outcomes Extension wants to see is change in behavior (medium-term outcomes).  Measuring those behavior changes directly (through observation or some other method) is often outside the resources available; measuring intended behavior changes is within the scope of Extension resources.  Using a time frame (such as six months) helps bound the anticipated behavior change.  In addition, intention to change can be coupled with confidence to implement the change to give the evaluator information about the effect of the program.  The desired result is high confidence to change and willingness to implement the change within the specified time frame.  If Extension professionals find that result, then it would be safe to say that the program is successful.

REFERENCES

1.  Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.

2.  Mazmanian, P. E., & Mazmanian, P. M. (1999). Commitment to change: Theoretical foundations, methods, and outcomes. The Journal of Continuing Education in the Health Professions, 19, 200-207.

3.  Mazmanian, P. E., Johnson, R. E., Zhang, A., Boothby, J., & Yeatts, E. J. (2001). Effects of a signature on rates of change: A randomized controlled trial involving continuing medical education and the commitment-to-change model. Academic Medicine, 76(6), 642-646.


Hopefully, the technical difficulties with images are no longer a problem, and I will be able to post the answers to the history quiz along with the post I had hoped to publish last week.  So, as promised, here are the answers to the quiz I posted the week of July 5.  The keyed responses are in BOLD.

1.  Michael Quinn Patton, author of Utilization-Focused Evaluation, the new book Developmental Evaluation, and the classic Qualitative Evaluation and Research Methods.

2.  Michael Scriven is best known for his concept of formative and summative evaluation. He has also advocated that evaluation is a transdiscipline. He is the author of the Evaluation Thesaurus.

3.  Hallie Preskill is the co-author (with Darlene Russ-Eft) of Building Evaluation Capacity.

4.  Robert E. Stake has advanced work in case study and is the author of the books Multiple Case Study Analysis and The Art of Case Study Research.

5.  David M. Fetterman is best known for his advocacy of empowerment evaluation and the book of that name, Foundations of Empowerment Evaluation.

6.  Daniel Stufflebeam developed the CIPP (context, input, process, product) model, which is discussed in the book Evaluation Models.

7.  James W. Altschuld is the go-to person for needs assessment.  He is the editor of the Needs Assessment Kit (or everything you wanted to know about needs assessment and didn’t know where to find the answer).  He is also the co-author, with Belle Ruth Witkin, of two needs assessment books.

8.  Jennifer C. Greene, the current President of the American Evaluation Association and the author of a book on mixed methods.

9.  Ernest R. House is a leader in the work of evaluation policy and is the author of an evaluation novel, Regression to the Mean.

10.  Lee J. Cronbach is a pioneer in educational evaluation and the reform of that practice.  He co-authored, with several associates, the book Toward Reform of Program Evaluation.

11.  Ellen Taylor-Powell is the former Evaluation Specialist at the University of Wisconsin-Extension and is credited with developing the logic model later adopted by the USDA for use by the Extension Service.  To go to the UWEX site, click on the words “logic model.”

12.  Yvonna Lincoln, with her husband Egon Guba (see below), co-authored the book Naturalistic Inquiry.  She is currently the co-editor (with Norman K. Denzin) of the Handbook of Qualitative Research.

13.  Egon Guba, with his wife Yvonna Lincoln, is the co-author of Fourth Generation Evaluation.

14.  Blaine Worthen has championed certification for evaluators.  He, with Jody L. Fitzpatrick and James R. Sanders, co-authored Program Evaluation: Alternative Approaches and Practical Guidelines.

15.  Thomas A. Schwandt, a philosopher at heart who started as an auditor, has written extensively on evaluation ethics. He is also the co-author (with Edward S. Halpern) of Linking Auditing and Metaevaluation.

16.  Peter H. Rossi, co-author (with Howard E. Freeman and Mark W. Lipsey) of Evaluation: A Systematic Approach, is a pioneer in evaluation research.

17.  W. James Popham, a leader in educational evaluation, authored the volume Educational Evaluation.

18.  Jason Millman was a pioneer of teacher evaluation and the author of the Handbook of Teacher Evaluation.

19.  William R. Shadish co-authored (with Thomas Cook and Laura C. Leviton) Foundations of Program Evaluation: Theories of Practice.  His work in theories of evaluation practice earned him the Paul F. Lazarsfeld Award for Evaluation Theory from the American Evaluation Association in 1994.

20.  Laura C. Leviton, co-author (with Will Shadish and Tom Cook–see above) of Foundations of Program Evaluation: Theories of Practice, has pioneered work in participatory evaluation.


Although I’ve listed only 20 leaders, movers and shakers, in the evaluation field, there are others who also deserve mention: John Owen, Deb Rog, Mark Lipsey, Mel Mark, Jonathan Morell, Midge Smith, Lois-Ellin Datta, Patricia Rogers, Sue Funnell, Jean King, Laurie Stevahn, John McLaughlin, Michael Morris, Nick Smith, Don Dillman, and Karen Kirkhart, among others.

If you want to meet the movers and shakers, I suggest you attend the American Evaluation Association annual meeting.  In 2011, it will be held in Anaheim, CA, November 2-5; professional development sessions are being offered October 31, November 1 and 2, and also November 6.  More conference information can be found here.


Those of you who read this blog know a little about evaluation.  Perhaps you’d like to know more?  Perhaps not…

I think it would be valuable to know who was instrumental in developing the profession to the point it is today; hence, a little history.  This will be fun even for those of you who don’t like history: it will be a matching game.  Some of these folks have been mentioned in previous posts.  I’ll post the keyed responses next week.

Directions:  Match the name with the evaluation contribution.  I’ve included photos so you know who is who and can put a face with a name and a contribution.

(Photos of the 20 evaluators, numbered 1 through 20, appeared here.)


A.  Michael Scriven                1.  Empowerment Evaluation

B.  Michael Quinn Patton     2.  Mixed Methods

C.  Blaine Worthen                 3.  Naturalistic Inquiry

D.  David Fetterman              4.  CIPP

E.  Thomas Schwandt            5. Formative/Summative

F.  Jennifer Greene                  6. Needs Assessment

G.  James W. Altschuld          7.  Developmental Evaluation

H.  Ernie House                          8.  Case study

I.   Yvonna Lincoln                    9.  Fourth Generation Evaluation

J.  Egon Guba                            10. Evaluation Capacity Building

K.  Lee J. Cronbach                   11.  Evaluation Research

L.  W. James Popham               12.  Teacher Evaluation

M.  Peter H. Rossi                       13.  Logic Models

N.  Hallie Preskill                       14.  Educational Evaluation

O.  Ellen Taylor-Powell            15.  Foundations of Program Evaluation

P.  Robert Stake                           16. Toward Reform of Program Evaluation

Q.  Dan Stufflebeam                  17. Participatory Evaluation

R.  Jason Millman                      18. Evaluation and Policy

S.  Will Shadish                           19. Evaluation and epistemology

T.  Laura Leviton                        20. Evaluation Certification


There are others, more recent, who have made contributions.  These 20 represent the folks who did seminal work that built the profession, along with some more recent thinkers.  Have fun.

A colleague asked me yesterday about authenticating anecdotes–you know, those wonderful stories you gather about how what you’ve done has made a difference in someone’s life.


I volunteer service to a non-profit board (two, actually), and the board members are always telling stories about how “X has happened” and how “Y was wonderful.”  Yet my evaluator self asks, “How do you know?”  This becomes a concern for organizations that do not have evaluation as part of their mission statement.  Even though many boards hold the Executive Director accountable, few make evaluation explicit.

Dick Krueger, who has written about focus groups, also writes about and studies the use of stories in evaluation, and much of what I will share with y’all today is from his work.

First, what is a story?  Creswell (2007, 2nd ed.) defines a story as “…aspects that surface during an interview in which the participant describes a situation, usually with a beginning, a middle, and an end, so that the researcher can capture a complete idea and integrate it, intact, into the qualitative narrative”.  Krueger elaborates on that definition by saying that a story “…deals with an experience of an event, program, etc. that has a point or a purpose.”  Story differs from case study in that a case study tries to understand a system, not an individual event or experience; a story deals with an experience that has a point.  Stories provide examples of core philosophies and of significant events.

There are several purposes for stories that can be considered evaluative.  These include depicting the culture, promoting core values, transmitting and reinforcing the current culture, providing instruction (another way to transmit culture), and motivating, inspiring, and/or encouraging people.  Stories can be of the following types: hero stories, success stories, lessons-learned stories, core-value stories, cultural stories, and teaching stories.

So why tell a story?  Stories make information easier to remember and more believable, and they tap into emotion.  For stories to be credible (to provide authentication), an evaluator needs to establish criteria for them.  Krueger suggests five criteria:

  • Authentic–is it truthful?  Is there truth in the story?  (Remember “truth” depends on how you look at something.)
  • Verifiable–is there a trail of evidence back to the source?  Can you find this story again?
  • Confidential–is there a need to keep the story confidential?
  • Original intent–what is the basis for the story?  What motivated telling the story? and
  • Representation–what does the story represent?  other people?  other locations?  other programs?

Once you have established criteria for the stories, you will need some way to capture them, so develop a plan.  Stories need to be willingly shared, not coerced; documented and recorded; and collected in a positive situation.  Collecting stories is an example of where the protections for humans in research must be considered.  Are the stories collected confidentially?  Does telling the stories pose little or no risk?  Are stories told voluntarily?
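One way to make such a plan concrete is to decide, up front, what gets documented for every story.  Here is a minimal sketch (my own illustration, not an instrument from Krueger or Creswell) of a story intake record that captures each of the five criteria along with consent at the time of collection; all names and the example story are hypothetical.

```python
# A minimal sketch of a story intake record; field names and the example
# story are hypothetical, intended only to show one way to document
# Krueger's five criteria plus consent at collection time.
from dataclasses import dataclass
from datetime import date

@dataclass
class StoryRecord:
    storyteller: str            # who shared the story (or "anonymous")
    collected_on: date
    text: str                   # the story as told, documented verbatim
    source_trail: str           # where/how it can be found again (verifiable)
    keep_confidential: bool     # was confidentiality requested? (confidential)
    original_intent: str        # what motivated telling it (original intent)
    represents: str             # other people/places/programs? (representation)
    consent_given: bool = False # shared willingly, not coerced
    truth_notes: str = ""       # evidence bearing on truthfulness (authentic)

# Documenting one (invented) story at collection time
record = StoryRecord(
    storyteller="workshop participant",
    collected_on=date(2011, 8, 15),
    text="After the class I started testing my well water every spring...",
    source_trail="follow-up interview notes, file 2011-08",
    keep_confidential=True,
    original_intent="offered spontaneously during a follow-up call",
    represents="other rural well owners in the county",
    consent_given=True,
)
```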

Once the stories have been collected, analyzing and reporting those stories is the final step.  Without this, all the previous work  was for naught.  This final step authenticates the story.  Creswell provides easily accessible guidance for analysis.

My oldest daughter graduated from high school Monday.  Now she is facing the reality of life after high school–the emotional letdown, the lack of structure, the loss of focus.  I remember what it was like to commence…another word for beginning.  I think I was depressed for days.  The question becomes evaluative when one thinks of planning, which is what she has to do now.  In planning, she needs to ask:  What excites me?  What are my passions?  How will I accomplish the what?  How will I connect again to the what?  How will I know I’m successful?

Ellen Taylor-Powell, former Distinguished Evaluation Specialist at the University of Wisconsin Extension, talks about planning on the UWEX professional development website.  (There are many other useful publications on this site; I urge you to check them out.)  This publication has four sections: focusing the evaluation, collecting the information, using the information, and managing the evaluation.  I want to talk more about focusing the evaluation, because that is key when beginning, whether it is the next step in your life, the next program you want to implement, or the next report you want to write.

This section of the publication asks you to identify what you are going to evaluate, the purpose of the evaluation, who will use the evaluation and how, what questions you want to answer, and what information you need to answer those questions; to develop a timeline; and, finally, to identify what resources you will need.  I see this as puzzle assembly, one where you do not necessarily have a picture to guide you.  Not unlike a newly commenced graduate finding a focus, you are putting together a puzzle: you won’t know what the picture is, or where you are going, until you focus and develop a plan.  For me, that means putting the puzzle together.  It means finding the what and the so what.  It is always the first place to commence.

One of the opportunities I have as a faculty member at OSU is mentoring students.  I get to do this in a variety of ways: sitting on committees, providing independent studies, reviewing preliminary proposals, listening.  I find it very exciting to see the change and growth in students’ thinking and insights when I work with them.  I get some of my best ideas from them, like today’s post.

I just reviewed several chapters of student dissertation proposals.  These students had put a lot of thought and passion into their research questions.  To them, the inquiry was important; it could be the impetus for change.  Yet the quality of the writing often detracted from the quality of the question, the importance of the inquiry, and the opportunity to make a difference.

How does this relate to evaluation?  For evaluations to make a difference, the findings must be used.  That does not mean simply writing the report and giving it to the funder, the principal investigator, the program leader, or other stakeholders.  Too many reports have gathered dust on someone’s shelf because they were not used.  To be used, a report must be written so that it can be understood.  Write for a naive audience, as though the reader knows nothing about the topic.

When I taught technical writing, I used the mnemonic of the 5Cs.  My experience is that if these concepts (all starting with the letter C) were employed, the report/paper/manuscript would be understood by any reader.

The report needs to be written:

  • Clearly
  • Coherently
  • Concisely
  • Correctly
  • Consistently

Clearly means not using jargon; using simple words; explaining technical words.

Coherently means having the sections of the report hang together; not having any (what I call) quantum leaps.

Concisely means using few words; avoiding long, meandering paragraphs; avoiding the overuse of prepositions (among other things).

Correctly means making sure that grammar and syntax are correct: subject/verb agreement; remembering that the word “data” is plural and takes a plural verb (and plural modifiers).

Consistently means using the same word to describe the parts of your research; participants are participants all through the report, not subjects on page 5, respondents on page 11, and students on page 22.

This little mnemonic has helped many students write better papers; I know it can help many evaluators write better reports.

This is no easy task.  Writing is hard work; using the 5Cs makes it easier.

I was putting together a reading list for an evaluation capacity building program I’ll be leading come September and was reminded of process evaluation.  Nancy Ellen Kiernan has a one-page handout on the topic; it is a good place to start.  Like everything in evaluation, there is much more to say.  Let’s see what I can say in 440 words or less.

When I first started doing evaluation (back when we beat on hollow logs), I developed a simple approach (call it a model) so I could talk with stakeholders about what I did and what they wanted done.  I called it the P3 model–Process, Progress, Product.  This simple approach answers the following evaluative questions:

  • How did I do what I did? (Process)
  • Did I do what I did in a timely manner? (Progress)
  • Did I get the outcome I wanted? (Product)

It is the “how” question I’m going to talk about today.

Scriven, in the 4th edition of the Evaluation Thesaurus, says that a process evaluation “focuses entirely on the variables between input and output.”  It may also include input variables.  Knowing this helps you know what the evaluative question is for the input and output parts of a logic model (remember, there are evaluative questions and activities for each part of a logic model).

When evaluating a program, a process evaluation is not sufficient; it may be necessary and still not be sufficient.  An outcome evaluation must accompany a process evaluation.  Evaluating the process components of a program involves looking at internal and external communications (think memos, emails, letters, reports, etc.); the interface with stakeholders (think meeting minutes); the formative evaluation system of the program (think participant satisfaction); and infrastructure effectiveness (think administrative patterns, implementation steps, corporate responsiveness, instructor availability, etc.).

Scriven provides these examples that suggest the need for program improvement: “…program’s receptionists are rude to most of a random selection of callers; the telephonists are incompetent; the senior staff is unhelpful to evaluators called in by the program to improve it; workers are ignorant about the reasons for procedures that are intrusive to their work patterns; or the quality control system lacks the power to call a halt to the process when it discerns an emergency.”  Other examples, ones that demonstrate program success, are administrators being transparent about organizational structure, program implementation being inclusive, or participants being encouraged to provide ongoing feedback to program managers.  We could then say that a process evaluation assesses the development and actual implementation of a program to determine whether the program was implemented as planned and whether the expected output was actually produced.

Gathering data about the program as actually implemented helps program planners identify what worked and what did not.  Some of the components included in a process evaluation are descriptions of the program environment, the program design, and the program implementation plan.  Data on any changes to the program or its operations, and on any intervening events that may have affected the program, should also be included.

Quite likely, these data will be qualitative in nature and will need to be coded using one of the many qualitative data analysis methods.

Hi everybody–it is time for another TIMELY TOPIC.  This week’s topic is using a pretest/posttest evaluation or a post-then-pre (retrospective pretest) evaluation.

There are many considerations in choosing between these designs.  You have to look at the end result and decide what is most appropriate for your program.  Some of the key considerations include:

  • the length of your program;
  • the information you want to measure;
  • the factors influencing participants’ responses; and
  • available resources.

Before explaining the above four factors, let me urge you to read up on this topic.  There are a few resources (yes, print…) I want to pass your way.

  1. Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin. (The classic book on research and evaluation designs.)
  2. Rockwell, S. K., & Kohn, H. (1989). Post-then-pre evaluation. Journal of Extension [On-line], 27(2). Available at: http://www.joe.org/joe/1989summer/a5.htm (A seminal JoE paper explaining the post-then-pre test.)
  3. Nimon, K., Zigarmi, D., & Allen, J. (2011). Measures of program effectiveness based on retrospective pretest data: Are all created equal? American Journal of Evaluation, 32, 8-28. (A 2011 publication with an extensive bibliography.)

Let’s talk about considerations.

Length of program.

For a pre/post test, you want a program that is long (more than a day).  Otherwise you risk introducing desired-response bias and the threats to internal validity that Campbell and Stanley identify, specifically the threats called history, maturation, testing, and instrumentation, along with a possible regression-to-the-mean threat, though that is only a possible source of concern.  These threats to internal validity assume no randomization and a one-group design, typical of Extension programs and other educational programs.  The post-then-pre works well for short programs, a day or less, and tends to control for response shift and desired-response bias.  There may still be threats to internal validity.

Information you want to measure.

If you want to know a participant’s specific knowledge, a post-then-pre cannot provide that information, because once participants have learned something they cannot unknow it and accurately report what they knew before.  The traditional pre/post can focus on specific knowledge, e.g., which food is highest in vitamin C in a list that includes apricot, tomato, strawberry, and cantaloupe. (Answer: strawberry.)  If you want agreement/disagreement with general knowledge (e.g., “I know what the key components of strategic planning are”), the post-then-pre works well.  Confidence, behaviors, skills, and attitudes can all be easily measured with a post-then-pre.
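To show what the resulting data might look like, here is a minimal sketch using made-up post-then-pre ratings on a five-point scale.  The pairing of “before” and “now” ratings gathered at the same sitting, and the choice of a paired Wilcoxon test, are my assumptions for illustration, not recommendations drawn from the resources listed above.

```python
# A minimal sketch (hypothetical data, my own analysis choice) of comparing
# post-then-pre ratings: each participant rates their skill "before the
# program" and "now," with both ratings collected at the end of the program.
from scipy.stats import wilcoxon

# Five-point scale, one pair of ratings per participant (made-up numbers)
before = [2, 1, 3, 2, 2, 1, 3, 2, 1, 2]
now    = [4, 3, 4, 3, 4, 2, 5, 4, 3, 3]

stat, p_value = wilcoxon(before, now)   # paired, nonparametric comparison
mean_change = sum(n - b for b, n in zip(before, now)) / len(before)
print(f"mean self-reported change = {mean_change:.1f} points, p = {p_value:.3f}")
```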

Factors influencing participants response.

I mentioned threats to internal validity above.  These factors all influence participants’ responses.  If there is a long time between the pretest and the posttest, participants can be affected by history (a tornado prevents attendance at the program); maturation (especially true in programs with children–they grow up); testing (having taken the pretest, participants score better on the posttest); and instrumentation (the person administering the posttest administers it differently than the pretest was administered).  Participants’ desire to please the program leader/evaluator, called desired-response bias, also affects their responses.

Available resources.

Extension programs (as well as many other educational programs) are affected by the availability of resources (time, money, personnel, venue, etc.).  If you have only a certain amount of time, a certain number of people who can administer the evaluation, or a set amount of money, you will need to consider which approach to evaluation you will use.

The idea is to get usable, meaningful data that accurately reflect the work that went into the program.