Hopefully, the technical difficulties with images are no longer a problem and I will be able to post the answers to the history quiz along with the post I had hoped to publish last week.  So, as promised, here are the answers to the quiz I posted the week of July 5.  The keyed responses are in BOLD.

1.  Michael Quinn Patton is the author of Utilization-Focused Evaluation, the new book Developmental Evaluation, and the classic Qualitative Evaluation and Research Methods.

2.  Michael Scriven is best known for his concepts of formative and summative evaluation.  He has also advocated that evaluation is a transdiscipline.  He is the author of the Evaluation Thesaurus.

3. Hallie Preskill is the co-author (with Darlene Russ-Eft) of Evaluation Capacity Building.

4. Robert E. Stake has advanced work in case study and is the author of the books Multiple Case Study Analysis and The Art of Case Study Research.

5. David M. Fetterman is best known for his advocacy of empowerment evaluation and the book of that name, Foundations of Empowerment Evaluation.

6. Daniel Stufflebeam developed the CIPP (context, input, process, product) model, which is discussed in the book Evaluation Models.

7. James W. Altschuld is the go-to person for needs assessment.  He is the editor of the Needs Assessment Kit (or everything you wanted to know about needs assessment and didn’t know where to find the answer).  He is also the co-author, with Belle Ruth Witkin, of two needs assessment books.

8. Jennifer C. Greene is the current President of the American Evaluation Association and the author of a book on mixed methods.

9. Ernest R. House is a leader in the work of evaluation policy and is the author of an evaluation novel, Regression to the Mean.

10. Lee J. Cronbach is a pioneer in education evaluation and the reform of that practice.  He co-authored with several associates the book Toward Reform of Program Evaluation.

11.  Ellen Taylor-Powell is the former Evaluation Specialist at the University of Wisconsin-Extension and is credited with developing the logic model later adopted by the USDA for use by the Extension Service.  To go to the UWEX site, click on the words “logic model”.

12. Yvonna Lincoln, with her husband Egon Guba (see below), co-authored the book Naturalistic Inquiry.  She is currently the co-editor (with Norman K. Denzin) of the Handbook of Qualitative Research.

13.  Egon Guba, with his wife Yvonna Lincoln, is the co-author of Fourth Generation Evaluation.

14. Blaine Worthen has championed certification for evaluators.  With Jody L. Fitzpatrick and James R. Sanders, he co-authored Program Evaluation: Alternative Approaches and Practical Guidelines.

15.  Thomas A. Schwandt, a philosopher at heart who started as an auditor, has written extensively on evaluation ethics. He is also the co-author (with Edward S. Halpern) of Linking Auditing and Metaevaluation.

16.  Peter H. Rossi, a pioneer in evaluation research, co-authored Evaluation: A Systematic Approach with Howard E. Freeman and Mark W. Lipsey.

17. W. James Popham, a leader in educational evaluation, authored the volume Educational Evaluation.

18. Jason Millman was a pioneer of teacher evaluation and author of the Handbook of Teacher Evaluation.

19.  William R. Shadish co-authored (with Laura C. Leviton and Thomas Cook) Foundations of Program Evaluation: Theories of Practice.  His work in theories of evaluation practice earned him the Paul F. Lazarsfeld Award for Evaluation Theory from the American Evaluation Association in 1994.

20.  Laura C. Leviton (co-author, with Will Shadish and Tom Cook, of Foundations of Program Evaluation: Theories of Practice; see above) has pioneered work in participatory evaluation.

Although I’ve only listed 20 leaders, movers and shakers, in the evaluation field, there are others who also deserve mention:  John Owen, Deb Rog, Mark Lipsey, Mel Mark, Jonathan Morell, Midge Smith, Lois-Ellin Datta, Patricia Rogers, Sue Funnell, Jean King, Laurie Stevahn, John McLaughlin, Michael Morris, Nick Smith, Don Dillman, Karen Kirkhart, among others.

If you want to meet the movers and shakers, I suggest you attend the American Evaluation Association annual meeting.  In 2011, it will be held in Anaheim, CA, November 2 – 5; professional development sessions are being offered October 31, November 1 and 2, and also November 6.  More conference information can be found here.

A colleague asked me yesterday about authenticating anecdotes–you know, those wonderful stories you gather about how what you’ve done has made a difference in someone’s life.

I volunteer service to a non-profit board (two, actually), and the board members are always telling stories about how “X has happened” and how “Y was wonderful”; yet my evaluator self asks, “How do you know?”  This becomes a concern for organizations that do not have evaluation as part of their mission statement.  Even though many boards hold the Executive Director accountable, few make evaluation explicit.

Dick Krueger, who has written about focus groups, also writes about and studies the use of stories in evaluation, and much of what I will share with y’all today is from his work.

First, what is a story?  Creswell (2007, 2nd ed.) defines story as “…aspects that surface during an interview in which the participant describes a situation, usually with a beginning, a middle, and an end, so that the researcher can capture a complete idea and integrate it, intact, into the qualitative narrative.”  Krueger elaborates on that definition by saying that a story “…deals with an experience of an event, program, etc. that has a point or a purpose.”  Story differs from case study in that a case study is a story that tries to understand a system, not an individual event or experience; a story deals with an experience that has a point.  Stories provide examples of core philosophies and of significant events.

There are several purposes for stories that can be considered evaluative.  These include depicting the culture, promoting core values, transmitting and reinforcing current culture, providing instruction (another way to transmit culture), and motivating, inspiring, and/or encouraging (people).  Stories can be of the following types:  hero stories, success stories, lesson-learned stories, core value  stories, cultural stories, and teaching stories.

So why tell a story?  Stories make information easier to remember, more believable, and tap into emotion.  For stories to be credible (provide authentication), an evaluator needs to establish criteria for stories.  Krueger suggests five different criteria:

  • Authentic–is it truthful?  Is there truth in the story?  (Remember “truth” depends on how you look at something.)
  • Verifiable–is there a trail of evidence back to the source?  Can you find this story again?
  • Confidential–is there a need to keep the story confidential?
  • Original intent–what is the basis for the story?  What motivated telling the story? and
  • Representation–what does the story represent?  other people?  other locations?  other programs?

Once you have established criteria for the stories collected, you will need some way to capture them, so develop a plan.  Stories need to be willingly shared, not coerced; documented and recorded; and collected in a positive situation.  Collecting stories is an example where the protections for humans in research must be considered.  Are the stories collected confidentially?  Does telling the stories result in little or no risk?  Are stories told voluntarily?
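To make that plan concrete, here is a minimal sketch, in Python, of the kind of record you might keep for each story so that Krueger’s five criteria and the human-protections questions are answered at collection time rather than reconstructed later.  The field names and the example are my own invention, not Krueger’s.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class StoryRecord:
    """One collected story, documented against the five criteria."""
    storyteller: str          # who told it (use a code if confidentiality is needed)
    collected_on: date        # when it was recorded
    text: str                 # the story itself, as told
    authenticity_notes: str   # is there truth in the story? how do we know?
    source_trail: str         # where to find it again (verifiable)
    confidential: bool        # does it need to stay confidential?
    original_intent: str      # what motivated the telling?
    represents: str           # other people, locations, or programs it may represent
    told_voluntarily: bool    # willingly shared, not coerced
    consent_documented: bool  # did we record the storyteller's permission?

# A hypothetical example captured at a board meeting.
story = StoryRecord(
    storyteller="Participant 07",
    collected_on=date(2011, 2, 14),
    text="After the workshop I finally applied for the grant and got it.",
    authenticity_notes="Confirmed against the program coordinator's records.",
    source_trail="Audio file board_mtg_feb.mp3, 00:12:30",
    confidential=True,
    original_intent="Asked to describe a change since the program",
    represents="Rural participants in the grant-writing series",
    told_voluntarily=True,
    consent_documented=True,
)
```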

Once the stories have been collected, analyzing and reporting those stories is the final step.  Without this, all the previous work  was for naught.  This final step authenticates the story.  Creswell provides easily accessible guidance for analysis.

I was putting together a reading list for an evaluation capacity building program I’ll be leading come September and was reminded about process evaluation.  Nancy Ellen Kiernan has a one page handout on the topic.  It is a good place to start.  Like everything in evaluation, there is so much more to say.  Let’s see what I can say in 440 words or less.

When I first started doing evaluation (back when we beat on hollow logs), I developed a simple approach (call it a model) so I could talk to stakeholders about what I did and what they wanted done.  I called it the P3 model–Process, Progress, Product.  This is a simple approach that answers the following evaluative questions:

  • How did I do what I did? (Process)
  • Did I do what I did in a timely manner? (Progress)
  • Did I get the outcome I wanted? (Product)

It is the “how” question I’m going to talk about today.

Scriven, in the 4th edition of the Evaluation Thesaurus, says that a process evaluation “focuses entirely on the variables between input and output,” though it may include input variables.  Knowing this helps you know what the evaluative question is for the input and output parts of a logic model (remember, there are evaluative questions/activities for each part of a logic model).

When evaluating a program, a process evaluation may be necessary, but it is not sufficient; an outcome evaluation must accompany it.  Evaluating the process components of a program involves looking at internal and external communications (think memos, emails, letters, reports, etc.); interface with stakeholders (think meeting minutes); the formative evaluation system of a program (think participant satisfaction); and infrastructure effectiveness (think administrative patterns, implementation steps, corporate responsiveness, instructor availability, etc.).

Scriven provides these examples that suggest the need for program improvement: “…program’s receptionists are rude to most of a random selection of callers; the telephonists are incompetent; the senior staff is unhelpful to evaluators called in by the program to improve it; workers are ignorant about the reasons for procedures that are intrusive to their work patterns; or the quality control system lacks the power to call a halt to the process when it discerns an emergency.”  Other examples, which demonstrate program success, include administrators being transparent about organizational structure, program implementation being inclusive, or participants being encouraged to provide ongoing feedback to program managers.  We could then say that a process evaluation assesses the development and actual implementation of a program to determine whether the program was implemented as planned and whether the expected output was actually produced.

Gathering data regarding the program as actually implemented assists program planners in identifying what worked and what did not. Some of the components included in a process evaluation are descriptions of program environment, program design, and program implementation plan.  Data on any changes to the program or program operations and on any intervening events that may have affected the program should also be included.

Quite likely, these data will be qualitative in nature and will need to be coded using one of the many qualitative data analysis methods.

We recently held Professional Development Days for the Division of Outreach and Engagement.  This is an annual opportunity for faculty and staff in the Division to build capacity in a variety of topics.  The question this training posed was evaluative:

How do we provide meaningful feedback?

Evaluating a conference or a multi-day, multi-session training is no easy task.  Gathering meaningful data is a challenge.  What can you do?  Before you hold the conference (I’m using the word conference to mean any multi-day, multi-session training), decide on the following:

  • Are you going to evaluate the conference?
  • What is the focus of the evaluation?
  • How are you going to use the results?

The answer to the first question is easy:  YES.  If the conference is an annual event (or a regular event), you will want participants’ feedback on their experience, so, yes, you will evaluate the conference.  Look at Penn State Tip Sheet 16 for some suggestions.  (If this is a one-time event, you may not; though as an evaluator, I wouldn’t recommend ignoring evaluation.)

The second question is more critical.  I’ve mentioned in previous blogs the need to prioritize your evaluation.  Evaluating a conference can be all consuming and result in useless data UNLESS the evaluation is FOCUSED.  Sit down with the planners and ask them what they expect to happen as a result of the conference.  Ask them if there is one particular aspect of the conference that is new this year.  Ask them if feedback in previous years has given them any ideas about what is important to evaluate this year.

This year, the planners wanted to provide specific feedback to the instructors.  The instructors had asked for feedback in previous years.  This is problematic if planning evaluative activities for individual sessions is not done before the conference.  Nancy Ellen Kiernan, a colleague at Penn State, suggests a qualitative approach called a Listening Post.  This approach elicits feedback from participants at the time of the conference.  It involves volunteers who attended the sessions and may take more people than a survey.  To use the Listening Post, you must plan ahead of time to gather these data.  Otherwise, you will need to do a survey after the conference is over, and that raises other problems.

The third question is also very important.  If the results are just given to the supervisor, the likelihood of them being used by individuals for session improvement or by organizers for overall change is slim.  Making the data usable for instructors means summarizing the data in a meaningful way, often visually.  There are several ways to visually present survey data, including graphs, tables, and charts.  More on that another time.  Words often get lost, especially if words dominate the report.
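As one illustration of “often visually,” here is a minimal Python sketch (using matplotlib; the session name and the counts are made up) of the kind of simple chart an instructor can read at a glance instead of a page of numbers.

```python
import matplotlib.pyplot as plt

# Hypothetical counts of participant ratings for one concurrent session.
ratings = {"Poor": 2, "Fair": 5, "Good": 14, "Excellent": 9}

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(list(ratings.keys()), list(ratings.values()))
ax.set_title("Session A: How would you rate this session? (n=30)")
ax.set_ylabel("Number of participants")
fig.tight_layout()
fig.savefig("session_a_ratings.png")  # share the picture with the instructor, not the raw tally
```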

There is a lot of information in the training and development literature that might also be helpful.  The Kirkpatricks have done a lot of work in this area; I’ve mentioned their work in previous blogs.

There is no one best way to gather feedback from conference participants.  My advice:  KISS–keep it simple and straightforward.

Last week, I spoke about how-to questions and applying them to program planning, evaluation design, evaluation implementation, data gathering, data analysis, report writing, and dissemination.  I only covered the first four of those topics.  This week, I’ll give you my favorite resources for data analysis.

This list is more difficult to assemble.  This is typically where the knowledge links break down and interest is lost.  The thinking goes something like this.  I’ve conducted my program, I’ve implemented the evaluation, now what do I do?  I know my program is a good program so why do I need to do anything else?

YOU  need to understand your findings.  YOU need to be able to look at the data and be able to rigorously defend your program to stakeholders.  Stakeholders need to get the story of your success in short clear messages.  And YOU need to be able to use the findings in ways that will benefit your program in the long run.

Remember the list from last week?  The RESOURCES for EVALUATION list?  The one that says:

1.  Contact your evaluation specialist.

2.  Listen to stakeholders–that means including them in the planning.

3.  Read

Good.  This list still applies, especially the read part.  Here are the readings for data analysis.

First, it is important to know that there are two kinds of data–qualitative (words) and quantitative (numbers).  (As an aside, many folks think words that describe are quantitative data–they are still words even if you give them numbers for coding purposes, so treat them like words, not numbers).

  • Qualitative data analysis. When I needed to learn about what to do with qualitative data, I was given Miles and Huberman’s book.  (Sadly, both authors are deceased so there will not be a forthcoming revision of their 2nd edition, although the book is still available.)

Citation: Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage Publications.

Fortunately, there are newer options, which may be as good.  I will confess, I haven’t read them cover to cover at this point (although they are on my to-be-read pile).

Citation:  Saldana, J.  (2009). The coding manual for qualitative researchers. Los Angeles, CA: Sage.

Bernard, H. R. & Ryan, G. W. (2010).  Analyzing qualitative data. Los Angeles, CA: Sage.

If you don’t feel like tackling one of these resources, Ellen Taylor-Powell has written a short piece  (12 pages in PDF format) on qualitative data analysis.

There are software programs for qualitative data analysis that may be helpful (Ethnograph, NUD*IST, and others).  Most people I know prefer to code manually; even if you use a software program, you will need to do a lot of coding manually first.
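If you do code manually, even a spreadsheet or a few lines of Python can tally how often each code appears once the coding is done.  Here is a rough sketch; the codes and excerpts are invented, and this is bookkeeping, not a substitute for the analysis the books above describe.

```python
from collections import Counter

# Hypothetical coded excerpts: (code assigned by the analyst, interview excerpt).
coded_segments = [
    ("barrier/time", "I just couldn't get away from the farm that week."),
    ("benefit/confidence", "Now I feel like I can ask the hard questions."),
    ("barrier/time", "Evenings are the only time I have, and that's when it met."),
    ("benefit/network", "I met two other woodlot owners I still call."),
]

code_counts = Counter(code for code, _ in coded_segments)
for code, n in code_counts.most_common():
    print(f"{code}: {n} excerpt(s)")
```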

  • Quantitative data analysis. Quantitative data analysis is just as complicated as qualitative data analysis.  There are numerous statistical books which explain what analyses need to be conducted.  My current favorite is a book by Neil Salkind.

Citation: Salkind, N. J. (2004). Statistics for people who (think they) hate statistics (2nd ed.). Thousand Oaks, CA: Sage Publications.

NOTE:  there is a 4th ed.  with a 2011 copyright available. He also has a version of this text that features Excel 2007.  I like Chapter 20 (The Ten Commandments of Data Collection) a lot.  He doesn’t talk about the methodology, he talks about logistics.  Considering the logistics of data collection is really important.

Also, you need to become familiar with a quantitative data analysis software program–like SPSS, SAS, or even Excel.  One copy goes a long way–you can share the cost and share the program–as long as only one person is using it at a time.  Excel is a program that comes with Microsoft Office.  Each of these has tutorials to help you.
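Whatever package you choose, the first pass is usually descriptive statistics.  Here is a rough sketch in Python of that first look; the scores are invented, and Excel, SPSS, or SAS will give you the same numbers through their own menus.

```python
import statistics

# Hypothetical post-test knowledge scores from a workshop.
scores = [72, 85, 90, 66, 78, 88, 95, 70, 82, 77]

print("n      =", len(scores))
print("mean   =", round(statistics.mean(scores), 1))
print("median =", statistics.median(scores))
print("stdev  =", round(statistics.stdev(scores), 1))
```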

You’ve developed your program.  You think you’ve met a need.  You conduct an evaluation.  Lo and behold!  Some of your respondents give you such negative feedback you wonder what program they attended.  Could it really have been your program?

This is the phenomenon I call “all of the people all of the time,” which occurs regularly in evaluating training programs.  And it has to do with use–what you do with the results of this evaluation.  And you can’t do it–please all of the people all of the time, that is.  There will always be some sour grapes.  In fact, you will probably have more negative comments than positive comments.  People who are upset want you to know; people who are happy are just happy.

Now, I’m sure you are really confused.  Good.  At least I’ve got your attention and maybe you’ll read to the end of today’s post.

You have seen this scenario:  You ask the participants for formative data so that you can begin planning the next event or program.  You ask about the venue, the time of year, the length of the conference, the concurrent offerings, the plenary speakers.  Although some of these data are satisfaction data (the first level, called Reaction, in Don Kirkpatrick’s training model and the Reaction category in Claude Bennett’s TOPs Hierarchy [see diagram]), they are an important part of formative evaluation and an important part of program planning.  You are using the evaluation report.  That is important.  You are not asking if the participants learned something.  You are not asking if they intend to change their behavior.  You are not asking about what conditions have changed.  You only want to know about their experience in the program.

What do you do with the sour grapes?  You could make vinegar, only that won’t be very useful, and use is what you are after.  Instead, sort the data into those topics over which you have some control and those topics over which you have no control.  For example, you have control over who is invited to be a plenary speaker, whether there will be a plenary speaker, how many concurrent sessions there are, and who will teach those concurrent sessions; you have no control over the air handling at the venue, the chairs at the venue, and, probably, the temperature of the venue.
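If it helps, that sorting step can be as simple as two columns in a spreadsheet or a few lines of Python; here is a minimal sketch, with invented comments and my own judgment calls about what is controllable.

```python
# Hypothetical open-ended comments, each tagged by the evaluator as
# controllable (we can change it) or not (venue air handling, weather, etc.).
comments = [
    ("Plenary speaker read straight from the slides", True),
    ("The room was freezing all afternoon", False),
    ("No sessions aimed at classified staff", True),
    ("Parking was a long walk from the venue", False),
]

act_on = [text for text, controllable in comments if controllable]
acknowledge = [text for text, controllable in comments if not controllable]

print("Act on these:", act_on)
print("Acknowledge, but outside our control:", acknowledge)
```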

You can CHANGE those topics over which you have control.  Comments say the plenary speaker was terrible.  Do not invite that person to speak again.  Feedback says that the concurrent sessions didn’t provide options for classified staff, only faculty.  Decide the focus of your program and be explicit in the program promotional materials–advertise it explicitly to your target audience.  You get complaints about the venue–perhaps there is another venue; perhaps not.

You can also let your audience know what you decided based on your feedback.  One organization for which I volunteered sent out a white paper with all the concerns and how the organization was addressing them–or not.  It helped the grumblers see that the organization takes their feedback seriously.

And if none of this works…ask yourself: Is it a case of all of the people all of the time?

Although I have been learning about and doing evaluation for a long time, this week I’ve been searching for a topic to talk about.  A student recently asked me about the politics of evaluation–there is a lot that can be said on that topic, which I will save for another day.  Another student asked me about when to do an impact study and how to bound that study.  Certainly a good topic, too, though one that can wait for another post.  Something I read in another blog got me thinking about today’s post.  So, today I want to talk about gathering demographics.

Last week, I mentioned the AEA Guiding Principles in my TIMELY TOPIC post.  Those Principles, along with the Program Evaluation Standards, make significant contributions in assisting evaluators to make ethical decisions.  Evaluators make ethical decisions with every evaluation.  They are guided by these professional standards of conduct.  There are five Guiding Principles and five Evaluation Standards.  And although these are not prescriptive, they go a long way toward ensuring ethical evaluations.  That is a long introduction to gathering demographics.

The guiding principle Integrity/Honesty states that “Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process.”  When we look at the entire evaluation process, as evaluators we must strive constantly to maintain both personal and professional integrity in our decision making.  One decision we must make involves deciding what we need or want to know about our respondents.  As I’ve mentioned before, knowing what your sample looks like is important to reviewers, readers, and other stakeholders.  Yet, if we gather these data in a manner that is intrusive, are we being ethical?

Joe Heimlich, in a recent AEA365 post, says that asking demographic questions “…all carry with them ethical questions about use, need, confidentiality…”  He goes on to say that there are “…two major conditions shaping the decision to include – or to omit intentionally – questions on sexual or gender identity…”:

  1. When such data would further our understanding of the effect or the impact of a program, treatment, or event.
  2. When asking for such data would benefit the individual and/or their engagement in the evaluation process.

The first point relates to gender role issues–for example, are gay men more like or more different from other gender categories?  And what gender categories did you include in your survey?  The second point relates to allowing an individual’s voice to be heard clearly and completely and to having the categories on our forms reflect their full participation in the evaluation.  For example, does the marital status question ask about domestic partnerships as well as traditional categories, and are all those traditional categories necessary to hear your participants?

The next time you develop a questionnaire that includes demographic questions, take a second look at the wording–in an ethical manner.

Sure, you want to know the outcomes resulting from your program.  Sure, you want to know if your program is effective.  Perhaps you will even attempt to answer the question, “So what?” when your program is effective on some previously identified outcome.  All that is important.

My topic today is something that is often overlooked when developing an evaluation–the participant and program characteristics.

Do you know what your participants look like?

Do you know what your program looks like?

Knowing these characteristics may seem unimportant at the outset of the implementation.  As you get to the end, questions will arise–How many females?  How many Asians?  How many over 60?

Demographers typically ask demographic questions as part of the data collection.

Those questions often include the following categories:

  • Gender
  • Age
  • Race/ethnicity
  • Marital status
  • Household income
  • Educational level

Some of those may not be relevant to your program, and you may want to include other general characteristic questions instead.  For example, in a long-term evaluation of a forestry program where the target audience was individuals with wood lots, asking how many acres were owned was important and marital status did not seem relevant.

Sometimes asking some questions may seem intrusive–for example, household income or age.  In all demographic cases, giving the participant an option to not respond is appropriate.  When these data are reported, report the number of participants who chose not to respond.

When characterizing your program, it is sometimes important to know characteristics of the geographic area where the program is being implemented–rural, suburban, or urban.  This is especially true when the program is a multisite program.  Locale introduces an unanticipated variable that is often not recognized or remembered.

Any variation in the implementation (the number of contact hours, for example, or the number of training modules) should also be documented.  The type of intervention is important as well–was the program delivered as a group intervention or individually?  The time of the year that the program is implemented may also be important to document.  The time of the year may inadvertently introduce a history bias into the study–what is happening in September is different from what is happening in December.

Documenting these characteristics and then defining them when reporting the findings helps readers understand the circumstances surrounding the program implementation.  If the target audience is large, documenting these characteristics can also provide comparison groups–did males do something differently than females?  Did participants over 50 do something different than participants 49 or under?
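If those characteristics end up in a spreadsheet, a quick cross-tabulation makes the comparisons concrete.  Here is a hedged sketch using Python and pandas; the column names and records are invented, and note that “Prefer not to say” is kept and reported rather than quietly dropped.

```python
import pandas as pd

# Hypothetical participant records with demographic and outcome data.
df = pd.DataFrame({
    "gender": ["Female", "Male", "Female", "Prefer not to say", "Male", "Female"],
    "age_group": ["50+", "49 or under", "50+", "50+", "49 or under", "49 or under"],
    "adopted_practice": [True, False, True, True, True, False],
})

# Report how many chose not to answer the gender question.
declined = (df["gender"] == "Prefer not to say").sum()
print(declined, "participant(s) declined to answer the gender question")

# Did adoption of the practice differ by age group?
print(df.groupby("age_group")["adopted_practice"].mean())
```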

Keep in mind when collecting participant and program characteristic data, that these data help you and the audience to whom you disseminate the findings understand your outcomes and the effect of your program.

Last week I suggested a few evaluation-related resolutions…one I didn’t mention, which is easily accomplished, is reading and/or contributing to AEA365.  AEA365 is a daily evaluation blog sponsored by the American Evaluation Association.  AEA’s Newsletter says: “The aea365 Tip-a-Day Alerts are dedicated to highlighting Hot Tips, Cool Tricks, Rad Resources, and Lessons Learned by and for evaluators (see the aea365 site here). Begun on January 1, 2010, we’re kicking off our second year and hoping to expand the diversity of voices, perspectives, and content shared during the coming year. We’re seeking colleagues to write one-time contributions of 250-400 words from their own experience. No online writing experience is necessary – you simply review examples on the aea365 Tip-a-Day Alerts site, craft your entry according to the contribution guidelines, and send it to Michelle Baron, our blog coordinator. She’ll do a final edit and upload. If you have questions, or want to learn more, please review the site and then contact Michelle at aea365@eval.org. (updated December 2011)”

AEA365 is a valuable site.  I commend it to you.

Now the topic for today: data sources–the why and the why not (or the advantages and disadvantages of each source of information).

Ellen Taylor-Powell, Evaluation Specialist at UWEX, has a handout that identifies sources of evaluation data.  These sources are existing information, people, and pictorial records and observations.  Each source has advantages and disadvantages.

The source for the information below is the United Way publication, Measuring Program Outcomes (p. 86).

1.  Existing information such as Program Records are

  • Available
  • Accessible
  • Known sources and methods  of data collection

Program records can also

  • Be corrupt because of data collection methods
  • Have missing data
  • Omit post intervention impact data

2. Another form of existing information is Other Agency Records, which

  • Offer a different perspective
  • May contain impact data

Other agency records may also

  • Be corrupt because of data collection methods
  • Have missing data
  • May be unavailable as a data source
  • Have inconsistent time frames
  • Have case identification difficulties

3.  People are often the main data source and include Individuals and the General Public, and

  • Have unique perspective on experience
  • Are an original source of data
  • General public can provide information when individuals are not accessible
  • Can serve geographic areas or specific population segments

Individuals and the general public  may also

  • Introduce a self-report bias
  • Not be accessible
  • Have limited overall experience

4.  Observations and pictorial records include Trained Observers and Mechanical Measurements

  • Can provide information on behavioral skills and practices
  • Supplement self reports
  • Can be easily quantified and standardized

These sources of data also

  • Are only relevant to physical observation
  • Need data collectors who must be reliably trained
  • Often result in inconsistent data with multiple observers
  • Are affected by the accuracy of testing devices
  • Have limited applicability to outcome measurement

My older daughter (I have two–Morgan, the older, and Mersedes, the younger) suggested I talk about the evaluative activities around the holidays…hmmm.

Since I’m experiencing serious writers block this week, I thought I’d revisit evaluation as an everyday activity, with a holiday twist.

Keep in mind that the root of evaluation, which comes from the French after the Latin, is value (the Oxford English Dictionary online says: [a. Fr. évaluation, f. évaluer, f. é- = es- (: L. ex) out + value VALUE.]).


Perhaps this is a good time to mention that the theme for Evaluation 2011 put forth by incoming AEA President, Jennifer Greene, is Values and Valuing in Evaluation.  I want to quote from her invitation letter, “…evaluation is inherently imbued with values.  Our work as evaluators intrinsically involves the process of valuing, as our charge is to make judgments (emphasis original) about the “goodness” or the quality, merit, or worth of a program.”

Let us consider the holidays “a program”. The Winter Holiday season starts (at least in the US and the northern hemisphere) with the  Thanksgiving holiday followed shortly thereafter by the first Sunday in Advent.  Typically this period of time includes at least the  following holidays:  St. Nicholas Day, Hanukkah, Winter Solstice, Christmas, Kwanzaa, Boxing Day, New Year’s, and Epiphany (I’m sure there are ones I didn’t list that are relevant).  This list typically takes us through January 6.  (I’m getting to the value part–stay with me…)

When I was a child, I remember the eager expectation of anticipating Christmas–none of the other holidays were even on my radar screen.  (For those of you who know me, you know how long ago that was…)  Then with great expectation (thank you, Charles), I would go to bed and, as patiently as possible, await the moment when my father would turn on the tree lights, signaling that we children could descend to the living room.  Then poof!  That was Christmas.  In 10 minutes it was done.  The emotional bath I always took diminished greatly the value of this all-important holiday.  Vowing that my children would grow up without the emotional bath of great expectations and dashed hopes, I chose to Celebrate the Season.  In doing so, I found value in the waiting of Advent, the magic of Hanukkah, the sharing of Kwanzaa, the mystery of Christmas, and the traditions that come with all of these holidays.  There are other traditions that we revisit yearly, yet we find delight in remembering what the Winter Holiday traditions are and mean, remembering the foods we eat, and remembering the times we’ve shared.  From all this we find value in our program.  Do I still experience the emotional bath of childhood during this Holiday Season?  Not any more, and my children tell me that they like spreading the holidays out over the six-week period.

I think this is the time of the year when we can take a second look at our programs (whether they are the holidays, youth development, watershed stewardship, nutrition education, or something else) and look for value in our programs–the part of the program that matters.  Evaluation is the work of capturing that value.  How we do that is what evaluation is all about.