We just celebrated Thanksgiving, a time in the US when citizens pause and reflect on the things for which we are thankful. Often those things are rooted in our values–education, voting, religion and belief systems, honesty, truth, peace. Thinking about them, I was reminded that the root word of evaluation is value…so I thought this would be a good time to share AEA’s values statement.

Are you familiar with AEA’s values statement? What do these values mean to you?

AEA’s Values Statement

The American Evaluation Association values excellence in evaluation practice, utilization of evaluation findings, and inclusion and diversity in the evaluation community.

i.  We value high quality, ethically defensible, culturally responsive evaluation practices that lead to effective and humane organizations and ultimately to the enhancement of the public good.

ii. We value high quality, ethically defensible, culturally responsive evaluation practices that contribute to decision-making processes, program improvement, and policy formulation.

iii. We value a global and international evaluation community and understanding of evaluation practices.

iv. We value the continual development of evaluation professionals and the development of evaluators from under-represented groups.

v. We value inclusiveness and diversity, welcoming members at any point in their career, from any context, and representing a range of thought and approaches.

vi. We value efficient, effective, responsive, transparent, and socially responsible association operations.

See AEA’s Mission, Vision, Values

Values enter into all aspects of evaluation–planning, implementing, analyzing, reporting, and use. Values are all around us. Have you taken a good look at your values lately? A review is always beneficial, informative, and insightful. I encourage it.

The US elections are over; the analysis is mostly done; the issues are still issues. Welcome to the next four years. As Dickens wrote, “It was the best of times, it was the worst of times.” Which is it? you ask. It all depends–and that is the evaluative question of the day.

So what do you need to know now? You need to help someone answer the question, “Is it effective?” or perhaps, “Did it make a difference?”

The Canadian Evaluation Society (CES), the Canadian counterpart to the American Evaluation Association, has put together a series (six posts so far) of pamphlets for new evaluators. This week, I’ve decided to go back to the beginning and promote evaluation as a profession.

Gene Shackman (no picture could be found) originally organized these brief pieces and is willing to share them. Gene is an applied sociologist and director of the Global Social Change Research Project. His first contribution was in December 2010; the most recent, in November 2012.

Hope these help.

Although this was CES’s fourth post (July 2011), I believe it is something that evaluators–and those who woke up and found out they were evaluators–need before any of the other booklets. Even though there will probably be strange and unfamiliar words in it, the glossary provides a foundation. Every evaluator will know some of these words; some will be new; some will be context-specific. Every evaluator needs a comprehensive glossary of terminology. This one was originally compiled by the International Development Evaluation Association and is available for download in English, French, and Arabic; it runs 65 pages.

CES is also posting the series (five booklets as of this post) that Gene Shackman put together. The first booklet, posted by CES in December 2010, is called “What is program evaluation?” and is a 17-page introduction to program evaluation. Shackman tells us that “this guide is available as a set of smaller pamphlets…” here.

In January 2011, CES published the second of these booklets. “Evaluation questions” addresses the key questions about program evaluation and is three pages long.

CES posted the third booklet in April 2011. It is called “What methods to use” and can be found here. Shackman briefly discusses the benefits and limitations of qualitative and quantitative methods, the two main approaches to answering evaluation questions. A third approach that has gained credibility is mixed methods.

The next booklet, posted by CES in October 2012, is on surveys. It “…explains what they are, what they are usually used for, and what typical questions are asked… as well as the pros and cons of different sampling methods.”

The most recent booklet, just posted (November 2012), is about qualitative methods such as focus groups and interviews.

One characteristic of these five booklets is the additional resources that Shackman lists for each of the topics. I have my favorites (and I’ve mentioned them from time to time); those new to the field need to develop their own favorite sources.

What is important is that you embrace the options…this is only one way to look at evaluation.

I spent much of the last week thinking about what I would write on November 7, 2012.

Would I know anything before I went to bed?  Would I like what I knew?  Would I breathe a sigh of relief?

Yes, yes, and yes, thankfully. We are one nation and one people, and yesterday’s results demonstrate that we are also evaluators.

Yesterday is a good example that every day we evaluate. (What is the root of the word evaluation?) We review a program (in this case, the candidates); we determine the value (what they say they believe); we develop a rubric (criteria); we support those values and those criteria; and we apply those criteria (we vote). Yesterday, over 117 million people did just that. Being a good evaluator, I can’t just talk about the respondents without talking about the total population–the total number of possible respondents. One estimate puts registered voters at 169 million: 86 million Democrats, 55 million Republicans, and 28 million others. The total response rate for this evaluation was 69.2%. Very impressive–especially given the long lines. (Something the President said needed fixing–I guess he is an evaluator, too.)
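Just to show the arithmetic (a minimal sketch in Python, using the rough estimates above rather than official counts):

```python
# Response-rate arithmetic for the 2012 election, treating registered
# voters as the total population and ballots cast as the respondents.
# The figures are the rough estimates quoted above, not official counts.
registered = 86_000_000 + 55_000_000 + 28_000_000  # Democrats + Republicans + others
ballots_cast = 117_000_000                          # approximate turnout

response_rate = ballots_cast / registered
print(f"Response rate: {response_rate:.1%}")        # -> Response rate: 69.2%
```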

I am reminded that Senators and Representatives are elected to represent the voice of the people. Their job is to represent you. If they do not fulfill that responsibility, it is our responsibility to do something about it; if we don’t hold them accountable, we can’t complain about the outcome. That is another evaluative activity. (Did I ever tell you that evaluation is a political activity…?) Our job as evaluators doesn’t stop when we cast our ballot; it continues throughout the life of the program (in this case, the term in office). Our job is to use those evaluation results to make things better. Often, use is ignored. Often, the follow-through is missing. As evaluators, we need to come full circle.

Evaluation is an everyday activity.

As with a lot of folks who are posting to Eval Central, I got back Monday from the Twin Cities and AEA’s annual conference, Evaluation ’12.

I’ve been going to this conference since 1981, when Bob Ingle decided that the Evaluation Research Society and the Evaluation Network needed to pool their resources and have one conference, Evaluation ’81. I was a graduate student. That conference changed my life. This was my professional home. I loved going and being there. I was energized, excited, and delighted by what I learned, saw, and did.

Reflecting back over the 30+ years and all that has happened has provided me with insights and new awarenesses. This year was a bittersweet experience for me, for many reasons–not the least of them being Susan Kistler’s resignation from her role as AEA Executive Director. I remember meeting Susan and her daughter Emily in Chicago when Susan was in graduate school and Emily was three. Susan has helped make AEA what it is today. I will miss seeing her at the annual meeting. Because she lives on the east coast, I will rarely see her in person now. There are fewer and fewer long-time colleagues and friends at this meeting. A very wise woman said to me, “Make younger friends.” But making younger friends isn’t easy when you are an old person (aka OWG) like me and see these new folks only once a year.

I will probably continue going until my youngest daughter, now a junior in high school, finishes college. What I bring home is less this year than last, and less last year than the year before. It is the people, certainly. I also find that the content challenges me less and less. Not that the sessions are not interesting or well presented–they are. I’m just not excited, not energized, when I get back to the office. To me, a conference is a “good” conference (ever the evaluator) if I met three new people with whom I want to maintain contact, spent time with three long-time friends/colleagues, and brought home three new ideas. This year: not three new people; yes, three long-time friends; only one new idea. That’s 4/9. I was delighted to hear that the younger folks were closer to 9/9. Maybe I’m jaded.

The professional development session I attended (From Metaphor to Model) provided me with a visual for conceptualizing a complex program I’ll be evaluating. The plenary I attended, with Oran Hesterman of the Fair Food Network in Detroit, demonstrated how evaluative tools and good questions support food sustainability. What I found interesting was that during the question/comment session following the plenary, all the questions and comments were about food sustainability, NOT evaluation, even though Ricardo Millett asked really targeted evaluative questions. Food sustainability seems to be a really important topic–talk about a complex, messy system. I also attended a couple of other sessions that really stood out, and some that didn’t. Is attending this meeting important, even in my jaded view? Yes. It is how evaluators grow and change, even when change is not the goal. The only constant is change. AEA provides professional development in its pre- and post-conference sessions as well as its plenary and concurrent sessions. Evaluators need that.