What have you listed as your goal(s) for 2013?

How is that goal related to evaluation?

One study suggests that you’re 10 times more likely to alter a behavior successfully (i.e., get rid of a “bad” behavior or adopt a “good” one) than you would be if you didn’t make a resolution.  That statement is evaluative; a good place to start.  Ten times!  Wow.  Yet even that isn’t a guarantee you will be successful.

How can you increase the likelihood that you will be successful?

  1. Set specific goals.  Break the big goal into small steps; tie those small steps to a timeline.  You want to read how many pages by when?  Write it down.  Keep track.
  2. Make it public.  As with other intentions, if you tell someone about a goal, there is an increased likelihood you will complete it.  I put mine in my quarterly reports to my supervisors.
  3. Substitute “good” for “less than desirable”.  I know how hard it is to write (for example).  As I have in the past, this year I will again schedule and protect a specified time to write those three articles that are sitting partly complete.  I’ve substituted “10:00 on Wednesdays and Fridays” for the vague “when I have a block of time I’ll get it done”.  The block of time never materializes.
  4. Keep track of progress.  I mentioned it in number 1; I’ll say it again:  Keep track; make a chart.  I’m going to get those manuscripts done by X date…my chart will reflect that.

So are you going to

  1. Read something new to you (even if it is not new)?
  2. Write that manuscript from that presentation you made?
  3. Finish that manuscript you have started AND submit it for publication?
  4. Register for and watch a webinar on a topic you know little about?
  5. Explore a topic you find interesting?
  6. Something else?

Let me hear from you as to your resolutions; I’ll periodically give you an update.

 

And be grateful for the opportunity…gratitude is a powerful way to reinforce you and your goal setting.

 

Hanukkah ended last Saturday, two days after the December new moon.

Solstice happens Friday at 11:11 GMT (or the end of the world according to the Mayan calendar).

Christmas is next Tuesday.

Kwanzaa (a harvest festival that includes candles–light) starts the day after Christmas for seven days.

The twelfth day of Christmas occurs on January 6 (according to some western Christian calendars).

All of these holidays are festivals of light…Enjoy.

And may 2013 bring you health, wealth, happiness, and the time to enjoy them.

 

(I’ll be gone next week; hence, two posts this week.)

At the end of January, participants in an evaluation capacity building program I lead will provide highlights of the evaluations they completed for this program.  That the event happens to be in Tucson, and that I happen to be able to get out of the wet and dreary Northwest, is no accident.  The event will be the capstone of WECT (Western [Region] Evaluation Capacity Training–say ‘west’) participants’ evaluations of the past 17 months.  Since each participant will be presenting their program and the evaluation they did of it, there will be a lot of data (hopefully).  The participants and those data could use (or not) a new and innovative take on data visualization.  Susan Kistler, AEA’s Executive Director, has blogged in AEA365 several times about data visualization.  Perhaps these reposts will help.

 

Susan Kistler says • “Colleagues, I wanted to return to this ongoing discussion. At this year’s conference (Evaluation ’12), I did a presentation on 25 low-cost/no-cost tech tools for data visualization and reporting. An outline of the tools covered and the slides may be accessed via the related aea365 post here http://aea365.org/blog/?p=7491. If you download the slides, each tool includes a link to access it, cost information, and in most cases supplementary notes and examples as needed.

A couple of the new ones that were favorites included wallwisher and poll everywhere. I also have on my to do list to explore both datawrapper and amCharts over the holidays.

But…am returning to you all to ask if there is anything out there that just makes you do your happy dance in terms of new low-cost, no-cost tools for data visualization and/or reporting.”  (This is a genuine request–if there is something out there, let Susan know.  You can comment on the blog, contact her through AEA (susan@eval.org), or let me know and I’ll forward it.)

Susan also says in Saturday’s (December 15, 2012) blog (and this would be very timely for WECT participants):

Enroll in the Knight Center’s free Introduction to Infographics and Data Visualization: The course is online and free, and will be offered between January 12 and February 23. According to the course information, we’ll learn the basics of:

“How to analyze and critique infographics and visualizations in newspapers, books, TV, etc., and how to propose alternatives that would improve them.

How to plan for data-based storytelling through charts, maps, and diagrams.

How to design infographics and visualizations that are not just attractive but, above all, informative, deep, and accurate.

The rules of graphic design and of interaction design, applied to infographics and visualizations.

Optional: How to use Adobe Illustrator to create infographics.”

 

What do I know that they don’t know?
What do they know that I don’t know?
What do all of us need to know that few of us know?

These three questions have buzzed around my head for a while in various formats.

When I attend a conference, I wonder.

When I conduct a program, I wonder, again.

When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.

Thinking about these questions, I had these ideas:

  • I see the first question relating to capacity building;
  • the second question relating to engagement; and
  • the third question (relating to the first two) relating to cultural competence.

After all, aren’t both of these (capacity building and engagement) about entering a “foreign country” and a different culture?

How does all this relate to evaluation?  Read on…

Premise:  Evaluation is an everyday activity.  You evaluate every day, all the time; you call it making decisions.  Every time you make a decision, you are building capacity in your ability to evaluate.  Sure, some of those decisions may need to be revised.  Sure, some of those decisions may just yield “negative” results.  Even so, you are building capacity.  AND you share that knowledge–with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store.  That is building capacity.  Building capacity can be systematic, organized, sequential; sometimes formal, scheduled, deliberate.  It is sharing “what I know that they don’t know” in the hope that they too will know it and use it.

Premise:  Everyone knows something.  In knowing something, evaluation happens–because people made decisions about what is important and what is not.  To really engage (not just outreach, which is much of what Extension does), one needs to “do as” the group that is being engaged.  To do anything else (“doing to” or “doing with”) is simply outreach, and little or no knowledge is exchanged.  That doesn’t mean knowledge isn’t distributed; Extension has been doing that for years.  It just means the assumption (and you know what assumptions do) is that only the expert can distribute knowledge.  Who is to say that the group (target audience, participants) isn’t expert in at least part of what is being communicated?  It probably is.  It is the idea that they know something that I don’t know (and I would benefit from knowing).

Premise:  Everything, everyone is connected.  Being prepared is the best way to learn something.  Being prepared by understanding culture (I’m not talking only about the intersection of race and gender; I’m talking about all the stereotypes you carry with you all the time) reinforces connections.  Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threats.  And that is an evaluative task.  Think about it.  I think it captures the “What do all of us need to know that few of us know?” question.

Needs assessment is an evaluative activity: the first assessment a program developer must do to understand the gap between what is and what needs to be (what is desired).  Needs assessments are the evaluative activity in the Situation box of a linear logic model.

Sometimes, however, the target audience doesn’t know what they need to know, and that presents challenges for the program planner.  How do you capture a need when the target audience doesn’t know they need the (fill in the blank)?  That challenge is the stuff of other posts, however.

I had the good fortune to talk with Sam Angima, an Oregon Regional Administrator who has been charged with developing expertise in needs assessment.  Each Regional Administrator (there are 12) has been given a different charge, to which faculty can be referred.  We captured Sam’s insights in a conversational Aha! moment.  Let me know what you think.

We just celebrated Thanksgiving, a time in the US when citizens pause and reflect on those things for which we are thankful.  Often those things for which we are thankful are based in our values–things like education, voting, religion/belief systems, honesty, truth, peace.  In thinking about those things, I was reminded that the root word of evaluation is value…I thought this would be a good time to share AEA’s values statement.

 

Are you familiar with AEA’s values statement? What do these values mean to you?

 

AEA’s Values Statement

The American Evaluation Association values excellence in evaluation practice, utilization of evaluation findings, and inclusion and diversity in the evaluation community.

 

i.  We value high quality, ethically defensible, culturally responsive evaluation practices that lead to effective and humane organizations and ultimately to the enhancement of the public good.

ii. We value high quality, ethically defensible, culturally responsive evaluation practices that contribute to decision-making processes, program improvement, and policy formulation.

iii. We value a global and international evaluation community and understanding of evaluation practices.

iv. We value the continual development of evaluation professionals and the development of evaluators from under-represented groups.

v. We value inclusiveness and diversity, welcoming members at any point in their career, from any context, and representing a range of thought and approaches.

vi. We value efficient, effective, responsive, transparent, and socially responsible association operations.

 

See AEA’s Mission, Vision, Values

 

Values enter into all aspects of evaluation–planning, implementing, analyzing, reporting, and use.  Values are all around us.  Have you taken a good look at your values lately?  Review is always beneficial, informative, and insightful.  I encourage it.

The US elections are over; the analysis is mostly done; the issues are still issues.  Welcome, the next four years.  As Dickens said, it was the best of times; it was the worst of times.  Which? you ask–it all depends, and that is the evaluative question of the day.

So what do you need to know now?  You need to help someone answer the question, Is it effective?  OR (maybe) Did it make a difference?

The Canadian Evaluation Society, the Canadian counterpart to the American Evaluation Association, has put together a series (six so far) of pamphlets for new evaluators.  This week, I’ve decided to go back to the beginning and promote evaluation as a profession.

Gene Shackman (no picture could be found) originally organized these brief pieces and is willing to share them.  Gene is an applied sociologist and director of the Global Social Change Research Project.  His first contribution was in December 2010; the most current, November 2012.

Hope these help.

Although this was CES’s fourth post (in July 2011), I believe it is something that evaluators–and those who woke up and found out they were evaluators–need before any of the other booklets.  Even though there will probably be strange and unfamiliar words in the booklet, it provides a foundation.  Every evaluator will know some of these words; some will be new; some will be context specific.  Every evaluator needs to have a comprehensive glossary of terminology.  The glossary was compiled originally by the International Development Evaluation Association.  It is available for download in English, French, and Arabic and is 65 pages long.

CES is also posting a series (five as of this post) that Gene Shackman put together.  The first booklet, posted by CES in December 2010, is called “What is program evaluation?” and is a 17-page booklet introducing program evaluation.  Shackman tells us that “this guide is available as a set of smaller pamphlets…” here.

In January 2011, CES published the second of these booklets.  “Evaluation questions” addresses the key questions about program evaluation and is three pages long.

CES posted the third booklet in April 2011.  It is called “What methods to use” and can be found here.  Shackman briefly discusses the benefits and limitations of qualitative and quantitative methods, the two main approaches to answering evaluation questions.  A third approach that has gained credibility is mixed methods.

The next booklet, posted by CES in October 2012, is on surveys.  It “…explains what they are, what they are usually used for, and what typical questions are asked… as well as the pros and cons of different sampling methods.”

The most recent booklet just posted (November, 2012) is about qualitative methods such as focus groups and interviews.

One characteristic of these five booklets is the additional resources that Shackman lists for each of the topics.  I have my favorites (and I’ve mentioned them from time to time); those new to the field need to develop favorite sources.

What is important is that you embrace the options…this is only one way to look at evaluation.

I spent much of the last week thinking about what I would write on November 7, 2012.

Would I know anything before I went to bed?  Would I like what I knew?  Would I breathe a sigh of relief?

Yes, yes, and yes, thankfully.  We are one nation and one people and the results of yesterday demonstrate that we are also evaluators.

Yesterday is a good example that every day we evaluate.  (What is the root of the word evaluation?)  We review a program (in this case the candidates); we determine the value (what they say they believe); we develop a rubric (criteria); we support those values and those criteria; and we apply those criteria (vote).  Yesterday over 117 million people did just that.  Being a good evaluator, I can’t just talk about the respondents without talking about the total population–the total number of possible respondents.  One guess estimates that 169 million people are registered to vote: 86 million Democrats, 55 million Republicans, and 28 million others.  The total response rate for this evaluation was 69.2%.  Very impressive–especially given the long lines.  (Something the President said needed fixing [I guess he is an evaluator, too].)
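For readers who like to check the arithmetic, the response rate is simply respondents divided by possible respondents.  A quick sketch (using the post’s guess-estimate figures, not official tallies):

```python
# Response rate = respondents / total possible respondents,
# using the post's guess-estimate registration figures.
ballots_cast = 117_000_000           # "respondents" (ballots cast)
registered = 86e6 + 55e6 + 28e6      # Democrats + Republicans + others = 169 million

response_rate = ballots_cast / registered
print(f"Response rate: {response_rate:.1%}")  # prints "Response rate: 69.2%"
```

Any survey methodologist would be pleased with a response rate like that.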

I am reminded that Senators and Representatives are elected to represent the voice of the people.  Their job is to represent you.  If they do not fulfill that responsibility, it is our responsibility to do something about it.  If you don’t hold them accountable, you can’t complain about the outcome.  Another evaluative activity.  (Did I ever tell you that evaluation is a political activity…?)  Our job as evaluators doesn’t stop when we cast our ballot; our job continues throughout the life of the program (in this case, the term in office).  Our job is to use those evaluation results to make things better.  Often, use is ignored.  Often, the follow-through is missing.  As evaluators, we need to come full circle.

Evaluation is an everyday activity.

As with a lot of folks who are posting to Eval Central, I got back Monday from the TCs and AEA’s annual conference, Evaluation ’12.

I’ve been going to this conference since 1981, when Bob Ingle decided that the Evaluation Research Society and the Evaluation Network needed to pool their resources and have one conference, Evaluation ’81.  I was a graduate student.  That conference changed my life.  This was my professional home.  I loved going and being there.  I was energized, excited, delighted by what I learned, saw, and did.

Reflecting back over the 30+ years and all that has happened has provided me with insights and new awarenesses.  This year was a bittersweet experience for me, for many reasons–not the least of them being Susan Kistler’s resignation from her role as AEA Executive Director.  I remember meeting Susan and her daughter Emily in Chicago when Susan was in graduate school and Emily was three.  Susan has helped make AEA what it is today.  I will miss seeing her at the annual meeting.  Because she lives on the east coast, I will rarely see her in person now.  There are fewer and fewer long-time colleagues and friends at this meeting.  And even though a very wise woman said to me, “Make younger friends,” making younger friends isn’t easy when you are an old person (aka OWG) like me and see these new folks only once a year.

I will probably continue going until my youngest daughter, now a junior in high school, finishes college. What I bring home is less this year than last; and less last year than the year before.  It is the people, certainly. I also find that the content challenges me less and less.  Not that the sessions are not interesting or well presented–they are.  I’m just not excited; not energized when I get back to the office. To me a conference is a “good” conference (ever the evaluator) if I met three new people with whom I wanted to maintain contact; spent time with three long time friends/colleagues; and brought home three new ideas. This year, not three new people; yes three long time friends; only one new idea.  4/9. I was delighted to hear that the younger folks were closer to the 9/9. Maybe I’m jaded.

The professional development session I attended (From Metaphor to Model) provided me with a visual for conceptualizing a complex program I’ll be evaluating.  The plenary I attended with Oren Hesterman from the Fair Food Network in Detroit demonstrated how evaluative tools and good questions support food sustainability.  What I found interesting was that during the question/comment session following the plenary, all the questions/comments were about food sustainability, NOT evaluation, even though Ricardo Millett asked really targeted evaluative questions.  Food sustainability seems to be a really important topic–talk about a complex, messy system.  I also attended a couple of other sessions that really stood out, and some that didn’t.  Is attending this meeting important, even in my jaded view?  Yes.  It is how evaluators grow and change, even when change is not the goal.  Yes.  The only constant is change.  AEA provides professional development in its pre- and post-conference sessions as well as its plenary and concurrent sessions.  Evaluators need that.

The topic of survey development seems to be popping up everywhere–AEA365, Kirkpatrick Partners, and the eXtension Evaluation Community of Practice, among others.  Because survey development is so important to Extension faculty, I’m providing links and summaries.

 

AEA365 says:

“…it is critical that you pre-test it with a small sample first.”  Real-time testing helps eliminate confusion, improves clarity, and ensures that you are asking a question that will give you an answer to what you want to know.  This is so important today, when many surveys are electronic.

It is also important to “Train your data collection staff…Data collection staff are the front line in the research process.”  Since they are the people who will be collecting the data, they need to understand the protocols, the rationales, and the purposes of the survey.

Kirkpatrick Partners say:

“Survey questions are frequently impossible to answer accurately because they actually ask more than one question.”  This is the biggest problem in constructing survey questions.  They provide some examples of asking more than one question.

 

Michael W. Duttweiler, Assistant Director for Program Development and Accountability at Cornell Cooperative Extension stresses the four phases of survey construction:

  1. Developing a Precise Evaluation Purpose Statement and Evaluation Questions
  2. Identifying and Refining Survey Questions
  3. Applying Golden Rules for Instrument Design
  4. Testing, Monitoring and Revising

He then indicates that the next three blog posts will cover points 2, 3, and 4.

Probably my favorite recent post on surveys was one that Jane Davidson did back in August 2012 about survey response scales.  Her “boxers or briefs” example captures so many issues related to survey development.

Writing survey questions that give you usable data answering your questions about your program is a challenge; it is not impossible.  Dillman wrote the book on surveys; it should be on your desk.

Here is the Dillman citation:
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009).  Internet, mail, and mixed-mode surveys: The tailored design method.  Hoboken, NJ: John Wiley & Sons, Inc.