In a conversation with a colleague about whether IRB review is needed when the work being conducted is evaluation, not research, I was struck by two things:

  1. I needed to discuss the protections provided by IRB  (the next timely topic??) and
  2. the difference between evaluation and research needed to be made clear.

Leaving number 1 for another time, number 2 is the topic of the day.

A while back, AEA365 did a post on the difference between evaluation and research (some of which is included below) from a graduate student's perspective. Perhaps providing other resources would be valuable.

To have evaluation grouped with research is at worst a travesty; at best unfair.  Yes, evaluation uses research tools and techniques.  Yes, evaluation contributes to a larger body of knowledge (and in that sense seeks truth, albeit contextual).  Yes, evaluation needs to have institutional review board documentation.  So in many cases, people could be justified in saying evaluation and research are the same.

NOT.

Carol Weiss (1927-2013; she died in January) has written extensively on this difference and makes the distinction clearly. Weiss's first edition of Evaluation Research was published in 1972. She revised this volume in 1998 and issued it under the title of Evaluation. (Both have subtitles.)

She says that evaluation applies social science research methods and makes the case that it is the intent of the study that makes the difference between evaluation and research. She lists the following differences (pp. 15-17, 2nd ed.):

  1. Utility;
  2. Program-driven questions;
  3. Judgmental quality;
  4. Action setting;
  5. Role Conflicts;
  6. Publication; and
  7. Allegiance.


(For those of you who are still skeptical, she also lists similarities.)  Understanding and knowing the difference between evaluation and research matters.  I recommend her books.

Gisele Tchamba, who wrote the AEA365 post, says the following:

  1. Know the difference.  I came to realize that practicing evaluation does not preclude doing pure research. On the contrary, the methods are interconnected but the aim is different (I think this mirrors Weiss’s concept of intent).
  2. The burden of explaining. Many people in academia have only a vague idea of what evaluation means. Those who think they know often mistake evaluation for assessment in education. Whenever I meet with people whose understanding of evaluation is limited to educational assessment, I use Scriven's definition and emphasize words like "value, merit, and worth".
  3. Distinguishing between evaluation and social science research.  Theoretical and practical experiences are helpful ways to distinguish between the two disciplines. Extensive reading of evaluation literature helps to see the difference.

She also cites a Trochim definition that is worth keeping in mind, as it captures the various unique qualities of evaluation. Carol Weiss mentioned them all in her list (above):

  •  “Evaluation is a profession that uses formal methodologies to provide useful empirical evidence about public entities (such as programs, products, performance) in decision making contexts that are inherently political and involve multiple often conflicting stakeholders, where resources are seldom sufficient, and where time-pressures are salient”.


What have you listed as your goal(s) for 2013?

How is that goal related to evaluation?

One study suggests that you're 10 times more likely to alter a behavior successfully (i.e., get rid of a "bad" behavior; adopt a "good" behavior) if you make a resolution than if you don't. That statement is evaluative; a good place to start. 10 times! Wow. Yet even that isn't a guarantee you will be successful.

How can you increase the likelihood that you will be successful?

  1. Set specific goals.  Break the big goal into small steps; tie those small steps to a time line.  You want to read how many pages by when?  Write it down.  Keep track.
  2. Make it public. Just like other intentions, if you tell someone about a goal, there is an increased likelihood you will complete it. I put mine in my quarterly reports to my supervisors.
  3. Substitute "good" for "less than desirable". I know how hard it is to write (for example). I have done this in the past and will do it again this year: schedule and protect a specified time to write those three articles that are sitting partly complete. I've substituted "10:00 on Wednesdays and Fridays" for the vague "when I have a block of time I'll get it done". The block of time never materializes.
  4. Keep track of progress. I mentioned it in number 1; I'll say it again: keep track; make a chart. I'm going to get those manuscripts done by X date…my chart will reflect that.

So are you going to

  1. Read something new to you (even if it is not new)?
  2. Write that manuscript from that presentation you made?
  3. Finish that manuscript you have started AND submit it for publication?
  4. Register for and watch a webinar on a topic you know little about?
  5. Explore a topic you find interesting?
  6. Something else?

Let me hear from you as to your resolutions; I’ll periodically give you an update.


And be grateful for the opportunity…gratitude is a powerful way to reinforce you and your goal setting.


What do I know that they don't know?
What do they know that I don't know?
What do all of us need to know that few of us know?

These three questions have buzzed around my head for a while in various formats.

When I attend a conference, I wonder.

When I conduct a program, I wonder, again.

When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.

Thinking about these questions, I had these ideas:

  • I see the first statement relating to capacity building;
  • The second statement  relating to engagement; and
  • The third statement (relating to statements one and two) relating to cultural competence.

After all, aren't both of these (capacity building and engagement) related to a "foreign country" and a different culture?

How does all this relate to evaluation?  Read on…

Premise: Evaluation is an everyday activity. You evaluate every day, all the time; you call it making decisions. Every time you make a decision, you are building capacity in your ability to evaluate. Sure, some of those decisions may need to be revised. Sure, some of those decisions may just yield "negative" results. Even so, you are building capacity. AND you share that knowledge–with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store. That is building capacity. Building capacity can be systematic, organized, sequential. Sometimes formal, scheduled, deliberate. It is sharing "What do I know that they don't know?" (in the hope that they, too, will know it and use it).

Premise: Everyone knows something. In knowing something, evaluation happens–because people made decisions about what is important and what is not. To really engage (not just do outreach, which is much of what Extension does), one needs to "do as" the group being engaged. To do anything else ("doing to" or "doing with") is simply outreach, and little or no knowledge is exchanged. That doesn't mean knowledge isn't distributed; Extension has been doing that for years. It just means that the assumption (and you know what assumptions do) is that only the expert can distribute knowledge. Who is to say that the group (target audience, participants) isn't expert in at least part of what is being communicated? It probably is. It is the idea that … they know something that I don't know (and I would benefit from knowing).

Premise: Everything and everyone is connected. Being prepared is the best way to learn something. Being prepared by understanding culture (I'm not talking only about the intersection of race and gender; I'm talking about all the stereotypes you carry with you all the time) reinforces connections. Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threat. And that is an evaluative task. Think about it. I think it captures the "What do all of us need to know that few of us know?" question.


The US elections are over; the analysis is mostly done; the issues are still issues. Welcome, the next four years. As Dickens said, it is the best of times; it is the worst of times. Which? you ask–it all depends, and that is the evaluative question of the day.

So what do you need to know now?  You need to help someone answer the question, Is it effective?  OR (maybe) Did it make a difference?

The Canadian Evaluation Society (CES), the Canadian counterpart to the American Evaluation Association, has put together a series (six so far) of pamphlets for new evaluators. This week, I've decided to go back to the beginning and promote evaluation as a profession.

Gene Shackman (no picture could be found) originally organized these brief pieces and is willing to share them.  Gene is an applied sociologist and director of the Global Social Change Research Project.  His first contribution was in December 2010; the most current, November 2012.

Hope these help.

Although this was the fourth CES post (July 2011), I believe it is something that evaluators, and those who woke up and found out they were evaluators, need before any of the other booklets. Even though there will probably be strange and unfamiliar words in the booklet, it provides a foundation. Every evaluator will know some of these words; some will be new; some will be context specific. Every evaluator needs to have a comprehensive glossary of terminology. The glossary was compiled originally by the International Development Evaluation Association. It is 65 pages and is available for download in English, French, and Arabic.

CES is also posting a series (five as of this post) that Gene Shackman put together. The first booklet, posted by CES in December 2010, is called "What is program evaluation?" and is a 17-page introduction to program evaluation. Shackman tells us that "this guide is available as a set of smaller pamphlets…" here.

In January 2011, CES published the second of these booklets. "Evaluation questions" addresses the key questions about program evaluation and is three pages long.

CES posted the third booklet in April 2011. It is called "What methods to use" and can be found here. Shackman briefly discusses the benefits and limitations of qualitative and quantitative methods, the two main categories of methods for answering evaluation questions. A third approach that has gained credibility is mixed methods.

The next booklet, posted by CES in October 2012, is on surveys. It "…explains what they are, what they are usually used for, and what typical questions are asked… as well as the pros and cons of different sampling methods."

The most recent booklet just posted (November, 2012) is about qualitative methods such as focus groups and interviews.

One characteristic of these five booklets is the additional resources that Shackman lists for each of the topics. I have my favorites (and I've mentioned them from time to time); those new to the field need to develop favorite sources.

What is important is that you embrace the options…this is  only one way to look at evaluation.


I spent much of the last week thinking about what I would write on November 7, 2012.

Would I know anything before I went to bed?  Would I like what I knew?  Would I breathe a sigh of relief?

Yes, yes, and yes, thankfully.  We are one nation and one people and the results of yesterday demonstrate that we are also evaluators.

Yesterday is a good example of how every day we evaluate. (What is the root of the word evaluation?) We review a program (in this case, the candidates); we determine the value (what they say they believe); we develop a rubric (criteria); we support those values and those criteria; and we apply those criteria (vote). Yesterday over 117 million people did just that. Being a good evaluator, I can't just talk about the respondents without talking about the total population–the total number of possible respondents. One guess estimates that 169 million people are registered to vote: 86 million Democrat, 55 million Republican, and 28 million others. The total response rate for this evaluation was 69.2%. Very impressive–especially given the long lines. (Something the President said needed fixing. [I guess he is an evaluator, too.])
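
That response rate is just the usual survey arithmetic: respondents divided by the total number of possible respondents. A minimal sketch of the calculation, using the rounded estimates cited above (all figures in millions):

    # Response rate = respondents / total possible respondents,
    # using the rounded registration estimates cited above (in millions).
    registered = 86 + 55 + 28   # Democrat + Republican + other = 169 million registered
    ballots_cast = 117          # roughly 117 million people voted

    response_rate = ballots_cast / registered
    print(f"Response rate: {response_rate:.1%}")   # prints ~69.2%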

I am reminded that Senators and Representatives are elected to represent the voice of the people.  Their job is to represent you.  If they do not fulfill that responsibility, it is our responsibility to do something about it.  If you don’t hold them accountable, you can’t complain about the outcome.  Another evaluative activity.  (Did I ever tell you that evaluation is a political activity…?)  Our job as evaluators doesn’t stop when we cast our ballot; our job continues throughout the life of the program (in this case, the term in office).  Our job is to use those evaluation results to make things better.  Often, use is ignored.  Often, the follow-through is missing.  As evaluators, we need to come full circle.

Evaluation is an everyday activity.


As with a lot of folks who are posting to Eval Central, I got back Monday from the TCs and AEA's annual conference, Evaluation '12.

I've been going to this conference since 1981, when Bob Ingle decided that the Evaluation Research Society and the Evaluation Network needed to pool their resources and have one conference, Evaluation '81. I was a graduate student. That conference changed my life. This was my professional home. I loved going and being there. I was energized, excited, and delighted by what I learned, saw, and did.

Reflecting back over the 30+ years and all that has happened has provided me with insights and new awarenesses. This year was a bittersweet experience for me, for many reasons–not the least of them being Susan Kistler's resignation from her role as AEA Executive Director. I remember meeting Susan and her daughter Emily in Chicago when Susan was in graduate school and Emily was three. Susan has helped make AEA what it is today. I will miss seeing her at the annual meeting. Because she lives on the east coast, I will rarely see her in person now. There are fewer and fewer long-time colleagues and friends at this meeting. A very wise woman once said to me, "Make younger friends." Making younger friends isn't easy when you are an old person (aka OWG) like me and see these new folks only once a year.

I will probably continue going until my youngest daughter, now a junior in high school, finishes college. What I bring home is less this year than last, and less last year than the year before. It is the people, certainly. I also find that the content challenges me less and less. Not that the sessions are not interesting or well presented–they are. I'm just not excited, not energized, when I get back to the office. To me a conference is a "good" conference (ever the evaluator) if I meet three new people with whom I want to maintain contact, spend time with three long-time friends/colleagues, and bring home three new ideas. This year: no new people; yes, three long-time friends; only one new idea. 4/9. I was delighted to hear that the younger folks were closer to 9/9. Maybe I'm jaded.

The professional development session I attended (From Metaphor to Model) provided me with a visual for conceptualizing a complex program I'll be evaluating. The plenary I attended with Oren Hesterman from the Fair Food Network in Detroit demonstrated how evaluative tools and good questions support food sustainability. What I found interesting was that during the question/comment session following the plenary, all the questions and comments were about food sustainability, NOT evaluation, even though Ricardo Millett asked really targeted evaluative questions. Food sustainability seems to be a really important topic–talk about a complex, messy system. I also attended a couple of other sessions that really stood out, and some that didn't. Is attending this meeting important, even in my jaded view? Yes. It is how evaluators grow and change, even when change is not the goal. The only constant is change. AEA provides professional development in its pre- and post-conference sessions as well as its plenary and concurrent sessions. Evaluators need that.


What is the difference between need to know and nice to know? How does this affect evaluation? I got a post this week on a blog I follow (Kirkpatrick) that asks how much data a trainer really needs. (Remember that Don Kirkpatrick developed and established an evaluation model for professional training back in 1954 that still holds today.)

Most Extension faculty don’t do training programs per se, although there are training elements in Extension programs.  Extension faculty are typically looking for program impacts in their program evaluations.  Program improvement evaluations, although necessary, are not sufficient.  Yes, they provide important information to the program planner; they don’t necessarily give you information about how effective your program has been (i.e., outcome information). (You will note that I will use the term “impacts” interchangeably with “outcomes” because most Extension faculty parrot the language of reporting impacts.)

OK.  So how much data do you really need?  How do you determine what is nice to have and what is necessary (need) to have?  How do you know?

  1. Look at your logic model.  Do you have questions that reflect what you expect to have happen as a result of your program?
  2. Review your goals.  Review your stated goals, not the goals you think will happen because you “know you have a good program”.
  3. Ask yourself, How will I USE these data? If the data will not be used to defend your program, you don't need them.
  4. Does the question describe your target audience?  Although not demonstrating impact, knowing what your target audience looks like is important.  Journal articles and professional presentations want to know this.
  5. Finally, ask yourself, Do I really need to know the answer to this question, or will it burden the participant? If it is a burden, your participants will tend not to answer, and then you have a low response rate, which is not something you want. (A small sketch applying this screen follows the list.)
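
To make the need-to-know screen concrete, here is a minimal, purely illustrative sketch; the question texts and the criteria flags are hypothetical, and the filter simply mirrors items 1, 3, and 5 above:

    # Hypothetical illustration of the "need to know vs. nice to know" screen.
    # Each candidate survey question is flagged against the criteria above.
    candidate_questions = [
        {"text": "In the next six months, do you intend to try any of the skills you learned today?",
         "reflects_logic_model": True, "data_will_be_used": True, "burdensome": False},
        {"text": "List every workshop you have attended in the last ten years.",
         "reflects_logic_model": False, "data_will_be_used": False, "burdensome": True},
    ]

    def need_to_know(question):
        # Keep a question only if it maps to the logic model (item 1),
        # the data will actually be used (item 3), and it does not
        # unduly burden the participant (item 5).
        return (question["reflects_logic_model"]
                and question["data_will_be_used"]
                and not question["burdensome"])

    keep = [q["text"] for q in candidate_questions if need_to_know(q)]
    print(keep)  # only the first, need-to-know question survives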

Kirkpatrick also advises avoiding redundant questions–that is, questions asked in a number of ways that give you the same answer, or questions written in both positive and negative forms. The other question I always include, because it gives me a way to determine whether my program is making a difference, is a question about intention that includes a time frame. For example, "In the next six months do you intend to try any of the skills you learned today? If so, which one?" Mazmanian has identified stated intention to change as the best predictor of behavior change (a measure of making a difference). Telling someone else makes the participant accountable. That seems to make the difference.


Reference:

Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8).


P.S.  No blog next week; away on business.


"Creativity is not an escape from disciplined thinking. It is an escape with disciplined thinking." – Jerry Hirschberg – via @BarbaraOrmsby

The above quote was in the September 7 post of Harold Jarche’s blog.  I think it has relevance to the work we do as evaluators.  Certainly, there is a creative part to evaluation; certainly there is a disciplined thinking part to evaluation.  Remembering that is sometimes a challenge.

So where in the process do we see creativity and where do we see disciplined thinking?

When evaluators construct a logic model, you see creativity; you also see disciplined thinking.

When evaluators develop an implementation plan, you see creativity; you also see disciplined thinking.

When evaluators develop a methodology and a method, you see creativity; you also see disciplined thinking.

When evaluators present the findings for use, you see creativity; you also see disciplined thinking.

So the next time you say "give me a survey for this program", think: Is a survey the best approach to determining whether this program is effective? Will it really answer my questions?

Creativity and disciplined thinking are companions in evaluation.


Bright ideas are often the result of  “Aha” moments.  Aha moments  are “The sudden understanding or grasp of a concept…an event that is typically rewarding and pleasurable.  Usually, the insights remain in our memory as lasting impressions.” — Senior News Editor for Psych Central.

How often have you had an “A-ha” moment when you are evaluating?  A colleague had one, maybe several, that made an impression on her.  Talk about building capacity–this did.  She has agreed to share that experience, soon (the bright idea).

Not only did it make an impression on her, her telling me made an impression on me.  I am once again reminded of how much I take evaluation for granted.  Because evaluation is an everyday activity, I often assume that people know what I’m talking about.  We all know what happens when we assume something….  I am also reminded how many people don’t know what I consider basic  evaluation information, like constructing a survey item (Got  Dillman on your shelf, yet?).


What is this symbol (√) called? No, it is not the square root sign–although that is its function. "It's called a radical…because it gets at the root…the definition of radical is: of or going to the root or origin." – Guy McPherson

How radical are you? How does that relate to evaluation, you wonder? Telling truth to power is a radical concept (the definition here is a departure from the usual or traditional), one to which evaluators who hold integrity sacrosanct adhere. (It is the third AEA guiding principle.) Evaluators often, if they are doing their job right, have to speak truth to power–because the program wasn't effective, or it resulted in something different than what was planned, or it cost too much to replicate, or it just didn't work out. Funders, supervisors, and program leaders need to know the truth as you found it.


"Those who seek to isolate will become isolated themselves." – Diederick Stoel. This sage piece of advice is the lead for Jim Kirkpatrick's quick tip for evaluating training activities. He says, "Attempting to isolate the impact of the formal training class at the start of the initiative is basically discounting and disrespecting the contributions of other factors…Instead of seeking to isolate the impact of your training, gather data on all of the factors that contributed to the success of the initiative, and give credit where credit is due. This way, your role is not simply to deliver training, but to create and orchestrate organizational success. This makes you a strategic business partner who contributes to your organization's competitive advantage and is therefore indispensable." Extension faculty conduct a lot of trainings and want to take credit for the training's effectiveness. It is important to recognize that there may be other factors at work–mitigating factors, intermediate factors, even confounding factors. As much as Extension faculty want to isolate (i.e., take credit), it is important to share the credit.


Yesterday was the 236th anniversary of US independence from England (and George III, in his infinite wisdom, is said to have said nothing important happened…right…oh, all right, how WOULD he have known anything had happened several thousand miles away?). And yes, I saw fireworks. More importantly, though, I thought a lot about what independence means. And then, because I'm posting here, what does independence mean for evaluation and evaluators?

In thinking about independence, I am reminded about intercultural communication and the contrast between individualism and collectivism.  To make this distinction clear, think “I- centered” vs. “We-centered”.  Think western Europe, US vs. Asia, Japan.  To me, individualism is reflective of independence and collectivism is reflective of networks, systems if you will.  When we talk about independence, the words “freedom” and “separate” and “unattached” are bandied about and that certainly applies to the anniversary celebrated yesterday.  Yet, when I contrast it with collectivism and think of the words that are often used in that context (“interdependence”, “group”, “collaboration”), I become aware of other concepts.

Like, what is missing when we are independent?  What have we lost being independent?  What are we avoiding by being independent?  Think “Little Red Hen”.  And conversely, what have we gained by being collective, by collaborating, by connecting?  Think “Spock and Good of the Many”.

There is in AEA a topical interest group (TIG) on Independent Consulting. This TIG is home to those evaluators who function outside of an institution and who have made their own organization; who work independently, on contract. In their mission statement, they purport to "Foster a community of independent evaluators…" So by being separate, are they missing community and therefore need to foster that aspect? They insist that they are "…great at networking", which doesn't sound very independent; it sounds almost collective. A small example, and probably not the best.

I think about the way the western world is today: other than your children and/or spouse/significant other, are you connected to a community? A network? A group? Not just in membership (like at a church or club), but really connected (like in extended family–whether of the heart or of the blood)? Although the Independent Consulting TIG members say they are great at networking, and some even work in groups, are they connected? (Social media doesn't count.) Is the "I" identity a product of being independent? It certainly is a characteristic of individualism. Can you measure the value, merit, or worth of the work you do by the level of independence you possess? Do internal evaluators garner all the benefits of being connected? (As an internal evaluator, I'm pretty independent, even though there is a critical mass of evaluators where I work.)

Although being an independent evaluator has its benefits (less bias, a different perspective, dare I say more objectivity?), are the distance created, the competition for position, and the risk taking worth losing the relational harmony that can accompany connected relationships? Is the US better off as its own country? I'd say probably. My musings only…what do you think?