We are four months into 2013 and I keep asking the question “Is this blog making a difference?” I’ve asked for an analytic report to give me some answers. I’ve asked you readers for your stories.
Let’s hear it for the SEOs and how they picked up on that title–I credit them with the number of comments I’ve gotten. I AM surprised at the number of comments I have received since January (literally hundreds). Most say things like, “of course it is making a difference.” Some compliment me on my writing style. Some are in a language I cannot read (I am illiterate when it comes to Cyrillic, Arabic, Greek, Chinese, and other non-English scripts). Some are marketing–wanting pingbacks to their recently started product blogs. Some have commented specifically on the content (sample size and confidence intervals); some have commented on the time of year (vernal equinox). Occasionally, I get a comment like the one below, and I keep writing.
The questions of all questions… Do I make a difference? I like how you write and let me answer your question. Personally I was supposed to be dead ages ago because someone tried to kill me for the h… of it … Since then (I barely survived) I have asked myself the same question several times and every single time I answer with YES. Why? Because I noticed that whatever you do, there is always someone using what you say or do to improve their own life. So, I can answer the question for you: Do you make a difference? Yes, you do, because there will always be someone who uses your writings to do something positive with it. So, I hope I just made your day! And needless to say, keep the blog posts coming!
Enough update. New topic: I just got a copy of the third edition of Miles and Huberman (my go-to reference for qualitative data analysis). Wait, you say–Miles and Huberman are dead–yes, they are. Johnny Saldaña (there needs to be a tilde over the “n” in his name, and I finally figured out how to do that with this keyboard) was approached by Sage to be the third author and to revise and update the book. A good thing, I think. Miles and Huberman’s second edition was published in 1994. That is almost 20 years. I’m eager to see if it will hold as a classic, given that many other books on qualitative coding are currently in press. (The spring research flyer from Guilford lists several on qualitative inquiry and analysis from some established authors.)
I also recently sat in on a research presentation by a candidate for a tenure-track position here at OSU, who talked about how the analysis of qualitative data was accomplished. It took me back to when I was learning–index cards and sticky notes. Yes, there are marvelous software programs out there (NVivo, Ethnograph, NUD*IST); still, I will support the argument that the best way to learn about your qualitative data is to immerse yourself in it with color-coded index cards and sticky notes. Then you can use the software to check your results. Keep in mind, though, that you are the PI and you will bring many biases to the analysis of your data.
A rubric is a way to make criteria (or standards) explicit and it does that in writing so that there can be no misunderstanding. It is found in many evaluative activities especially assessment of classroom work. (Misunderstanding is still possible because the English language is often not clear–something I won’t get into today; suffice it to say that a wise woman said words are important–keep that in mind when crafting a rubric.)
This week there were many events that required rubrics. Rubrics may have been implicit; they certainly were not explicit. Explicit rubrics were needed.
I’ll start with apologies for the political nature of today’s post.
Certainly, an implicit rubric for this event can be found in this statement:
Only it was not used. When there are clear examples of inappropriate behavior–behavior that my daughters’ kindergarten teacher said was mean and not nice–a rubric exists. Simple rubrics are understood by five-year-olds (was that behavior mean OR was that behavior nice?). Obviously 46 senators could only hear the NRA; they didn’t hear that the behavior (school shootings) was mean.
Boston provided us with another example of the mean vs. nice rubric. Bernstein got the concept of mean vs. nice.
There were lots of rubrics, however implicit, for that event. The NY Times reported that helpers (my word) ran TOWARD those in need not away from the site of the explosion (violence). There were many helpers. A rubric existed, however implicit.
I’m no longer worked up–just determined and for that I need a rubric. This image may not give me the answer; it does however give me pause.
For more information on assessment and rubrics see: Walvoord, B. E. (2004). Assessment clear and simple. San Francisco: Jossey-Bass.
In a conversation with a colleague on the need for IRB when what was being conducted was evaluation not research, I was struck by two things:
Leaving number 1 for another time, number 2 is the topic of the day.
A while back, AEA365 did a post on the difference between evaluation and research (some of which is included below) from a graduate student’s perspective. Perhaps providing other resources would be valuable.
To have evaluation grouped with research is at worst a travesty; at best unfair. Yes, evaluation uses research tools and techniques. Yes, evaluation contributes to a larger body of knowledge (and in that sense seeks truth, albeit contextual). Yes, evaluation needs to have institutional review board documentation. So in many cases, people could be justified in saying evaluation and research are the same.
Carol Weiss (1927-2013; she died in January) wrote extensively on this difference and makes the distinction clearly. Weiss’s first edition of Evaluation Research was published in 1972. She revised this volume in 1998 and issued it under the title of Evaluation. (Both have subtitles.)
She says that evaluation applies social science research methods and makes the case that it is the intent of the study that makes the difference between evaluation and research. She lists the following differences (pp. 15-17, 2nd ed.):
(For those of you who are still skeptical, she also lists similarities.) Understanding and knowing the difference between evaluation and research matters. I recommend her books.
Gisele Tchamba who wrote the AEA365 post says the following:
She also cites a Trochim definition that is worth keeping in mind, as it captures the various unique qualities of evaluation. Carol Weiss mentioned them all in her list (above):
What have you listed as your goal(s) for 2013?
How is that goal related to evaluation?
One study suggests that you’re 10 times more likely to alter a behavior successfully (i.e., get rid of a “bad” behavior or adopt a “good” one) if you make a resolution than if you don’t. That statement is evaluative; a good place to start. Ten times! Wow. Yet even that isn’t a guarantee you will be successful.
How can you increase the likelihood that you will be successful?
So are you going to
And be grateful for the opportunity…gratitude is a powerful way to reinforce your goal setting.
These three questions have buzzed around my head for a while in various formats.
When I attend a conference, I wonder.
When I conduct a program, I wonder, again.
When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.
After all, aren’t both of these statements (capacity building and engagement) relating to a “foreign country” and a different culture?
How does all this relate to evaluation? Read on…
Premise: Evaluation is an everyday activity. You evaluate every day, all the time; you call it making decisions. Every time you make a decision, you are building capacity in your ability to evaluate. Sure, some of those decisions may need to be revised. Sure, some of those decisions may just yield “negative” results. Even so, you are building capacity. AND you share that knowledge–with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store. That is building capacity. Building capacity can be systematic, organized, sequential. Sometimes formal, scheduled, deliberate. It is sharing “What do I know that they don’t know?” (in the hope that they too will know it and use it).
Premise: Everyone knows something. In knowing something, evaluation happens–because people made decisions about what is important and what is not. To really engage (not just outreach, which is what much of Extension does), one needs to “do as” the group that is being engaged. To do anything else (“doing to” or “doing with”) is simply outreach, and little or no knowledge is exchanged. That doesn’t mean knowledge isn’t distributed; Extension has been doing that for years. It just means the assumption (and you know what assumptions do) is that only the expert can distribute knowledge. Who is to say that the group (target audience, participants) isn’t expert in at least part of what is being communicated? They probably are. It is the idea that they know something that I don’t know (and I would benefit from knowing).
Premise: Everything, and everyone, is connected. Being prepared is the best way to learn something. Being prepared by understanding culture (I’m not talking only about the intersection of race and gender; I’m talking about all the stereotypes you carry with you all the time) reinforces connections. Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threats. And that is an evaluative task. Think about it. I think it captures the “What do all of us need to know that few of us know?” question.
We just celebrated Thanksgiving, a time in the US when citizens pause and reflect on those things for which we are thankful. Often those things for which we are thankful are based in our values–things like education, voting, religion/belief systems, honesty, truth, peace. In thinking about those things, I was reminded that the root word of evaluation is value…I thought this would be a good time to share AEA’s values statement.
Are you familiar with AEA’s values statement? What do these values mean to you?
AEA’s Values Statement
The American Evaluation Association values excellence in evaluation practice, utilization of evaluation findings, and inclusion and diversity in the evaluation community.
i. We value high quality, ethically defensible, culturally responsive evaluation practices that lead to effective and humane organizations and ultimately to the enhancement of the public good.
ii. We value high quality, ethically defensible, culturally responsive evaluation practices that contribute to decision-making processes, program improvement, and policy formulation.
iii. We value a global and international evaluation community and understanding of evaluation practices.
iv. We value the continual development of evaluation professionals and the development of evaluators from under-represented groups.
v. We value inclusiveness and diversity, welcoming members at any point in their career, from any context, and representing a range of thought and approaches.
vi. We value efficient, effective, responsive, transparent, and socially responsible association operations.
Values enter into all aspects of evaluation–planning, implementing, analyzing, reporting, and use. Values are all around us. Have you taken a good look at your values lately? Review is always beneficial, informative, and insightful. I encourage it.
I spent much of the last week thinking about what I would write on November 7, 2012.
Would I know anything before I went to bed? Would I like what I knew? Would I breathe a sigh of relief?
Yesterday is a good example that every day we evaluate. (What is the root of the word evaluation?) We review a program (in this case, the candidates); we determine the value (what they say they believe); we develop a rubric (criteria); we support those values and that criteria; and we apply those criteria (we vote). Yesterday, over 117 million people did just that. Being a good evaluator, I can’t just talk about the respondents without talking about the total population–the total number of possible respondents. One estimate puts the number of registered voters at 169 million: 86 million Democrats, 55 million Republicans, and 28 million others. The total response rate for this evaluation was 69.2%. Very impressive–especially given the long lines. (Something the President said needed fixing. [I guess he is an evaluator, too.])
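The response-rate arithmetic above can be sketched in a few lines (the figures are the rough estimates quoted in this post, not official counts):

```python
# Response-rate arithmetic for the election-as-evaluation example.
# All figures are the approximate estimates quoted above, not official counts.
votes_cast = 117_000_000                             # "respondents" who voted
registered = 86_000_000 + 55_000_000 + 28_000_000    # Dem + Rep + other = 169 million

response_rate = votes_cast / registered
print(f"Response rate: {response_rate:.1%}")         # prints "Response rate: 69.2%"
```

The same ratio of respondents to possible respondents is what you would report for any survey-based evaluation.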
I am reminded that Senators and Representatives are elected to represent the voice of the people. Their job is to represent you. If they do not fulfill that responsibility, it is our responsibility to do something about it. If you don’t hold them accountable, you can’t complain about the outcome. Another evaluative activity. (Did I ever tell you that evaluation is a political activity…?) Our job as evaluators doesn’t stop when we cast our ballot; our job continues throughout the life of the program (in this case, the term in office). Our job is to use those evaluation results to make things better. Often, use is ignored. Often, the follow-through is missing. As evaluators, we need to come full circle.
Evaluation is an everyday activity.
I’ve been going to this conference since 1981 when Bob Ingle decided that the Evaluation Research Society and Evaluation Network needed to pool its resources and have one conference, Evaluation ’81. I was a graduate student. That conference changed my life. This was my professional home. I loved going and being there. I was energized; excited; delighted by what I learned, saw, and did.
Reflecting back over the 30+ years and all that has happened has provided me with insights and new awareness. This year was a bittersweet experience for me, for many reasons–not the least of them being Susan Kistler’s resignation from her role as AEA Executive Director. I remember meeting Susan and her daughter Emily in Chicago when Susan was in graduate school and Emily was three. Susan has helped make AEA what it is today. I will miss seeing her at the annual meeting. Because she lives on the east coast, I will rarely see her in person now. There are fewer and fewer long-time colleagues and friends at this meeting. A very wise woman once said to me, “Make younger friends.” But making younger friends isn’t easy when you are an old person (aka OWG) like me and see these new folks only once a year.
I will probably continue going until my youngest daughter, now a junior in high school, finishes college. What I bring home is less this year than last, and less last year than the year before. It is the people, certainly. I also find that the content challenges me less and less. Not that the sessions are not interesting or well presented–they are. I’m just not excited, not energized, when I get back to the office. To me, a conference is a “good” conference (ever the evaluator) if I met three new people with whom I want to maintain contact, spent time with three long-time friends/colleagues, and brought home three new ideas. This year: no new people; yes, three long-time friends; only one new idea. 4/9. I was delighted to hear that the younger folks were closer to 9/9. Maybe I’m jaded.
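That personal three-by-three scorecard is itself a small rubric. As a minimal sketch (the function name and the cap of three per criterion are my own framing, not from the post):

```python
# A minimal sketch of the personal "good conference" rubric described above:
# three new contacts, three long-time colleagues, three new ideas = 9 points.
def conference_score(new_people: int, old_friends: int, new_ideas: int) -> str:
    """Score a conference out of 9, capping each criterion at 3."""
    score = min(new_people, 3) + min(old_friends, 3) + min(new_ideas, 3)
    return f"{score}/9"

# This year's tally: no new people, three old friends, one new idea.
print(conference_score(0, 3, 1))   # prints "4/9"
```

Making the criteria explicit like this is exactly what distinguishes a rubric from a gut feeling.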
The professional development session I attended (From Metaphor to Model) provided me with a visual for conceptualizing a complex program I’ll be evaluating. The plenary I attended with Oren Hesterman from the Fair Food Network in Detroit demonstrated how evaluative tools and good questions support food sustainability. What I found interesting was that during the question/comment session following the plenary, all the questions/comments were about food sustainability, NOT evaluation, even though Ricardo Millett asked really targeted evaluative questions. Food sustainability seems to be a really important topic–talk about a complex, messy system. I also attended a couple of other sessions that really stood out and some that didn’t. Is attending this meeting important, even in my jaded view? Yes. It is how evaluators grow and change, even when change is not the goal. Yes. The only constant is change. AEA provides professional development in its pre- and post-conference sessions as well as in plenary and concurrent sessions. Evaluators need that.
What is the difference between need to know and nice to know? How does this affect evaluation? I read a post this week on a blog I follow (Kirkpatrick) that asks how much data a trainer really needs. (Remember that Don Kirkpatrick developed and established an evaluation model for professional training back in 1954 that still holds today.)
Most Extension faculty don’t do training programs per se, although there are training elements in Extension programs. Extension faculty are typically looking for program impacts in their program evaluations. Program improvement evaluations, although necessary, are not sufficient. Yes, they provide important information to the program planner; they don’t necessarily give you information about how effective your program has been (i.e., outcome information). (You will note that I will use the term “impacts” interchangeably with “outcomes” because most Extension faculty parrot the language of reporting impacts.)
OK. So how much data do you really need? How do you determine what is nice to have and what is necessary (need) to have? How do you know?
Kirkpatrick also advises avoiding redundant questions–that is, questions asked in several ways that give you the same answer, or questions written in both positive and negative forms. The other question I always include, because it gives me a way to determine whether my program is making a difference, is a question on intention that includes a time frame. For example, “In the next six months, do you intend to try any of the skills you learned today? If so, which one?” Mazmanian has identified stated intention to change as the best predictor of behavior change (a measure of making a difference). Telling someone else makes the participant accountable. That seems to make the difference.
Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8).
P.S. No blog next week; away on business.
“Creativity is not an escape from disciplined thinking. It is an escape with disciplined thinking.” – Jerry Hirschberg – via @BarbaraOrmsby
The above quote was in the September 7 post of Harold Jarche’s blog. I think it has relevance to the work we do as evaluators. Certainly, there is a creative part to evaluation; certainly there is a disciplined thinking part to evaluation. Remembering that is sometimes a challenge.
So where in the process do we see creativity and where do we see disciplined thinking?
When evaluators construct a logic model, you see creativity; you also see disciplined thinking.
When evaluators develop an implementation plan, you see creativity; you also see disciplined thinking.
When evaluators develop a methodology and a method, you see creativity; you also see disciplined thinking.
When evaluators present the findings for use, you see creativity; you also see disciplined thinking.
So the next time you say, “Give me a survey for this program,” think: Is a survey the best approach to determining if this program is effective? Will it really answer my questions?
Creativity and disciplined thinking are companions in evaluation.