Recently, I was privileged to see the recommendations of William (Bill) Tierney on the top education blogs. (Tierney is the co-director of the Pullias Center for Higher Education at the University of Southern California.) He (among others) writes the blog 21st scholar. The recommendations actually come from his research assistant, Daniel Almeida. These are the recommendations:
What criteria were used? What criteria would you use? Some criteria that come to mind are interest, readability, length, and frequency. But I’m assuming that those would be your criteria (and you know what assuming does…).
If I’ve learned anything in my years as an evaluator, it is to make assumptions explicit. Everyone comes to the table with built-in biases (called cognitive biases). I call them personal and situational biases (I did my dissertation on those biases). So by making your assumptions explicit (and thereby avoiding personal and situational biases), you are building a rubric, because a rubric is developed from the criteria for a particular product, program, policy, etc.
How would you build your rubric? Many rubrics are in chart format, that is, columns and rows with the criteria detailed in the cells where they cross. That isn’t cast in stone. Given the different ways people view the world–linear, circular, webbed–there may be other formats; set yours up in whatever format works best for you. The only thing to keep in mind is to be specific.
Now, perhaps you are wondering how this relates to evaluation in the way I’ve been using evaluation. Keep in mind evaluation is an everyday activity. And every day, all day, you perform evaluations. Rubrics formalize the evaluations you conduct–by making the criteria explicit. Sometimes you internalize them; sometimes you write them down. If you need to remember what you did the last time you were in a similar situation, I would suggest you write them down. No, you won’t end up with lots of little sticky notes posted all over. Use your computer. Create a file. Develop criteria that are important to you. Typically, the criteria are in a table format, an x-by-x grid. If you are assigning numbers, you might want the rows to be the numbers (for example, 1-10) and the columns to be words that describe those numbers (for example, 1 = boring; 10 = stimulating and engaging). Rubrics are used in reviewing manuscripts, grading student papers, and assigning grades to activities as well as programs. Your format might look like this:
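(If your file lives on a computer rather than on a paper chart, here is one minimal sketch of such a grid in Python. Only the 1 = boring and 10 = stimulating anchors come from the example above; the middle label and the sample score are hypothetical placeholders, not a prescribed rubric.)

```python
# A minimal, hypothetical rubric kept in a file instead of on sticky notes.
# Keys are the scores (1-10); values describe what each score means.
rubric = {
    1: "boring",
    5: "adequate; holds my attention",   # placeholder middle anchor
    10: "stimulating and engaging",
}

def describe(score: int) -> str:
    """Return the label of the rubric anchor closest to a numeric score."""
    nearest = min(rubric, key=lambda level: abs(level - score))
    return f"{score}: {rubric[nearest]}"

print(describe(7))  # -> "7: adequate; holds my attention"
```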
Or it might not. What other configurations have you seen for rubrics? How would you develop your rubric? Or would you–perhaps you prefer a bunch of sticky notes? Let me know.
We are four months into 2013 and I keep asking the question “Is this blog making a difference?” I’ve asked for an analytic report to give me some answers. I’ve asked you readers for your stories.
Let’s hear it for SEOs and how they pick up that title–I credit that with the number of comments I’ve gotten. I AM surprised at the number of comments I have received since January (hundreds, literally). Most say things like, “of course it is making a difference.” Some compliment me on my writing style. Some are in a foreign language which I cannot read (I am illiterate when it comes to Cyrillic, Arabic, Greek, Chinese, and other non-English alphabets). Some are marketing–wanting pingbacks to their recently started blogs for some product. Some have commented specifically on the content (sample size and confidence intervals); some have commented on the time of year (vernal equinox). Occasionally, I get a comment like the one below, and I keep writing.
The questions of all questions… Do I make a difference? I like how you write and let me answer your question. Personally I was supposed to be dead ages ago because someone tried to kill me for the h… of it … Since then (I barely survived) I have asked myself the same question several times and every single time I answer with YES. Why? Because I noticed that whatever you do, there is always someone using what you say or do to improve their own life. So, I can answer the question for you: Do you make a difference? Yes, you do, because there will always be someone who uses your writings to do something positive with it. So, I hope I just made your day! And needless to say, keep the blog posts coming!
Enough update. New topic: I just got a copy of the third edition of Miles and Huberman (my go-to reference for qualitative data analysis). Wait, you say–Miles and Huberman are dead–yes, they are. Johnny Saldana (there needs to be a tilde above the “n” in his name, only I don’t know how to do that with this keyboard) was approached by Sage to be the third author and revise and update the book. A good thing, I think. Miles and Huberman’s second edition was published in 1994. That is almost 20 years. I’m eager to see if it will hold as a classic given that there are many other books on qualitative coding in press currently. (The spring research flyer from Guilford lists several on qualitative inquiry and analysis from some established authors.)
I also recently sat in on a research presentation by a candidate for a tenure-track position here at OSU who talked about how the analysis of qualitative data was accomplished. Took me back to when I was learning–index cards and sticky notes. Yes, there are marvelous software programs out there (NVivo, Ethnograph, NUD*IST); I will still support the argument that the best way to learn about your qualitative data is to immerse yourself in it with color-coded index cards and sticky notes. Then you can use the software to check your results. Keep in mind, though, that you are the PI and you will bring many biases to the analysis of your data.
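(For what it’s worth, even without a dedicated package, a few lines of code can serve as that check on your hand tallies. This is a minimal sketch under my own assumptions; the codes and excerpts are invented for illustration, not drawn from any real study.)

```python
from collections import Counter

# Hypothetical hand-coded excerpts: (code, excerpt) pairs transcribed
# from color-coded index cards.
coded_segments = [
    ("trust", "I felt the program staff listened to us."),
    ("access", "The bus route ends two miles from the clinic."),
    ("trust", "They followed through on what they promised."),
]

# Tally how often each code appears, to compare against the counts
# you reached by hand.
code_counts = Counter(code for code, _ in coded_segments)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```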
A rubric is a way to make criteria (or standards) explicit and it does that in writing so that there can be no misunderstanding. It is found in many evaluative activities especially assessment of classroom work. (Misunderstanding is still possible because the English language is often not clear–something I won’t get into today; suffice it to say that a wise woman said words are important–keep that in mind when crafting a rubric.)
This week there were many events that required rubrics. Rubrics may have been implicit; they certainly were not explicit. Explicit rubrics were needed.
I’ll start with apologies for the political nature of today’s post.
Certainly, an implicit rubric for this event can be found in this statement:
Only it was not used. When there are clear examples of inappropriate behavior, behavior that my daughters’ kindergarten teacher said was mean and not nice, a rubric exists. Simple rubrics are understood by five-year-olds (was that behavior mean OR was that behavior nice?). Obviously, 46 senators could only hear the NRA; they didn’t hear that the behavior (school shootings) was mean.
Boston provided us with another example of the mean vs. nice rubric. Bernstein got the concept of mean vs. nice.
There were lots of rubrics, however implicit, for that event. The NY Times reported that helpers (my word) ran TOWARD those in need not away from the site of the explosion (violence). There were many helpers. A rubric existed, however implicit.
I’m no longer worked up–just determined and for that I need a rubric. This image may not give me the answer; it does however give me pause.
For more information on assessment and rubrics see: Walvoord, B. E. (2004). Assessment clear and simple. San Francisco: Jossey-Bass.
Today is the first full day of spring…this morning when I biked to the office it rained (not unlike winter…) and it was cold (also, not unlike winter)…although I just looked out the window and it is sunny so maybe spring is really here. Certainly the foliage tells us it is spring–forsythia, flowering quince, ornamental plum trees; although the crocuses are spent, daffodils shine from front yards; tulips are in bud, and daphne–oh, the daphne–is in its glory.
I’ve already posted this week; next week is spring break at OSU and at the local high school. I won’t be posting. So I leave you with this thought: Evaluation is an everyday activity, one you and I do often without thinking; make evaluation systematic and think about the merit and worth. Stop and smell the flowers.
In a conversation with a colleague on the need for IRB when what was being conducted was evaluation not research, I was struck by two things:
Leaving number 1 for another time, number 2 is the topic of the day.
A while back, AEA365 did a post on the difference between evaluation and research (some of which is included below) from a graduate student’s perspective. Perhaps providing other resources would be valuable.
To have evaluation grouped with research is at worst a travesty; at best unfair. Yes, evaluation uses research tools and techniques. Yes, evaluation contributes to a larger body of knowledge (and in that sense seeks truth, albeit contextual). Yes, evaluation needs to have institutional review board documentation. So in many cases, people could be justified in saying evaluation and research are the same.
Carol Weiss (1927-2013; she died in January) wrote extensively on this difference and makes the distinction clearly. Weiss’s first edition of Evaluation Research was published in 1972. She revised this volume in 1998 and issued it under the title Evaluation. (Both have subtitles.)
She says that evaluation applies social science research methods and makes the case that it is the intent of the study that makes the difference between evaluation and research. She lists the following differences (pp. 15-17, 2nd ed.):
(For those of you who are still skeptical, she also lists similarities.) Understanding and knowing the difference between evaluation and research matters. I recommend her books.
Gisele Tchamba, who wrote the AEA365 post, says the following:
She also cites a Trochim definition that is worth keeping in mind as it captures the various unique qualities of evaluation. Carol Weiss mentioned them all in her list (above):
The US elections are over; the analysis is mostly done; the issues are still issues. Welcome, the next four years. As Dickens wrote, “It was the best of times, it was the worst of times.” Which? you ask–it all depends, and that is the evaluative question of the day.
So what do you need to know now? You need to help someone answer the question, Is it effective? OR (maybe) Did it make a difference?
The Canadian Evaluation Society, the Canadian counterpart to the American Evaluation Association, has put together a series (six so far) of pamphlets for new evaluators. This week, I’ve decided to go back to the beginning and promote evaluation as a profession.
Gene Shackman (no picture could be found) originally organized these brief pieces and is willing to share them. Gene is an applied sociologist and director of the Global Social Change Research Project. His first contribution was in December 2010; the most current, November 2012.
Hope these help.
Although this was CES’s fourth post (in July 2011), I believe it is something that evaluators, and those who woke up and found out they were evaluators, need before any of the other booklets. Even though there will probably be strange and unfamiliar words in the booklet, it provides a foundation. Every evaluator will know some of these words; some will be new; some will be context specific. Every evaluator needs to have a comprehensive glossary of terminology. The glossary was compiled originally by the International Development Evaluation Association. It is available for download in English, French, and Arabic and is 65 pages.
CES is also posting a series (five as of this post) that Gene Shackman put together. The first booklet, posted by CES in December 2010, is called “What is program evaluation?” and is a 17-page introduction to program evaluation. Shackman tells us that “this guide is available as a set of smaller pamphlets…” here.
In January, 2011, CES published the second of these booklets. Evaluation questions addresses the key questions about program evaluation and is three pages long.
CES posted the third booklet in April 2011. It is called “What methods to use” and can be found here. Shackman briefly discusses the benefits and limitations of qualitative and quantitative methods, the two main categories of methods for answering evaluation questions. A third approach that has gained credibility is mixed methods.
The next booklet, posted by CES in October 2012, is on surveys. It “…explains what they are, what they are usually used for, and what typical questions are asked… as well as the pros and cons of different sampling methods.”
The most recent booklet just posted (November, 2012) is about qualitative methods such as focus groups and interviews.
One characteristic of these five booklets is the additional resources that Shackman lists for each of the topics. I have my favorites (and I’ve mentioned them from time to time); those new to the field need to develop their own favorite sources.
What is important is that you embrace the options…this is only one way to look at evaluation.
I spent much of the last week thinking about what I would write on November 7, 2012.
Would I know anything before I went to bed? Would I like what I knew? Would I breathe a sigh of relief?
Yesterday is a good example that every day we evaluate. (What is the root of the word evaluation?) We review a program (in this case, the candidates); we determine the value (what they say they believe); we develop a rubric (criteria); we support those values and criteria; and we apply those criteria (vote). Yesterday over 117 million people did just that. Being a good evaluator, I can’t just talk about the respondents without talking about the total population–the total number of possible respondents. One estimate puts registered voters at 169 million: 86 million Democrats, 55 million Republicans, and 28 million others. The total response rate for this evaluation was 69.2%. Very impressive–especially given the long lines. (Something the President said needed fixing. [I guess he is an evaluator, too.])
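(A quick back-of-the-envelope check of that response rate, using the figures quoted above; these are estimates, not official counts.)

```python
# Rough turnout (response-rate) check using the estimates quoted above.
ballots_cast = 117_000_000
registered = 86_000_000 + 55_000_000 + 28_000_000  # 169,000,000 in total
response_rate = ballots_cast / registered
print(f"{response_rate:.1%}")  # prints "69.2%"
```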
I am reminded that Senators and Representatives are elected to represent the voice of the people. Their job is to represent you. If they do not fulfill that responsibility, it is our responsibility to do something about it. If you don’t hold them accountable, you can’t complain about the outcome. Another evaluative activity. (Did I ever tell you that evaluation is a political activity…?) Our job as evaluators doesn’t stop when we cast our ballot; our job continues throughout the life of the program (in this case, the term in office). Our job is to use those evaluation results to make things better. Often, use is ignored. Often, the follow-through is missing. As evaluators, we need to come full circle.
Evaluation is an everyday activity.
I’ve been going to this conference since 1981, when Bob Ingle decided that the Evaluation Research Society and the Evaluation Network needed to pool their resources and have one conference, Evaluation ’81. I was a graduate student. That conference changed my life. This was my professional home. I loved going and being there. I was energized; excited; delighted by what I learned, saw, and did.
Reflecting back over the 30+ years and all that has happened has provided me with insights and new awarenesses. This year was a bittersweet experience for me, for many reasons–not the least of them being Susan Kistler’s resignation from her role as AEA Executive Director. I remember meeting Susan and her daughter Emily in Chicago when Susan was in graduate school and Emily was three. Susan has helped make AEA what it is today. I will miss seeing her at the annual meeting. Because she lives on the east coast, I will rarely see her in person now. There are fewer and fewer long-time colleagues and friends at this meeting. And even though a very wise woman said to me, “Make younger friends,” making younger friends isn’t easy when you are an old person (aka OWG) like me and see these new folks only once a year.
I will probably continue going until my youngest daughter, now a junior in high school, finishes college. What I bring home is less this year than last, and less last year than the year before. It is the people, certainly. I also find that the content challenges me less and less. Not that the sessions are not interesting or well presented–they are. I’m just not excited; not energized when I get back to the office. To me, a conference is a “good” conference (ever the evaluator) if I met three new people with whom I want to maintain contact, spent time with three long-time friends/colleagues, and brought home three new ideas. This year: no new people; yes, three long-time friends; only one new idea. 4/9. I was delighted to hear that the younger folks were closer to 9/9. Maybe I’m jaded.
The professional development session I attended (From Metaphor to Model) provided me with a visual for conceptualizing a complex program I’ll be evaluating. The plenary I attended with Oren Hesterman from the Fair Food Network in Detroit demonstrated how evaluative tools and good questions support food sustainability. What I found interesting was that during the question/comment session following the plenary, all the questions/comments were about food sustainability, NOT evaluation, even though Ricardo Millett asked really targeted evaluative questions. Food sustainability seems to be a really important topic–talk about a complex, messy system. I also attended a couple of other sessions that really stood out and some that didn’t. Is attending this meeting important, even in my jaded view? Yes. It is how evaluators grow and change, even when change is not the goal. Yes. The only constant is change. AEA provides professional development in its pre- and post-conference sessions as well as in the plenary and concurrent sessions. Evaluators need that.
What is the difference between need to know and nice to know? How does this affect evaluation? I got a post this week on a blog I follow (Kirkpatrick) that asks how much data a trainer really needs. (Remember that Don Kirkpatrick developed and established an evaluation model for professional training back in 1954 that still holds today.)
Most Extension faculty don’t do training programs per se, although there are training elements in Extension programs. Extension faculty are typically looking for program impacts in their program evaluations. Program improvement evaluations, although necessary, are not sufficient. Yes, they provide important information to the program planner; they don’t necessarily give you information about how effective your program has been (i.e., outcome information). (You will note that I will use the term “impacts” interchangeably with “outcomes” because most Extension faculty parrot the language of reporting impacts.)
OK. So how much data do you really need? How do you determine what is nice to have and what is necessary (need) to have? How do you know?
Kirkpatrick also advises avoiding redundant questions: questions asked in a number of ways that give you the same answer, or questions written in both positive and negative forms. The other question that I always include, because it gives me a way to determine how my program is making a difference, is a question on intention that includes a time frame. For example, “In the next six months do you intend to try any of the skills you learned today? If so, which one?” Mazmanian has identified that the best predictor of behavior change (a measure of making a difference) is stated intention to change. Telling someone else makes the participant accountable. That seems to make the difference.
Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8).
P.S. No blog next week; away on business.
“Creativity is not an escape from disciplined thinking. It is an escape with disciplined thinking.” – Jerry Hirschberg – via @BarbaraOrmsby
The above quote was in the September 7 post of Harold Jarche’s blog. I think it has relevance to the work we do as evaluators. Certainly, there is a creative part to evaluation; certainly there is a disciplined thinking part to evaluation. Remembering that is sometimes a challenge.
So where in the process do we see creativity and where do we see disciplined thinking?
When evaluators construct a logic model, you see creativity; you also see disciplined thinking.
When evaluators develop an implementation plan, you see creativity; you also see disciplined thinking.
When evaluators develop a methodology and a method, you see creativity; you also see disciplined thinking.
When evaluators present the findings for use, you see creativity; you also see disciplined thinking.
So the next time you say “give me a survey for this program,” think: Is a survey the best approach to determining whether this program is effective? Will it really answer my questions?
Creativity and disciplined thinking are companions in evaluation.