A rubric is a way to make criteria (or standards) explicit, and it does that in writing so that there can be no misunderstanding. Rubrics are found in many evaluative activities, especially the assessment of classroom work. (Misunderstanding is still possible because the English language is often not clear–something I won’t get into today; suffice it to say that a wise woman said words are important–keep that in mind when crafting a rubric.)
This week there were many events that required rubrics. Rubrics may have been implicit; they certainly were not explicit. Explicit rubrics were needed.
I’ll start with apologies for the political nature of today’s post.
Certainly, an implicit rubric for this event can be found in this statement:
Only it was not used. When there are clear examples of inappropriate behavior–behavior that my daughters’ kindergarten teacher said was mean and not nice–a rubric exists. Simple rubrics are understood by five-year-olds (was that behavior mean OR was that behavior nice?). Obviously 46 senators could only hear the NRA; they didn’t hear that the behavior (school shootings) was mean.
Boston provided us with another example of the mean vs. nice rubric. Bernstein got the concept of mean vs. nice.
There were lots of rubrics, however implicit, for that event. The NY Times reported that helpers (my word) ran TOWARD those in need, not away from the site of the explosion (violence). There were many helpers. A rubric existed, however implicit.
I’m no longer worked up–just determined, and for that I need a rubric. This image may not give me the answer; it does, however, give me pause.
For more information on assessment and rubrics see: Walvoord, B. E. (2004). Assessment clear and simple. San Francisco: Jossey-Bass.
Today’s post is longer than usual. I think it is important because it captures an aspect of data analysis and evaluation use that many of us skip right over: how to present findings using the tools that are available. Let me know if this works for you.
Ann Emery blogs at Emery Evaluation. She challenged readers a couple of weeks ago to reproduce a bubble chart in either Excel or R. This week she posted the answer. She has given me permission to share that information with you. You can look at the complete post at Dataviz Copycat Challenge: The Answers.
I’ve also copied it here in a shortened format:
“Here’s my how-to guide. At the bottom of this blog post, you can download an Excel file that contains each of the submissions. We each used a slightly different approach, so I encourage you to study the file and see how we manipulated Excel in different ways.
Here’s that chart from page 7 of the State of Evaluation 2012 report. We want to see whether we can re-create the chart in the lower right corner. The visualization uses circles, which means we’re going to create a bubble chart in Excel.
To fool Excel into making circles, we create a bubble chart. Click here for a Microsoft Office tutorial. According to the tutorial, “A bubble chart is a variation of a scatter chart in which the data points are replaced with bubbles. A bubble chart can be used instead of a scatter chart if your data has three data series.”
We’re not creating a true scatter plot or bubble chart because we’re not showing correlations between any variables. Instead, we’re just using the foundation of the bubble chart design – the circles. But, we still need to envision our chart on an x-y axis in order to make the circles.
It helps to sketch this part by hand. I printed page 7 of the report and drew my x and y axes right on top of the chart. For example, 79% of large nonprofit organizations reported that they compile statistics. This bubble would get an x-value of 3 and a y-value of 5.
I didn’t use sequential numbering on my axes. In other words, you’ll notice that my y-axis has values of 1, 3, and 5 instead of 1, 2, and 3. I learned that the formatting seemed to look better when I had a little more space between my bubbles.
Open a new Excel file and start typing in your values. For example, we know that 79% of large nonprofit organizations reported that they compile statistics. This bubble has an x-value of 3, a y-value of 5, and a bubble size of 79%.
Go slowly. Check your work. If you make a typo in this step, your chart will get all wonky.
Highlight the three columns on the right – the x column, the y column, and the frequency column. Don’t highlight the headers themselves (x, y, and bubble size). Click on the “Insert” tab at the top of the screen. Click on “Other Charts” and select a “Bubble Chart.”
First, add the basic data labels. Right-click on one of the bubbles. A drop-down menu will appear. Select “Add Data Labels.” You’ll get something that looks like this:
Second, adjust the data labels. Right-click on one of the data labels (not on the bubble). A drop-down menu will appear. Select “Format Data Labels.” A pop-up screen will appear. You need to adjust two things. Under “Label Contains,” select “Bubble Size.” (The default setting on my computer is “Y Value.”) Next, under “Label Position,” select “Center.” (The default setting on my computer is “Right.”)
Your basic bubble chart is finished! Now, you just need to fiddle with the formatting. This is easier said than done, and probably takes the longest out of all the steps.
Here’s how I formatted my bubble chart:
For more details about formatting charts, check out these tutorials.
Click here to download the Excel file that I used to create this bubble chart. Please explore the chart by right-clicking to see how the various components were made. You’ll notice a lot of text boxes on top of each other!”
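If you would rather script the chart than wrangle Excel (the original challenge also allowed R), here is a minimal sketch of the same idea in Python with matplotlib. This is illustrative, not Ann’s solution: only the 79% bubble for large organizations compiling statistics (x = 3, y = 5) comes from the guide; the other values and the row/column labels are placeholders.

```python
# Minimal sketch of the bubble-chart idea from the guide, in Python/matplotlib.
# Placeholder data throughout, except the (x=3, y=5, 79%) bubble from the post.
import matplotlib.pyplot as plt

cols = {"Small": 1, "Medium": 2, "Large": 3}           # x-values (sequential)
rows = {"Compile statistics": 5, "Feedback forms": 3,  # y-values spaced 1, 3, 5
        "Interviews": 1}                               # for breathing room between bubbles

# Percent of organizations reporting each practice (hypothetical numbers,
# except Large / "Compile statistics" = 79% from the report).
data = {
    ("Compile statistics", "Small"): 60, ("Compile statistics", "Medium"): 70,
    ("Compile statistics", "Large"): 79,
    ("Feedback forms", "Small"): 55, ("Feedback forms", "Medium"): 65,
    ("Feedback forms", "Large"): 72,
    ("Interviews", "Small"): 30, ("Interviews", "Medium"): 40,
    ("Interviews", "Large"): 50,
}

fig, ax = plt.subplots(figsize=(6, 5))
for (row, col), pct in data.items():
    x, y = cols[col], rows[row]
    ax.scatter(x, y, s=pct * 30, alpha=0.6, color="steelblue")  # bubble area driven by the percentage
    ax.annotate(f"{pct}%", (x, y), ha="center", va="center")    # centered label, like "Label Position: Center"

ax.set_xticks(list(cols.values()), labels=list(cols.keys()))
ax.set_yticks(list(rows.values()), labels=list(rows.keys()))
ax.set_xlim(0.5, 3.5)
ax.set_ylim(0, 6)
ax.set_title("Evaluation practices by organization size (illustrative)")
plt.show()
```

The heavy lifting is the same as in Excel: place circles on an invisible x-y grid and let a third column drive the bubble size.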
One of the expectations for the evaluation capacity building program that just finished is that the program findings will be written up for publication in scientific journals.
Easy to say. Hard to do.
Writing is HARD.
To that end, I’m going to dig out my old notes from when I taught technical writing to graduate students, medical students, residents, and young faculty and give a few highlights.
These three questions have buzzed around my head for a while in various formats.
When I attend a conference, I wonder.
When I conduct a program, I wonder, again.
When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.
After all, don’t both of these statements (capacity building and engagement) relate to a “foreign country” and a different culture?
How does all this relate to evaluation? Read on…
Premise: Evaluation is an everyday activity. You evaluate every day, all the time; you call it making decisions. Every time you make a decision, you are building capacity in your ability to evaluate. Sure, some of those decisions may need to be revised. Sure, some of those decisions may just yield “negative” results. Even so, you are building capacity. AND you share that knowledge–with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store. That is building capacity. Building capacity can be systematic, organized, sequential. Sometimes formal, scheduled, deliberate. It is sharing “What do I know that they don’t know?” (in the hope that they too will know it and use it).
Premise: Everyone knows something. In knowing something, evaluation happens–because people made decisions about what is important and what is not. To really engage (not just outreach, which is much of what Extension does), one needs to “do as” the group that is being engaged. To do anything else (“doing to” or “doing with”) is simply outreach, and little or no knowledge is exchanged. That doesn’t mean knowledge isn’t distributed; Extension has been doing that for years. It just means the assumption (and you know what assumptions do) is that only the expert can distribute knowledge. Who is to say that the group (target audience, participants) isn’t expert in at least part of what is being communicated? It probably is. It is the idea that … they know something that I don’t know (and I would benefit from knowing).
Premise: Everything, everyone is connected. Being prepared is the best way to learn something. Being prepared by understanding culture (I’m not talking only about the intersection of race and gender; I’m talking about all the stereotypes you carry with you all the time) reinforces connections. Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threat. And that is an evaluative task. Think about it. I think it captures the “What do all of us need to know that few of us know?” question.
The US elections are over; the analysis is mostly done; the issues are still issues. Welcome, the next four years. To paraphrase Dickens, it is the best of times; it is the worst of times. Which? you ask–it all depends, and that is the evaluative question of the day.
So what do you need to know now? You need to help someone answer the question, “Is it effective?” OR (maybe) “Did it make a difference?”
The Canadian Evaluation Society, the Canadian counterpart to the American Evaluation Association, has put together a series (six so far) of pamphlets for new evaluators. This week, I’ve decided to go back to the beginning and promote evaluation as a profession.
Gene Shackman (no picture could be found) originally organized these brief pieces and is willing to share them. Gene is an applied sociologist and director of the Global Social Change Research Project. His first contribution was in December 2010; the most current, November 2012.
Hope these help.
Although this was the fourth CES post (in July, 2011), I believe it is something that evaluators, and those who woke up and found out they were evaluators, need before any of the other booklets. Even though there will probably be strange and unfamiliar words in the booklet, it provides a foundation. Every evaluator will know some of these words; some will be new; some will be context specific. Every evaluator needs to have a comprehensive glossary of terminology. The glossary was compiled originally by the International Development Evaluation Association. It is available for download in English, French, and Arabic and is 65 pages.
CES is also posting a series (five as of this post) that Gene Shackman put together. The first, posted by CES in December, 2010, is called “What is program evaluation?”–a 17-page booklet introducing program evaluation. Shackman tells us that “this guide is available as a set of smaller pamphlets…” here.
In January, 2011, CES published the second of these booklets. “Evaluation questions” addresses the key questions about program evaluation and is three pages long.
CES posted the third booklet in April, 2011. It is called “What methods to use” and can be found here. Shackman briefly discusses the benefits and limitations of qualitative and quantitative methods, the two main approaches to answering evaluation questions. A third approach that has gained credibility is mixed methods.
The next booklet, posted by CES in October 2012, is on surveys. It “…explains what they are, what they are usually used for, and what typical questions are asked… as well as the pros and cons of different sampling methods.”
The most recent booklet just posted (November, 2012) is about qualitative methods such as focus groups and interviews.
One characteristic of these five booklets is the additional resources that Shackman lists for each of the topics. I have my favorites (and I’ve mentioned them from time to time); those new to the field need to develop favorite sources.
What is important is that you embrace the options…this is only one way to look at evaluation.
I spent much of the last week thinking about what I would write on November 7, 2012.
Would I know anything before I went to bed? Would I like what I knew? Would I breathe a sigh of relief?
Yesterday is a good example that every day we evaluate. (What is the root of the word evaluation?) We review a program (in this case the candidates); we determine the value (what they say they believe); we develop a rubric (criteria); we support those values and that criteria; and we apply those criteria (vote). Yesterday over 117 million people did just that. Being a good evaluator, I can’t just talk about the respondents without talking about the total population–the total number of possible respondents. One guess estimates that 169 million people are registered to vote: 86 million Democrats, 55 million Republicans, and 28 million others. The total response rate for this evaluation was 69.2%. Very impressive–especially given the long lines. (Something the President said needed fixing. [I guess he is an evaluator, too.])
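For the record, the arithmetic behind that 69.2% is just everyday response-rate math–respondents divided by possible respondents. A quick sketch using the rounded estimates above:

```python
# Response rate = respondents / total possible respondents,
# using the rounded estimates quoted above.
ballots_cast = 117_000_000   # people who voted
registered = 169_000_000     # one estimate of registered voters
print(f"Response rate: {ballots_cast / registered:.1%}")  # -> 69.2%
```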
I am reminded that Senators and Representatives are elected to represent the voice of the people. Their job is to represent you. If they do not fulfill that responsibility, it is our responsibility to do something about it. If you don’t hold them accountable, you can’t complain about the outcome. Another evaluative activity. (Did I ever tell you that evaluation is a political activity…?) Our job as evaluators doesn’t stop when we cast our ballot; our job continues throughout the life of the program (in this case, the term in office). Our job is to use those evaluation results to make things better. Often, use is ignored. Often, the follow-through is missing. As evaluators, we need to come full circle.
Evaluation is an everyday activity.
I’ve been going to this conference since 1981, when Bob Ingle decided that the Evaluation Research Society and the Evaluation Network needed to pool their resources and have one conference, Evaluation ’81. I was a graduate student. That conference changed my life. This was my professional home. I loved going and being there. I was energized; excited; delighted by what I learned, saw, and did.
Reflecting back over the 30+ years and all that has happened has provided me with insights and new awarenesses. This year was a bittersweet experience for me, for many reasons–not the least of them being Susan Kistler’s resignation from her role as AEA Executive Director. I remember meeting Susan and her daughter Emily in Chicago when Susan was in graduate school and Emily was three. Susan has helped make AEA what it is today. I will miss seeing her at the annual meeting. Because she lives on the east coast, I will rarely see her in person now. There are fewer and fewer long-time colleagues and friends at this meeting. And even though a very wise woman said to me, “Make younger friends,” making younger friends isn’t easy when you are an old person (aka OWG) like me who sees these new folks only once a year.
I will probably continue going until my youngest daughter, now a junior in high school, finishes college. What I bring home is less this year than last, and less last year than the year before. It is the people, certainly. I also find that the content challenges me less and less. Not that the sessions are not interesting or well presented–they are. I’m just not excited, not energized, when I get back to the office. To me a conference is a “good” conference (ever the evaluator) if I met three new people with whom I want to maintain contact, spent time with three long-time friends/colleagues, and brought home three new ideas. This year: not three new people; yes, three long-time friends; only one new idea. 4/9. I was delighted to hear that the younger folks were closer to 9/9. Maybe I’m jaded.
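Made explicit (practicing what I preach about rubrics), that “good conference” criterion is a simple nine-point score. A toy sketch, with this year’s numbers plugged in:

```python
# An explicit version of my three-threes conference rubric:
# up to 3 points each for new contacts, time with long-time
# colleagues, and new ideas brought home (9 points possible).
def conference_score(new_people: int, old_friends: int, new_ideas: int) -> str:
    points = min(new_people, 3) + min(old_friends, 3) + min(new_ideas, 3)
    return f"{points}/9"

print(conference_score(new_people=0, old_friends=3, new_ideas=1))  # -> 4/9
```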
The professional development session I attended (From Metaphor to Model) provided me with a visual for conceptualizing a complex program I’ll be evaluating. The plenary I attended with Oren Hesterman from the Fair Food Network in Detroit demonstrated how evaluative tools and good questions support food sustainability. What I found interesting was that during the question/comment session following the plenary, all the questions/comments were about food sustainability, NOT evaluation, even though Ricardo Millett asked really targeted evaluative questions. Food sustainability seems to be a really important topic–talk about a complex, messy system. I also attended a couple of other sessions that really stood out, and some that didn’t. Is attending this meeting important, even in my jaded view? Yes. It is how evaluators grow and change, even when change is not the goal. The only constant is change. AEA provides professional development in its pre- and post-sessions as well as plenary and concurrent sessions. Evaluators need that.
“Creativity is not an escape from disciplined thinking. It is an escape with disciplined thinking.” – Jerry Hirschberg, via @BarbaraOrmsby
The above quote was in the September 7 post of Harold Jarche’s blog. I think it has relevance to the work we do as evaluators. Certainly, there is a creative part to evaluation; certainly there is a disciplined thinking part to evaluation. Remembering that is sometimes a challenge.
So where in the process do we see creativity and where do we see disciplined thinking?
When evaluators construct a logic model, you see creativity; you also see disciplined thinking.
When evaluators develop an implementation plan, you see creativity; you also see disciplined thinking.
When evaluators develop a methodology and a method, you see creativity; you also see disciplined thinking.
When evaluators present the findings for use, you see creativity; you also see disciplined thinking.
So the next time you say “give me a survey for this program,” think: Is a survey the best approach to determining whether this program is effective? Will it really answer my questions?
Creativity and disciplined thinking are companions in evaluation.
Bright ideas are often the result of “Aha” moments. Aha moments are “the sudden understanding or grasp of a concept…an event that is typically rewarding and pleasurable. Usually, the insights remain in our memory as lasting impressions.” –Rick Nauert, PhD, Senior News Editor for Psych Central
How often have you had an “Aha” moment when you were evaluating? A colleague had one, maybe several, that made an impression on her. Talk about building capacity–this did. She has agreed to share that experience (the bright idea) soon.
Not only did it make an impression on her, her telling me made an impression on me. I am once again reminded of how much I take evaluation for granted. Because evaluation is an everyday activity, I often assume that people know what I’m talking about. We all know what happens when we assume something… I am also reminded how many people don’t know what I consider basic evaluation information, like constructing a survey item. (Got Dillman on your shelf, yet?)
What is this symbol called? No, it is not the square root sign–although that is its function. “It’s called a radical…because it gets at the root…the definition of radical is: of or going to the root or origin.”–Guy McPherson
How radical are you? How does that relate to evaluation, you wonder? Telling truth to power is a radical concept (the definition here is departure from the usual or traditional), one to which evaluators who hold integrity sacrosanct adhere. (It is the third AEA guiding principle.) Evaluators often, if they are doing their job right, have to speak truth to power–because the program wasn’t effective, or it resulted in something different than what was planned, or it cost too much to replicate, or it just didn’t work out. Funders, supervisors, and program leaders need to know the truth as you found it.
“Those who seek to isolate will become isolated themselves.” –Diederick Stoel. This sage piece of advice is the lead for Jim Kirkpatrick’s quick tip for evaluating training activities. He says, “Attempting to isolate the impact of the formal training class at the start of the initiative is basically discounting and disrespecting the contributions of other factors…Instead of seeking to isolate the impact of your training, gather data on all of the factors that contributed to the success of the initiative, and give credit where credit is due. This way, your role is not simply to deliver training, but to create and orchestrate organizational success. This makes you a strategic business partner who contributes to your organization’s competitive advantage and is therefore indispensable.” Extension faculty conduct a lot of trainings and want to take credit for their effectiveness. It is important to recognize that there may be other factors at work–mitigating factors, intermediate factors, even confounding factors. As much as Extension faculty want to isolate (i.e., take credit), it is important to share the credit.
Yesterday was the 236th anniversary of US independence from England (and George III, in his infinite wisdom, is said to have said nothing important happened…right…oh, all right, how WOULD he have known anything had happened several thousand miles away?). And yes, I saw fireworks. More importantly, though, I thought a lot about what independence means. And then, because I’m posting here, what does independence mean for evaluation and evaluators?
In thinking about independence, I am reminded of intercultural communication and the contrast between individualism and collectivism. To make this distinction clear, think “I-centered” vs. “We-centered”. Think western Europe and the US vs. Asia and Japan. To me, individualism is reflective of independence, and collectivism is reflective of networks–systems, if you will. When we talk about independence, the words “freedom” and “separate” and “unattached” are bandied about, and that certainly applies to the anniversary celebrated yesterday. Yet when I contrast it with collectivism and think of the words that are often used in that context (“interdependence”, “group”, “collaboration”), I become aware of other concepts.
Like, what is missing when we are independent? What have we lost being independent? What are we avoiding by being independent? Think “Little Red Hen”. And conversely, what have we gained by being collective, by collaborating, by connecting? Think “Spock and Good of the Many”.
AEA has a topical interest group for “Independent Consulting”. This TIG is home to those evaluators who function outside of an institution and who have formed their own organizations; who work independently, on contract. In their mission statement, they purport to “Foster a community of independent evaluators…” So by being separate, are they missing community and need to foster that aspect? They insist that they are “…great at networking”, which doesn’t sound very independent; it sounds almost collective. A small example, and probably not the best.
I think about the way the western world is today: other than your children and/or spouse/significant other, are you connected to a community? A network? A group? Not just in membership (like at church or club); really connected (like in extended family–whether of the heart or of the blood)? Although the Independent Consulting TIG says they are great at networking, and some even work in groups, are they connected? (Social media doesn’t count.) Is the “I” identity a product of being independent? It certainly is a characteristic of individualism. Can you measure the value, merit, or worth of the work you do by the level of independence you possess? Do internal evaluators garner all the benefits of being connected? (As an internal evaluator, I’m pretty independent, even though there is a critical mass of evaluators where I work.)
Being an independent evaluator has its benefits–less bias, a different perspective (do I dare say, more objectivity?). But are the distance created, the competition for position, and the risk taking worth the loss of relational harmony? Is the US better off as its own country? I’d say probably. My musings only…what do you think?