A rubric is a way to make criteria (or standards) explicit, and it does so in writing so that there can be no misunderstanding. Rubrics are found in many evaluative activities, especially the assessment of classroom work. (Misunderstanding is still possible because the English language is often unclear, something I won't get into today; suffice it to say that a wise woman said words are important. Keep that in mind when crafting a rubric.)
This week there were many events that required rubrics. Rubrics may have been implicit; they certainly were not explicit. Explicit rubrics were needed.
I’ll start with apologies for the political nature of today’s post.
Certainly, an implicit rubric for this event can be found in this statement:
Only it was not used. When there are clear examples of inappropriate behavior, behavior that my daughters' kindergarten teacher said was mean and not nice, a rubric exists. Simple rubrics are understood by five-year-olds (was that behavior mean OR was that behavior nice?). Obviously, 46 senators could only hear the NRA; they didn't hear that the behavior (school shootings) was mean.
Boston provided us with another example of the mean vs. nice rubric. Bernstein got the concept of mean vs. nice.
There were lots of rubrics, however implicit, for that event. The NY Times reported that helpers (my word) ran TOWARD those in need, not away from the site of the explosion (violence). There were many helpers. A rubric existed, however implicit.
I’m no longer worked up–just determined and for that I need a rubric. This image may not give me the answer; it does however give me pause.
For more information on assessment and rubrics see: Walvoord, B. E. (2004). Assessment clear and simple. San Francisco: Jossey-Bass.
Today's post is longer than usual. I think it is important because it captures an aspect of data analysis and evaluation use that many of us skip right over: how to present findings using the tools that are available. Let me know if this works for you.
Ann Emery blogs at Emery Evaluation. She challenged readers a couple of weeks ago to reproduce a bubble chart in either Excel or R. This week she posted the answer. She has given me permission to share that information with you. You can look at the complete post at Dataviz Copycat Challenge: The Answers.
I've also copied it here in a shortened format (a rough R sketch of the same chart follows the excerpt):
“Here’s my how-to guide. At the bottom of this blog post, you can download an Excel file that contains each of the submissions. We each used a slightly different approach, so I encourage you to study the file and see how we manipulated Excel in different ways.
Here’s that chart from page 7 of the State of Evaluation 2012 report. We want to see whether we can re-create the chart in the lower right corner. The visualization uses circles, which means we’re going to create a bubble chart in Excel.
To fool Excel into making circles, we need to create a bubble chart in Excel. Click here for a Microsoft Office tutorial. According to the tutorial, “A bubble chart is a variation of a scatter chart in which the data points are replaced with bubbles. A bubble chart can be used instead of a scatter chart if your data has three data series.”
We’re not creating a true scatter plot or bubble chart because we’re not showing correlations between any variables. Instead, we’re just using the foundation of the bubble chart design – the circles. But, we still need to envision our chart on an x-y axis in order to make the circles.
It helps to sketch this part by hand. I printed page 7 of the report and drew my x and y axes right on top of the chart. For example, 79% of large nonprofit organizations reported that they compile statistics. This bubble would get an x-value of 3 and a y-value of 5.
I didn’t use sequential numbering on my axes. In other words, you’ll notice that my y-axis has values of 1, 3, and 5 instead of 1, 2, and 3. I learned that the formatting seemed to look better when I had a little more space between my bubbles.
Open a new Excel file and start typing in your values. For example, we know that 79% of large nonprofit organizations reported that they compile statistics. This bubble has an x-value of 3, a y-value of 5, and a bubble size of 79%.
Go slowly. Check your work. If you make a typo in this step, your chart will get all wonky.
Highlight the three columns on the right – the x column, the y column, and the frequency column. Don’t highlight the headers themselves (x, y, and bubble size). Click on the “Insert” tab at the top of the screen. Click on “Other Charts” and select a “Bubble Chart.”
First, add the basic data labels. Right-click on one of the bubbles. A drop-down menu will appear. Select “Add Data Labels.” You’ll get something that looks like this:
Second, adjust the data labels. Right-click on one of the data labels (not on the bubble). A drop-down menu will appear. Select "Format Data Labels." A pop-up screen will appear. You need to adjust two things. Under "Label Contains," select "Bubble Size." (The default setting on my computer is "Y Value.") Next, under "Label Position," select "Center." (The default setting on my computer is "Right.")
Your basic bubble chart is finished! Now, you just need to fiddle with the formatting. This is easier said than done, and probably takes the longest out of all the steps.
Here’s how I formatted my bubble chart:
For more details about formatting charts, check out these tutorials.
Click here to download the Excel file that I used to create this bubble chart. Please explore the chart by right-clicking to see how the various components were made. You’ll notice a lot of text boxes on top of each other!”
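For anyone who took the R route in Ann's challenge, here is a minimal sketch of the same idea using ggplot2. This is only an illustration, not the method used in the downloadable Excel file: the x/y positions and every value except the 79% from the example above are placeholder assumptions.

```r
# A rough sketch only: positions and all but the 79% value are hypothetical,
# and this is not the approach used in the downloadable Excel file.
library(ggplot2)

bubbles <- data.frame(
  x   = c(1, 2, 3),    # column positions (e.g., small, medium, large organizations)
  y   = c(5, 5, 5),    # row position (e.g., the "compile statistics" row)
  pct = c(60, 72, 79)  # percent reporting the practice; 79% comes from the example above
)

ggplot(bubbles, aes(x = x, y = y, size = pct)) +
  geom_point(shape = 21, fill = "grey40", colour = "white") +            # filled circles
  geom_text(aes(label = paste0(pct, "%")), size = 3, colour = "white") + # centered labels
  scale_size_area(max_size = 25, guide = "none") +  # scale circle AREA (not radius) to the value
  expand_limits(x = c(0, 4), y = c(0, 6)) +         # breathing room, like skipping axis values in Excel
  theme_void()                                      # hide axes and gridlines; only the circles remain
```

As in the Excel version, the x and y values only position the circles on an invisible grid; the size of each circle carries the actual data.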
Think about it. How does what is happening in the world affect your program? Your outcomes? Your goals?
When was the last time you applied that peripheral knowledge to what you are doing? Informational literacy is being aware of what is happening in the world. Knowing this information, even peripherally, adds to your evaluation capacity.
Now, this is not advocating that you need to read the NY Times daily (although I’m sure they would really like to increase their readership); rather it is advocating that you recognize that none of your programs (whether little p or big P) occur in isolation. What your participants know affects how the program is implemented. What you know affects how the programs are planned. That knowledge also affects the data collection, data analysis, and reporting. This is especially true for programs developed and delivered in the community, as are Extension programs.
Let me give you a real-life example. I returned from Tucson, AZ, and the capstone event for an evaluation capacity program I was leading. The event was an outstanding success: not only did it identify what was learned and what needed to be learned, it also demonstrated the value of peer learning. I was psyched. I was energized. I was in an automobile accident 24 hours after returning home. (The car was totaled; I no longer have a car. My youngest daughter and I experienced no serious injuries.) The accident was reported in the local paper the following day. Several people saw the announcement; those same several people expressed their concern; some of those several people asked how they could help. Now this is a very small local event that had a serious effect on me and my work. (If I hadn't had last week's post already written, I don't know if I could have written it.) Solving simple problems takes twice as long (at least). This informational literacy influenced those around me. Their knowing changed their behavior toward me. Think of what September 11, 2001, did to people's behavior; think about what the Pope's RESIGNATION is doing to people's behavior. Informational literacy. It is all evaluative. Think about it.
Graphic URL: http://www.otterbein.edu/resources/library/information_literacy/index.htm
Even though Oregon State is a co-sponsor of this program, Oregon in winter (i.e., now) is not the land of sunshine, and since vitamin D is critical for everyone's well-being, I chose Tucson for our capstone event. Our able support person, Gretchen, chose the La Paloma, a wonderful site on the north side of Tucson. So even if it is not warm, it will be sunny. Why, we might even get to go swimming; if not swimming, certainly hiking. There are a lot of places to hike around Tucson: in Sabino Canyon; near/around A Mountain (first-year U of A students get to whitewash or paint the A); Saguaro National Park; or maybe in one of the five (yes, five) mountain ranges surrounding Tucson. (If you are interested in other hikes, look here.)
We will be meeting Tuesday afternoon, all day Wednesday, and Thursday morning. Participants have spent the past 17 months participating in and learning about evaluation. They have identified a project/program (either big P or little p), and they participated in a series of modules, webinars, and office hours on topics used every day in evaluating a project or program. We anticipate over 20 attendees from the cohorts. We have participants from five Extension program areas (Nutrition, Agriculture, Natural Resources, Family and Community Science, and 4-H), from ten western states (Oregon, Washington, California, Utah, Colorado, Idaho, New Mexico, Arizona, Wyoming, and Hawaii), and all levels of familiarity with evaluation (beginner to expert).
I'm the evaluation specialist in charge of the program content (big P), Jim Lindstrom (formerly of Washington State, currently University of Idaho) has been the professional development and technical specialist, and Gretchen Cuevas (OSU) has been our wonderful support person. I'm using Patton's Developmental Evaluation Model to evaluate this program. Although some things were set at the beginning of the program (the topics for the modules and webinars, for example), other things were changed depending on feedback (readings, office hours). Although we expect that participants will grow their knowledge of evaluation, we do not know what specific and measurable outcomes will result (hence, developmental). We hope to run the program (available to Extension faculty in the Western Region) again in September 2013. Our goal is to build evaluation capacity in the Western Extension Region. Did we?
What have you listed as your goal(s) for 2013?
How is that goal related to evaluation?
One study suggests that you're 10 times more likely to alter a behavior successfully (i.e., get rid of a "bad" behavior or adopt a "good" one) than you would be if you didn't make a resolution. That statement is evaluative; a good place to start. Ten times! Wow. Yet even that isn't a guarantee you will be successful.
How can you increase the likelihood that you will be successful?
So are you going to
And be grateful for the opportunity…gratitude is a powerful way to reinforce you and your goal setting.
These three questions have buzzed around my head for a while in various formats.
When I attend a conference, I wonder.
When I conduct a program, I wonder, again.
When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.
After all, don't both of these statements (capacity building and engagement) relate to a "foreign country" and a different culture?
How does all this relate to evaluation? Read on…
Premise: Evaluation is an everyday activity. You evaluate every day, all the time; you call it making decisions. Every time you make a decision, you are building capacity in your ability to evaluate. Sure, some of those decisions may need to be revised. Sure, some of those decisions may just yield "negative" results. Even so, you are building capacity. AND you share that knowledge: with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store. That is building capacity. Building capacity can be systematic, organized, sequential. Sometimes formal, scheduled, deliberate. It is sharing "What do I know that they don't know?" (in the hope that they too will know it and use it).
Premise: Everyone knows something. In knowing something, evaluation happens, because people made decisions about what is important and what is not. To really engage (not just outreach, which is much of what Extension does), one needs to "do as" the group that is being engaged. To do anything else ("doing to" or "doing with") is simply outreach, and little or no knowledge is exchanged. That doesn't mean knowledge isn't distributed; Extension has been doing that for years. It just means that the assumption (and you know what assumptions do) is that only the expert can distribute knowledge. Who is to say that the group (target audience, participants) isn't expert in at least part of what is being communicated? They probably are. It is the idea that … they know something that I don't know (and I would benefit from knowing).
Premise: Everything, everyone is connected. Being prepared is the best way to learn something. Being prepared by understanding culture (I'm not talking only about the intersection of race and gender; I'm talking about all the stereotypes you carry with you all the time) reinforces connections. Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threats. And that is an evaluative task. Think about it. I think it captures the "What do all of us need to know that few of us know?" question.
The US elections are over; the analysis is mostly done; the issues are still issues. Welcome, the next four years. As Dickens wrote, it was the best of times, it was the worst of times. Which? you ask. It all depends, and that is the evaluative question of the day.
So what do you need to know now? You need to help someone answer the question, Is it effective? OR (maybe) Did it make a difference?
The Canadian Evaluation Society (CES), the Canadian counterpart to the American Evaluation Association, has put together a series (six so far) of pamphlets for new evaluators. This week, I've decided to go back to the beginning and promote evaluation as a profession.
Gene Shackman (no picture could be found) originally organized these brief pieces and is willing to share them. Gene is an applied sociologist and director of the Global Social Change Research Project. His first contribution was in December 2010; the most current, November 2012.
Hope these help.
Although this was the fourth CES post (in July 2011), I believe it is something that evaluators, and those who woke up and found out they were evaluators, need before any of the other booklets. Even though there will probably be strange and unfamiliar words in the booklet, it provides a foundation. Every evaluator will know some of these words; some will be new; some will be context specific. Every evaluator needs to have a comprehensive glossary of terminology. The glossary was compiled originally by the International Development Evaluation Association. It is available for download in English, French, and Arabic and is 65 pages.
CES is also posting a series (five as of this post) that Gene Shackman put together. The first booklet, posted by CES in December 2010, is called "What is program evaluation?" It is a 17-page booklet introducing program evaluation. Shackman tells us that "this guide is available as a set of smaller pamphlets…" here.
In January 2011, CES published the second of these booklets. "Evaluation questions" addresses the key questions about program evaluation and is three pages long.
CES posted the third booklet in April 2011. It is called "What methods to use" and can be found here. Shackman briefly discusses the benefits and limitations of qualitative and quantitative methods, the two main approaches to answering evaluation questions. A third approach that has gained credibility is mixed methods.
The next booklet, posted by CES in October 2012, is on surveys. It "…explains what they are, what they are usually used for, and what typical questions are asked… as well as the pros and cons of different sampling methods."
The most recent booklet just posted (November, 2012) is about qualitative methods such as focus groups and interviews.
One characteristic of these five booklets is the additional resources that Shackman lists for each of the topics. I have my favorites (and I've mentioned them from time to time); those new to the field need to develop favorite sources.
What is important is that you embrace the options…this is only one way to look at evaluation.
I spent much of the last week thinking about what I would write on November 7, 2012.
Would I know anything before I went to bed? Would I like what I knew? Would I breathe a sigh of relief?
Yesterday is a good example that we evaluate every day. (What is the root of the word evaluation?) We review a program (in this case the candidates); we determine the value (what they say they believe); we develop a rubric (criteria); we support those values and those criteria; and we apply those criteria (vote). Yesterday over 117 million people did just that. Being a good evaluator, I can't just talk about the respondents without talking about the total population, the total number of possible respondents. One guess estimates that 169 million people are registered to vote: 86 million Democrats, 55 million Republicans, and 28 million others. The total response rate for this evaluation was 69.2%. Very impressive, especially given the long lines. (Something the President said needed fixing. [I guess he is an evaluator, too.])
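A quick back-of-the-envelope check of that response rate, using the registration and turnout numbers exactly as quoted above (I have not verified them independently), looks like this:

```r
# Figures as quoted in the post; treat them as rough estimates, not verified data
registered <- 86e6 + 55e6 + 28e6    # Democrats + Republicans + others = 169 million
voted      <- 117e6                 # "over 117 million people" cast ballots
round(100 * voted / registered, 1)  # ~69.2 percent "response rate"
```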
I am reminded that Senators and Representatives are elected to represent the voice of the people. Their job is to represent you. If they do not fulfill that responsibility, it is our responsibility to do something about it. If you don’t hold them accountable, you can’t complain about the outcome. Another evaluative activity. (Did I ever tell you that evaluation is a political activity…?) Our job as evaluators doesn’t stop when we cast our ballot; our job continues throughout the life of the program (in this case, the term in office). Our job is to use those evaluation results to make things better. Often, use is ignored. Often, the follow-through is missing. As evaluators, we need to come full circle.
Evaluation is an everyday activity.
What is the difference between need to know and nice to know? How does this affect evaluation? I got a post this week on a blog I follow (Kirkpatrick) that talks about how much data a trainer really needs. (Remember that Don Kirkpatrick developed and established an evaluation model for professional training back in 1954 that still holds today.)
Most Extension faculty don’t do training programs per se, although there are training elements in Extension programs. Extension faculty are typically looking for program impacts in their program evaluations. Program improvement evaluations, although necessary, are not sufficient. Yes, they provide important information to the program planner; they don’t necessarily give you information about how effective your program has been (i.e., outcome information). (You will note that I will use the term “impacts” interchangeably with “outcomes” because most Extension faculty parrot the language of reporting impacts.)
OK. So how much data do you really need? How do you determine what is nice to have and what is necessary (need) to have? How do you know?
Kirkpatrick also advises avoiding redundant questions. That means questions asked in a number of ways that give you the same answer, or questions written in both positive and negative forms. The other question that I always include, because it gives me a way to determine how my program is making a difference, is a question on intention that includes a time frame. For example, "In the next six months do you intend to try any of the skills you learned today? If so, which one?" Mazmanian has identified that the best predictor of behavior change (a measure of making a difference) is stated intention to change. Telling someone else makes the participant accountable. That seems to make the difference.
Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8).
P.S. No blog next week; away on business.
The topic of complexity has appeared several times over the last few weeks. Brian Pittman wrote about it in an AEA365 post; Charles Gasper used it as the topic for his most recent blog post. Much food for thought, especially as it relates to the work evaluators do.
Simultaneously, Harold Jarche talks about connections. To me, connections and complexity are two sides of the same coin. Something which is complex typically has multiple parts. Something which has multiple parts is connected to the other parts. Certainly the work done by evaluators has multiple parts; certainly those parts are connected to each other. The challenge we face is logically defending those connections and, in doing so, making the parts explicit. Sound easy? It's not.
That’s why I stress modeling your project before you implement it. If the project is modeled, often the model leads you to discover that what you thought would happen because of what you do, won’t. You have time to fix the model, fix the program, and fix the evaluation protocol. If your model is defensible and logical, you still may find out that the program doesn’t get you where you want to go. Jonny Morell writes about this in his book, Evaluation in the face of uncertainty. There are worse things than having to fix the program or fix the evaluation protocol before implementation. Keep in mind that connections are key; complexity is everywhere. Perhaps you’ll have an Aha! moment.
I’ll be on holiday and there will not be a post next week. Last week was an odd week–an example of complexity and connections leading to unanticipated outcomes.