A rubric is a way to make criteria (or standards) explicit, and it does so in writing so that there can be no misunderstanding. Rubrics are found in many evaluative activities, especially the assessment of classroom work. (Misunderstanding is still possible because the English language is often not clear–something I won't get into today; suffice it to say that a wise woman once said words are important–keep that in mind when crafting a rubric.)
This week there were many events that required rubrics. Rubrics may have been implicit; they certainly were not explicit. Explicit rubrics were needed.
I’ll start with apologies for the political nature of today’s post.
Certainly, an implicit rubric for this event can be found in this statement:
Only it was not used. When there are clear examples of inappropriate behavior–behavior that my daughters' kindergarten teacher said was mean and not nice–a rubric exists. Simple rubrics are understood by five-year-olds (was that behavior mean OR was that behavior nice?). Obviously 46 senators could only hear the NRA; they didn't hear that the behavior (school shootings) was mean.
Boston provided us with another example of the mean vs. nice rubric. Bernstein got the concept of mean vs. nice.
There were lots of rubrics, however implicit, for that event. The NY Times reported that helpers (my word) ran TOWARD those in need not away from the site of the explosion (violence). There were many helpers. A rubric existed, however implicit.
I’m no longer worked up–just determined and for that I need a rubric. This image may not give me the answer; it does however give me pause.
For more information on assessment and rubrics see: Walvoord, B. E. (2004). Assessment clear and simple. San Francisco: Jossey-Bass.
Harold Jarche shared in his blog a comment by a participant in one of his presentations. The comment is:
Knowledge is evolving faster than can be codified in formal systems and is depreciating in value over time.
This is really important for those of us who love the printed word (me) and teach (me and you). A statement like this tells us that we are out of date the moment we open our mouths; those institutions on which we depended for information (schools, libraries, even churches) are now passé.
The exponential growth of knowledge is much like that of population. I think this graphic image of population (by Waldir) is pretty telling (click on the image to read the fine print). The evaluative point that this brings home to me is the delay in making information available.
When you say, "Look it up," do you (like me) think web, not press, books, library, hard copy? Do you (like me) wonder how and where this information originated when the information is so cutting edge? Do you (like me) wonder how to keep up, or even if you can? Books take over a year to come to fruition (I think a two-year frame is more representative). Journal manuscripts take 6 to 9 months on a quick journal turnaround. Blogs are faster, and they express opinion; could they be a source of information?
I’ve decided to go to an advanced qualitative data seminar this summer as part of my professional development because I’m using more and more qualitative data (I still use quantitative data, too). It is supposed to be cutting edge. Yet the book on which the seminar is based won’t be published until next month (April). How much information has been developed since that book went to press? How much information will be shared at the seminar? Or will that seminar be old news (and like old news, be ready for fish)? The explosion of information, like the explosion of population, may be a good thing; or not. The question is what is being done with that knowledge? How is it being used? Or is it? Is the knowledge explosion an excuse for people to be information illiterate? To become focused (read: narrow) in their field? What are you doing with what I would call miscellaneous information that is gathered unsystematically? What are you doing with information now–how are you using it for professional development–or are you?
Think about it. How does what is happening in the world affect your program? Your outcomes? Your goals?
When was the last time you applied that peripheral knowledge to what you are doing? Informational literacy is being aware of what is happening in the world. Knowing this information, even peripherally, adds to your evaluation capacity.
Now, this is not advocating that you need to read the NY Times daily (although I’m sure they would really like to increase their readership); rather it is advocating that you recognize that none of your programs (whether little p or big P) occur in isolation. What your participants know affects how the program is implemented. What you know affects how the programs are planned. That knowledge also affects the data collection, data analysis, and reporting. This is especially true for programs developed and delivered in the community, as are Extension programs.
Let me give you a real-life example. I returned from Tucson, AZ and the capstone event for an evaluation capacity program I was leading. The event was an outstanding success–not only did it identify what was learned and what needed to be learned, it also demonstrated the value of peer learning. I was psyched. I was energized. Then I was in an automobile accident 24 hours after returning home. (The car was totaled–I no longer have a car; my youngest daughter and I experienced no serious injuries.) A notice of the accident was published in the local paper the following day. Several people saw the announcement; those same several people expressed their concern; some of those several people asked how they could help. Now this is a very small local event that had a serious effect on me and my work. (If I hadn’t had last week’s post already written, I don’t know if I could have written it. Solving simple problems takes twice as long, at least.) This informational literacy influenced those around me. Their knowing changed their behavior toward me. Think of what September 11, 2001 did to people’s behavior; think about what the Pope’s RESIGNATION is doing to people’s behavior. Informational literacy. It is all evaluative. Think about it.
Graphic URL: http://www.otterbein.edu/resources/library/information_literacy/index.htm
Even though Oregon State is a co-sponsor of this program, Oregon in winter (i.e., now) is not the land of sunshine, and since Vitamin D is critical for everyone’s well-being, I chose Tucson for our capstone event. Our able support person, Gretchen, chose the La Paloma, a wonderful site on the north side of Tucson. So even if it is not warm, it will be sunny. Why, we might even get to go swimming; if not swimming, certainly hiking. There are a lot of places to hike around Tucson…in Sabino Canyon; near/around A Mountain (first-year U of A students get to whitewash or paint the A); Saguaro National Park; or maybe in one of the five (yes, five) mountain ranges surrounding Tucson. (If you are interested in other hikes, look here.)
We will be meeting Tuesday afternoon, all day Wednesday, and Thursday morning. Participants have spent the past 17 months participating in and learning about evaluation. They have identified a project/program (either big P or little p), and they participated in a series of modules, webinars, and office hours on topics used every day in evaluating a project or program. We anticipate over 20 attendees from the cohorts. We have participants from five Extension program areas (Nutrition, Agriculture, Natural Resources, Family and Community Science, and 4-H), from ten western states (Oregon, Washington, California, Utah, Colorado, Idaho, New Mexico, Arizona, Wyoming, and Hawaii), and all levels of familiarity with evaluation (beginner to expert).
I’m the evaluation specialist in charge of the program content (big P) and Jim Lindstrom (formerly of Washington State, currently University of Idaho) has been the professional development and technical specialist, and Gretchen Cuevas (OSU) has been our wonderful support person. I’m using Patton’s Developmental Evaluation Model to evaluate this program. Although some things were set at the beginning of the program (the topics for the modules and webinars, for example), other things were changed depending on feedback (readings, office hours). Although we expect that participants will grow their knowledge of evaluation, we do not know what specific and measurable outcomes will result (hence, developmental). We hope to run the program (available to Extension faculty in the Western Region) again in September 2013. Our goal is to build evaluation capacity in the Western Extension Region. Did we?
What have you listed as your goal(s) for 2013?
How is that goal related to evaluation?
One study suggests that you’re 10 times more likely to alter a behavior successfully (i.e., get rid of a “bad” behavior; adopt a “good” behavior) than you would be if you didn’t make a resolution. That statement is evaluative; a good place to start. 10 times! Wow. Yet even that isn’t a guarantee you will be successful.
How can you increase the likelihood that you will be successful?
So are you going to
And be grateful for the opportunity…gratitude is a powerful way to reinforce you and your goal setting.
These three questions have buzzed around my head for a while in various formats.
When I attend a conference, I wonder.
When I conduct a program, I wonder, again.
When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.
After all, aren’t both of these statements (capacity building and engagement) relating to a “foreign country” and a different culture?
How does all this relate to evaluation? Read on…
Premise: Evaluation is an everyday activity. You evaluate every day, all the time; you call it making decisions. Every time you make a decision, you are building capacity in your ability to evaluate. Sure, some of those decisions may need to be revised. Sure, some of those decisions may yield “negative” results. Even so, you are building capacity. AND you share that knowledge–with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store. That is building capacity. Building capacity can be systematic, organized, sequential. Sometimes formal, scheduled, deliberate. It is sharing “What do I know that they don’t know?” (in the hope that they too will know it and use it).
Premise: Everyone knows something. In knowing something, evaluation happens–because people made decisions about what is important and what is not. To really engage (not just outreach, which much of Extension does), one needs to “do as” the group that is being engaged. To do anything else (“doing to” or “doing with”) is simply outreach, and little or no knowledge is exchanged. That doesn’t mean that knowledge isn’t distributed; Extension has been doing that for years. It just means that the assumption (and you know what assumptions do) is that only the expert can distribute knowledge. Who is to say that the group (target audience, participants) isn’t expert in at least part of what is being communicated? They probably are. It is the idea that … they know something that I don’t know (and I would benefit from knowing).
Premise: Everything, everyone is connected. Being prepared is the best way to learn something. Being prepared by understanding culture (I’m not talking only about the intersection of race and gender; I’m talking about all the stereotypes you carry with you all the time) reinforces connections. Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threats. And that is an evaluative task. Think about it. I think it captures the “What do all of us need to know that few of us know?” question.
The US elections are over; the analysis is mostly done; the issues are still issues. Welcome, the next four years. As Dickens wrote, “It was the best of times, it was the worst of times.” Which? you ask–it all depends, and that is the evaluative question of the day.
So what do you need to know now? You need to help someone answer the question, Is it effective? OR (maybe) Did it make a difference?
The Canadian Evaluation Society, the Canadian counterpart to the American Evaluation Association, has put together a series (six so far) of pamphlets for new evaluators. This week, I’ve decided to go back to the beginning and promote evaluation as a profession.
Gene Shackman (no picture could be found) originally organized these brief pieces and is willing to share them. Gene is an applied sociologist and director of the Global Social Change Research Project. His first contribution was in December 2010; the most current, November 2012.
Hope these help.
Although this was the fourth CES post (in July 2011), I believe it is something that evaluators, and those who woke up and found out they were evaluators, need before any of the other booklets. Even though there will probably be strange and unfamiliar words in the booklet, it provides a foundation. Every evaluator will know some of these words; some will be new; some will be context specific. Every evaluator needs to have a comprehensive glossary of terminology. The glossary was compiled originally by the International Development Evaluation Association. It is available for download in English, French, and Arabic and is 65 pages.
CES is also posting a series (five as of this post) that Gene Shackman put together. The first booklet, posted by CES in December 2010, is called “What is program evaluation?” and is a 17-page booklet introducing program evaluation. Shackman tells us that “this guide is available as a set of smaller pamphlets…” here.
In January 2011, CES published the second of these booklets. “Evaluation questions” addresses the key questions about program evaluation and is three pages long.
CES posted the third booklet in April 2011. It is called “What methods to use” and can be found here. Shackman discusses briefly the benefits and limitations of qualitative and quantitative methods, the two main approaches to answering evaluation questions. A third approach that has gained credibility is mixed methods.
The next booklet, posted by CES in October 2012, is on surveys. It “…explains what they are, what they are usually used for, and what typical questions are asked… as well as the pros and cons of different sampling methods.”
The most recent booklet just posted (November, 2012) is about qualitative methods such as focus groups and interviews.
One characteristic of these five booklets is the additional resources that Shackman lists for each of the topics. I have my favorites (and I’ve mentioned them from time to time); those new to the field need to develop favorite sources.
What is important is that you embrace the options…this is only one way to look at evaluation.
What is the difference between need to know and nice to know? How does this affect evaluation? I got a post this week on a blog I follow (Kirkpatrick) that talks about how much data a trainer really needs. (Remember that Don Kirkpatrick developed and established an evaluation model for professional training back in 1954 that still holds today.)
Most Extension faculty don’t do training programs per se, although there are training elements in Extension programs. Extension faculty are typically looking for program impacts in their program evaluations. Program improvement evaluations, although necessary, are not sufficient. Yes, they provide important information to the program planner; they don’t necessarily give you information about how effective your program has been (i.e., outcome information). (You will note that I will use the term “impacts” interchangeably with “outcomes” because most Extension faculty parrot the language of reporting impacts.)
OK. So how much data do you really need? How do you determine what is nice to have and what is necessary (need) to have? How do you know?
Kirkpatrick also advises avoiding redundant questions. That means questions asked in a number of ways that give you the same answer; questions written in positive and negative forms. The other question I always include, because it gives me a way to determine how my program is making a difference, is a question on intention that includes a time frame. For example, “In the next six months, do you intend to try any of the skills you learned today? If so, which one?” Mazmanian has identified that the best predictor of behavior change (a measure of making a difference) is stated intention to change. Telling someone else makes the participant accountable. That seems to make the difference.
Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8).
P.S. No blog next week; away on business.
Yesterday was the 236th anniversary of the US independence from England (and George III, in his infinite wisdom, is said to have said nothing important happened…right…oh, all right, how WOULD he have known anything had happened several thousand miles away?). And yes, I saw fireworks. More importantly, though, I thought a lot about what does independence mean? And then, because I’m posting here, what does independence mean for evaluation and evaluators?
In thinking about independence, I am reminded of intercultural communication and the contrast between individualism and collectivism. To make this distinction clear, think “I-centered” vs. “We-centered”. Think western Europe and the US vs. Asia and Japan. To me, individualism is reflective of independence, and collectivism is reflective of networks–systems, if you will. When we talk about independence, the words “freedom” and “separate” and “unattached” are bandied about, and that certainly applies to the anniversary celebrated yesterday. Yet, when I contrast it with collectivism and think of the words that are often used in that context (“interdependence”, “group”, “collaboration”), I become aware of other concepts.
Like, what is missing when we are independent? What have we lost being independent? What are we avoiding by being independent? Think “Little Red Hen”. And conversely, what have we gained by being collective, by collaborating, by connecting? Think “Spock and Good of the Many”.
There is in AEA a topical interest group for “Independent Consulting”. This TIG is home to those evaluators who function outside of an institution and who have formed their own organizations; who work independently, on contract. In their mission statement, they purport to “Foster a community of independent evaluators…” So by being separate, are they missing community and need to foster that aspect? They insist that they are “…great at networking”, which doesn’t sound very independent; it sounds almost collective. A small example, and probably not the best.
I think about the way the western world is today; other than your children and/or spouse/significant other, are you connected to a community? a network? a group? Not just in membership (like at church or club); really connected (like in extended family–whether of the heart or of the blood)? Although the Independent Consulting TIG says they are great at networking, and some even work in groups, are they connected? (Social media doesn’t count.) Is the “I” identity a product of being independent? It certainly is a characteristic of individualism. Can you measure the value, merit, worth of the work you do by the level of independence you possess? Do internal evaluators garner all the benefits of being connected? (As an internal evaluator, I’m pretty independent, even though there is a critical mass of evaluators where I work.)
Although being an independent evaluator has its benefits–less bias, a different perspective (dare I say, more objective?)–are the distance created, the competition for position, and the risk taking worth losing the relational harmony that can accompany relationships? Is the US better off as its own country? I’d say probably. My musings only…what do you think?
Once again, it is the whole ‘balance’ thing…(we) live in ordinary life and that ordinary life is really the only life we have…I’ll take it. It has some great moments…
These wise words come from the insights of Buddy Stallings, Episcopal priest in charge of a large parish in a large city in the US. True, I took them out of context; the important thing is that they resonated with me from an evaluation perspective.
Too often, faculty and colleagues come to me and wonder what the impact is of this or that program. I wonder, What do they mean? What do they want to know? Are they only using words they have heard–the buzz words? I ponder how this fits into their ordinary life. Or are they outside their ordinary life, pretending in a foreign country?
A faculty member at Oregon State University equated history to a foreign country. I was put in mind that evaluation is a foreign country to many (most) people, even though everyone evaluates every day, whether they know it or not. Individuals visit that country because they are required to visit; to gather information; to report what they discovered. They do this without any special preparation. Yet visiting a foreign country entails preparation (at least it does for me). A study of customs, mores, foods, language, behavior, tools (I’m sure I’m missing something important in this list) is needed; not just needed, mandatory. Because although the foreign country may be exotic and unique and novel to you, it is ordinary life for everyone who lives there. The same is true for evaluation. There are customs; students are socialized to think and act in a certain way. Mores are constantly being called into question; language, behaviors, and tools not known to you in your ordinary life present themselves. You are constantly presented with opportunities to be outside your ordinary life. Yet I wonder: what are you missing by not seeing the ordinary; by pretending that it is extraordinary? By not doing the preparation to make evaluation part of your ordinary life, something you do without thinking?
So I ask you, What preparation have you done to visit this foreign country called EVALUATION? What are you currently doing to increase your understanding of this country? How does this visit change your ordinary life or can you get those great moments by recognizing that this is truly the only life you have? So I ask you, What are you really asking when you ask, What are the impacts?
All of this has significant implications for capacity building.