A rubric makes criteria (or standards) explicit, in writing, so that there can be no misunderstanding. Rubrics turn up in many evaluative activities, especially the assessment of classroom work. (Misunderstanding is still possible because the English language is often not clear–something I won't get into today; suffice it to say that a wise woman once said words are important. Keep that in mind when crafting a rubric.)

This week brought many events that called for rubrics. The rubrics may have been implicit; they certainly were not explicit. Explicit rubrics were needed.

I'll start by apologizing for the political nature of today's post.

Yesterday's action by the US Senate is an example of where a rubric would be valuable. Gabby Giffords said it best:

Certainly, an implicit rubric for this event can be found in this statement:

Only it was not used. When there are clear examples of inappropriate behavior, behavior that my daughters' kindergarten teacher said was mean and not nice, a rubric exists. Simple rubrics are understood by five-year-olds (was that behavior mean OR was that behavior nice?). Obviously 46 senators could only hear the NRA; they didn't hear that the behavior (school shootings) was mean.

Boston provided us with another example of the mean vs. nice rubric.  Bernstein got the concept of mean vs. nice.

Music is nice; violence is mean.

Helpers are nice; bullying is mean. 

There were many rubrics, however implicit, for that event. The NY Times reported that helpers (my word) ran TOWARD those in need, not away from the site of the explosion (violence). There were many helpers. A rubric existed, however implicit.

I want to close with another example of a rubric: 

I'm no longer worked up, just determined, and for that I need a rubric. This image may not give me the answer; it does, however, give me pause.

For more information on assessment and rubrics see: Walvoord, B. E. (2004).  Assessment clear and simple.  San Francisco: Jossey-Bass.

 

 

Today is the first full day of spring…this morning when I biked to the office it rained (not unlike winter…) and it was cold (also not unlike winter)…although I just looked out the window and it is sunny, so maybe spring is really here. Certainly the foliage tells us it is spring: forsythia, flowering quince, ornamental plum trees. Although the crocuses are spent, daffodils shine from front yards, tulips are in bud, and the daphne–oh, the daphne–is in its glory.

I’ve already posted this week; next week is spring break at OSU and at the local high school.  I won’t be posting.  So I leave you with this thought:  Evaluation is an everyday activity, one you and I do often without thinking; make evaluation systematic and think about the merit and worth.  Stop and smell the flowers.

 

Harold Jarche shared on his blog a comment made by a participant in one of his presentations. The comment is:

Knowledge is evolving faster than can be codified in formal systems and is depreciating in value over time.

 

This is really important for those of us who love the printed word (me) and teach (me and you). A statement like this tells us that we are out of date the moment we open our mouths; the institutions on which we depended for information (schools, libraries, even churches) are now passé.

 

The exponential growth of knowledge is much like that of population.   I think this graphic image of population (by Waldir) is pretty telling (click on the image to read the fine print).  The evaluative point that this brings home to me is the delay in making information available.


When you say, "Look it up," do you (like me) think web, not press, books, library, hard copy? Do you (like me) wonder how and where this information originated when it is so cutting edge? Do you (like me) wonder how to keep up, or even whether you can? Books take over a year to come to fruition (I think a two-year frame is more representative). Journal manuscripts take 6 to 9 months on a quick journal turnaround. Blogs are faster, and they express opinion; could they be a source of information?

I've decided to go to an advanced qualitative data seminar this summer as part of my professional development because I'm using more and more qualitative data (I still use quantitative data, too). It is supposed to be cutting edge. The book on which the seminar is based won't be published until next month (April). How much information has been developed since that book went to press? How much information will be shared at the seminar? Or will that seminar be old news (and, like old news, be ready for wrapping fish)? The explosion of information, like the explosion of population, may be a good thing; or not. The question is what is being done with that knowledge. How is it being used? Or is it? Is the knowledge explosion an excuse for people to be information illiterate? To become focused (read: narrow) in their field? What are you doing with what I would call miscellaneous information, gathered unsystematically? What are you doing with information now? How are you using it for professional development, or are you?

 

Today's post is longer than my usual post. I think it is important because it captures an aspect of data analysis and evaluation use that many of us skip right over: how to present findings using the tools that are available. Let me know if this works for you.

 

Ann Emery blogs at Emery Evaluation.  She challenged readers a couple of weeks ago to reproduce a bubble chart in either Excel or R.  This week she posted the answer.  She has given me permission to share that information with you.  You can look at the complete post at Dataviz Copycat Challenge:  The Answers.

 

I’ve also copied it here in a shortened format:

“Here’s my how-to guide. At the bottom of this blog post, you can download an Excel file that contains each of the submissions. We each used a slightly different approach, so I encourage you to study the file and see how we manipulated Excel in different ways.

Step 1: Study the chart that you’re trying to reproduce in Excel.

Here’s that chart from page 7 of the State of Evaluation 2012 report. We want to see whether we can re-create the chart in the lower right corner. The visualization uses circles, which means we’re going to create a bubble chart in Excel.

[Image: dataviz_challenge_original_chart]

Step 2: Learn the basics of making a bubble chart in Excel.

To fool Excel into making circles, we need to create a bubble chart in Excel. Click here for a Microsoft Office tutorial. According to the tutorial, “A bubble chart is a variation of a scatter chart in which the data points are replaced with bubbles. A bubble chart can be used instead of a scatter chart if your data has three data series.”

We’re not creating a true scatter plot or bubble chart because we’re not showing correlations between any variables. Instead, we’re just using the foundation of the bubble chart design – the circles. But, we still need to envision our chart on an x-y axis in order to make the circles.

Step 3: Sketch your bubble chart on an x-y axis.

It helps to sketch this part by hand. I printed page 7 of the report and drew my x and y axes right on top of the chart. For example, 79% of large nonprofit organizations reported that they compile statistics. This bubble would get an x-value of 3 and a y-value of 5.

I didn’t use sequential numbering on my axes. In other words, you’ll notice that my y-axis has values of 1, 3, and 5 instead of 1, 2, and 3. I learned that the formatting seemed to look better when I had a little more space between my bubbles.

[Image: dataviz_challenge_x-y_axis_example]

Step 4: Fill in your data table in Excel.

Open a new Excel file and start typing in your values. For example, we know that 79% of large nonprofit organizations reported that they compile statistics. This bubble has an x-value of 3, a y-value of 5, and a bubble size of 79%.

Go slowly. Check your work. If you make a typo in this step, your chart will get all wonky.

[Image: dataviz_challenge_data_table]

Step 5: Insert a bubble chart in Excel.

Highlight the three columns on the right – the x column, the y column, and the frequency column. Don’t highlight the headers themselves (x, y, and bubble size). Click on the “Insert” tab at the top of the screen. Click on “Other Charts” and select a “Bubble Chart.”
[Image: dataviz_challenge_insert_chart]

You’ll get something that looks like this:
[Image: dataviz_challenge_chart_1]

Step 6: Add and format the data labels.

First, add the basic data labels. Right-click on one of the bubbles. A drop-down menu will appear. Select “Add Data Labels.” You’ll get something that looks like this:

[Image: dataviz_challenge_chart_2]

Second, adjust the data labels. Right-click on one of the data labels (not on the bubble). A drop-down menu will appear. Select “Format Data Labels.” A pop-up screen will appear. You need to adjust two things. Under “Label Contains,” select “Bubble Size.” (The default setting on my computer is “Y Value.”) Next, under “Label Position,” select “Center.” (The default setting on my computer is “Right.”)

[Image: dataviz_challenge_chart_3]

Step 7: Format everything else.

Your basic bubble chart is finished! Now, you just need to fiddle with the formatting. This is easier said than done, and probably takes the longest out of all the steps.

Here’s how I formatted my bubble chart:

  • I formatted the axes so that my x-values ranged from 0 to 10 and my y-values ranged from 0 to 6.
  • I inserted separate text boxes for each of the following: the small, medium, and large organizations; the quantitative and qualitative practices; and the type of evaluation practice (e.g., compiling statistics, feedback forms, etc.). I also made the text gray instead of black.
  • I increased the font size and used bold font.
  • I changed the color of the bubbles to blue, light green, and red.
  • I made the gridlines gray instead of black, and I inserted a white text box on top of the top and bottom gridlines to hide them from sight.

Your final bubble chart will look something like this:
[Image: state_of_evaluation_excel]

For more details about formatting charts, check out these tutorials.

Bonus

Click here to download the Excel file that I used to create this bubble chart. Please explore the chart by right-clicking to see how the various components were made. You’ll notice a lot of text boxes on top of each other!”
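
Ann's challenge also allowed R, and the same trick carries over to any plotting library: place the circles on a hidden x-y grid and size them by the reported percentage. Below is a minimal sketch of that idea in Python with matplotlib; it is my addition, not part of Ann's how-to guide, and only the 79% figure (large organizations compiling statistics) comes from the excerpt above. The other rows are placeholder values you would replace with the numbers from the report.

```python
# Minimal bubble-chart sketch in Python/matplotlib (illustrative only).
# Only the 79% value comes from the post above; the other rows are placeholders.
import matplotlib.pyplot as plt

# Each row: (x position, y position, percentage shown in the bubble)
bubbles = [
    (3, 5, 79),  # large nonprofits, compile statistics (from the post)
    (2, 5, 60),  # placeholder value
    (1, 5, 45),  # placeholder value
]

xs = [x for x, _, _ in bubbles]
ys = [y for _, y, _ in bubbles]
sizes = [pct * 30 for _, _, pct in bubbles]  # scale percent into marker area (points^2)

fig, ax = plt.subplots()
ax.scatter(xs, ys, s=sizes, color="steelblue", alpha=0.7)

# Center each percentage label on its bubble, like the "Bubble Size" labels in Excel
for x, y, pct in bubbles:
    ax.annotate(f"{pct}%", (x, y), ha="center", va="center", fontsize=9)

# The x-y grid is only scaffolding, so hide the axes once the bubbles are placed
ax.set_xlim(0, 10)
ax.set_ylim(0, 6)
ax.axis("off")

plt.savefig("bubble_chart.png", dpi=150, bbox_inches="tight")
```

As in Excel, the axes exist only to position the circles; everything else (row and column labels, colors, gridlines) is formatting layered on top.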

Just spent the last 40 minutes reading the comments people have made on my posts. Some were interesting; some were advertising (aka marketing) their own sites; one suggested I revisit the "about" feature of my blog and explain why I blog (other than that it is part of my work). So I revisited my "about" page, took out conversation, and talked about the reality as I've experienced it for the last three-plus years. So check out the about page; I also updated the info about me and my family. The comment about updating my "about" page was a good one. It is an evaluative activity, one that was staring me in the face and I hadn't realized it. I probably need to update my photo as well…next time…:)

 

 

In a conversation with a colleague about whether IRB review was needed when what was being conducted was evaluation, not research, I was struck by two things:

  1. I needed to discuss the protections provided by an IRB (the next timely topic??), and
  2. the difference between evaluation and research needed to be made clear.

Leaving number 1 for another time, number 2 is the topic of the day.

A while back, AEA365 did a post on the difference between evaluation and research (some of which is included below) from a graduate student's perspective. Perhaps providing other resources would be valuable.

To have evaluation grouped with research is at worst a travesty; at best unfair.  Yes, evaluation uses research tools and techniques.  Yes, evaluation contributes to a larger body of knowledge (and in that sense seeks truth, albeit contextual).  Yes, evaluation needs to have institutional review board documentation.  So in many cases, people could be justified in saying evaluation and research are the same.

NOT.

Carol Weiss (1927-2013; she died in January) wrote extensively on this difference and makes the distinction clearly. Weiss's first edition of Evaluation Research was published in 1972. She revised the volume in 1998 and issued it under the title Evaluation. (Both have subtitles.)

She says that evaluation applies social science research methods and makes the case that it is the intent of the study that makes the difference between evaluation and research. She lists the following differences (pp. 15-17, 2nd ed.):

  1. Utility;
  2. Program-driven questions;
  3. Judgmental quality;
  4. Action setting;
  5. Role Conflicts;
  6. Publication; and
  7. Allegiance.

 

(For those of you who are still skeptical, she also lists similarities.) Understanding the difference between evaluation and research matters. I recommend her books.

Gisele Tchamba, who wrote the AEA365 post, says the following:

  1. Know the difference.  I came to realize that practicing evaluation does not preclude doing pure research. On the contrary, the methods are interconnected but the aim is different (I think this mirrors Weiss’s concept of intent).
  2. The burden of explaining. Many people in academia only vaguely know the meaning of evaluation. Those who think they do mistake evaluation for assessment in education. Whenever I meet with people whose understanding of evaluation is limited to educational assessment, I use Scriven's definition and emphasize words like "value, merit, and worth".
  3. Distinguishing between evaluation and social science research.  Theoretical and practical experiences are helpful ways to distinguish between the two disciplines. Extensive reading of evaluation literature helps to see the difference.

She also cites a Trochim definition that is worth keeping in mind, as it captures the various unique qualities of evaluation. Carol Weiss mentioned them all in her list (above):

  •  “Evaluation is a profession that uses formal methodologies to provide useful empirical evidence about public entities (such as programs, products, performance) in decision making contexts that are inherently political and involve multiple often conflicting stakeholders, where resources are seldom sufficient, and where time-pressures are salient”.

Resources:

One of the expectations for the evaluation capacity building program that just finished is that the program findings will be written up for publication in scientific journals.

Easy to say.  Hard to do.

Writing is HARD.

To that end, I’m going to dig out my old notes from when I taught technical writing to graduate students, medical students, residents, and young faculty and give a few highlights.

  1. Writing only happens when words are put on paper (or typed into a computer).  Thinking about writing (I do that a lot) doesn’t count as writing.  The words don’t have to be perfect; good writing happens with multiple revisions.
  2. Schedule time for writing; write it in your planner. You are making an appointment with yourself to write. At 10:00 am every MWF I will write for one hour; then stop. Protect this time. You protect your program time; you need to protect your writing time.
  3. Keep the paper's organization in mind. Generally, the IMRAD structure works for all manuscripts. IMRAD stands for Introduction, Methods, Results, And Discussion. The Introduction is the literature review and ends with the research question. The Methods section describes how the program, experiment, or research was conducted, in EXCRUCIATING detail; another evaluator should be able to pick up your manuscript and replicate your program. The Results are what you discovered, the lessons learned, what worked and what didn't; they are quantitative and/or qualitative. The Discussion is where you get to speculate; it highlights your conclusions, discusses the implications, and ties back to the literature. If you have done the reporting correctly, you will have gone from the general to the specific and back to the general. Think of two triangles placed together with their points (apexes) touching.
  4. Follow the five Cs. This is the single most important piece of advice (after number 2 above) about writing. The five Cs are Clarity, Coherence, Conciseness, Correctness, and Consistency. If you keep those five Cs in mind, you will write well. The writing is clear–you have not obfuscated the material. The writing is coherent–it makes sense. The writing is concise–you do not babble on or use jargon. The writing is correct–you remember that the word data is a plural noun and takes a plural verb (use proper grammar and syntax). The writing is consistent–you call your participants the same thing all the way through (no, it is not boring).
  5. Start with the section you know best. That may be what is most familiar, most recent, or most concrete. Whatever you do, DO NOT start with the abstract; write it last.
  6. Have a style guide on your desk. Most social sciences use APA; some use MLA or Chicago style. Have one (or more) on your desk. Use it. Follow the style that the journal requires; that means you have read the "Instructions to authors" somewhere in the publication.
  7. Once you have finished the manuscript, READ IT OUT LOUD TO YOURSELF.
  8. Run a spell and grammar check on the manuscript–it won't catch everything, only most errors.
  9. Have more than one person read the manuscript AFTER you have read it out loud to yourself.
  10. Persist. More than one manuscript has been published because the author persisted with the journal.

Happy writing.

One of the outcomes of learning about evaluation is informational literacy.

Think about it.  How does what is happening in the world affect your program?  Your outcomes?  Your goals?

When was the last time you applied that peripheral knowledge to what you are doing? Informational literacy is being aware of what is happening in the world. Knowing this information, even peripherally, adds to your evaluation capacity.

Now, this is not advocating that you need to read the NY Times daily (although I’m sure they would really like to increase their readership); rather it is advocating that you recognize that none of your programs (whether little p or big P) occur in isolation. What your participants know affects how the program is implemented.  What you know affects how the programs are planned.  That knowledge also affects the data collection, data analysis, and reporting.  This is especially true for programs developed and delivered in the community, as are Extension programs.

Let me give you a real-life example. I returned from Tucson, AZ, and the capstone event for an evaluation capacity program I was leading. The event was an outstanding success: not only did it identify what was learned and what needed to be learned, it also demonstrated the value of peer learning. I was psyched. I was energized. I was in an automobile accident 24 hours after returning home. (The car was totaled; I no longer have a car. My youngest daughter and I experienced no serious injuries.) The accident was reported in the local paper the following day. Several people saw the announcement; those same several people expressed their concern; some of them asked how they could help. Now this is a very small local event that had a serious effect on me and my work. (If I hadn't had last week's post already written, I don't know if I could have written it.) Solving simple problems takes twice as long (at least). This informational literacy influenced those around me. Their knowing changed their behavior toward me. Think of what September 11, 2001, did to people's behavior; think about what the Pope's RESIGNATION is doing to people's behavior. Informational literacy. It is all evaluative. Think about it.

 

Graphic URL: http://www.otterbein.edu/resources/library/information_literacy/index.htm

I try to keep politics out of my blog. Unfortunately (or fortunately, depending on your world view), evaluation is a political activity, and several recent posts by others remind me of that. I try to point out how everyday activities are evaluative.

One is the growing discussion (debate?) about gun regulation. Recently, the Morning Joe show included a clip that was picked up by MoveOn.org. If you haven't seen it, you need to. Although the evaluative criteria are not clear, the outcome is, and each commentator addresses the issue with a different lens (go here to view the clip).

In addition, a colleague of mine posted on her blog another blogger’s work (we are all connected, you know) that demonstrates the difficulty evaluators have being responsive to a client, especially one with whom you do not share the value in question (see Genuine Evaluation).  If you put your evaluator’s hat aside, the original post could be viewed as funny.

How many times have you smelled the milk and decided it was past its prime? Or seen mold growing on the yogurt? This food blog also has many evaluative aspects (insert use by date blog). Check it out.

 

I'm back from Tucson, where it was warm and sunny–I wore shorts! The best gift I got, serendipitously, was observing peer learning among the participants. Now I have to compile an evaluation of the program because I want to know, systematically, what the participants thought. I took a lot of notes, and I know what needs to be added, what worked, and what didn't. I got a lot of spontaneous and unsolicited comments about the value of the program–so OK, I've got the qualitative feedback (e.g., "18 months ago I wouldn't have thought of this"; "knowing I'm not alone in the questions I have helps"; "I can now find an answer…"). Once I get the quantitative feedback, I'll triangulate the comments, the quantitative data, and any other data I have. I am hoping to USE these findings to offer the program again. More on that later.

 

An update on my making-a-difference query: I've gotten a couple of responses and NO examples. One response was about not using page views as a measure of success, and using average time viewing a page instead. A lot of respondents think that this is a marketing blog. Since evaluation is such a big part of marketing, I can see how that fits. Only, this is an evaluation blog. I'm not posting the survey; it has been closed for weeks and weeks. I was hoping for examples of how it changed your thinking, practice, or world view.

 

Also, just so you know, I was in an auto accident 24 hours after I returned from Tucson.  Mersedes and I have aches and pains and NO serious injuries.  We do not have a car any more.  Talk about evaluating an activity–think about what you would do without a car (or if you don’t have one, what you would do with one).  I had to.

On January 22, 23, and 24, a group of would-be evaluators will gather in Tucson, AZ, at the Westin La Paloma Resort.

Even though Oregon State is a co-sponsor of this program, Oregon in winter (i.e., now) is not the land of sunshine, and since vitamin D is critical for everyone's well-being, I chose Tucson for our capstone event. Our able support person, Gretchen, chose the La Paloma, a wonderful site on the north side of Tucson. So even if it is not warm, it will be sunny. Why, we might even get to go swimming; if not swimming, certainly hiking. There are a lot of places to hike around Tucson…in Sabino Canyon; near/around A Mountain (first-year U of A students get to whitewash or paint the A); Saguaro National Park; or maybe in one of the five (yes, five) mountain ranges surrounding Tucson. (If you are interested in other hikes, look here.)

We will be meeting Tuesday afternoon, all day Wednesday, and Thursday morning. Participants have spent the past 17 months participating in and learning about evaluation. They have identified a project/program (either big P or little p), and they have participated in a series of modules, webinars, and office hours on topics used every day in evaluating a project or program. We anticipate over 20 attendees from the cohorts. We have participants from five Extension program areas (Nutrition, Agriculture, Natural Resources, Family and Community Science, and 4-H), from ten western states (Oregon, Washington, California, Utah, Colorado, Idaho, New Mexico, Arizona, Wyoming, and Hawaii), and with all levels of familiarity with evaluation (beginner to expert).

I'm the evaluation specialist in charge of the program content (big P); Jim Lindstrom (formerly of Washington State, currently University of Idaho) has been the professional development and technical specialist; and Gretchen Cuevas (OSU) has been our wonderful support person. I'm using Patton's Developmental Evaluation model to evaluate this program. Although some things were set at the beginning of the program (the topics for the modules and webinars, for example), other things were changed depending on feedback (readings, office hours). Although we expect that participants will grow their knowledge of evaluation, we do not know what specific and measurable outcomes will result (hence, developmental). We hope to run the program (available to Extension faculty in the Western Region) again in September 2013. Our goal is to build evaluation capacity in the Western Extension Region. Did we?