Recently, I came across a blog post by Daniel Green, head of strategic media partnerships at the Bill & Melinda Gates Foundation, coauthored with Mayur Patel, vice president of strategy and assessment at the Knight Foundation. I mention this because those two foundations have contributed $3.25 million in seed funding “…to advance a better understanding of audience engagement and media impact…”. They are undertaking an ambitious project to develop a rubric (of sorts) to determine “…how media influences the ways people think and act, and contributes to broader societal changes…”. Although the post doesn’t specifically say so, I include social media in the broad use of “media”. The blog post talks about a broader agenda–that of informed and engaged communities. These foundations believe that informed and engaged communities will strengthen “… democracy and civil society to helping address some of the world’s most challenging social problems.”
Or in other words, what difference is being made, which is something I wonder about all the time. (I’m an evaluator, after all, and I want to know what difference is made.)
Although there are strong media forces out there (NYTimes, NPR, BBC, the Guardian, among others), I wonder about the strength and effect of social media (FB, Twitter, LinkedIn, blogs, among others). Anecdotally, I can tell you that social media is everywhere and IS changing the way people think and act. I watch my now 17-year-old, who uses the IM feature on her social media to communicate with her friends, set up study dates, and find out homework assignments–not the phone, like I did. I watch my now 20-year-old multitask–talking to me on Skype while reading and responding to her FB entries. She uses IM as much as her sister. I know that social media was instrumental in the Arab Spring. I know that major institutions have social media connections (FB, Twitter, LinkedIn, etc.). Social media is everywhere. And we have no good way to determine if it is making a difference and what that difference is.
For something so ubiquitous (social media), why is there no way to evaluate social media other than through the use of analytics? I’ve been asking that question since I first posted my query “Is this blog making a difference?” back in March 2012. Since I’ve been posting since December 2009, that gave me over 2 years from which to gather data. That is a luxury when it comes to programming, especially when many programs are often only a few hours in duration and an evaluation is expected.
I hope that this project provides useful information for those of us who have come kicking and screaming to social media and have seen the light. Even though they are talking about the world of media, I’m hoping that they can come up with measures that address the social aspect of media. The technology provided IS useful; the question is what difference is it making?
We are four months into 2013 and I keep asking the question “Is this blog making a difference?” I’ve asked for an analytic report to give me some answers. I’ve asked you readers for your stories.
Let’s hear it for SEOs and how they pick up that title–I credit that for the number of comments I’ve gotten. I AM surprised at the number of comments I have gotten since January (hundreds, literally). Most say things like, “of course it is making a difference.” Some compliment me on my writing style. Some are in a foreign language that I cannot read (I am illiterate when it comes to Cyrillic, Arabic, Greek, Chinese, and other non-English alphabets). Some are marketing–wanting pingbacks to their recently started blogs for some product. Some have commented specifically on the content (sample size and confidence intervals); some have commented on the time of year (vernal equinox). Occasionally, I get a comment like the one below, and I keep writing.
The questions of all questions… Do I make a difference? I like how you write and let me answer your question. Personally I was supposed to be dead ages ago because someone tried to kill me for the h… of it … Since then (I barely survived) I have asked myself the same question several times and every single time I answer with YES. Why? Because I noticed that whatever you do, there is always someone using what you say or do to improve their own life. So, I can answer the question for you: Do you make a difference? Yes, you do, because there will always be someone who uses your writings to do something positive with it. So, I hope I just made your day! And needless to say, keep the blog posts coming!
Enough update. New topic: I just got a copy of the third edition of Miles and Huberman (my go-to reference for qualitative data analysis). Wait, you say–Miles and Huberman are dead–yes, they are. Johnny Saldaña was approached by Sage to be the third author and revise and update the book. A good thing, I think. Miles and Huberman’s second edition was published in 1994. That is almost 20 years. I’m eager to see if it will hold as a classic given that there are many other books on qualitative coding in press currently. (The spring research flyer from Guilford lists several on qualitative inquiry and analysis from some established authors.)
I also recently sat in on a research presentation of a candidate for a tenure track position here at OSU who talked about how the analysis of qualitative data was accomplished. Took me back to when I was learning–index cards and sticky notes. Yes, there are marvelous software programs out there (NVivo, Ethnograph, NUD*IST); I will support the argument that the best way to learn about your qualitative data is to immerse yourself in it with color-coded index cards and sticky notes. Then you can use the software to check your results. Keep in mind, though, that you are the PI and you will bring many biases to the analysis of your data.
Harold Jarche says in his April 21 post, “What I’ve learned about blogging is that you have to do it for yourself. Most of my posts are just thoughts that I want to capture.” What an interesting way to look at blogging. Yes, there is content; yes, there is substance. What there is most are captured thoughts. Thoughts committed to “paper” before they fly away. How many times have you said to yourself–if only…–because you don’t remember what you were thinking or where you were going? It may be a function of age; it may be a function of the times; it may be a function of other things as well (too little sleep, too much information, lack of focus).
When I blog on evaluation, I want to provide content that is meaningful. I want to provide substance (as I understand it) in the field of evaluation. Most of all, I want to capture what I’m thinking at the moment (like now). Last week was a good example of capturing thoughts. I wasn’t making up the rubric content; it is real. All evaluation needs to have criteria against which the “program” is judged for merit and worth. How else can you determine the value of something? So I ask you: What criteria do you use in the moment you decide? (and a true evaluator will say, “It depends…”)
A wise man (Elie Wiesel) said, “A man’s (sic) life, really, is not made up of years but of moments, all of which are fertile and unique.” Even though he has not laid out his rubric explicitly, it is clear what gives moments merit and worth–“moments which are fertile and unique”. An interesting way to look at life, eh?
Jarche gives us a 10-year update about his experience blogging. He is asking a question I’ve been asking: what has changed, and what has he learned, in the past 10 years? He talks about metrics (spammers and published posts). I can do that. He doesn’t talk about analytics (although I’m sure he could) and I don’t want to talk about analytics, either. Some comments on my blog suggest that I look at length of time spent on a page…that seems like a reasonable metric. What I really want to hear is what has changed (Jarche talks about what has changed as being perpetual beta). Besides the constantly changing frontier of social media, I go back to the comment by Elie Wiesel–moments that are fertile and unique. How many can you say you’ve had today? One will make my day–one will get my gratitude. Today I am grateful for being able to blog.
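For readers who want to try the time-on-page metric themselves, here is a minimal sketch of how one might summarize it from an analytics export. The data, post slugs, and export format are entirely hypothetical; real analytics tools report this differently.

```python
from collections import defaultdict

def average_time_on_page(visits):
    """Average seconds spent per page from (page, seconds) visit records."""
    totals = defaultdict(lambda: [0, 0])  # page -> [total_seconds, visit_count]
    for page, seconds in visits:
        totals[page][0] += seconds
        totals[page][1] += 1
    return {page: total / count for page, (total, count) in totals.items()}

# Hypothetical export: (post slug, seconds spent on the page)
visits = [
    ("is-this-blog-making-a-difference", 180),
    ("is-this-blog-making-a-difference", 60),
    ("sample-size-and-confidence-intervals", 240),
]
print(average_time_on_page(visits))
```

Of course, a long average dwell time still only tells us that readers lingered, not what difference the post made to them.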
A rubric is a way to make criteria (or standards) explicit and it does that in writing so that there can be no misunderstanding. It is found in many evaluative activities especially assessment of classroom work. (Misunderstanding is still possible because the English language is often not clear–something I won’t get into today; suffice it to say that a wise woman said words are important–keep that in mind when crafting a rubric.)
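To make the idea of explicit criteria concrete, here is a small sketch of a rubric as a data structure: each criterion carries written performance levels, and a rating that isn’t defined in the rubric is rejected rather than silently scored. The criteria, levels, and function names are all hypothetical illustrations, not a standard.

```python
# A rubric makes criteria explicit: each criterion gets written performance levels.
rubric = {
    "clarity":  {3: "main point obvious on first read",
                 2: "main point found with effort",
                 1: "main point unclear"},
    "evidence": {3: "every claim supported",
                 2: "most claims supported",
                 1: "claims unsupported"},
}

def score(ratings, rubric):
    """Total a set of ratings, rejecting any level the rubric does not define."""
    total = 0
    for criterion, level in ratings.items():
        if level not in rubric[criterion]:
            raise ValueError(f"undefined level {level} for {criterion}")
        total += level
    return total

print(score({"clarity": 3, "evidence": 2}, rubric))  # 5
```

Writing the levels down is the point: two raters using the same written descriptors can disagree far less than two raters using implicit standards.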
This week there were many events that required rubrics. Rubrics may have been implicit; they certainly were not explicit. Explicit rubrics were needed.
I’ll start with apologies for the political nature of today’s post.
Certainly, an implicit rubric for this event can be found in this statement:
Only it was not used. When there are clear examples of inappropriate behavior–behavior that my daughters’ kindergarten teacher said was mean and not nice–a rubric exists. Simple rubrics are understood by five-year-olds (was that behavior mean OR was that behavior nice?). Obviously 46 senators could only hear the NRA; they didn’t hear that the behavior (school shootings) was mean.
Boston provided us with another example of the mean vs. nice rubric. Bernstein got the concept of mean vs. nice.
There were lots of rubrics, however implicit, for that event. The NY Times reported that helpers (my word) ran TOWARD those in need, not away from the site of the explosion (violence). There were many helpers. A rubric existed, however implicit.
I’m no longer worked up–just determined and for that I need a rubric. This image may not give me the answer; it does however give me pause.
For more information on assessment and rubrics see: Walvoord, B. E. (2004). Assessment clear and simple. San Francisco: Jossey-Bass.
In a conversation with a colleague on the need for IRB when what was being conducted was evaluation not research, I was struck by two things:
Leaving number 1 for another time, number 2 is the topic of the day.
A while back, AEA 365 did a post on the difference between evaluation and research (some of which is included below) from a graduate student’s perspective. Perhaps providing other resources would be valuable.
To have evaluation grouped with research is at worst a travesty; at best unfair. Yes, evaluation uses research tools and techniques. Yes, evaluation contributes to a larger body of knowledge (and in that sense seeks truth, albeit contextual). Yes, evaluation needs to have institutional review board documentation. So in many cases, people could be justified in saying evaluation and research are the same.
Carol Weiss (1927-2013, she died in January) has written extensively on this difference and makes the distinction clearly. Weiss’s first edition of Evaluation Research was published in 1972. She revised this volume in 1998 and issued it under the title of Evaluation. (Both have subtitles.)
She says that evaluation applies social science research methods and makes the case that it is the intent of the study which makes the difference between evaluation and research. She lists the following differences (pp. 15–17, 2nd ed.):
(For those of you who are still skeptical, she also lists similarities.) Understanding and knowing the difference between evaluation and research matters. I recommend her books.
Gisele Tchamba who wrote the AEA365 post says the following:
She also cites a Trochim definition that is worth keeping in mind as it captures the various unique qualities of evaluation. Carol Weiss mentioned them all in her list (above):
These three questions have buzzed around my head for a while in various formats.
When I attend a conference, I wonder.
When I conduct a program, I wonder, again.
When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.
After all, aren’t both of these statements (capacity building and engagement) relating to a “foreign country” and a different culture?
How does all this relate to evaluation? Read on…
Premise: Evaluation is an everyday activity. You evaluate every day, all the time; you call it making decisions. Every time you make a decision, you are building capacity in your ability to evaluate. Sure, some of those decisions may need to be revised. Sure, some of those decisions may just yield “negative” results. Even so, you are building capacity. AND you share that knowledge–with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store. That is building capacity. Building capacity can be systematic, organized, sequential. Sometimes formal, scheduled, deliberate. It is sharing: “What do I know that they don’t know?” (in the hope that they, too, will know it and use it).
Premise: Everyone knows something. In knowing something, evaluation happens–because people made decisions about what is important and what is not. To really engage (not just outreach, which much of Extension does), one needs to “do as” the group that is being engaged. To do anything else (“doing to” or “doing with”) is simply outreach, and little or no knowledge is exchanged. That doesn’t mean that knowledge isn’t distributed; Extension has been doing that for years. It just means that the assumption (and you know what assumptions do) is that only the expert can distribute knowledge. Who is to say that the group (target audience, participants) aren’t experts in at least part of what is being communicated? They probably are. It is the idea that … they know something that I don’t know (and I would benefit from knowing).
Premise: Everything, everyone is connected. Being prepared is the best way to learn something. Being prepared by understanding culture (I’m not talking only about the intersection of race and gender; I’m talking about all the stereotypes you carry with you all the time) reinforces connections. Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threats. And that is an evaluative task. Think about it. I think it captures the “What do all of us need to know that few of us know?” question.
CAVEAT: This may be too political for some readers.
Sometimes, there are ideas that appear in other blogs that may or may not be directly related to my work in evaluation. Because I read them, I see evaluative relations and think they are important enough to pass along. Today is one of those days. I’ll try to connect the dots between what I read and share here and evaluation. (For those of you who are interested in the Connect the Dots, a major event day on climate change and weather on May 5, 2012, go here.)
First, Valerie Williams, AEA365 blog, April 18, 2012, says, “…Many environmental education programs struggle with the question of whether environmental education is a means to an end (e.g. increased stewardship) or an end itself. This question has profound implications for how programs are evaluated, and specifically the measures used to determine program success.”
I think that many educational programs (whether environmentally focused or not) struggle with this question. Is the program a means to an end or the end itself? I am reminded of programs which are instituted for cost savings and then the program designers want that program evaluated. Means or end?
Williams also offers comments about evaluability assessment–that evaluation task that helps evaluators decide whether to evaluate a new program, especially if that new program’s readiness for evaluation is in question. (She provides resources if you are interested.) She offers reasons for conducting an evaluability assessment. Specifically:
Evaluability assessment is a topic for future discussion.
Second, a colleague offered the following CDC reference and says, “The purpose of this workbook is to help public health program managers, administrators, and evaluators develop an effective evaluation plan in the context of the planning process. It is intended to assist in developing an evaluation plan but is not intended to serve as a complete resource on how to implement program evaluation.” I offer it here because I know that evaluation plans are often added after the program has been implemented. Although it has as a focus public health programs, one source familiar with this work commented that there is enough in the workbook that can be applied to a variety of settings. Check it out; the link is below.
Next, Nigerian novelist Chimamanda Ngozi Adichie is quoted as saying, “The single story creates stereotypes, and the problem with stereotypes is not that they are untrue, but that they are incomplete. They make one story become the only story.”
Then it behooves us all to remember this–are we using the story because it captures the effect or because it is the only story? If it is the only story, is it promoting a stereotype? Adichie, though a novelist, may be an evaluator at heart.
Finally, there is this quote, also from an AEA365 blog (Steve Mayer) “There are elements of Justice and Injustice everywhere – in society, in reform efforts, and in the evaluation of reform efforts. The choice of outcomes to be assessed is a political act. “Noticing progress” probably takes us further than “measuring impact,” always being mindful of who benefits.”
We often are stuck on “measuring impact”; after all, isn’t that what everyone wants to know? If world peace is the ultimate impact, then what is the likelihood of measuring that? I think that “noticing progress” (i.e., change) will take us much further because of the justice it can capture (or not–and that is telling). And by capturing “noticing progress”, we can make explicit who benefits.
This runs long today.
My oldest daughter graduated from high school Monday. Now, she is facing the reality of life after high school–the emotional letdown, the lack of structure, the loss of focus. I remember what it was like to commence…another word for beginning. I think I was depressed for days. The question becomes evaluative when one thinks of planning, which is what she has to do now. In planning, she needs to think: What excites me? What are my passions? How will I accomplish the what? How will I connect again to the what? How will I know I’m successful?
Ellen Taylor-Powell, former Distinguished Evaluation Specialist at the University of Wisconsin Extension, talks about planning on the professional development website at UWEX. (There are many other useful publications on this site…I urge you to check them out.) This publication has four sections: focusing the evaluation, collecting the information, using the information, and managing the evaluation. I want to talk more about focusing the evaluation–because that is key when beginning, whether it is the next step in your life, the next program you want to implement, or the next report you want to write.
This section of the publication asks you to identify what you are going to evaluate, the purpose of the evaluation, who will use the evaluation and how, what questions you want to answer, what information you need to answer those questions, a time-line, and, finally, what resources you will need. I see this as puzzle assembly–one where you do not necessarily have a picture to guide you. Not unlike a newly commenced graduate, finding a focus is putting together a puzzle–you won’t know what the picture is, or where you are going, until you focus and develop a plan. For me, that means putting the puzzle together. It means finding the what and the so what. It is always the first place to commence.
Having spent the last week reviewing two manuscripts for a journal editor, I realized that writing is an evaluative activity.
The criteria for good writing are the 5 Cs: Clarity, Coherence, Conciseness, Correctness, and Consistency.
Evaluators write–they write survey questions, summaries of findings, reports, journal manuscripts. If they do not employ the 5 Cs to communicate to a naive audience what is important, then the value (remember the root for evaluation is value) of their writing is lost, often never to be reclaimed.
In a former life, I taught scientific/professional writing to medical students, residents, junior professors, and other graduate students. I found many sources that were useful and valuable to me. The conclusion to which I came is that taking a scientific/professional (or non-fiction) writing course is an essential tool to have as an evaluator. So I set about collecting useful (and, yes, valuable) resources. I offer them here.
Probably the single resource that every evaluator needs to have on hand is Strunk and White’s slim volume called “The Elements of Style”. It is in the 4th edition–I still use the 3rd. Recently, a 50th anniversary edition was published that is a fancy version of the 4th edition. Amazon has the 50th anniversary edition as well as the 4th edition–the 3rd ed is out of print.
You also need the style guide (APA, MLA, Biomedical Editors, Chicago) that is used by the journal to which you are submitting your manuscript. Choose one. Stick with it. I have the 6th edition of the APA guide on my desk. It is online as well.
Access to a dictionary and a thesaurus (now conveniently available online and through computer software) is essential. I prefer the hard copy Webster’s (I love the feel of books), yet would recommend the online version of the Oxford English Dictionary.
There are a number of helpful writing books (in no particular order or preference):
I will share William Safire’s 17 lighthearted looks at grammar and good usage another day.
Merry Christmas–the greeting for the upcoming holiday–Hanukkah ended December 18 (I hope yours was very happy–mine was); Solstice was last night (and the sun returned today–a feat in Oregon, in winter, so Solstice was truly blessed);
Kwanzaa won’t happen until Dec 26–and the greeting there is Habari Gani (Swahili for “What’s the news?”).
Now, how do I get an evaluation topic from that opening…hmmm…perhaps a gift…yes…a gift.
The gift I give y’all is this:
Think about your blessings.
Think about the richness of your life.
Think about those for whom you care.
And remember…even those thoughts are evaluative because you know how blessed you are; because you know how rich (we are not talking money here…) your life is; because you have people in your life for whom you care AND who care for you.
The light returns regardless of the tradition you follow, and that, too, is evaluative–because you can ask yourself is the light enough–and if it isn’t you CAN figure out how to solve that problem.
Next week, I’ll suggest some New Year’s resolutions–evaluative, of course, with no self-deception–you CAN do evaluation!