Having written previously about evaluation history, I identified those who had influenced my thinking rather than those who could be called evaluation pioneers. I think it is noteworthy to mention the evaluation pioneers who set the field on the path we see today, those whom I didn't mention and need to. As a memorial (it is Memorial Day weekend, after all), Michael Patton (whom I've mentioned previously) is coordinating an AEA365 to identify and honor those evaluation pioneers who are no longer with us. (Thank you, Michael.) The AEA365 link above will give you more details. I've also linked the evaluation pioneers who have been remembered. Some of these pioneers I've mentioned before; all are giants in the field; some are dearly loved as well. All those listed below have died. Patton distinguishes between the recently dead, the sasha, and the long dead, the zamani, citing the historian James W. Loewen. Some of those listed are definitely sasha (for me); some are zamani (for me). Perhaps photos (where photos could be found) and dates will help.
Mistakes are a great educator when one is honest enough to admit them and willing to learn from them.
– Alexander Solzhenitsyn
Even after 30+ years of evaluation, I make mistakes. It may be a mistake that occurs in the planning and modeling; it may be a mistake that occurs in the implementation, monitoring, and delivery; or a mistake in data management (qualitative or quantitative); or, more than likely, a mistake in the use of the findings.
Probably the biggest mistake I have ever made was making an assumption at the planning stage.
Focus groups are a wonderful data collection methodology. Not only are there different skills to learn for interviewing; the analysis also gives you the opportunity to explore qualitative data analysis. (It is all related, after all.)
Now, I will confess that I've only ordered the 5th edition of the Krueger and Casey book (I don't have it yet). I'm eager to see what is new. So I'll settle for the 4th edition and try to regale you with information you may not know. (I will talk in a future post about how virtual focus groups are envisioned.)
The term "focus group" describes (although sometimes incorrectly) a variety of group processes. Krueger and Casey give the reader a sense of what to pay attention to and what is based on faulty data. So, starting at the beginning, let's look at an overview of what exactly a focus group is.
Groups are experiences that affect the individual throughout life and are used for planning, decision making, advising, learning, sharing, self-help, and problem solving, among other purposes.
Previously, I talked about surveys (even though I posted that on April 27, 2016). Today, I'll collect all the posts about focus groups and add a bit more.
2010/01/05 Talks about the type of questions to use in a Focus Group
2010/01/27 One of three topics mentioned
2010/09/09 Talks about focus groups in terms of analyzing a conversation
2011/05/31 Talks about focus groups in the context of sampling
2011/06/23 Mentions Krueger, my go-to
2013/11/15 Mentions focus groups
2014/10/23 Mentions focus groups and an individual with information
2015/02/11 Mentions focus groups…
2015/05/08 Virtual focus groups
Although focus groups are mentioned throughout many of my posts, few posts are exclusively devoted to them. That surprises me. I need to talk more about focus groups. I especially need to talk about what I found when I did the virtual focus groups, more than in the specific post above. Judging from the interest at AEA last year, there needs to be much discussion.
So OK. More about focus groups.
Although Dick Krueger is my go-to reference for focus groups (I studied with him, after all), there are other books on focus groups. (I just discovered that Krueger and Casey have also revised and published a 5th edition.)
Other examples include (in no particular order):
Mary Marczak and Meg Sewell have an introduction to focus groups here (it is shorter than reading the book by Krueger and Casey).
I think it is important to remember that focus groups:
Next time: More on focus groups.
NOTE: This was written last week. I didn’t have time to post. Enjoy.
Methodology, aka implementation, monitoring, and delivery, is important. What good is it if you just gather the first findings that come to mind? Being rigorous here is just as important as when you are planning and modeling the program. So I've searched the last six years of blog posts and gathered some of them for you. They are all about surveys, a form of methodology. Surveys are often used by Extension because they are easy to use. However, organizing the survey, getting the surveys back, and dealing with non-response are problematic (another post, another time).
The previous posts are organized by date from the oldest to the most recent:
2016/04/21 (today’s post isn’t hyperlinked)
Just a few words on surveys today: A colleague asked about an evaluation survey for a recent conference. It will be an online survey, probably using the university system, Qualtrics. My colleague jotted down a few ideas. The thought occurred to me that this book (by Ellen Taylor-Powell and Marcus Renner) would be useful. On page ten, the book asks what type of information is needed and wanted, and it lists five types of possible information.
Thinking through these five categories made all the difference for my colleague. (Evaluation was a new area.) I had forgotten about how useful this booklet is for people being exposed to evaluation for the first time and to surveys, as well. I recommend it.
The WECT program was arbitrarily divided into four parts. Those "modules" are:
“Speak your mind, even if your voice shakes.”
The Gray Panthers is a group of people advocating for the rights of oldsters (among other things). Aging is the brunt of many jokes. At least in the US. Unfortunately.
Another longtime friend relayed an NPR story about aging, which says anchovies, rosemary, vino, and leisure are the answers. Now, I'm not saying that anchovies, rosemary, vino, and leisure are the reason evaluation as a discipline has come as far as it has in the last 50+ years; I'm just saying that perhaps we need to look a little deeper than the surface. I think Maggie Kuhn says it clearly: "Speak your mind, even if your voice shakes."
Stand up for what you believe! (even if your voice shakes).
I believe that evaluation makes a difference.
I believe that there is a need for evaluation.
So how will you stand up today? What choice will you make? Speak your mind unambiguously!
New Topic: I learned today that Will Shadish died on March 27, 2016.
Will was very active as a quantitative psychologist and an evaluator. We served AEA together. I will miss him.
I am a social scientist. I look for the social in the science of what I do.
I am an evaluator as a social scientist. I want to determine the merit, worth, value of what I do. I want to know that the program I’m evaluating (or offering) made a difference. (After all, the root of evaluation is value.)
Keeping that in mind has resulted (over the years) in the comment, "no wonder she is the evaluator," when I ask an evaluative question. So I was surprised when I read a comment by a reader that implied that it didn't matter. The reader said, "The ugly truth is, it does not matter if it makes a difference. Somewhere down the road someone will see your post and may be it will be useful for him." (Now, you must know that I've edited the comment, although even the entire comment doesn't support my argument: evaluators need to know if the program made a difference.)
So the thought occurred to me: what if it didn't make a difference? What if the program has no value? No worth? No merit? What if by evaluating the program you find that it won't be useful for the participant? What does that say about you as an evaluator? You as a program designer? You as an end user? Is it okay for the post to be useful "somewhere down the road"? Is blogging truly "a one way channel to transfer any information you have over the web"? How long can a social-scientist-always-looking-at-the-social continue to work when the information goes out and rarely comes back? I do not know. I do know that blogging is hard work. After six and one-half years of writing this blog almost weekly, writer's block is my constant companion (although, being on a computer, I do not have a pile of paper, just blank screens). So I'm turning to you, readers:
Does it make a difference whether I write this blog or not?
Am I abdicating my role as an evaluator when I write the blog?
I don't know. Over the years I have gotten some interesting comments (other than the "nice job" and "keep up the work" types of comments). I will pause (not in my writing; I'll continue to do that) and think about this. After all, I am an evaluator wanting to know what difference this program makes.
Today, I'm going to talk about evaluation use, that is, the use of evaluation findings. Now, Michael Patton wrote the book (actually more than one) on the topic, and I highly recommend that book (and the shorter version, Essentials of Utilization-Focused Evaluation [461 pages including the index, as opposed to 667]).
I firmly believe that there is no point in conducting an evaluation if the final report of that evaluation sits on someone's shelf and IS NEVER USED! Not just read (hopefully!), but USED to make the program better. To make a difference.
Today, though, I want to talk about how that final report is put together. Whether it is an infographic, a dashboard, an executive summary, or a 300-page document, it has to be your best effort. So I want to talk about your best effort.
That best effort is accurate not only in reporting the findings, but also in the spelling, the grammar, the syntax.
For example: The word "data" is plural and takes a plural verb. Yep. Check the dictionary, folks. Webster's Seventh New Collegiate Dictionary says (under the entry data): plural of DATUM. (I'll bet you didn't know that the plural of OPUS is OPERA. Just another example of the peculiarities of the English language.) The takeaway here: When in doubt, check it out!
When I put together a final report (regardless of the format), I use the 5Cs as a guideline. (I also use them as a basis for reviewing manuscripts.) Those 5Cs are: Clarity. Coherence. Conciseness. Correctness. Consistency. Following the 5Cs results in a product of which I can be proud.
How do you use your evaluation report? Keep these things in mind!
The Highest Appreciation
– John F. Kennedy
Gratitude must be a habit. Each day needs to begin and end with gratefulness. Then, if you can live by that gratefulness, you will utter the words and be grateful. That is what evaluation is all about: holding to the higher ground. Not just doing something to get it done; doing something (in this case, the evaluation) because it is right as you know it today, in this moment, under these circumstances.
Doing evaluation just for the sake of evaluating, because it would be nice to know, is not the answer. Yes, it may be nice to know, but does it make a difference? Does the program (policy, performance, product, project, etc.) make a difference in the lives of the participants? As a social scientist, it is important for me to look at the "social" side of what I do; that means dealing with people, the participants (you know, the social part). I want to determine what the participants are thinking, feeling, and doing. That means I must walk my talk. And be grateful.
There are lots of resources available that help the nascent evaluator do just that. My recommendation is to start with Jody Fitzpatrick's volume. I would also check out the American Evaluation Association site. There is a lot of information available to non-members (becoming a member is worth the cost). Then, depending on what you specifically want to know, let me know. I'll suggest references to you.