Nelson Mandela died last week (Thursday, actually) at the age of 95.  Invictus is the title of a movie that takes its name from the poem below.  While in prison on Robben Island, Mandela recited this poem to other prisoners and was empowered by its message of self-mastery.  It is a powerful poem.  Mandela was a powerful person.  We and the world were blessed that he was with us for 95 years; that he was the master of his fate and the captain of his soul.

 

Invictus

Out of the night that covers me,
Black as the pit from pole to pole,
I thank whatever gods may be
For my unconquerable soul.

In the fell clutch of circumstance
I have not winced nor cried aloud.
Under the bludgeonings of chance
My head is bloody, but unbowed.

Beyond this place of wrath and tears
Looms but the horror of the shade,
And yet the menace of the years
Finds and shall find me unafraid.

It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate:
I am the captain of my soul.

~~William Ernest Henley

 

When I read this poem and think of Mandela (aka Madiba), I also think of the evaluator’s guiding principles, especially the last three: Integrity/Honesty, Respect for People, and Responsibilities for General and Public Welfare.  Mandela could have been an evaluator, as even the first two principles (Systematic Inquiry and Competence) could apply.  He was certainly competent, and he did systematic inquiry.  He used these principles in an arena other than evaluation.  Yet by applying them, he was able to determine the merit and worth of what he did.  The world was lucky to have him for so long.  He was the change he wished to see; and he changed the world.

I was reminded about the age of this blog (see the comment below).  Then it occurred to me: I’ve been writing this blog since December 2009.  That is four years of almost weekly posts.  And even though evaluation is my primary focus, I occasionally get on my soapbox and do something different (White Christmas Pie, anyone?).  My other passion besides evaluation is food and cooking.  I gave a latke party on Saturday and the food was pretty–and it even tasted good.  I was more impressed by the visual appeal of my table; my guests were more impressed by the array of tastes, flavors, and textures.  I’d say the evening was a success.  This blog is a metaphor for that table.  Sometimes I’m impressed with the visual appeal; sometimes I’m impressed with the content.  Today is an anniversary.  Four years.  I find that amazing (visual appeal).  The quote below (a comment a reader offered on “Is this blog making a difference?”, a post I wrote a long time ago) is about content.

“Judging just from the age of your blog I must speculate that you’ve done something right. If not then I doubt you’d still be writing regularly. Evaluation of your progress is important but pales in comparison to the importance of writing fresh new content on a regular basis. Content that can be found no place else is what makes a blog truly useful and indeed helps it make a difference.”

Audit or evaluation?

I’m an evaluator; I want to know what difference the “program” is making in the lives of the participants.  The local school district where I live, work, and send my children to school has provided middle school children with iPads.  The district wants to “audit” their use.  I commend the school district for that initiative (both giving the iPads and wanting to determine their effectiveness).  I wonder if they really want to know what difference the electronics are making in the lives of the students.  I guess I need to go re-read “Linking Auditing and Metaevaluation,” the 1988 book Tom Schwandt wrote with Ed Halpern, as well as see what has happened in the last 25 years (and it is NOT that I do not have anything else to read…).  I think it is important to note this sentence from the foreword: “Nontraditional studies are found not only in education, but also in…divers fields…” (and the list they provide is a who’s who in social science).  The problem with such studies is “establishing their merit.”  That is always a problem with evaluation–establishing the merit, worth, value of a program (study).

We could spend a lot of time debating the merit, worth, value of using electronics in the pursuit of learning.  (In fact, Jeffrey Selingo writes about the need to personalize instruction using electronics in his 2013 book “College (Un)bound”–very readable, recommended.)  I do not think counting the number of apps or the number of page views is going to answer the question posed.  I do not think counting the number of iPads returned in working condition will either.  This is an interesting experiment.  How, reader, would you evaluate the merit, worth, value of giving iPads to middle school children?  All ideas are welcome–let me know, because I do not have an answer, only an idea.

As a bonus, there are two posts this week (I won’t be posting next week…Thanksgivukkah!).

I keep getting comments about my post “Is this blog making a difference?” and the subsequent posts related to it.  (The survey is closed–has been for a long time.)  Although I was looking for some tangible measure of difference (i.e., what is the merit, worth, value of this blog), I find that the post elicits more positive comments than not.  Although I still get the random advertising comment on that post, I mostly get thought-provoking comments about how the blog is or can be making a difference.  I also get the comment, at least weekly, that says my blog isn’t making any difference to the reader (perhaps the reader needs to follow the blog across time rather than just reading that one post)…sigh…

One comment I received this week said, “I can confirm that the information that you share has: 1) made sense; 2) made a difference; 3)  been worthwhile.  Keep it up; don’t stop.”  Nice.  Not specific; still nice.  That the blog is being read is important.  That people take the time to comment is important.  That this venue is valuable to many is important.

I get positive comments on my writing; I get positive comments on my content.  I’m an evaluator.  I want to know what difference this information is making in your life.  This is a program and it needs to be evaluated.  And number of page views isn’t the answer.

So I ask you to keep commenting; that way I know the blog is being read.  That way I know I am making a difference.

For the first time in my lifetime, the first day of Hanukkah is also Thanksgiving.  The pundits are sagely calling the event Thanksgivukkah.  According to this referenced source, the overlap will not happen again for over 70,000 years.  However, according to another source, it could happen again in 2070 and 2165.  Although I do not think I’ll be around in 2070, my children could be (they are 17 and 20 as of this writing).  I find this phenomenon really interesting–Thanksgiving usually starts the US holiday season, and Hanukkah falls later, during Advent.  Not so this year.  I wonder how people combine latkes and Thanksgiving (even without the turkey).  Loaded latkes?  Thanksgivukkah latkes?  (My appreciation to Kia.)

So I’m sure you are wondering, HOW EXACTLY DOES THIS RELATE TO EVALUATION?

I decided that it was time to revisit my blog title, Evaluation is an Everyday Activity.  Every day you evaluate something.  Although you do not necessarily articulate out loud the criteria against which you are determining merit, worth, and value, you have those criteria.  I have them for latkes AND Thanksgiving.  Our latkes must be crispy and made of winter vegetables, including potatoes.  This allows me to use the variety of winter vegetables I may have gotten in my CSA.  (Beet latkes?  Sweet potato latkes?  Celeriac latkes?  You bet!)  Our Thanksgiving is to have foods for which we are truly thankful.  That allows us to think about gratitude.  Each year our menu is different because each year we are thankful for different things.  (I must confess, however, we always have pie–pumpkin, which I make from home-grown pumpkin/squash, and chocolate pecan, which is an original old family recipe.)  One year when we put all the food on the table, all the food was green.  We didn’t plan it that way; it just happened because those were the foods for which we were thankful.  This year, we will have mashed potatoes (by the Queen of mashed potatoes), Celebration Filo in both the gluten-free version (made with rice wrappers and no onion, garlic, or dairy) and the glutened version (the one we renamed, in the link above), and something else that will probably be green.  This year I’m thankful for my gluten-free, dairy-free friend who will join us for Thanksgiving, and I’m working up alternatives to accommodate her and still satisfy the rest of us.

So you see, even when I’m thinking about Thanksgiving, latkes, and gratitude, I’m thinking about evaluation.  What merit does the “program” have?  What is its worth?  What is its value?  Those are all evaluative questions that apply to Thanksgiving (and latkes and gratitude).

So you see, Evaluation is an Everyday Activity.

I won’t be blogging next week.  Enjoy.  Be grateful.

Variables.

We all know about independent variables and dependent variables.  You probably even learned about moderator variables, control variables, and intervening variables.  But have you heard of confounding variables?  These are variables over which you have no (or very little) control, and they correlate, positively or negatively, with both the independent and the dependent variable.  The resulting spurious relationship plays havoc with analyses, program outcomes, and logic models.  You see them often in social programs.
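To make the idea concrete, here is a minimal simulation sketch (Python with numpy and scipy; every number is invented, not from any real program) in which an unmeasured confounder Z drives both X and Y, producing a correlation that largely disappears once Z is controlled for:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1_000

z = rng.normal(size=n)              # the confounder (often unmeasured)
x = 0.7 * z + rng.normal(size=n)    # "independent" variable, driven by z
y = 0.7 * z + rng.normal(size=n)    # "dependent" variable, also driven by z

# The raw x-y correlation looks real (typically around .3 here).
r_raw, p_raw = stats.pearsonr(x, y)
print(f"raw correlation:        r = {r_raw:.2f}, p = {p_raw:.3g}")

# Partial out z: regress x and y on z, then correlate the residuals.
x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
r_part, p_part = stats.pearsonr(x_resid, y_resid)
print(f"controlling for z:      r = {r_part:.2f}, p = {p_part:.3g}")
```

If Z is never measured–like the biological fathers in the example below–the spurious correlation is all you ever see.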

Ever encounter one?  (Let me know.)  Need an example?  Here is one a colleague provided.  A program was developed to assist children removed from their biological mothers (even though the courts typically favor mothers) in order to improve the children’s choices and chances of success.  The program included training of key stakeholders (judges, social services, potential foster parents).  The confounding variable that wasn’t taken into account was the sudden appearance of the biological father.  Judges assumed that he was no longer present (and most of the time he wasn’t); social services established fostering without taking his presence into consideration; potential foster parents were not alerted in their training to the possibility.  Needless to say, the program failed.  When biological fathers appeared (as often happened), the program had no control over the effect they had.  Fathers had not been included in the program’s equation.

Reviews.

Recently, I was asked to review a grant proposal; the award would amount to several hundred thousand dollars (in today’s economy, no small change).  The PI’s passion came through in the proposal’s text.  However, the PI and the PI’s colleagues did some major lumping in the text that confounded the proposed outcomes.  I didn’t see how what was being proposed would result in what was said to happen.  This is an evaluative task.  I was charged with evaluating the proposal on technical merit, possibility of impact (certainly not world peace), and achievability.  The proposal was lofty and well meant.  The likelihood that it would accomplish what it proposed was unclear, despite the PI’s passion.  When reviewing a proposal, it is important to think big picture as well as small picture.  Most proposals will not be sustainable after the end of funding.  Will the proposed project really be able to make an impact?

Conversations.

I attended a meeting recently that focused on various aspects of diversity.  (Among the confounding issues here is what one means by diversity: is it only the intersection of gender and race/ethnicity, or something bigger, more?)  One of the presenters talked about how, just by entering into the conversation, the participants would be changed.  I wondered: how can that change be measured?  How would you know that a change took place?  Any ideas?  Let me know.

Focus groups.

A colleague asked whether a focus group could be conducted via email.  I had never heard of such a thing (virtual, yes; email, no).  Dick Krueger and Mary Anne Casey talk only about electronic reporting in the 4th edition of their focus group book.  If I go to Wikipedia (keep in mind it is a wiki…), there is a discussion of online focus groups, but nothing about email focus groups.  So I ask you, readers: is it a focus group if it is conducted by email?

I had a topic all ready to write about, and then I got sick.  I’m sitting here typing this, trying to remember what that topic was, to no avail.  That topic went the way of much of my recent memory; another day, perhaps.

I do remember the conversation with my daughter about correlation.  She had a correlation of .3 something with a probability of 0.011 and didn’t understand what that meant.  We had a long discussion of causation and attribution and correlation.

We had another long conversation about practical vs. statistical significance, something her statistics professor isn’t teaching.  She isn’t learning about data management in her statistics class either.  Having dealt with both qualitative and quantitative data for a long time, I have come to realize that data management needs to be understood long before you memorize the formulas for the various statistical tests you wish to perform.  What if the flood happens?
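As a worked example that ties the two conversations together, here is a short Python sketch (the sample size of 70 is my assumption; she didn’t mention one) showing how a correlation near .3 can produce a p-value near 0.011 and still be practically modest:

```python
import numpy as np
from scipy import stats

# Reconstruct the p-value for r = 0.30; n = 70 is an assumed sample size.
r, n = 0.30, 70
t = r * np.sqrt((n - 2) / (1 - r**2))      # t statistic for a correlation
p = 2 * stats.t.sf(abs(t), df=n - 2)       # two-tailed p-value
print(f"r = {r}, n = {n}: t = {t:.2f}, p = {p:.3f}")   # p is about 0.011

# Practical significance: the coefficient of determination.
print(f"r^2 = {r**2:.2f} -> only {r**2:.0%} of the variance is shared")
```

Statistically significant, yes; but roughly 9% shared variance is the practical-significance question that deserves the longer conversation.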

So today I’m telling you about data management as I understand it, because the flood did actually happen and, fortunately, I didn’t lose my data.  I had a data dictionary.

Data dictionary.  The first step in data management is a data dictionary.  There are other names for this, which escape me right now…just know that a hard copy of how and what you have coded is critical.  Yes, make a backup copy on your hard drive…and have a hard copy, because the flood might happen.  (It is raining right now, and it is Oregon in November.)

Take a hard copy of your survey, evaluation form, or qualitative data coding sheet and mark on it what every code notation you used means.  I’d show you an example of what I do, only my examples are at the office and I am home sick without my files.  No, I don’t use cards for my data any more (I did once…most of you won’t remember that time…); I do make a hard copy with clear notations.  I find myself doing the same with other things to make sure I code the response the same way.  That is what a data dictionary allows you to do–check yourself.
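Here is a minimal sketch (Python; the variable names, questions, and codes are hypothetical, not from an actual instrument) of what a data dictionary can look like when it lives next to the data file:

```python
# A hypothetical data dictionary: every code and missing-value notation
# is written down once, in one place.
data_dictionary = {
    "q1_satisfaction": {
        "question": "Overall, how satisfied were you with the program?",
        "codes": {1: "very dissatisfied", 2: "dissatisfied", 3: "neutral",
                  4: "satisfied", 5: "very satisfied"},
        "missing": {9: "no response"},
    },
    "attended_all": {
        "question": "Did the participant attend all sessions?",
        "codes": {0: "no", 1: "yes"},
        "missing": {9: "unknown"},
    },
}

# Print it out -- this is the hard copy that survives the flood.
for var, info in data_dictionary.items():
    print(f"{var}: {info['question']}")
    for code, label in {**info["codes"], **info["missing"]}.items():
        print(f"  {code} = {label}")
```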

Then I run a frequencies and percentages analysis.  I use SPSS (because that is what I learned first).  I look for outliers, variables that are miscoded, and system-generated missing data that isn’t really missing.  I look for any anomaly in the data, any human error (i.e., my error).  Then I fix it.  Then I run my analyses.
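For readers who don’t use SPSS, a rough pandas equivalent of that first pass might look like this (the file name and variables are the hypothetical ones from the sketch above):

```python
import pandas as pd

df = pd.read_csv("survey.csv")   # hypothetical data file

# Valid codes per variable, copied from the data dictionary.
valid_codes = {
    "q1_satisfaction": {1, 2, 3, 4, 5, 9},   # 9 = no response
    "attended_all": {0, 1, 9},               # 9 = unknown
}

for var, valid in valid_codes.items():
    # Frequencies and percentages, including missing values.
    n = df[var].value_counts(dropna=False)
    pct = df[var].value_counts(dropna=False, normalize=True).mul(100).round(1)
    print(f"\n{var}:")
    print(pd.concat([n, pct], axis=1, keys=["n", "%"]))

    # Flag values outside the documented codes (possible miscodes).
    bad = df.loc[df[var].notna() & ~df[var].isin(valid), var].unique()
    if len(bad):
        print(f"  possible miscodes: {sorted(bad)}")
```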

There are probably more steps than I’ve covered today.  These are the first steps that absolutely must be done BEFORE you do any analyses.  Then you have a good chance of keeping your data safe.

What follows is a primer, one of the first things evaluators learn when developing a program.  This is something that cannot be said enough.  Program evaluation is about the program.  NOT about the person who leads the program; NOT about the policy about the program; NOT about the people who are involved in the program.  IT IS ABOUT THE PROGRAM!

Phew.  Now that I’ve said that, I’ll take a deep breath and elaborate.

 

“Anonymity, or at least a lack of face-to-face dialogue, leads people to post personal attacks…” (so said Nina Bahadur, Associate Editor, HuffPost Women).  Although she was speaking about blogs, not program evaluation specifically, the point applies to program evaluations.  Evaluations are handed out at the end of a program; they do not ask for identifying information, and that anonymity often leads to personal attacks.  Personal attacks are not helpful to the program lead, the program, or the participants’ learning.

The program lead really wants to know ABOUT THE PROGRAM, not slams about what s/he did or didn’t do, said or didn’t say.  There are some things about a program over which the program lead doesn’t have any control–the air handling at the venue; the type of chairs used; the temperature of the room; sometimes, even the venue.  The program lead does have control over the choice of venue (usually), the caterer (if food is offered), the materials (the program) offered to the participants, and how s/he looks (grumpy or happy; serious or grateful)–I’ve just learned that how the “teacher” looks at the class makes a big difference in participants’ learning.

What a participant must remember is that they agreed to participate.  It may have been a requirement of their job; it may have been encouraged by their boss; it may have been required by their boss.  Whatever the reason, they agreed to participate.  They must be accountable for their participation.  Commenting on those things over which the program lead has no control may make them feel better in the short run; it doesn’t do any good to improve the program or to determine if the program made a difference–that is, had merit, worth, value.  (Remember, the root word of evaluation is VALUE.)

Personal grousing doesn’t add to the program’s value.  The question to remember when filling out an evaluation is, “Would this comment be said in real life (not on paper)?  Would you tell the person this comment?”  If not, it doesn’t belong in your evaluation.  Program leads want to build a good and valuable program.  The only way they can do that is to receive critical feedback about the program.  So if the food stinks and the program lead placed the order with the caterer, tell the program lead not to use the caterer again; don’t tell the program lead that her/his taste in food is deplorable–how does that improve the program?  If the chairs are uncomfortable, tell the program lead so s/he can tell the venue; the program lead didn’t deliberately make the chairs uncomfortable.  If there wasn’t enough time for sharing, tell the program lead to increase the sharing time, because sometimes the sharing of personal experiences is just what is needed to make the program meaningful to participants.

People often ask me what is a good indicator of impact…I usually answer world peace…then I get serious.

I won’t get into language today.  Impact–long-term outcome.  For purposes of today, they are the same: CHANGE in the person or change in the person’s behavior.

Paul Mazmanian, a medical educator at Virginia Commonwealth University School of Medicine, wanted to determine whether practicing physicians who received only clinical information at a traditional continuing medical education lecture would alter their clinical behavior at the same rate as physicians who received clinical information AND information about barriers to behavioral change.  What he found is profound.  Information about barriers to change did not change the physicians’ clinical behavior.  That is important.  Sometimes research yields information that is very useful.  This is the case here.  Mazmanian et al. (see the complete citation below) found (drum roll, please) that physicians in both groups were statistically significantly MORE likely to change their clinical behavior if they indicated their INTENT TO CHANGE that behavior immediately following the lecture they received.

The authors concluded that stated intention to change was important in changing behavior.

We as evaluators can ask the same question: Do you intend to make a behavior change, and if so, what specific change?

Albert Bandura talks about self-efficacy, which is often measured by an individual’s confidence in their ability to implement a change.  By pairing the two questions (How confident are you that… and Do you intend to make a change…), evaluators can often capture an indicator of behavior change; that indicator is often the best available case for a long-term outcome.
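Here is a toy sketch (Python, with invented column names and responses; the cut-off of 4 on a five-point confidence scale is my assumption, not a published rule) of how the paired questions might be summarized:

```python
import pandas as pd

# Invented end-of-program responses to the two paired questions.
responses = pd.DataFrame({
    "intends_to_change": ["yes", "yes", "no", "yes", "no", "yes"],
    "confidence_1to5":   [5, 3, 2, 4, 4, 5],   # "How confident are you that..."
})

# Respondents who both state an intent to change AND feel confident they can
# are the group most likely to show the long-term outcome.
responses["likely_changer"] = (
    (responses["intends_to_change"] == "yes")
    & (responses["confidence_1to5"] >= 4)
)

print(pd.crosstab(responses["intends_to_change"],
                  responses["confidence_1to5"] >= 4,
                  rownames=["intends"], colnames=["confident"]))
print(f"likely changers: {responses['likely_changer'].mean():.0%}")
```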

 

I’ll be at AEA this week.  Next week, I’m moving offices.  I won’t be blogging.

Citation:

Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.

There has been quite a bit written about data visualization, a topic important to evaluators who want their findings used.  Michael Patton talks about evaluation use in the 4th edition of Utilization-Focused Evaluation.  He doesn’t, however, list data visualization in the index; he may talk about it somewhere, but it isn’t obvious.

The current issue of New Directions for Evaluation is devoted to data visualization, and it is labeled part 1 (implying, I hope, at least a part 2).  Tarek Azzam and Stephanie Evergreen are the guest editors.  This volume (the first on this topic in 15 years) sets the stage (chapter 1) and talks about both quantitative and qualitative data visualization.  The last chapter covers the tools available to the evaluator, and there are many, and they are varied.  I cannot do them justice in this space; read about them in the NDE volume.  (If you are an AEA member, the volume is available online.)

freshspectrum, a blog by Chris Lysy, talks about INTERACTIVE data visualization with illustrations.

Stephanie Evergreen, the co-guest editor of the above NDE, also blogs and in her October 2 post, talks about “Design for Federal Proposals (aka Design in a Black & White Environment)”.  More on data visualization.

The data visualizer that made the largest impact on me was Hans Rosling in his TED talks.  Certainly the software he uses makes the images engaging.  If he didn’t understand his data the way he does, he wouldn’t be able to do what he does.

Data visualization is everywhere.  There will be multiple sessions at the AEA conference next week.  If you can, check them out–get there early as they will fill quickly.

When I did my dissertation, there were several soon-to-be colleagues who were irate that I did a quantitative study on qualitative data.  (I was looking at cognitive bias, actually.)  I needed to reduce my qualitative data so that I could represent it quantitatively.  This approach to coding is called magnitude coding.  Magnitude coding is just one of the 25 first-cycle coding methods that Johnny Saldaña (2013) discusses in his book, The Coding Manual for Qualitative Researchers (see pages 72-77).  (If you want to order it, which I recommend, go to Sage Publications.)  Miles and Huberman (1994) also address this topic.

So what is magnitude coding?  It is a form of coding that “consists of and adds a supplemental alphanumeric or symbolic code or sub-code to an existing coded datum…to indicate its intensity, frequency, direction, presence, or evaluative content” (Saldaña, 2013, pp. 72-73).  It could also indicate the absence of the characteristic of interest.  Magnitude codes can be qualitative or quantitative and/or nominal.  These codes enhance the description of your data.

Saldaña provides multiple examples that cover many different approaches.  Magnitude codes can be words or abbreviations that suggest intensity or frequency, or they can be numbers that do the same thing.  These codes can suggest direction (i.e., positive or negative, using arrows).  They can also use symbols like a plus (+) or a minus (−), or other symbols indicating the presence or absence of a characteristic.  One important factor for evaluators to consider is that magnitude coding also suggests evaluative content, that is, did the content demonstrate merit, worth, value?  (Saldaña also talks about evaluation coding; see page 119.)

Saldaña gives an example of analysis showing a summary table.  Computer-assisted qualitative data analysis software (CAQDAS) and Microsoft Excel can also provide summaries.  He notes that “it is very difficult to sidestep quantitative representation and suggestions of magnitude in any qualitative research” (Saldaña, 2013, p. 77).  We use quantitative phrases all the time–most, often, extremely, frequently, seldom, few, etc.  These words tend “to enhance the ‘approximate accuracy’ and texture of the prose” (Saldaña, 2013, p. 77).
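To show the mechanics (this is not Saldaña’s actual example; the codes and excerpts below are invented), here is a small Python sketch that attaches magnitude codes for direction and intensity to coded data and rolls them up into a summary table:

```python
import pandas as pd

# Invented coded excerpts: each coded datum carries a supplemental
# magnitude code for direction (+/-) and intensity (1 = mild ... 3 = strong).
coded = pd.DataFrame({
    "participant": ["P1", "P1", "P2", "P3", "P3"],
    "code":        ["trust", "workload", "trust", "trust", "workload"],
    "direction":   ["+", "-", "+", "-", "-"],
    "intensity":   [3, 2, 1, 2, 3],
})

# Summary table: how often each code appears by direction, with mean intensity.
summary = (coded.groupby(["code", "direction"])
                .agg(n=("intensity", "size"),
                     mean_intensity=("intensity", "mean")))
print(summary)
```

This is the kind of rollup a CAQDAS package or Excel would produce; the point is simply that the magnitude codes make qualitative data summarizable.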

Making your qualitative data quantitative is only one approach to coding, an approach that is sometimes very necessary.