What follows is a primer, one of the first things evaluators learn when developing a program.  This is something that cannot be said enough.  Program evaluation is about the program.  NOT about the person who leads the program; NOT about the policy behind the program; NOT about the people who are involved in the program.  IT IS ABOUT THE PROGRAM!

Phew.  Now that I’ve said that, I’ll take a deep breath and elaborate.

 

“Anonymity, or at least a lack of face-to-face dialogue, leads people to post personal attacks…” said Nina Bahadur, Associate Editor, HuffPost Women.  Although she was speaking about blogs, not program evaluation specifically, the point applies to program evaluations.  Evaluations are handed out at the end of a program, and because they do not ask for identifying information, that anonymity often leads to personal attacks.  Personal attacks are not helpful to the program lead, the program, or the participants’ learning.

The program lead really wants to know ABOUT THE PROGRAM, not read slams about what s/he did or didn’t do, said or didn’t say.  There are some things about a program over which the program lead doesn’t have any control: the air handling at the venue, the type of chairs used, the temperature of the room, sometimes even the venue itself.  The program lead does have control over the choice of venue (usually), the caterer (if food is offered), the materials (the program) offered to the participants, and how s/he looks (grumpy or happy; serious or grateful).  I’ve just learned that how the “teacher” looks at the class makes a big difference in participants’ learning.

What a participant must remember is that they agreed to participate.  It may have been a requirement of their job; it may have been encouraged by their boss; it may have been required by their boss.  Whatever the reason, they agreed to participate.  They must be accountable for their participation.  Commenting on those things over which the program lead has no control may make them feel better in the short run; it doesn’t do any good to improve the program or to determine if the program made a difference, that is, whether it had merit, worth, value.  (Remember, the root word of evaluation is VALUE.)

Personal grousing doesn’t add to the program’s value.  The question to keep in mind when filling out an evaluation is, “Would this comment be said in real life (not on paper)? Would you tell the person this comment?”  If not, it doesn’t belong in your evaluation.  Program leads want to build a good and valuable program, and the only way they can do that is to receive critical feedback about the program.  So if the food stinks and the program lead placed the order with the caterer, tell the program lead not to use the caterer again; don’t tell the program lead that her/his taste in food is deplorable.  How does that improve the program?  If the chairs are uncomfortable, tell the program lead so s/he can pass that along to the venue; the program lead didn’t deliberately make the chairs uncomfortable.  If there wasn’t enough time for sharing, tell the program lead to increase the sharing time, because sometimes sharing personal experiences is just what is needed to make the program meaningful to participants.

People often ask me what a good indicator of impact is…I usually answer world peace…then I get serious.

I won’t get into language today.  Impact, long-term outcome: for today’s purposes, they are the same thing, CHANGE in the person or change in the person’s behavior.

Paul Mazmanian, a medical educator at Virginia Commonwealth University School of Medicine, wanted to determine whether practicing physicians who received only clinical information at a traditional continuing medical education lecture would alter their clinical behavior at the same rate as physicians who received clinical information AND information about barriers to behavioral change.  What he found is profound.  Information about barriers to change did not change the physicians’ clinical behavior.  That is important; sometimes research yields information that is very useful, and this is the case here.  Mazmanian et al. (see the complete citation below) found (drum roll, please) that physicians in both groups were statistically significantly MORE likely to change their clinical behavior if they indicated their INTENT TO CHANGE that behavior immediately following the lecture they received.

The authors concluded that stated intention to change was important in changing behavior.

We as evaluators can ask the same question: Do you intend to make a behavior change, and if so, what specific change?

Albert Bandura talks about self-efficacy, which is often measured by an individual’s confidence in being able to implement a change.  By pairing the two questions (How confident are you that…? and Do you intend to make a change…?), evaluators can often capture an indicator of behavior change; that indicator is often the best case that can be made for a long-term outcome.
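To make this concrete, here is a minimal sketch, my own illustration rather than a standard instrument, of how the two paired items might be scored together.  The item wording, the 1-5 confidence scale, and the cutoff of 4 are assumptions made only for the example.

```python
# A hypothetical scoring of the paired "intent to change" and
# "confidence" (self-efficacy) survey items described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    intends_to_change: bool          # "Do you intend to make a change...?"
    specific_change: Optional[str]   # open-ended follow-up: what specific change?
    confidence: int                  # "How confident are you that...?" on an assumed 1-5 scale

def behavior_change_indicator(r: Response, confidence_cutoff: int = 4) -> bool:
    """Count a respondent as a likely behavior-change case when they both
    state an intent to change and report high confidence (self-efficacy)."""
    return r.intends_to_change and r.confidence >= confidence_cutoff

responses = [
    Response(True, "schedule follow-up calls with clients", 5),
    Response(True, None, 2),
    Response(False, None, 4),
]

likely = sum(behavior_change_indicator(r) for r in responses)
print(f"{likely} of {len(responses)} respondents are likely behavior-change cases")
```

The point is simply that the two items together, intent plus confidence, give a defensible proxy when a true follow-up measure of behavior isn’t possible.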

 

I’ll be at AEA this week.  Next week, I’m moving offices.  I won’t be blogging.

Citation:

Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.

There has been quite a bit written about data visualization, a topic important to evaluators who want their findings used.  Michael Patton talks about evaluation use in the 4th edition of Utilization-Focused Evaluation.  He doesn’t, however, list data visualization in the index, so if he talks about it somewhere, it isn’t obvious.

The current issue of New Directions for Evaluation (NDE) is devoted to data visualization, and it is the first part (implying, I hope, at least a part 2).  Tarek Azzam and Stephanie Evergreen are the guest editors.  This volume (the first on this topic in 15 years) sets the stage (chapter 1) and talks about quantitative data visualization.  The last chapter talks about the tools that are available to the evaluator; there are many, and they are varied.  I cannot do them justice in this space; read about them in the NDE volume.  (If you are an AEA member, the volume is available online.)

freshspectrum, a blog by Chris Lysy, talks about INTERACTIVE data visualization with illustrations.

Stephanie Evergreen, the co-guest editor of the NDE volume above, also blogs; in her October 2 post, she talks about “Design for Federal Proposals (aka Design in a Black & White Environment)”.  More on data visualization.

The data visualizer who has made the largest impact on me is Hans Rosling in his TED talks.  Certainly the software he uses makes the images engaging, but if he didn’t understand his data the way he does, he wouldn’t be able to do what he does.

Data visualization is everywhere.  There will be multiple sessions at the AEA conference next week.  If you can, check them out–get there early as they will fill quickly.

When I did my dissertation, several soon-to-be colleagues were irate that I did a quantitative study on qualitative data.  (I was looking at cognitive bias, actually.)  I needed to reduce my qualitative data so that I could represent it quantitatively.  This approach to coding is called magnitude coding.  Magnitude coding is just one of the 25 first-cycle coding methods that Johnny Saldaña (2013) talks about in his book, The Coding Manual for Qualitative Researchers (see pages 72-77).  (If you want to order it, which I recommend, go to SAGE Publications.)  Miles and Huberman (1994) also address this topic.

So what is magnitude coding? It is a form of coding that “consists of and adds a supplemental alphanumeric or symbolic code or sub-code to an existing coded datum…to indicate its intensity, frequency, direction, presence, or evaluative content” (Saldaña, 2013, pp. 72-73).  It could also indicate the absence of the characteristic of interest.  Magnitude codes can be qualitative or quantitative and/or nominal.  These codes enhance the description of your data.

Saldaña provides multiple examples that cover many different approaches.  Magnitude codes can be words or abbreviations that suggest intensity or frequency, or they can be numbers that do the same thing.  These codes can suggest direction (i.e., positive or negative, using arrows).  They can also use symbols such as a plus (+) or a minus (−), or other symbols indicating the presence or absence of a characteristic.  One important factor for evaluators to consider is that magnitude coding can also capture evaluative content, that is, did the content demonstrate merit, worth, value?  (Saldaña also talks about evaluation coding; see page 119.)

Saldaña gives an example of analysis showing a summary table.  Computer-assisted qualitative data analysis software (CAQDAS) and Microsoft Excel can also provide summaries.  He notes that it “is very difficult to sidestep quantitative representation and suggestions of magnitude in any qualitative research” (Saldaña, 2013, p. 77).  We use quantitative phrases all the time: most, often, extremely, frequently, seldom, few, etc.  These words tend “to enhance the ‘approximate accuracy’ and texture of the prose” (Saldaña, 2013, p. 77).
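For readers who like to see this worked out, here is a small sketch of magnitude coding and the kind of summary table that results.  The topic codes, excerpts, and plus/zero/minus scale are hypothetical illustrations of the idea, not taken from Saldaña.

```python
# Hypothetical magnitude coding: each already-coded excerpt gets a
# supplemental magnitude code ("+", "0", "-") for evaluative direction,
# and the codes are then tallied into a simple summary table.
from collections import Counter

coded_data = [
    {"excerpt": "The sharing time was exactly what I needed.", "code": "SHARING",   "magnitude": "+"},
    {"excerpt": "There was not enough time to share.",         "code": "SHARING",   "magnitude": "-"},
    {"excerpt": "The handouts were mentioned in passing.",     "code": "MATERIALS", "magnitude": "0"},
    {"excerpt": "The materials were confusing.",               "code": "MATERIALS", "magnitude": "-"},
]

# Counts of each magnitude code per topic code, the kind of summary a
# CAQDAS package or Excel would produce.
summary = Counter((d["code"], d["magnitude"]) for d in coded_data)

print(f"{'Code':<12}{'+':>4}{'0':>4}{'-':>4}")
for code in sorted({d["code"] for d in coded_data}):
    counts = [summary.get((code, m), 0) for m in ("+", "0", "-")]
    print(f"{code:<12}{counts[0]:>4}{counts[1]:>4}{counts[2]:>4}")
```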

Making your qualitative data quantitative is only one approach to coding, an approach that is sometimes very necessary.