What follows is a primer, one of the first things evaluators learn when developing a program.  It cannot be said enough: program evaluation is about the program.  NOT about the person who leads the program; NOT about the policy behind the program; NOT about the people who participate in it.  IT IS ABOUT THE PROGRAM!

Phew.  Now that I’ve said that, I’ll take a deep breath and elaborate.

 

“Anonymity, or at least a lack of face-to-face dialogue, leads people to post personal attacks…” said Nina Bahadur, Associate Editor of HuffPost Women.  Although she was speaking about blogs, not program evaluation, her point applies to program evaluations as well.  Evaluations are handed out at the end of a program; because they do not ask for identifying information, they often invite personal attacks.  Personal attacks are not helpful to the program lead, the program, or the participants’ learning.

The program lead really wants to know ABOUT THE PROGRAM, not slams about what s/he did or didn’t do, said or didn’t say.  There are some things about a program over which the program lead has no control–the air handling at the venue; the type of chairs used; the temperature of the room; sometimes, even the venue itself.  The program lead does have control over the choice of venue (usually), the caterer (if food is offered), the materials (the program) offered to the participants, and how s/he looks (grumpy or happy; serious or grateful)–I’ve just learned that how the “teacher” looks at the class makes a big difference in participants’ learning.

What a participant must remember is that they agreed to participate.   It may have been a requirement of their job; it may have been encouraged, or even required, by their boss.  Whatever the reason, they agreed to participate, and they must be accountable for that participation.  Commenting on things over which the program lead has no control may make them feel better in the short run; it does nothing to improve the program or to determine whether the program made a difference–that is, whether it had merit, worth, value.  (Remember, the root word of evaluation is VALUE.)

Personal grousing doesn’t add to the program’s value.  The question to remember when filling out an evaluation is, “Would I say this comment in real life (not on paper)?  Would I tell the person this to their face?”  If not, it doesn’t belong in your evaluation.  Program leads want to build a good and valuable program, and the only way they can do that is to receive critical feedback about the program.  So if the food stinks and the program lead placed the order with the caterer, tell the program lead not to use that caterer again; don’t tell the program lead that her/his taste in food is deplorable–how does that improve the program?  If the chairs are uncomfortable, tell the program lead to let the venue know that participants found the chairs uncomfortable–the program lead didn’t deliberately make them uncomfortable.  If there wasn’t enough time for sharing, tell the program lead to increase the sharing time, because sharing personal experiences is sometimes just what is needed to make the program meaningful to participants.

People often ask me what a good indicator of impact is…I usually answer world peace…then I get serious.

I won’t get into language today.  Impact–long-term outcome.  For today’s purposes, they are both the same:  CHANGE in the person or change in the person’s behavior.

Paul Mazmanian, a medical educator at Virginia Commonwealth University School of Medicine, wanted to determine whether practicing physicians who received only clinical information at a traditional continuing medical education lecture would alter their clinical behavior at the same rate as physicians who received clinical information AND information about barriers to behavioral change.  What he found is profound: information about barriers to change did not change the physicians’ clinical behavior.  That is important.  Sometimes research yields information that is very useful; this is the case here.  Mazmanian et al. (see complete citation below) found (drum roll, please) that physicians in both groups were statistically significantly MORE likely to change their clinical behavior if they indicated their INTENT TO CHANGE their behavior immediately following the lecture they received.

The authors concluded that stated intention to change was important in changing behavior.

We as evaluators can ask the same question: Do you intend to make a behavior change, and if so, what specific change?

Albert Bandura talks about self-efficacy.  That is often measured by an individual’s confidence in being able to implement a change.  By pairing the two questions (How confident are you that…? and Do you intend to make a change…?), evaluators can often capture an indicator of behavior change; that indicator is often the best case for a long-term outcome.
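As a sketch of how those two paired items might be combined into a single indicator, here is a minimal example.  The 1-5 scale, the cutoff of 4, and the function name are my own illustrative assumptions, not anything from Mazmanian et al. or Bandura:

```python
# Sketch: pairing a stated intent-to-change item with a Bandura-style
# confidence (self-efficacy) item to flag likely behavior change.
# The 1-5 scale and the cutoff of 4 are illustrative assumptions.

def behavior_change_indicator(intends_to_change: bool, confidence: int) -> bool:
    """Flag a respondent as a likely behavior-changer when they state an
    intent to change AND rate their confidence 4 or 5 on a 1-5 scale."""
    if not 1 <= confidence <= 5:
        raise ValueError("confidence must be on a 1-5 scale")
    return intends_to_change and confidence >= 4

responses = [
    {"id": 1, "intends": True,  "confidence": 5},
    {"id": 2, "intends": True,  "confidence": 2},  # intends, but low self-efficacy
    {"id": 3, "intends": False, "confidence": 5},  # confident, but no stated intent
]

likely_changers = [r["id"] for r in responses
                   if behavior_change_indicator(r["intends"], r["confidence"])]
print(likely_changers)  # -> [1]
```

The point of the pairing is visible in respondents 2 and 3: either item alone would over-count; together they form a more defensible indicator of long-term outcome.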

 

I’ll be at AEA this week.  Next week, I’m moving offices.  I won’t be blogging.

Citation:

Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.

There has been quite a bit written about data visualization, a topic important to evaluators who want their findings used.  Michael Patton talks about evaluation use in the 4th edition of Utilization-Focused Evaluation.  He doesn’t, however, list data visualization in the index; he may talk about it somewhere, but it isn’t obvious.

The current issue of New Directions for Evaluation (NDE) is devoted to data visualization, and it is the first part (implying, I hope, at least a part 2).  Tarek Azzam and Stephanie Evergreen are the guest editors.  This volume (the first on this topic in 15 years) sets the stage (chapter 1) and talks about quantitative and qualitative data visualization.  The last chapter talks about the tools available to the evaluator, and there are many and various.  I cannot do them justice in this space; read about them in the NDE volume.  (If you are an AEA member, the volume is available online.)

freshspectrum, a blog by Chris Lysy, talks about INTERACTIVE data visualization with illustrations.

Stephanie Evergreen, the co-guest editor of the above NDE, also blogs; in her October 2 post, she talks about “Design for Federal Proposals (aka Design in a Black & White Environment)”.  More on data visualization.

The data visualizer who made the largest impact on me was Hans Rosling in his TED talks.  Certainly the software he uses makes the images engaging, but if he didn’t understand his data the way he does, he wouldn’t be able to do what he does.

Data visualization is everywhere.  There will be multiple sessions at the AEA conference next week.  If you can, check them out–get there early as they will fill quickly.

When I did my dissertation, several soon-to-be colleagues were irate that I did a quantitative study on qualitative data.  (I was looking at cognitive bias, actually.)  I needed to reduce my qualitative data so that I could represent it quantitatively.  This approach to coding is called magnitude coding.  Magnitude coding is just one of the 25 first cycle coding methods that Johnny Saldaña (2013) talks about in his book, The Coding Manual for Qualitative Researchers (see pages 72-77; if you want to order it, which I recommend, go to SAGE Publications).  Miles and Huberman (1994) also address this topic.

So what is magnitude coding? It is a form of coding that “consists of and adds a supplemental alphanumeric or symbolic code or sub-code to an existing coded datum…to indicate its intensity, frequency, direction, presence, or evaluative content” (Saldaña, 2013, pp. 72-73).  It could also indicate the absence of the characteristic of interest.  Magnitude codes can be qualitative or quantitative and/or nominal.  These codes enhance the description of your data.

Saldaña provides multiple examples that cover many different approaches.  Magnitude codes can be words or abbreviations that suggest intensity or frequency, or numbers that do the same thing.  They can suggest direction (i.e., positive or negative, using arrows).  They can also use symbols such as a plus (+) or a minus (−), or other symbols indicating the presence or absence of a characteristic.  One important factor for evaluators to consider is that magnitude coding can also suggest evaluative content–that is, did the content demonstrate merit, worth, value?  (Saldaña also talks about evaluation coding; see page 119.)

Saldaña gives an example of analysis showing a summary table.  Computer assisted qualitative data analysis software (CAQDAS) and Microsoft Excel can also provide summaries.  He notes that it “is very difficult to sidestep quantitative representation and suggestions of magnitude in any qualitative research” (Saldaña, 2013, p. 77).  We use quantitative phrases all the time–most, often, extremely, frequently, seldom, few, etc.  These words tend “to enhance the ‘approximate accuracy’ and texture of the prose” (Saldaña, 2013, p. 77).
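To make the idea concrete, here is a minimal sketch of magnitude coding and the kind of summary table that can result.  The codes, excerpts, and +/0/− scheme are invented for illustration; they are not Saldaña’s examples:

```python
# Sketch: magnitude coding - attaching a direction/intensity sub-code
# (+, 0, -) to an existing qualitative code, then tallying a summary
# table.  All codes and excerpts below are invented for illustration.
from collections import Counter

# Each datum already carries a first-cycle code; the magnitude
# sub-code records the evaluative direction of the excerpt.
coded_data = [
    {"code": "VENUE",   "magnitude": "-", "excerpt": "chairs were uncomfortable"},
    {"code": "VENUE",   "magnitude": "-", "excerpt": "room was too cold"},
    {"code": "SHARING", "magnitude": "+", "excerpt": "loved the discussion time"},
    {"code": "SHARING", "magnitude": "+", "excerpt": "peer stories were the best part"},
    {"code": "FOOD",    "magnitude": "0", "excerpt": "lunch was fine"},
]

# Summary table: count of each (code, magnitude) pair.
summary = Counter((d["code"], d["magnitude"]) for d in coded_data)
for (code, magnitude), n in sorted(summary.items()):
    print(f"{code:8s} {magnitude}  {n}")
```

Nothing here requires CAQDAS; a spreadsheet column of sub-codes and a pivot table would produce the same summary.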

Making your qualitative data quantitative is only one approach to coding, an approach that is sometimes very necessary.

Before you know it, Evaluation ’13 will be here and thousands of evaluators will converge on Washington, DC, the venue for this year’s AEA annual meeting.

The Local Arrangements Working Group (LAWG) is blogging this week in AEA365. (You might want to check out all the posts this week.)  There are A LOT of links in these posts (including related past posts) that are worth checking.  For those who have not been to AEA before, or who have recently embraced evaluation, their posts are a wealth of information.

What I want to focus on today is the role of the Local Arrangements Working Group.  The Washington Evaluators group is working in tandem with AEA to organize the local part of the conference.  These folks live locally and know the area.  Often they include graduate students as well as seasoned evaluators.  (David Bernstein and Valerie Caracelli are the co-chairs of this year’s LAWG.)  They have a wealth of information in their committee.  (Scroll down to “Please Check Back for Periodic Updates” to see the large committee–it really does take a village!)  They serve only for the current year and are truly local.  Next year in Denver, there will be a whole new LAWG.

Some things the committee does include identifying (and evaluating) local restaurants, things to do in DC, and ways of getting around DC.   Although these links provide valuable information, there are those of us (me, for one) who are still technopeasants and do not travel with a smart phone, tablet, computer, or other electronic connectivity and would like hard copy of pertinent information.  (I want to pay attention to real people in real time–I acknowledge that I am probably an artifact, certainly a technology immigrant–see my previous blog about civility.)

Restaurants change more quickly than I can keep track of–although I’m sure some still exist from when I was last in DC for business.  I’m sure that today most restaurants provide vegetarian, vegan, and gluten-free options (it is, after all, the current trend).  That is very different from the last AEA there, in 2002.  I did a quick search for vegetarian restaurants using the search options available at the LAWG/Washington Evaluators’ site–there were several…I also went to look at reviews…I wonder about the singular (very) bad review…was it just an off night or a true reflection?

There are so many things to do in DC…please take a day–the newer monuments are amazing–see them.

Getting around DC…use the Metro–it gets you to most places; it is inexpensive; it is SAFE!  It has been expanded to reach beyond the DC boundaries.  If nothing else, ride the Metro–you will be able to see a lot of DC.  You can get from Reagan Washington National Airport to the conference venue (yes, you will have to walk 4 blocks, and there may be some problem with a receipt–put the fare plus $0.05 on the Metro card and turn in the card).

The LAWG has done a wonderful job providing information to evaluators…check out their site.  See you in DC.

I blogged earlier this week on civility, community, compassion, and comfort.  I indicated that these relate to evaluation because they are part of the values of evaluation (remember, the root of evaluation is value)–is it mean or is it nice?  Harold Jarche talked today about these very issues, phrasing it as doing the right thing…if you do the right thing, it is nice.  His blog post only reinforces the fact that evaluation is an everyday activity and that you (whether you are an evaluator or not) are the only person who can make a difference.  Yes, it usually takes a village.  Yes, you usually cannot see the impact of what you do (we can’t easily get to world peace).  Yes, you can be the change you want to see.  Yes, evaluation is an everyday activity.  Make nice, folks.  Try a little civility; expand your community; remember compassion.  Comfort is the outcome.  Comfort seems like a good outcome.  So does doing the right thing.

I know–how does this relate to evaluation?  Although I think it is obvious, perhaps it isn’t.

I’ll start with a little background.  In 1994, M. Scott Peck published A World Waiting To Be Born: Civility Rediscovered.  In that book he defined a problem (and there are many) facing the then 20th-century person (I think it applies to the 21st-century person as well).  That problem was incivility, or the “…morally destructive patterns of self-absorption, callousness, manipulativeness, and materialism so ingrained in our routine behavior that we do not even recognize them.”  He wrote this in 1994–well before the advent of the technology that has enabled humon to disconnect from fellow humon while being connected.  Look about you and count the folks with smart phones.  Now, I’ll be the first to agree that technology has enabled a myriad of activities that 20 years ago (when Peck was writing this book) were not even conceived of by ordinary folks.  Then technology took off…and as a result, civility, community, and, yes, even compassion went by the way.

Self-absorption, callousness, manipulativeness, and materialism are all characteristics of not only the lack of civility (as Peck writes) but also the loss of community and the lack of compassion.  If those three (civility, community, compassion) are lost, where is there comfort?  It seems to me that these three are interrelated.

To expand–how many times have you used your smart phone to text someone across the room? (Was it so important you couldn’t wait until you could talk to him/her in person, face-to-face?)  How often have you thought to yourself how awful an event was and didn’t bother to tell the other person?  How often did you say the good word?  The right thing?  That is evaluation–in the everyday sense.  Those of us who call ourselves evaluators are only slightly different from those of you who don’t.  Although evaluators do evaluation for a living, everyone does it, because evaluation is part of what gets us all through the day.

Ask yourself, as an evaluative task–was I nice or was I mean?  That reflects civility, compassion, and even community–even very young children know the difference.  Civility and compassion can be taught to kindergartners–ask the next five-year-old you see, “Was it nice or was it mean?”  They will tell you.  They don’t lie.  Lying is a learned behavior–that, too, is evaluative.

You can ask yourself guiding questions about community, about compassion, about comfort.  They are all evaluative questions because you are trying to determine whether you have made a difference.  You CAN be the change you want to see in the world; you can be the change you want to be.  That, too, is evaluative.  Civility.  Compassion.  Community.  Comfort.

Wow!  25 First Cycle and 6 Second Cycle methods for coding qualitative data.

Who would have thought that there are so many methods of coding qualitative data?  I’ve been coding qualitative data for a long time, and only now am I aware that what I was doing is, according to Miles and Huberman (1994), my go-to book for coding, called “Descriptive Coding”, although Johnny Saldaña calls it “Attribute Coding”.  (This is discussed at length in his volume The Coding Manual for Qualitative Researchers.)  I just thought I was coding; I was, just not as systematically as suggested by Saldaña.

Saldaña talks about First Cycle coding methods, Second Cycle coding methods, and a hybrid method that lies between them.  He lists 25 First Cycle coding methods and spends over 120 pages discussing First Cycle coding.

I’m quoting now.  He says that “First Cycle methods are those processes that happen during the initial coding of data and are divided into seven subcategories: Grammatical, Elemental, Affective, Literary and Language, Exploratory, Procedural and a final profile entitled Themeing the Data.  Second Cycle methods are a bit more challenging because they require such analytic skills as classifying, prioritizing, integrating, synthesizing, abstracting, conceptualizing, and theory building.”

He also insists that coding qualitative data is an iterative process: data are coded and recoded, not just passed through once.
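As a rough sketch of that iterative, two-cycle idea–code once, then recode the codes themselves into broader categories–here is a small example.  The excerpts, codes, and category map are entirely my own invention, not Saldaña’s:

```python
# Sketch: a first-cycle coding pass followed by a second-cycle pass
# that recodes first-cycle codes into broader conceptual categories.
# All excerpts, codes, and categories are invented for illustration.

# First cycle: each excerpt gets an initial descriptive code.
first_cycle = {
    "It was hard to hear the speaker": "LOGISTICS",
    "The handouts were confusing":     "MATERIALS",
    "I plan to try this with my team": "INTENT",
    "The room was freezing":           "LOGISTICS",
}

# Second cycle: integrate/synthesize first-cycle codes into categories.
second_cycle_map = {
    "LOGISTICS": "PROGRAM DELIVERY",
    "MATERIALS": "PROGRAM DELIVERY",
    "INTENT":    "BEHAVIOR CHANGE",
}

recoded = {excerpt: second_cycle_map[code] for excerpt, code in first_cycle.items()}
print(recoded["I plan to try this with my team"])  # -> BEHAVIOR CHANGE
```

In practice the recoding happens in memos and margins (or CAQDAS), not a dictionary, but the shape of the work is the same: the second pass operates on the codes, not the raw data.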

Somewhere I missed the boat.  What occurs to me is that I learned to code qualitative data by hand because there was little CAQDAS (Computer Assisted Qualitative Data Analysis Software) available (hand coding is something Saldaña advocates for nascent qualitative researchers), and since then the field has developed, refined, expanded, and become detailed.  Much work has been done that went unobserved by me.

He also talks about the fact that a study’s qualitative data may need more than one coding method–Yikes!  I thought there was only one.  Boy, was I mistaken.  Reading the Coding Manual is enlightening (a good example of lifelong learning).  All this will come in handy when I collect the qualitative data for the evaluation I’m now planning.  Another takeaway point stressed in the Coding Manual, and in the third edition of the Miles & Huberman book (with Johnny Saldaña as co-author), is to start coding/reading the data as soon as they are collected.  Reading the data when you collect them allows you to remember what you observed/heard, allows/encourages analytic memo writing (more on that in a separate post), and allows you to start building your coding scheme.

If you do a lot of qualitative data collection, you need these two books on your shelf.

 

“In reality, winning begins with accountability. You cannot sustain success without accountability. It is an absolute requirement!” (from walkthetalk.com)

I’m quoting here.  I wish I had thought of this before I read it.  It is important in everyone’s life, and especially when evaluating.

 

Webster’s defines accountability as “the quality or state of being accountable; an obligation (emphasis added) or willingness to accept responsibility for one’s actions.”  The business dictionary goes a little further and defines accountability as “the obligation of an individual (or organization) (parentheses added) to account for its activities, accept responsibility for them, and to disclose the results in a transparent manner.”

It’s that last part to which evaluators need to pay special attention: the “disclose results in a transparent manner” part.  There is no one looking over your shoulder to make sure you do “the right thing,” that you read the appropriate document, that you report the findings you found, not what you know the client wants to hear.  If you maintain accountability, you are successful; you will win.

AEA has adopted a set of Guiding Principles for the organization and its members.  The principles are 1) Systematic Inquiry; 2) Competence; 3) Integrity/Honesty; 4) Respect for People; and 5) Responsibilities for General and Public Welfare.  I can see where accountability lies within each principle.  Can you?

AEA has also endorsed the Program Evaluation Standards, of which there are five as well.  They are:  1) Utility, 2) Feasibility, 3) Propriety, 4) Accuracy, and 5) Evaluation Accountability.  Here, the developers were very specific and made accountability its own category.  The standard specifically states, “The evaluation accountability standards encourage adequate documentation of evaluations and a metaevaluative perspective focused on improvement and accountability for evaluation processes and products.”

You may be wondering about the impetus for this discussion of accountability (or not…).  I have been reminded recently that only the individual can be accountable; no outside person can do it for him or her.  If there is an assignment, it is the individual’s responsibility to complete the assignment in the time required.  If there is a task to be completed, it is the individual’s responsibility (and, Webster’s would say, obligation) to meet that responsibility.  It is the evaluator’s responsibility to report the results in a transparent manner–even if it is not what was expected or wanted.  As evaluators, we are adults (yes, some evaluation is completed by youth; they are still accountable) and, therefore, responsible, obligated, accountable.  We are each responsible–not the leader, the organizer, the boss.  Each of us.  Individually.  When you are in doubt about your responsibility, it is your RESPONSIBILITY to clarify that responsibility however works best for you.  (My rule to live by number 2:  Ask.  If you don’t ask, you won’t get; if you do, you might not get.)

Remember, only you are accountable for your behavior–No. One. Else.  Even in an evaluation; especially in an evaluation.


We are approaching Evaluation 2013 (Evaluation ’13), this year October 16-19, with professional development sessions both before and after the conference.  One of the criteria I use to determine a “good” conference is whether I got three new ideas (three is an arbitrary number).  One way to get a good idea to use outside the conference, in your work, in your everyday activities, is to experience a good presentation.  Fortunately, in the last 15 years much has been written on how to give a good presentation, both verbally and with visual support.  This week’s AEA365 blog (by Susan Kistler) talks about presentations as she tells us again about the P2i initiative sponsored by AEA.

I’ve delivered posters the last few years (five or six), and P2i talks about posters in the downloadable handout called Guidelines for Posters.  Under the tab called (appropriately enough) Posters, P2i also offers information on research posters and a review of other posters, as well as the above-mentioned Guidelines for Posters.  Although more and more folks are moving to posters (until AEA runs out of room, all posters are on the program), paper presentations with the accompanying PowerPoint are still de rigueur, the custom of professional conferences.   What P2i has to say about presentations will help you A LOT!!  Read it.

Read it especially if you are presenting in public, whether to a large group of people or not.  It will help you.  There are some really valuable points that are reiterated in the AEA365 post as well as other places.  Check out the TED talks, looking especially for Nancy Duarte and Hans Rosling.  A quick internet search on the phrase “how to make a good presentation” yielded about 241,000,000 results (in 0.43 seconds).  Some of the sites speak to oral presentations; some address visual presentations.  What most people do is try to get too much information on a slide (typically using PowerPoint).  Prezi gives you one canvas with multiple images embedded within it.  It is cool.  There are probably other approaches as well.  In today’s world, there is no reason to read your presentation–your audience can do that.  Tell them!  (You know: tell them what they will hear, tell them, tell them what they heard…or something like that.)  If you have to read, make sure what they see is what they hear–see-hear compatibility is still important, regardless of the medium used.

Make an interesting presentation!  Give your audience at least one good idea!