What follows is a primer, one of the first things evaluators learn when developing a program.  It is something that cannot be said enough: program evaluation is about the program.  NOT about the person who leads the program; NOT about the policy behind the program; NOT about the people who are involved in the program.  IT IS ABOUT THE PROGRAM!

Phew.  Now that I’ve said that, I’ll take a deep breath and elaborate.

 

“Anonymity, or at least a lack of face-to-face dialogue, leads people to post personal attacks…” (Nina Bahadur, Associate Editor, HuffPost Women.)  Although she was speaking about blogs, not program evaluation, her point applies to program evaluations.  Evaluations are handed out at the end of a program, and because they typically do not ask for identifying information, they often invite personal attacks.  Personal attacks are not helpful to the program lead, the program, or the participants’ learning.

The program lead really wants to know ABOUT THE PROGRAM, not slams about what s/he did or didn’t do, said or didn’t say.  There are some things about a program over which the program lead doesn’t have any control–the air handling at the venue; the type of chairs used; the temperature of the room; sometimes, even the venue.  The program lead does have control over the choice of venue (usually), the caterer (if food is offered), the materials (the program) offered to the participants, and how s/he looks (grumpy or happy; serious or grateful)–I’ve just learned that how the “teacher” looks at the class makes a big difference in participants’ learning.

What a participant must remember is that they agreed to participate.  It may have been a requirement of their job; it may have been encouraged by their boss; it may have been required by their boss.  Whatever the reason, they agreed to participate and must be accountable for their participation.  Commenting on those things over which the program lead has no control may make them feel better in the short run; it doesn’t do any good to improve the program or to determine if the program made a difference–that is, had merit, worth, value.  (Remember, the root word of evaluation is VALUE.)

Personal grousing doesn’t add to the program’s value.  The question to keep in mind when filling out an evaluation is, “Would I say this in real life (not on paper)?  Would I tell the person this comment?”  If not, it doesn’t belong in your evaluation.  Program leads want to build a good and valuable program, and the only way they can do that is to receive critical feedback about the program.  So if the food stinks and the program lead placed the order with the caterer, tell the program lead not to use the caterer again; don’t tell the program lead that her/his taste in food is deplorable–how does that improve the program?  If the chairs are uncomfortable, tell the program lead to let the venue know that participants found the chairs uncomfortable; the program lead didn’t deliberately make the chairs uncomfortable.  If there wasn’t enough time for sharing, tell the program lead to increase the sharing time, because sometimes sharing personal experiences is just what is needed to make the program meaningful to participants.

There has been quite a bit written about data visualization, a topic important to evaluators who want their findings used.  Michael Patton talks about evaluation use in the 4th edition of Utilization-Focused Evaluation.  He doesn’t, however, list data visualization in the index; he may talk about it somewhere, but it isn’t obvious.

The current issue of New Directions for Evaluation (NDE) is devoted to data visualization, and it is part 1 (implying, I hope, at least a part 2).  Tarek Azzam and Stephanie Evergreen are the guest editors.  This volume (the first on this topic in 15 years) sets the stage (chapter 1) and talks about quantitative data visualization and qualitative data visualization.  The last chapter talks about the tools available to the evaluator, and they are many and various.  I cannot do them justice in this space; read about them in the NDE volume.  (If you are an AEA member, the volume is available online.)

freshspectrum, a blog by Chris Lysy, talks about INTERACTIVE data visualization with illustrations.

Stephanie Evergreen, the co-guest editor of the above NDE, also blogs, and in her October 2 post she talks about “Design for Federal Proposals (aka Design in a Black & White Environment)”.  More on data visualization.

The data visualizer who made the largest impact on me was Hans Rosling in his TED talks.  Certainly the software he uses makes the images engaging, but if he didn’t understand his data the way he does, he wouldn’t be able to do what he does.
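If you want to try this kind of display yourself, here is a minimal sketch in Python using matplotlib.  The program names and numbers below are made up, and this is nothing like Rosling’s Gapminder animations–just a starting point for a bubble chart.

```python
# Minimal bubble chart sketch with invented program data (illustration only).
import matplotlib.pyplot as plt

programs = ["A", "B", "C", "D"]
reach = [120, 340, 80, 500]        # participants served (hypothetical)
outcome = [3.2, 4.1, 2.8, 3.9]     # mean outcome rating, 1-5 scale (hypothetical)
budget = [20, 55, 10, 90]          # budget in thousands of dollars (hypothetical)

# Bubble size scaled from budget; alpha keeps overlapping bubbles readable.
plt.scatter(reach, outcome, s=[b * 20 for b in budget], alpha=0.5)
for name, x, y in zip(programs, reach, outcome):
    plt.annotate(name, (x, y))

plt.xlabel("Participants reached")
plt.ylabel("Mean outcome rating (1-5)")
plt.title("Program reach vs. outcome (bubble size = budget)")
plt.show()
```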

Data visualization is everywhere.  There will be multiple sessions at the AEA conference next week.  If you can, check them out–get there early as they will fill quickly.

Wow!  25 First Cycle and 6 Second Cycle methods for coding qualitative data.

Who would have thought that there are so many methods of coding qualitative data?  I’ve been coding qualitative data for a long time, and only now am I aware that what I was doing is called “Descriptive Coding” according to Miles and Huberman (1994), my go-to book for coding, although Johnny Saldana calls it “Attribute Coding”.  (This is discussed at length in his volume The Coding Manual for Qualitative Researchers.)  I just thought I was coding; I was, just not as systematically as Saldana suggests.

Saldana talks about First Cycle coding methods, Second Cycle coding methods and a hybrid method that lies between them.  He lists 25 First Cycle coding methods and spends over 120 pages discussing first cycle coding.

I’m quoting now.  He says that “First Cycle methods are those processes that happen during the initial coding of data and are divided into seven subcategories: Grammatical, Elemental, Affective, Literary and Language, Exploratory, Procedural and a final profile entitled Themeing the Data.  Second Cycle methods are a bit more challenging because they require such analytic skills as classifying, prioritizing, integrating, synthesizing, abstracting, conceptualizing, and theory building.”

He also insists that coding qualitative data is an iterative process; data are coded and recoded, not just run through in one pass.
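To make that concrete, here is a small, hypothetical sketch in Python of what a first coding pass and a recoding pass might look like if you kept your codes as simple data structures.  The excerpts and codes are invented, and this is my illustration, not Saldana’s procedure.

```python
# Hypothetical first-cycle and second-cycle coding as plain data structures.

excerpts = [
    "The sharing time was too short for our group.",
    "I finally understood the budgeting worksheet after the demo.",
    "The room was freezing and the chairs hurt my back.",
]

# First pass: assign descriptive codes to each excerpt (keyed by index).
first_cycle = {
    0: ["program structure", "sharing time"],
    1: ["materials", "learning moment"],
    2: ["venue", "comfort"],
}

# Second pass (recoding): collapse related codes into broader categories,
# the kind of classifying and synthesizing described for Second Cycle work.
second_cycle = {
    "program design": {"program structure", "sharing time", "materials"},
    "logistics": {"venue", "comfort"},
    "outcomes": {"learning moment"},
}

for category, codes in second_cycle.items():
    hits = [i for i, assigned in first_cycle.items() if codes & set(assigned)]
    print(category, "->", [excerpts[i] for i in hits])
```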

Somewhere I missed the boat.  What occurs to me is that I learned to code qualitative data by hand because there were few CAQDAS (Computer Assisted Qualitative Data Analysis Software) packages available (hand coding is something Saldana advocates for nascent qualitative researchers), and that since then the field has developed, refined, expanded, and become more detailed.  Much work has been done that went unobserved by me.

He also talks about the fact that a study’s qualitative data may need more than one coding method–Yikes!  I thought there was only one.  Boy, was I mistaken.  Reading the Coding Manual is enlightening (a good example of lifelong learning).  All this will come in handy when I collect the qualitative data for the evaluation I’m now planning.  Another takeaway point stressed in the Coding Manual and in the third edition of the Miles & Huberman book (with Johnny Saldana as co-author) is to start coding and reading the data as soon as they are collected.  Reading the data when you collect them allows you to remember what you observed and heard, allows/encourages analytic memo writing (more on that in a separate post), and allows you to start building your coding scheme.

If you do a lot of qualitative data collection, you need these two books on your shelf.

 

“In reality, winning begins with accountability. You cannot sustain success without accountability. It is an absolute requirement!” (from walkthetalk.com.)

I’m quoting here.  I wish I had thought of this before I read it.  It is important in everyone’s life, and especially when evaluating.

 

Webster’s defines accountability as “the quality or state of being accountable; an obligation (emphasis added) or willingness to accept responsibility for one’s actions.”  The Business Dictionary goes a little further and defines accountability as “the obligation of an individual (or organization) (parentheses added) to account for its activities, accept responsibility for them, and to disclose the results in a transparent manner.”

It’s that last part to which evaluators need to pay special attention: the “disclose results in a transparent manner” part.  There is no one looking over your shoulder to make sure you do “the right thing”; that you read the appropriate document; that you report the findings you actually found, not what you know the client wants to hear.  If you maintain accountability, you are successful; you will win.

AEA has adopted a set of Guiding Principles for the organization and its members.  The principles are 1) Systematic inquiry; 2) Competence; 3) Integrity/Honesty; 4) Respect for people; and 5) Responsibilities for the General and Public Welfare.  I can see where accountability lies within each principle.  Can you?

AEA has also endorsed the Program Evaluation Standards, of which there are five as well.  They are:  1) Utility, 2) Feasibility, 3) Propriety, 4) Accuracy, and 5) Evaluation Accountability.  Here, the developers were very specific and made accountability its own category.  The Standards specifically state, “The evaluation accountability standards encourage adequate documentation of evaluations and a metaevaluative perspective focused on improvement and accountability for evaluation processes and products.”

You may be wondering about the impetus for this discussion of accountability (or not…).  I have been reminded recently that only the individual can be accountable.  No outside person can do it for him or her.  If there is an assignment, it is the individual’s responsibility to complete the assignment in the time required.  If there is a task to be completed, it is the individual’s responsibility (and Webster’s would say obligation) to meet that responsibility.  It is the evaluator’s responsibility to report the results in a transparent manner–even if they are not what was expected or wanted.  As evaluators, we are adults (yes, some evaluation is completed by youth; they are still accountable) and, therefore, responsible, obligated, accountable.  We are each responsible–not the leader, the organizer, the boss.  Each of us.  Individually.  When you are in doubt about your responsibility, it is your RESPONSIBILITY to clarify that responsibility however works best for you.  (My rule to live by number 2:  Ask.  If you don’t ask, you won’t get; if you do, you might not get.)

Remember, only you are accountable for your behavior–No. One. Else.  Even in an evaluation; especially in an evaluation.


We are approaching Evaluation 2013 (Evaluation ’13), held this year October 16-19, with professional development sessions both before and after the conference.  One of the criteria I use to determine a “good” conference is whether I got three new ideas (three is an arbitrary number).  One way to get a good idea to use outside the conference, in your work and your everyday activities, is to experience a good presentation.  Fortunately, in the last 15 years much has been written on how to give a good presentation, both verbally and with visual support.  This week’s AEA365 blog (by Susan Kistler) talks about presentations as she tells us again about the P2i initiative sponsored by AEA.

I’ve delivered posters the last few years (five or six), and P2i talks about posters in the downloadable handout called Guidelines for Posters.  Under the tab called (appropriately enough) Posters, P2i also offers information on research posters and a review of other posters, as well as the above-mentioned Guidelines for Posters.  Although more and more folks are moving to posters (until AEA runs out of room, all posters are on the program), paper presentations with the accompanying PowerPoint are still de rigueur, the custom of professional conferences.  What P2i has to say about presentations will help you A LOT!!  Read it.

Read it especially if you present in public, whether to a large group of people or not.  It will help you.  There are some really valuable points that are reiterated in the AEA365 post as well as other places.  Check out the TED talks; look especially for Nancy Duarte and Hans Rosling.  A quick internet search on the phrase “how to make a good presentation” yielded about 241,000,000 results (0.43 seconds).  Some of the sites speak to oral presentations; some address visual presentations.  What most people do is try to get too much information on a slide (typically using PowerPoint).  Prezi gives you one canvas with multiple images embedded within it.  It is cool.  There are probably other approaches as well.  In today’s world, there is no reason to read your presentation–your audience can do that.  Tell them!  (You know: tell them what they will hear, tell them, then tell them what they heard…or something like that.)  If you have to read, make sure what they see is what they hear–see-hear compatibility is still important, regardless of the media used.

Make an interesting presentation!  Give your audience at least one good idea!

I’m about to start a large-scale project, one that will be primarily qualitative (it may end up being a mixed methods study; time will tell); I’m in the planning stages with the PI now.  I’ve done qualitative studies before–how could I not, with all the time I’ve been an evaluator?  My go-to book for qualitative data analysis has always been Miles and Huberman (although my volume is black).  That is their second edition, published in 1994.  I loved that book for a variety of reasons: 1) it provided me with a road map for processing qualitative data; 2) it offered the reader an appendix for choosing a qualitative software program (now out of date); and 3) it was systematic and detailed in its description of display.  I was very saddened to learn that both authors had died and there would not be a third edition.  Imagine my delight when I got the Sage flier for a third edition!  Of course I ordered it.  I also discovered that Saldana (the new third author on the third edition) has written another book on qualitative data that he cites a lot in the third edition (The Coding Manual for Qualitative Researchers), and I ordered that volume as well.

Saldana, in the third edition, talks a lot about data display, one of the three factors that qualitative researchers must keep in mind.  The other two are data condensation and conclusion drawing/verification.  In their review, Sage Publications says, “The Third Edition’s presentation of the fundamentals of research design and data management is followed by five distinct methods of analysis: exploring, describing, ordering, explaining, and predicting.”  These five chapters are the heart of the book (in my thinking); that is not to say that the rest of the book doesn’t have gems as well–it does.  The chapter on “Writing About Qualitative Research” and the appendix are two.  The appendix (this time) is “An Annotated Bibliography of Qualitative Research Resources”, which lists at least 32 different classifications of references that would be helpful to all manner of qualitative researchers.  Because it is annotated, the bibliography provides a one-sentence summary of the substance of each book.  A find, to be sure.  Check out the third edition.

I will be attending a professional development session with Mr. Saldana next week.  It will be a treat to meet him and hear what he has to say about qualitative data.  I’m taking the two books with me…I’ll write more on this topic when I return.  (I won’t be posting next week).


Miscellaneous thought 1.

Yesterday, I had a conversation with a long-time friend of mine.  When we stopped and calculated (which we don’t do very often), we realized that we have known each other since 1981.  We met at the first AEA (only it wasn’t AEA then) conference in Austin, TX.  I was a graduate student; my friend was a practicing professional/academic.  Although we were initially talking about other evaluation things, I asked my friend to look at an evaluation form I was developing.  I truly believe that having other eyes (a pilot, if you will) view the document helps.  It certainly did in this case.  I feel really good about the form.  In the course of the conversation, my friend advocated strongly for odd-numbered scales.  My friend had good reasons, specifically:

1) It tends to force more comparisons on the respondents; and

2) If you haven’t given me a neutral point, I tend to mess up the scale on purpose because you are limiting my ability to tell you what I am thinking.

I, of course, had an opposing view (rule number 8–question authority).  I said, “My personal preference is an even-numbered scale, to avoid a mid-point.  This is important because I want to know whether the framework (of the program in question) I provided worked well with the group, and a mid-point would give the respondent a neutral point of view, not a working or not-working opinion.  An even number (in my case four points) can be divided into working and not-working halves.  When I’m offered a middle point, I tend to circle it, because folks really don’t want to know what I’m thinking.  By giving me an opt-out/neutral/neither-for-nor-against option they are not asking my opinion or viewpoint.”
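To see the “halves” point in practice, here is a tiny sketch with invented responses; the cut point and the numbers are mine, made up for illustration.

```python
# Invented responses illustrating even- vs. odd-numbered scales.
from collections import Counter

# 4-point scale: 1 = not working at all ... 4 = working well.
responses_4pt = [1, 2, 2, 3, 4, 4, 3, 2, 4, 3]
halves = Counter("working" if r >= 3 else "not working" for r in responses_4pt)
print(halves)  # every response falls into one half or the other

# 5-point scale: the 3s are neutral and belong to neither half.
responses_5pt = [1, 3, 3, 4, 5, 3, 2, 4, 3, 5]
neutral = sum(1 for r in responses_5pt if r == 3)
print(f"{neutral} of {len(responses_5pt)} responses sit on the fence")
```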

Recently, I came across an aea365 post on just this topic.  Although that post was talking about Likert scales, it applies to all scaling that uses a range of numbers (as my friend pointed out).  The authors sum up their views with this comment: “There isn’t a simple rule regarding when to use odd or even, ultimately that decision should be informed by (a) your survey topic, (b) what you know about your respondents, (c) how you plan to administer the survey, and (d) your purpose. Take time to consider these four elements coupled with the advantages and disadvantages of odd/even, and you will likely reach a decision that works best for you.”  (Certainly, knowing my friend like I do, I would be suspicious of responses my friend submitted.)  Although they list advantages and disadvantages for odd and even responses, I think there are other advantages and disadvantages that they did not mention, yet these are summed up in their concluding sentence.

Miscellaneous thought 2.

I’m reading the new edition of Qualitative Data Analysis (QDA).  This has always been my go-to book for QDA, and I was very sad when I learned that both of the original authors had died.  The new author, Johnny Saldana (who is also the author of The Coding Manual for Qualitative Researchers), talks (in the third person plural, active voice) about being a pragmatic realist.  That is an interesting concept.  They (because the new author includes the previous authors in his statement) say “that social phenomena exist not only in the mind but also in the world–and that some reasonably stable relationships can be found among the idiosyncratic messiness of life.”  Although I had never used those exact words before, I agree.  It is nice to know the label that applies to my world view.  Life is full of idiosyncratic messiness; that is probably why I think systems thinking is so important.  I’m reading this volume because I’ve been asked to write a review of one of my favorite books.  We will see if I can get through it between now and July 1, when the draft of the review is due.  I probably ought to pair it with Saldana’s other book; that won’t happen between now and July 1.

I have a few thoughts about causation, which I will get to in a bit…first, though, I want to give my answers to last week’s post.

I had listed the following terms and wondered if you thought they were a design, a method, or an approach.  (I had also asked which of the 5Cs was being addressed–clarity or consistency.)  Here is what I think about the design/method/approach question.

Case study is a method used when gathering qualitative data, that is, words as opposed to numbers.  Bob Stake, Robert Brinkerhoff, Robert Yin, and others have written extensively on this method.

Pretest-posttest control group is (according to Campbell and Stanley, 1963) an example of a true experimental design when a control group is used (pp. 8 and 13).  NOTE: if only one group is used (according to Campbell and Stanley, 1963), the pretest-posttest arrangement is considered a pre-experimental design (pp. 7 and 8); still, it is a design.  (A small numeric illustration of the two-group version appears after this list of terms.)

Ethnography is a method used when gathering qualitative data often used in evaluation by those with training in anthropology.  David Fetterman is one such person who has written on this topic.

Interpretive is an adjective used to describe the approach one uses in an inquiry (whether that inquiry is as an evaluator or a researcher) and can be traced back to the sociologists Max Weber and Wilhelm Dilthey in the latter part of the 19th century.

Naturalistic is an adjective used to describe an approach with a diversity of constructions and is a function of “…what the investigator does…” (Lincoln and Guba, 1985, pg. 8).

Random Control Trials (RCT) is the “gold standard” of clinical trials, now being touted as the be all and end all of experimental design; its proponents advocate the use of RCT in all inquiry as it provides the investigator with evidence that X (not Y) caused Z.

Quasi-experimental is a term used by Campbell and Stanley (1963) to denote a design where random assignment cannot be accomplished for ethical or practical reasons; this is often contrasted with random selection for survey purposes.

Qualitative is an adjective used to describe an approach (as in qualitative inquiry), a type of data (as in qualitative data), or methods (as in qualitative methods).  I think of qualitative as an approach which includes many methods.

Focus group is a method of gathering qualitative data through specific, structured interviews in the form of questions; it is also an adjective for describing the type of interviews or the type of study being conducted (Krueger & Casey, 2009, pg. 2).

Needs assessment is a method for determining priorities for the allocation of resources and actions to reduce the gap between the existing and the desired.
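Returning to the pretest-posttest control group entry above: as an illustration only, here is a naive way to compare gains between a treatment and a control group in Python.  The scores are invented, and this glosses over the statistical tests Campbell and Stanley actually discuss.

```python
# Invented (pretest, posttest) scores for a two-group pretest-posttest design.
treatment = [(10, 18), (12, 20), (9, 15), (11, 19)]
control   = [(10, 12), (13, 14), (9, 10), (12, 13)]

def mean_gain(pairs):
    """Average posttest-minus-pretest change for a group."""
    return sum(post - pre for pre, post in pairs) / len(pairs)

print("Treatment gain:", mean_gain(treatment))                          # 7.5
print("Control gain:  ", mean_gain(control))                            # 1.25
print("Naive difference:", mean_gain(treatment) - mean_gain(control))   # 6.25
```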

I’m sure there are other answers to the terms listed above; these are mine.  I’ve gotten one response (from Simon Hearn at BetterEvaluation).  If I get others, I’ll aggregate them and share them with you.  (Simon can check his answers against this post.)

Now, causation, and I pose another question:  If evaluation (remember, the root word here is value) is determining whether a program (intervention, policy, product, etc.) made a difference, and determining the merit or worth (i.e., value) of that program (intervention, policy, product, etc.), how certain are you that your program (intervention, policy, product, etc.) caused the outcome?  Chris Lysy and Jane Davidson have developed several cartoons that address this topic.  They are worth the time to read.

When I teach scientific writing (and all evaluators need to be able to communicate clearly, verbally and in writing), I focus on the 5Cs: Clarity, Coherence, Conciseness, Consistency, and Correctness.  I’ve written about the 5Cs in a previous blog post, so I won’t belabor them here.  Suffice it to say that when I read a document that violates one (or more) of these 5Cs, I have to wonder.

Recently, I was reading a document where the author used design (first), then method, then approach.  From the context, I think (not being able to clarify with the author) that the author was referring to the same thing (a method) and used these different words in an effort to make the reading more entertaining, when all it did was cause obfuscation, violating Clarity, one of the 5Cs.

So I’ll ask you, reader.  Are these different?  What makes them different?  Should they have been used interchangeably in the document?  I went to my favorite thesaurus of evaluation terms (Scriven, published by Sage) to see what he had to say, if anything.  Only “design” was listed, and the definition said, “…process of stipulating the investigatory procedures to be followed in doing a certain evaluation…”  OK–investigatory procedure.

So, I’m going to list several terms used commonly in evaluation and research.  Think about what each is–design, method, approach.  I’ll provide my answers next week.  Let me know what you think each of the following is:

Case Study

Pretest-Posttest Control Group

Ethnography

Interpretive

Naturalistic

Random Control Trials (RCT)

Quasi-Experimental

Qualitative

Focus Group

Needs Assessment


I was reminded recently about the 1992 AEA meeting in Seattle, WA.  That seems like so long ago.  The hot topic of that meeting was whether qualitative data or quantitative data were best.  At the time I was a nascent evaluator, having been in the field less than 10 years, and I absorbed debates like this as a dry sponge does water.  It was interesting, stimulating, exciting.  It felt cutting edge.

Now, 20+ years later, I wonder what all the hype was about.  There can be rigor in whatever data are collected, regardless of type (numbers or words); language has been developed to look at that rigor.  (Rigor can also escape the investigator regardless of the data collected; another post, another day.)  Words are important for telling stories (and there is a wealth of information on how story can be rigorous), and numbers are important for counting (and numbers have a long history of use–thanks, Don Campbell).  Using both (that is, mixed methods) makes really good sense when conducting an evaluation in community environments, the kind of community-based work I’ve done for most of my career.

I was reading another evaluation blog (ACET) and found the following bit of information that I thought I’d share, as it is relevant to looking at data.  This particular post (July 2012) was a reflection by the author.  (I quote from that blog.)

  • Utilizing both quantitative and qualitative data. Many of ACET’s evaluations utilize both quantitative (e.g., numerical survey items) and qualitative (e.g., open-ended survey items or interviews) data to measure outcomes. Using both types of data helps triangulate evaluation findings. I learned that when close-ended survey findings are intertwined with open-ended responses, a clearer picture of program effectiveness occurs. Using both types of data also helps to further explain the findings. For example, if 80% of group A “Strongly agreed” to question 1, their open-ended responses to question 2 may explain why they “Strongly agreed” to question 1.
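In that spirit, here is a hypothetical sketch of pairing closed-ended and open-ended responses from the same respondents; the data and field names are invented, and this is my illustration, not ACET’s procedure.

```python
# Invented survey records pairing a closed-ended item with an open-ended follow-up.
surveys = [
    {"id": 1, "q1": "Strongly agree", "q2_why": "The coach followed up every week."},
    {"id": 2, "q1": "Strongly agree", "q2_why": "Materials matched my reading level."},
    {"id": 3, "q1": "Disagree",       "q2_why": "Sessions conflicted with my work shift."},
]

# Quantitative side: how many strongly agreed?
strong = [s for s in surveys if s["q1"] == "Strongly agree"]
print(f"{len(strong) / len(surveys):.0%} strongly agreed; their reasons:")

# Qualitative side: the open-ended responses that help explain the number.
for s in strong:
    print(" -", s["q2_why"])
```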

Triangulation was a new (to me, at least) concept in 1981, when a whole chapter was devoted to the topic in a volume dedicated to Donald Campbell, titled Scientific Inquiry and the Social Sciences.  I have no doubt that the concept itself was not new; Crano, the author of the chapter titled “Triangulation and Cross-Cultural Research”, has three and one half pages of references that support the premise put forth in the chapter.  Mainly, that using data from multiple sources may increase understanding of the phenomena under investigation.  That is what triangulation is all about–looking at a question from multiple points of view, bringing together the words and the numbers, and then offering a defensible explanation.

I’m afraid that many beginning evaluators forget that words can support numbers and numbers can support words.