The American Evaluation Association has opened registration for the 2013 meeting in Washington, DC. This meeting promises to be the best-attended yet. Eleven years ago we were in D.C. and broke all attendance records to date. I remember because that was my presidential year…the year the evaluation profession started thinking of evaluation as a system; that everything we do is connected. Several people have commented to me about AEA–that they didn't know there was such an association; that they didn't know about the conference; that they weren't members. So folks, here is the skinny on AEA (at least part of the skinny…).

The American Evaluation Association was officially founded in 1986 as a combined organization of the Evaluation Research Society and the Evaluation Network. ERS was academic and EN was practitioner; merging the two was a challenge, as each side thought something would be lost. This is a good example of the whole being greater than the sum of its parts. The differences were pronounced and debated (now you only see AEA). Robert (Bob) B. Ingle was the force behind the conference; he mounted the first EN/ERS conference in 1981 in Austin, Texas. I was a graduate student. I was in awe. Although I had been to numerous professional conferences before attending that first one, I had never met anyone like Bob Ingle. His first comment to me, once we connected after playing phone tag, was, "You spell your name wrong!" (Turns out he was from the Scottish branch of the German house of Engel; my ancestors changed the spelling when they came out of Germany.) I was a nascent graduate student in love with my studies, and here came this brusque, acerbic, and outrageous giant. He became my good friend–I knew him from 1981 until he died in 1998. He believed passionately in program evaluation. I think he is smiling at the growth of the profession and the organization. He knew a lot of us; he saw the association through the good times and the bad.

I could end here and say the rest is history…only there is so much to tell. The association went from an all-volunteer organization at its founding in 1986 to an organization of over 8,000 members run by an association management firm. Susan Kistler (of Kistler Associates) was our executive director for the last 15 years. (The association has transitioned to a new management firm [SmithBucklin] and a new executive director [Denise Roosendaal].) Seeing the association transition is bittersweet; growth is good, and the loss of the family feeling is sad. The association no longer feels intimate, like family; yet it offers so much more to folks who are members.

David Bernstein is the co-chair of the local arrangements working group (LAWG) for this year's conference. He led off a week of AEA365 posts talking about the conference. Read this post; it tells you a lot about the conference. This week AEA365 is being written by the local arrangements working group. The role of the local arrangements group is to make sure the folks who attend the conference have a good time, both at the conference and in DC. DC is a wonderful city. You cannot see it in a week; it is always changing. If you have never been, take a day to see the city's high points. It is the nation's capital, after all, and there are many high points.

The members-only AEA August newsletter also talks about registration, with hyperlinks to the registration site, the conference program, and hotel accommodations. (The members-only newsletter is just one reason to join AEA.) I've been going to AEA since 1981. This is the first year I will not have a paper/poster/etc. on the program. (I am doing a professional development session with Jim Altschuld, though; it is number 22.)

Each year I attend AEA, I think of the three evaluative criteria that, FOR ME, make a good conference: see three long-time friends; meet three new people who could become friends; and get three new ideas. If I do all this, I usually come home energized. I hope to see you there.

I'm about to start a large-scale project, one that will be primarily qualitative (it may end up being a mixed methods study; time will tell); I'm in the planning stages with the PI now. I've done qualitative studies before–how could I not, with all the time I've been an evaluator? My go-to book for qualitative data analysis has always been Miles and Huberman (although my volume is black). This is their second edition, published in 1994. I loved that book for a variety of reasons: 1) it provided me with a road map for processing qualitative data; 2) it offered the reader an appendix for choosing a qualitative software program (now out of date); and 3) it was systematic and detailed in its description of display. I was very saddened to learn that both authors had died and there would not be a third edition. Imagine my delight when I got the Sage flier for a third edition! Of course I ordered it. I also discovered that Saldana (the new third author on the third edition) has written another book on qualitative data that he cites a lot in this third edition (The Coding Manual for Qualitative Researchers), and I ordered that volume as well.

Saldana, in the third edition, talks a lot about data display, one of the three activities that qualitative researchers must keep in mind. The other two are data condensation and conclusion drawing/verification. In its review, Sage Publications says, "The Third Edition's presentation of the fundamentals of research design and data management is followed by five distinct methods of analysis: exploring, describing, ordering, explaining, and predicting." These five chapters are the heart of the book (in my thinking); that is not to say the rest of the book doesn't have gems as well–it does. The chapter on "Writing About Qualitative Research" and the appendix are two. The appendix (this time) is "An Annotated Bibliography of Qualitative Research Resources," which lists at least 32 different classifications of references that would be helpful to all manner of qualitative researchers. Because it is annotated, the bibliography provides a one-sentence summary of the substance of each book. A find, to be sure. Check out the third edition.

I will be attending a professional development session with Mr. Saldana next week.  It will be a treat to meet him and hear what he has to say about qualitative data.  I’m taking the two books with me…I’ll write more on this topic when I return.  (I won’t be posting next week).

You implement a program. You think it is effective; that it makes a difference; that it has merit and worth. You develop a survey to determine the merit and worth of the program. You send the survey out to the target audience, which is an intact population–that is, all of the participants are in the target audience for the survey. You get less than a 40% response rate. What does that mean? Can you use the results to say that the participants saw merit in the program? Do the results indicate that the program has value, that it made a difference, if only 40% let you know what they thought?
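To see why this matters, here is a minimal sketch of the arithmetic (the numbers are made up for illustration, not from any actual program survey): even a very favorable result from the 40% who answered leaves a huge range of possibilities for the whole population.

```python
# Illustrative only: hypothetical counts for a 100-person intact population,
# not data from any real program survey.
population = 100          # everyone who participated in the program
respondents = 40          # a 40% response rate
positive_responses = 32   # suppose 80% of those respondents rated the program favorably

non_respondents = population - respondents

# Best case: every non-respondent would also have answered favorably.
best_case = (positive_responses + non_respondents) / population
# Worst case: every non-respondent would have answered unfavorably.
worst_case = positive_responses / population

print(f"Favorable among respondents: {positive_responses / respondents:.0%}")
print(f"Possible favorable rate in the whole population: {worst_case:.0%} to {best_case:.0%}")
```

With these made-up numbers, an 80% favorable result among respondents could mean anywhere from 32% to 92% of all participants were favorable. That spread is the non-response problem in a nutshell, and it is why the advice below keeps coming back to raising the response rate.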

I went looking for some insights on non-responses and non-responders. Of course, I turned to Dillman (my go-to book for surveys). His bottom line: "…sending reminders is an integral part of minimizing non-response error" (p. 360).

Dillman (of course) has a few words of advice. For example, on page 360, he says, "Actively seek means of using follow-up reminders in order to reduce non-response error." How do you avoid burdening the target audience with reminders, which are "…the most powerful way of improving response rate…" (Dillman, p. 360)? When reminders are sent, they need to be carefully worded and relate to the survey being sent. Reminders stress the importance of the survey and the need for responding.

Dillman also says (on page 361) to "…provide all selected respondents with similar amounts and types of encouragement to respond." Since most of the time incentives are not an option for you, the program person, you have to encourage the participants in other ways. So we are back to reminders again.

To explore the topic of non-response further, there is a book that deals with the topic (Groves, Robert M., Don A. Dillman, John Eltinge, and Roderick J. A. Little, eds. 2002. Survey Nonresponse. New York: Wiley-Interscience). I don't have it on my shelf, so I can't speak to it. I found it while I was looking for information on this topic.

I also went online to EVALTALK and found this comment, which is relevant to evaluators attempting to determine if the program made a difference: "Ideally you want your non-response percents to be small and relatively even-handed across items. If the number of nonresponds is large enough, it does raise questions as to what is going for that particular item, for example, ambiguous wording or a controversial topic. Or, sometimes a respondent would rather not answer a question than respond negatively to it. What you do with such data depends on issues specific to your individual study." This comment was from Kathy Race of Race & Associates, Ltd., September 9, 2003.
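Her point about item-level non-response is easy to check for yourself. Here is a minimal sketch with hypothetical counts (the item names and the 20% flag threshold are my own choices for illustration, not anything from the EVALTALK post):

```python
# Hypothetical item-level counts for 40 returned surveys; made up for illustration.
returned_surveys = 40
answers_per_item = {
    "Q1 overall satisfaction": 40,
    "Q2 usefulness of materials": 38,
    "Q3 instructor effectiveness": 39,
    "Q4 household income": 22,   # a sensitive item many people skip
}

for item, answered in answers_per_item.items():
    item_nonresponse = (returned_surveys - answered) / returned_surveys
    flag = "  <-- look closer: ambiguous wording? sensitive topic?" if item_nonresponse > 0.20 else ""
    print(f"{item}: {item_nonresponse:.0%} item non-response{flag}")
```

An item whose non-response runs far above the others (Q4 here) is the kind that deserves the closer look Kathy Race describes.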

The bottom line I would draw from all this is: respond. If it was important to you to participate in the program, then it is important for you to provide feedback to the program implementation team/person.

This Thursday, the U.S. celebrates THE national holiday. I am reminded of all that comprises that holiday. No, not barbecue and parades, fireworks and leisure. Rather, all the work that has gone on to assure that we as citizens CAN celebrate this Independence Day. The founding fathers (and yes, they were old [or not so old] white men) took great risks to stand up for what they believed. They did what I advocate: they determined (through a variety of methods) the merit/worth/value of the program and took a stand. To me, it is a great example of evaluation as an everyday activity. We now live under the banner of the freedoms for which they stood.

Oh, we may not agree with everything that has come down the pike over the years; some of us are quite vocal about the loss of freedoms because of events that have happened through no real fault of our own. We just happened to be citizens of the U.S. Could we have gotten to this place, where we have these freedoms, obligations, responsibilities, and limitations, without folks leading us? I doubt it. Anarchy is rarely, if ever, fruitful. Because we believe in leaders (even if we don't agree with who is leading), we have to recognize that as citizens we are interdependent; we can't do it alone (the little red hen notwithstanding). Yes, the U.S. is known for the strength that is fostered in the individual (independence). Yet, if we really look at what a day looks like, we depend on so many others for all that we do, see, hear, smell, feel, and taste. We need to take a moment and thank our farmers, our leaders, our children (if we have them, as they will be tomorrow's leaders), our parents (if we are so lucky to still have parents), and our neighbors for being part of our lives–for fostering the interdependence that makes the U.S. unique. Evaluation is an everyday activity; when was the last time you recognized that you can't do anything alone?

Happy Fourth of July–enjoy your blueberry pie!