I blogged earlier this week on civility, community, compassion, and comfort. I indicated that these are related to evaluation because they are part of the values of evaluation (remember, the root of evaluation is value): is it mean or is it nice? Harold Jarche talked today about these very issues, phrasing it as doing the right thing; if you do the right thing, it is nice. His blog post only reinforces the fact that evaluation is an everyday activity and that you (whether you are an evaluator or not) are the only person who can make a difference. Yes, it usually takes a village. Yes, you usually cannot see the impact of what you do (we can't get to world peace easily). Yes, you can be the change you want to see. Yes, evaluation is an everyday activity. Make nice, folks. Try a little civility; expand your community; remember compassion. Comfort is the outcome, and comfort seems like a good outcome. So does doing the right thing.

I know–how does this relate to evaluation?  Although I think it is obvious, perhaps it isn’t.

I'll start with a little background. In 1994, M. Scott Peck published A World Waiting To Be Born: Civility Rediscovered. In that book he defined a problem (and there are many) facing the 20th century person (I think it applies to the 21st century person as well). That problem was incivility, or the "…morally destructive patterns of self-absorption, callousness, manipulativeness, and materialism so ingrained in our routine behavior that we do not even recognize them." He wrote this in 1994, well before the advent of the technology that has enabled humans to disconnect from fellow humans while being connected. Look about you and count the folks with smart phones. Now, I'll be the first to agree that technology has enabled a myriad of activities that 20 years ago (when Peck was writing this book) were not even conceived of by ordinary folks. Then technology took off…and as a result, civility, community, and, yes, even compassion went by the wayside.

Self-absorption, callousness, manipulativeness, and materialism are all characteristics not only of a lack of civility (as Peck writes) but also of a loss of community and a lack of compassion. If those three (civility, community, compassion) are lost, where is there comfort? It seems to me that these three are interrelated.

To expand–How many times have you used your smart phone to text someone across the room? (Was it so important you couldn’t wait until you could talk to him/her in person–face-to-face?) How often have you thought to yourself how awful an event is and didn’t bother to tell the other person?  How often did you say the good word? The right thing?  That is evaluation–in the everyday sense.  Those of us who call ourselves evaluators are only slightly different from those of you who don’t.  Although evaluators do evaluation for a living, everyone does it because evaluation is part of what gets us all through the day.

Ask yourself, as an evaluative task: was I nice or was I mean? This reflects civility, compassion, and even community, and even very young children know the difference. Civility and compassion can be taught to kindergarteners. Ask the next five-year-old you see: was it nice or was it mean? They will tell you. They don't lie. Lying is a learned behavior; that, too, is evaluative.

You can ask yourself guiding questions about community, about compassion, about comfort. They are all evaluative questions because you are trying to determine if you have made a difference. You CAN be the change you want to see in the world; you can be the change you want to be. That, too, is evaluative. Civility. Compassion. Community. Comfort.

I read a lot of blogs.  Some blogs are on evaluation; some are on education; some are on food; some are random (travel, health)…I get ideas from each of them, although not all at the same time.  Once I have an idea I write my blog.

Today, I'm drawing from several blogs to come up with these thoughts. I read AEA365 by Sheila B. Robinson (I'm a little behind), Harold Jarche's blog (Life in perpetual beta), and Eval Central for Friday, September 6, 2013 (Eval Central compiles blogs related to evaluation). These seemingly unrelated posts (all very interesting) talk about continued learning from different perspectives.

AEA365 talks about "Ancora imparo," or still learning, a saying attributed to Michelangelo, who is supposed to have said it in his 80s. Evaluators must continue to learn; the field is changing so fast. Michael Scriven, who gave a keynote at the Australasian Evaluation Society conference (as posted in Genuine Evaluation), talks about the field of evaluation "taking the core" to become the "inner circle" because evaluation is the "alpha discipline". In order to do that, evaluators must continue to learn (although Michael probably has a more eloquent approach). Reading helps. Talking helps. Working helps. From all of these, learning can occur.

Harold Jarche reviews a book by Charles Jennings, the foremost authority on the 70-20-10 framework, and talks about how this "holistic framework…is not a recipe" for learning, even though the numbers stand for 70% Experience, 20% Exposure, and 10% Education, which makes it sound like one. (Substitute flour, butter, and sugar and you have a recipe for a type of shortbread; even food makes its way into my blog…) To this evaluator, learning at work (experience) seems to be the same learning that Robinson talks about, not classroom learning. Jennings, Jarche writes, “…describes workplace learning as based on four key activities:

  1. Exposure to new and rich experiences.
  2. The opportunity to practice.
  3. Engaging in conversation and exchanges with each other.
  4. Making time to reflect on new observations, information, experiences, etc.”

Sounds like the same thing to me: reading, talking, working; continued learning. Ancora imparo.

The last blog I read (the Eval Central post by Patricia Rogers in the Better Evaluation blog) reports on another keynote at the Australasian Evaluation Society conference, this one by Nan Wehipeihana. The post describes Wehipeihana's framework of increasing control by Indigenous communities in evaluation (her "to", "for", "with", "by", and "as" framework). This is a different way of looking at evaluation, one from which all evaluators can learn and one they can add to their own toolbox. (Check out the blog post for an informative graphic about this framework.)

Seems like everything I’m reading right now talks about continued learning, and not in the classroom.  Hmmm…learning is important if what Michael Scriven talks about is going to happen.

Wow!  25 First Cycle and 6 Second Cycle methods for coding qualitative data.

Who would have thought that there are so many methods of coding qualitative data? I've been coding qualitative data for a long time, and only now am I aware that what I was doing is called "Descriptive Coding" according to Miles and Huberman (1994), my go-to book for coding, although Johnny Saldana calls it "Attribute Coding". (This is discussed at length in his volume The Coding Manual for Qualitative Researchers.) I just thought I was coding; I was, just not as systematically as suggested by Saldana.

Saldana talks about First Cycle coding methods, Second Cycle coding methods and a hybrid method that lies between them.  He lists 25 First Cycle coding methods and spends over 120 pages discussing first cycle coding.

I’m quoting now.  He says that “First Cycle methods are those processes that happen during the initial coding of data and are divided into seven subcategories: Grammatical, Elemental, Affective, Literary and Language, Exploratory, Procedural and a final profile entitled Themeing the Data.  Second Cycle methods are a bit more challenging because they require such analytic skills as classifying, prioritizing, integrating, synthesizing, abstracting, conceptualizing, and theory building.”

He also insists that coding qualitative data is an iterative process; data are coded and recoded, not handled in a single pass.
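For readers who want to see the mechanics, here is a minimal sketch of that code-then-recode idea. The interview excerpts, codes, and categories below are hypothetical (they are not from any study mentioned in this post), and the snippet is only an illustration of iterative coding, not Saldana's procedure.

```python
from collections import defaultdict

# Hypothetical interview excerpts (illustration only, not real data).
excerpts = [
    "The training gave me time to practice with colleagues.",
    "I never got feedback, so I stopped trying new approaches.",
    "Watching a peer teach the lesson changed how I plan mine.",
]

# First pass: assign a short descriptive code to each excerpt.
first_cycle = {
    excerpts[0]: "practice with peers",
    excerpts[1]: "lack of feedback",
    excerpts[2]: "peer observation",
}

# Second pass (recoding): collapse the first-cycle codes into broader
# categories as the coding scheme develops.
recode_map = {
    "practice with peers": "peer learning",
    "peer observation": "peer learning",
    "lack of feedback": "barriers",
}

categories = defaultdict(list)
for excerpt, code in first_cycle.items():
    categories[recode_map[code]].append(excerpt)

for category, items in categories.items():
    print(f"{category}: {len(items)} excerpt(s)")
```

The point is simply that the coding scheme gets revisited; in practice the recoding map changes every time you reread the data, whether you track it on paper, in a spreadsheet, or in a CAQDAS package.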

Somewhere I missed the boat. I learned to code qualitative data by hand because few CAQDAS (Computer Assisted Qualitative Data Analysis Software) packages were available (coding by hand is something Saldana advocates for nascent qualitative researchers). Since then, the field has developed, refined, expanded, and become more detailed. Much work has been done that went unobserved by me.

He also talks about the fact that a study's qualitative data may need more than one coding method. Yikes! I thought there was only one. Boy, was I mistaken. Reading the Coding Manual is enlightening (a good example of lifelong learning). All this will come in handy when I collect the qualitative data for the evaluation I'm now planning. Another takeaway point stressed in the Coding Manual and in the third edition of the Miles & Huberman book (with Johnny Saldana as co-author) is to start coding/reading the data as soon as it is collected. Reading the data when you collect it allows you to remember what you observed/heard, allows/encourages analytic memo writing (more on that in a separate post), and allows you to start building your coding scheme.

If you do a lot of qualitative data collection, you need these two books on your shelf.

“In reality, winning begins with accountability. You cannot sustain success without accountability. It is an absolute requirement!” (from walkthetalk.com.)

I’m quoting here.  I wish I had thought of this before I read it.  It is important in everyone’s life, and especially when evaluating.

Webster's defines accountability as "…the quality or state of being accountable; an obligation (emphasis added) or willingness to accept responsibility for one's actions." The business dictionary goes a little further and defines accountability as "…The obligation of an individual (or organization) (parentheses added) to account for its activities, accept responsibility for them, and to disclose the results in a transparent manner."

It's that last part to which evaluators need to pay special attention: the "disclose the results in a transparent manner" part. There is no one looking over your shoulder to make sure you do "the right thing"; that you read the appropriate document; that you report the findings you found, not what you know the client wants to hear. If you maintain accountability, you are successful; you will win.

AEA has adopted a set of Guiding Principles for the organization and its members. The principles are 1) Systematic inquiry; 2) Competence; 3) Integrity/Honesty; 4) Respect for people; and 5) Responsibilities for the General and Public Welfare. I can see where accountability lies within each principle. Can you?

AEA has also endorsed the Program Evaluation Standards, of which there are five as well. They are: 1) Utility, 2) Feasibility, 3) Propriety, 4) Accuracy, and 5) Evaluation accountability. Here, the developers were very specific and made accountability its own category. The standard specifically states, "The evaluation accountability standards encourage adequate documentation of evaluations and a metaevaluative perspective focused on improvement and accountability for evaluation processes and products."

You may be wondering about the impetus for this discussion of accountability (or not…). I have been reminded recently that only the individual can be accountable. No outside person can do it for him or her. If there is an assignment, it is the individual's responsibility to complete the assignment in the time required. If there is a task to be completed, it is the individual's responsibility (and Webster's would say obligation) to meet that responsibility. It is the evaluator's responsibility to report the results in a transparent manner, even if they are not what was expected or wanted. As evaluators, we are adults (yes, some evaluation is completed by youth; they are still accountable) and, therefore, responsible, obligated, accountable. We are each responsible–not the leader, the organizer, the boss. Each of us. Individually. When you are in doubt about your responsibility, it is your RESPONSIBILITY to clarify that responsibility in whatever way works best for you. (My rule to live by number 2: Ask. If you don't ask, you won't get; if you do, you might not get.)

Remember, only you are accountable for your behavior–No. One. Else. Even in an evaluation; especially in an evaluation.

We are approaching Evaluation 2013 (Evaluation '13), held this year October 16-19, with professional development sessions both before and after the conference. One of the criteria I use to determine a "good" conference is whether I got three new ideas (three is an arbitrary number). One way to get a good idea to use outside the conference, in your work and in your everyday activities, is to experience a good presentation. Fortunately, in the last 15 years much has been written on how to give a good presentation, both verbally and with visual support. This week's AEA365 blog (by Susan Kistler) talks about presentations as she tells us again about the P2i initiative sponsored by AEA.

I've delivered posters the last few years (five or six), and P2i talks about posters in the downloadable handout called Guidelines for Posters. Under the tab called (appropriately enough) Posters, P2i also offers information on research posters and a review of other posters, as well as the above-mentioned Guidelines for Posters. Although more and more folks are moving to posters (until AEA runs out of room, all posters are on the program), paper presentations with the accompanying PowerPoint are still de rigueur, the custom of professional conferences. What P2i has to say about presentations will help you A LOT!! Read it.

Read it especially if you are presenting in public, whether to a large group of people or not. It will help you. There are some really valuable points that are reiterated in the AEA365 post as well as other places. Check out TED talks; look especially for Nancy Duarte and Hans Rosling. A quick internet search on the phrase "how to make a good presentation" yielded about 241,000,000 results (in 0.43 seconds). Some of the sites speak to oral presentations; some address visual presentations. What most people do is try to get too much information on a slide (typically using PowerPoint). Prezi gives you one slide with multiple images embedded within it. It is cool. There are probably other approaches as well. In today's world, there is no reason to read your presentation–your audience can do that. Tell them! (You know, tell them what they will hear, tell them, tell them what they heard…or something like that.) If you have to read, make sure what they see is what they hear; see-hear compatibility is still important, regardless of the medium used.

Make an interesting presentation! Give your audience at least one good idea!

Several folks read and commented on my previous AEA post. That is heartening. One comment was about what a new person should do at the conference. That is today's post. I am fortunate because the Graduate Student and New Evaluator Topical Interest Group's post in aea365 last week talked about just that. I recommend you check it out.

Although I couldn't find a "First Time Attendee" session in this year's online program, I could find a lot of other sessions that sounded interesting. In the past, this session was offered just before the reception on Wednesday. That was a long time ago (I found reference to it in the 1999 program), however, and AEA (like so many other things) has moved on, grown, and changed. I remember attending several and contributing because I was a long-time attendee. They were informative, much like the aea365 blog post.

The one thing the aea365 post didn't mention that I think is important is children. AEA is, and always has been, family friendly. Children are welcome (by most; there are some curmudgeons, to be sure). I am a single parent by choice. I built my family through adoption. From the time my oldest daughter (now 20) came home, I took her, and then her and her sister (now 17), to AEA; we went as a family until 2007, when AEA was in Baltimore. By the time they got into high school, it was harder to take them. (I did take Mersedes to Minneapolis last year; she isn't coming to DC this year, this being her high school senior year…unhappy face.) They developed their own cohort of friends, people who still ask after them today when I go alone.

How did taking them actually work? Most of the hotel venues used by AEA can recommend a sitting service, one that has been used by their guests. It is worth the cost. When I was President, the service was a lifesaver. (I used White House Nannies in DC.) I paid an hourly rate when I needed coverage, which was about 30% of the time. The rest of the time, the girls attended sessions, took their coloring books and sundries to the back of the room, and played quietly while I did my professional thing. There were multiple benefits in this arrangement: they got to see Mom "working"; they got to see parts of DC they wouldn't normally see (arrangements were made at the time of booking); they learned the behaviors of a professional meeting, saw what was expected, and learned to talk to grown-ups. They love to tell the story of sitting under the table in the back of the room, though I don't know how many years they did that…I'm guessing a lot. Some of my best friends are also their friends, with whom they maintain relationships. If you have a partner, bringing children is easier. Some of my friends took their son or daughter and traded off responsibilities; their children also created a whole cadre of connections, some of which last to this day. Being in DC is a wonderful opportunity for a young family; DC is an amazing city with lots to see and do. I strongly urge you to take a day (or two) and enjoy the city and the conference with your family.

Some days I have no idea what to write.  This is one of those days.  So I thought I’d provide some thoughts that have been circulating around my brain for the past several days.  They are in no particular order and are of no particular importance.  They are just thoughts that relate to evaluation.

  1. Recently, I had the opportunity to attend a professional development session on qualitative data with Johnny Saldana. Other than the fact that there wasn't enough time to thoroughly cover the topic (that was the structure of the conference), I was once again impressed with the complexity of the topic. Although it was titled "Advanced Qualitative Inquiry", I thought a participant would need to know something about coding qualitative data in order to understand this complexity. Saldana did a masterful job–which is something, given that he comes from a discipline so very different from evaluation (although some people draw a parallel between his discipline–theater–and evaluation, both being interpretive disciplines).
  2. I've mentioned Saldana before–he was commissioned by SAGE to write the third edition of the Miles and Huberman classic, Qualitative Data Analysis. The third edition was published in April 2013. Some of what Saldana shared with us in the session was from his book, The Coding Manual for Qualitative Researchers. I have that book and have started reading it. I'll be submitting a review of it and of Qualitative Data Analysis to the American Journal of Evaluation this fall.
  3. I haven't gotten anything written on the four manuscripts I thought I'd be able to finish this summer. I've about six weeks left (classes at OSU don't start until the middle of September). So much else has happened: I've been asked to develop a chapter for a New Directions for Evaluation volume on Needs Assessment in the public sector; to co-lead a professional development session on Needs Assessment and Assets; and to review a manuscript on qualitative evaluation using a logic model, in addition to developing the evaluation I mentioned a while back. This experience truly reinforces the need to block time for writing and to protect it. I block time; I don't tend to protect it. Lesson learned.
  4. The evaluation model (of the major qualitative evaluation I will conduct) was presented at the county operations meeting Monday. The group wasn't opposed; they just weren't exuberantly enthusiastic, either. They had some interesting questions and interesting concerns that we will need to address. It is hard to present a qualitative study to a group that has been trained to think of comparison groups and baselines. We are using a developmental model (not summative, so we don't need a baseline; we want only to know the now of the experience) and we are using criteria rather than comparing groups against one another (criterion referenced, NOT norm referenced). More on this later…
  5. Another question posed to me was about surveys: odd or even response scales? The answer there is, "It all depends." The "all depends" relates to many variables, including cost, time, and other resources (see the sketch after this list for what the odd-versus-even choice looks like). The Kirkpatricks talk about aspects of evaluation in their blog this week. They include a webinar on what to do when resources are scarce.
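As a side note, here is a minimal illustration of the odd-versus-even choice itself; the response labels below are hypothetical (not from the survey discussed above), and the "it all depends" judgment about cost, time, and other resources still applies. The structural difference is simply that an odd-numbered scale offers a neutral midpoint, while an even-numbered scale asks respondents to lean one way or the other.

```python
# Hypothetical Likert-type response scales (for illustration only).

odd_5_point = [
    "Strongly disagree",
    "Disagree",
    "Neither agree nor disagree",  # midpoint: only an odd scale has one
    "Agree",
    "Strongly agree",
]

even_4_point = [
    "Strongly disagree",
    "Disagree",
    "Agree",  # no neutral option; respondents must take a side
    "Strongly agree",
]

# An odd scale has an odd number of options, an even scale an even number.
assert len(odd_5_point) % 2 == 1
assert len(even_4_point) % 2 == 0
```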

The American Evaluation Association has opened registration for the 2013 meeting in Washington, DC. This meeting promises to be attended by the most people yet. Eleven years ago we were in DC and broke all attendance records to date. I remember because that was my presidential year…the year the evaluation profession started thinking of evaluation as a system; that everything we do is connected. Several people have commented about AEA–that they didn't know there was such an association; that they didn't know about the conference; that they weren't members. So folks, here is the skinny on AEA (at least part of the skinny…).

The American Evaluation Association was officially founded in 1986 as a combined organization of the Evaluation Research Society and the Evaluation Network. ERS was academic and EN was practitioner; merging the two was a challenge, as each thought something would be lost. This is a good example of where the whole is greater than the sum of its parts. The differences were pronounced and debated (now you only see AEA). Robert (Bob) B. Ingle was the force behind the conference; he mounted the first EN/ERS conference in 1981 in Austin, Texas. I was a graduate student. I was in awe. Although I had been to numerous professional conferences before attending this first conference, I had never met anyone like Bob Ingle. His first comment to me once we connected after playing phone tag was, "You spell your name wrong!" (Turns out he was the Scotland branch of the German house of Engel; my ancestors changed the spelling when they came out of Germany.) I was a nascent graduate student in love with my studies, and here comes this brusque, acerbic, and outrageous giant. He became my good friend–I knew him from 1981 until he died in 1998. He believed passionately in program evaluation. I think he is smiling at the growth in the profession and the organization. He knew a lot of us; he saw the association through the good times and the bad times. I could end here and say the rest is history…only there is so much to tell. The association went from an all-volunteer organization at its founding in 1986 to an organization of over 8,000 members run by an association management firm. Susan Kistler (of Kistler Associates) was our executive director for the last 15 years. (The association has transitioned to a new management firm [SmithBucklin] and a new executive director [Denise Roosendaal].) Seeing the association transition is bittersweet; growth is good, the loss of family feeling is sad. The association no longer feels intimate, like family; it offers so much more to folks who are members.

David Bernstein is the co-chair of the local arrangements working group (LAWG) for this year's conference. He led off a week of AEA365 posts talking about the conference. Read his post; it tells you a lot about the conference. This week, AEA365 is being written by the local arrangements working group. The role of the local arrangements group is to make sure the folks who attend the conference have a good time, both at the conference and in DC. DC is a wonderful city. You cannot see it in a week; it is always changing. If you have never been, take a day to see the city's high points. It is the nation's capital, after all, and there are many high points.

The members-only AEA August newsletter also talks about registration, with hyperlinks to the registration site, the conference program, and hotel accommodations. (The members-only newsletter is just one reason to join AEA.) I've been going to AEA since 1981. This is the first year I will not have a paper/poster/etc. on the program. (I am doing a professional development session with Jim Altschuld, though; it is number 22.)

Each year I attend AEA, I think of the three evaluative criteria that FOR ME make a good conference: see three long-time friends; meet three new people who could become friends; and get three new ideas. If I do all this, I usually come home energized. I hope to see you there.

I'm about to start a large-scale project, one that will be primarily qualitative (it may end up being a mixed methods study; time will tell); I'm in the planning stages with the PI now. I've done qualitative studies before–how could I not, with all the time I've been an evaluator? My go-to book for qualitative data analysis has always been Miles and Huberman (although my volume is black). That is their second edition, published in 1994. I loved that book for a variety of reasons: 1) it provided me with a road map for processing qualitative data; 2) it offered the reader an appendix for choosing a qualitative software program (now out of date); and 3) it was systematic and detailed in its description of display. I was very saddened to learn that both authors had died and there would not be a third edition. Imagine my delight when I got the Sage flier for a third edition! Of course I ordered it. I also discovered that Saldana (the new third author on the third edition) has written another book on qualitative data that he cites a lot in this third edition (The Coding Manual for Qualitative Researchers), and I ordered that volume as well.

Saldana, in the third edition, talks a lot about data display, one of the three factors that qualitative researchers must keep in mind. The other two are data condensation and conclusion drawing/verification. In its review, Sage Publications says, "The Third Edition's presentation of the fundamentals of research design and data management is followed by five distinct methods of analysis: exploring, describing, ordering, explaining, and predicting." These five chapters are the heart of the book (in my thinking); that is not to say that the rest of the book doesn't have gems as well–it does. The chapter on "Writing About Qualitative Research" and the appendix are two. The appendix (this time) is "An Annotated Bibliography of Qualitative Research Resources", which lists at least 32 different classifications of references that would be helpful to all manner of qualitative researchers. Because it is annotated, the bibliography provides a one-sentence summary of the substance of each book. A find, to be sure. Check out the third edition.

I will be attending a professional development session with Mr. Saldana next week.  It will be a treat to meet him and hear what he has to say about qualitative data.  I’m taking the two books with me…I’ll write more on this topic when I return.  (I won’t be posting next week).