The topic of complexity has appeared several times over the last few weeks.  Brian Pittman wrote about it in an AEA365 post; Charles Gasper used it as the topic of his most recent blog post.  Much food for thought, especially as it relates to the work evaluators do.

Simultaneously, Harold Jarche talks about connections.  To me, connections and complexity are two sides of the same coin.  Something that is complex typically has multiple parts.  Something that has multiple parts is connected to those other parts.  Certainly the work done by evaluators has multiple parts; certainly those parts are connected to each other.  The challenge we face is logically defending those connections and, in doing so, making the parts explicit.  Sound easy?  It's not.


That’s why I stress modeling your project before you implement it.  If the project is modeled, the model often leads you to discover that what you thought would happen because of what you do, won’t.  You then have time to fix the model, fix the program, and fix the evaluation protocol.  Even if your model is defensible and logical, you may still find that the program doesn’t get you where you want to go.  Jonny Morell writes about this in his book, Evaluation in the Face of Uncertainty.  There are worse things than having to fix the program or the evaluation protocol before implementation.  Keep in mind that connections are key; complexity is everywhere.  Perhaps you’ll have an Aha! moment.


I’ll be on holiday and there will not be a post next week.  Last week was an odd week–an example of complexity and connections leading to unanticipated outcomes.


Evaluation costs:  A few weeks ago, I posted a summary about evaluation costs.  A recent AEA LinkedIn discussion was on the same topic (see this link).  If you have not yet connected with other evaluators, note that groups besides AEA also have LinkedIn groups.  You might want to join one that is relevant to your work.

New topic:  The video on surveys posted last week generated a flurry of comments (though not on this blog).  I think it is probably appropriate to revisit the topic of surveys.  As I decided to revisit the topic, an AEA365 post from the Wilder Research group talked about data coding related to longitudinal data.

Now, many surveys, especially Extension surveys, focus on cross-sectional data, not on longitudinal data.  They may, however, involve a large number of participants, and the hot tips provided apply to coding surveys.  Whether the surveys Extension professionals develop involve 30, 300, or 3000 participants, these tips are important, especially if the participants are divided into groups on some variable.  Although the hot tips in the Wilder post talk about coding, not surveys specifically, they are relevant to surveys and I’m repeating them here.  (I’ve also adapted the original tips to Extension use.)

  • Anticipate different groups.  If you do this ahead of time and write it down in a data dictionary or coding guide (a sketch of such a guide follows this list), your coding will be easier.  If the raw data are dropped, or for some other reason scrambled (like a flood, hurricane, or a sleepy night), you will be able to make sense of the data more quickly.
  • Sometimes there is preexisting identifying information (like the location of the program) that has a logical code.  Use that code.
  • Precoding by location site helps keep the raw data organized and makes the coding go smoothly.
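
To make these tips concrete, here is a minimal sketch of what a written-down coding guide might look like in practice.  It uses Python with pandas; the file name, column names, site names, and code values are all hypothetical, invented for illustration, and are not taken from the Wilder post.

```python
import pandas as pd

# Hypothetical coding guide, written down BEFORE the data arrive.
# The site codes reuse preexisting, logical identifiers (second tip:
# when information already has a logical code, use that code).
SITE_CODES = {
    "Benton County office": 2,
    "Lane County office": 20,
    "Marion County office": 24,
}

# Codes for the variable that divides participants into groups
# (first tip: anticipate the different groups ahead of time).
GROUP_CODES = {"youth": 1, "adult": 2, "volunteer": 3}

# Load the raw survey responses (hypothetical file and column names).
raw = pd.read_csv("survey_responses.csv")

# Apply the codes from the guide.  If the raw data are ever dropped or
# scrambled, the dictionaries above let anyone reconstruct what each
# code means.
raw["site_code"] = raw["site"].map(SITE_CODES)
raw["group_code"] = raw["group"].map(GROUP_CODES)

# Flag responses whose site or group was not anticipated in the guide,
# so new groups surface immediately instead of at analysis time.
unmatched = raw[raw["site_code"].isna() | raw["group_code"].isna()]
if not unmatched.empty:
    print(f"{len(unmatched)} responses need a new code added to the guide.")

raw.to_csv("survey_responses_coded.csv", index=False)
```

The point is not the particular tool; a spreadsheet works too.  What matters is that the codes live in one written guide, decided in advance, separate from the raw data.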

Over the rest of the year, I’ll be revisiting surveys on a regular basis.  Surveys are often used by Extension.  Developing a survey that provides information you want, that you can use, and that makes sense is a useful goal.

New topic:  I’m thinking of varying the format of the blog or offering alternative formats with evaluation information.  I’m curious as to what would help you do your work better.  Below are a few options.  Let me know what you’d like.

  • Videos in blogs
  • Short, concise (i.e., 10-15 minute) webinars
  • Guest writers/speakers/etc.
  • Other ideas

A few weeks ago I mentioned that a colleague of mine shared with me some insights she had about survey development.  She had an Aha! moment.  We had a conversation about that Aha! moment and videotaped the conversation.  To see the video, click here.


In thinking about what Linda learned, I realized that Aha! moments could be a continuing series…so watch for more.

Let me know what you think.  Feedback is always welcome.

Oh–I want to remind you about an excellent resource for surveys: Dillman’s current book, Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, a Wiley publication by Don A. Dillman, Jolene D. Smyth, and Leah Melani Christian.  It needs to be on your desk if you do any kind of survey work.

You can control four things: what you say, what you do, and how you act and react (the last two being subsets of what you do).  So when is the best action a quick reaction, and when is it waiting (because waiting is an act of faith)?  And how is this an evaluation question?

The original post was in reference to an email response going astray (go see what his suggestions were); it is not likely that emails regarding an evaluation report will fall into that category.  Though not likely, it is possible.  Say you send the report to someone who doesn’t want/need/care about the report and is really not a stakeholder, just someone on the distribution list you copied from a previous post.  And oops, you goofed.  Yet the report is important; some people who needed/wanted/cared about it got it.  You need to correct for the others.  You can remedy the situation by following his suggestion: “Alert senders right away when you (send or) receive sensitive (or not so sensitive) emails not intended for you, so the sender can implement serious damage control.”  (Parentheticals added.)


Emails seem to be a topic of conversation this week.  A blog I follow regularly (Harold Jarche) cited two studies about the amount of time spent reading and dealing with email.  According to one of the studies he cites (in the Atlantic Monthly), the average worker spends 28% of a day’s work time reading email.  Think of all the unnecessary email you get THAT YOU READ.  How is that cluttering your life?  How is that decreasing your efficiency when it comes to the evaluation work you do?  Email is most of my work these days; it used to be that the phone and face-to-face conversations took up a lot of my time…not so much today.  I even use social media for capacity building; my browser is always open.  So between email and the web, a lot of my time is spent intimate with technology.


The last thought I had for this week was about the use of words (not unrelated to emails), especially as they relate to evaluation.  Evaluation is often described in terms of efficacy (producing the desired effect), effectiveness (producing the desired effect under specific conditions), efficiency (producing the desired effect under specific conditions with available resources), and fidelity (following the plan).  I wonder, if someone were to do an evaluation of what we do, would we be able to say we are effective and efficient, let alone faithful to the plan?