Four weeks ago (January 17, 2013), I asked whether this blog was making a difference and asked that y’all post specific examples of that difference–I was, and am, looking for change, specifically.  I said I would summarize the responses and post a periodic update.  This is the first update.

I’ve gotten many (more than 50) comments on that post.  They are interesting.  No one has offered me a specific example of how this blog is making a difference.  Several agree that page views are NOT an adequate measure of effectiveness.  Several (again) agree that length of time of a visit might be a good indicator.  A few are reading the blog for marketing tips; a few are using the blog to entice me to go to their blog–I don’t think so, especially when the response is in another language that I have to translate.  (I’m sure this sounds elitist–not my intention, to be sure–rather just the time it takes to find a translator.)  Most comments just encourage me to keep up the writing because 1) it is clear; 2) it loads quickly; 3) they like/love the blog or its content; or 4) it can be applied to their marketing strategy and their blog (that actually may be a change, only I’d have to do a lot of research to know if their site benefited).  Some folks just make a comment that seems to be a non sequitur.

So I really don’t know.  Judging from the comments (random though they may be), people seem to be reading it.  I am curious how many people regularly go to this blog (regularly like weekly, not once in a while).  If I’m representative, I go to other blogs regularly, though not the same blogs each time, so I’m probably one of those once-in-a-while people–even with evaluation blogs.  There are so many out there, and the number is growing.  What I’ve learned is that the title of an individual post is what captures the folks.  Coming up with catchy titles is difficult; coming up with catchy titles that are also optimized for search engines is even harder.

I didn’t post a survey this time;  maybe I should.  I will post another update in about a month.

One of the expectations for the evaluation capacity building program that just finished is that the program findings will be written up for publication in scientific journals.

Easy to say.  Hard to do.

Writing is HARD.

To that end, I’m going to dig out my old notes from when I taught technical writing to graduate students, medical students, residents, and young faculty and give a few highlights.

  1. Writing only happens when words are put on paper (or typed into a computer).  Thinking about writing (I do that a lot) doesn’t count as writing.  The words don’t have to be perfect; good writing happens with multiple revisions.
  2. Schedule time for writing; write it in your planner.  You are making an appointment with yourself to write.  At 10:00 am every MWF I will write for one hour; then stop.  Protect this time.  You protect your program time; you need to protect your writing time.
  3. Keep in mind paper organization.  Generally, the IMRAD structure works for all manuscripts.  IMRAD stands for Introduction, Methods, Results, And Discussion.  The Introduction is the literature review and ends with the research question.  The Methods section is how the program, experiment, or research was conducted, in EXCRUCIATING detail; another evaluator should be able to pick up your manuscript and replicate your program.  The Results are what you discovered: the lessons learned, what worked and what didn’t.  They are quantitative and/or qualitative.  The Discussion is where you get to speculate; it highlights your conclusions and discusses the implications.  It also ties back to the literature.  If you have done the reporting correctly, you will have gone from the general to the specific and back to the general.  Think of two triangles placed together with their points (apexes) touching.
  4. Follow the five Cs.  This is the single most important piece of advice (after number 2 above) about writing.  The five Cs are Clarity, Coherence, Conciseness, Correctness, and Consistency.  If you keep those five Cs in mind, you will write well.  The writing is clear–you have not obfuscated the material.  The writing is coherent–it makes sense.  The writing is concise–you do not babble on or use jargon.  The writing is correct–you remember that the word data is a plural noun and takes a plural verb (use proper grammar and syntax).  The writing is consistent–you call your participants the same thing all the way through (no, it is not boring).
  5. Start with the section you know best.  That may be what is most familiar; it may be what is most recent; it may be what is most concrete.  Whatever you do, DO NOT start with the abstract; write it last.
  6. Have a style guide on your desk.  Most social sciences use APA; some use MLA or Chicago Style.  Have one (or more) on your desk.  Use it.  Follow and use the style that the journal requires.  That means you have read the “Instructions to authors” somewhere in the publication.
  7. Once you have finished the manuscript, READ IT OUT LOUD TO YOURSELF.
  8. Run a spell and grammar check on the manuscript–it won’t catch everything; it will only catch most errors.
  9. Have more than one person read the manuscript AFTER you have read it out loud to yourself.
  10. Persist.  More than one manuscript has been published because the author persisted with the journal.

Happy writing.

One of the outcomes of learning about evaluation is informational literacy.

Think about it.  How does what is happening in the world affect your program?  Your outcomes?  Your goals?

When was the last time you applied that peripheral knowledge to what you are doing?  Informational literacy is being aware of what is happening in the world.  Knowing this information, even peripherally, adds to your evaluation capacity.

Now, this is not advocating that you need to read the NY Times daily (although I’m sure they would really like to increase their readership); rather it is advocating that you recognize that none of your programs (whether little p or big P) occur in isolation. What your participants know affects how the program is implemented.  What you know affects how the programs are planned.  That knowledge also affects the data collection, data analysis, and reporting.  This is especially true for programs developed and delivered in the community, as are Extension programs.

Let me give you a real life example.  I returned from Tucson, AZ and the capstone event for an evaluation capacity program I was leading.  The event was an outstanding success–not only did it identify what was learned and what needed to be learned, it also demonstrated the value of peer learning.  I was psyched.  I was energized.  I was in an automobile accident 24 hours after returning home.  (The car was totaled–I no longer have a car; my youngest daughter and I experienced no serious injuries.)  The accident was reported in the local paper the following day.  Several people saw the announcement; those same people expressed their concern; some of them asked how they could help.  Now this is a very small local event that had a serious effect on me and my work.  (If I hadn’t had last week’s post already written, I don’t know if I could have written it.)  Solving simple problems takes twice as long (at least).  This informational literacy influenced those around me.  Their knowing changed their behavior toward me.  Think of what September 11, 2001 did to people’s behavior; think about what the Pope’s RESIGNATION is doing to people’s behavior.  Informational literacy.  It is all evaluative.  Think about it.

 

Graphic URL: http://www.otterbein.edu/resources/library/information_literacy/index.htm

I try to keep politics out of my blogs.  Unfortunately (or fortunately, depending on your world view), evaluation is a political activity.  Several recent posts by others remind me of that.  I try to point out how everyday activities are evaluative.

One is the growing discussion (debate?) about gun regulation.  Recently, the Morning Joe show included a clip that was picked up by MoveOn.org.  If you haven’t seen it, you need to.  Although the evaluative criteria are not clear, the outcome is, and each commentator addresses the issue with a different lens (go here to view the clip).

In addition, a colleague of mine posted on her blog another blogger’s work (we are all connected, you know) that demonstrates the difficulty evaluators have being responsive to a client, especially one with whom they do not share the value in question (see Genuine Evaluation).  If you put your evaluator’s hat aside, the original post could be viewed as funny.

How many times have you smelled the milk and decided it was past prime?  Or seen mold growing on the yogurt?  This food blog also has many evaluative aspects (insert use by date blog).  Check it out.

 

I’m back from Tucson where it was warm and sunny–I wore shorts!  The best gift I got, serendipitously, was observing peer learning among the participants.  Now I have to compile an evaluation of the program because I want to know, systematically, what the participants thought.  I took a lot of notes and I know what needs to be added, what worked, and what didn’t.  I got a lot of spontaneous and unsolicited comments about the value of the program–so OK–I’ve got the qualitative feedback (e.g., 18 months ago I wouldn’t have thought of this; knowing I’m not alone in the questions I have helps; I can now find an answer…).  Once I get the quantitative feedback, I’ll triangulate the comments, the quantitative data, and any other data I have.  I am hoping to USE these findings to offer the program again.  More on that later.

 

An update on my making-a-difference query.  I’ve gotten a couple of responses and NO examples.  One response was about not using page views as a measure of success; instead, use average time viewing a page.  A lot of respondents think that this is a marketing blog.  Since evaluation is such a big part of marketing, I can see how that fits.  Only, this is an evaluation blog.  I’m not reposting the survey.  It has been closed for weeks and weeks.  I was hoping for examples of how the blog changed your thinking, practice, or world view.
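To make that suggestion concrete, here is a minimal sketch (in Python, with made-up numbers; none of it comes from this blog’s actual analytics) of how average time on a page differs from a raw count of page views:

```python
# Hypothetical visit records -- the page names and seconds are invented for illustration.
visits = [
    {"page": "making-a-difference", "seconds_on_page": 210},
    {"page": "making-a-difference", "seconds_on_page": 15},
    {"page": "making-a-difference", "seconds_on_page": 180},
    {"page": "writing-tips", "seconds_on_page": 20},
    {"page": "writing-tips", "seconds_on_page": 25},
]

# Tally view counts and total time per page.
views = {}
total_seconds = {}
for v in visits:
    views[v["page"]] = views.get(v["page"], 0) + 1
    total_seconds[v["page"]] = total_seconds.get(v["page"], 0) + v["seconds_on_page"]

# A page can have many views and short visits, or few views and long ones.
for page in views:
    avg = total_seconds[page] / views[page]
    print(f"{page}: {views[page]} views, {avg:.0f} seconds on page on average")
```

The point of the sketch is simply that the two measures can rank the same pages differently; which one says more about making a difference is still the open question.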

 

Also, just so you know, I was in an auto accident 24 hours after I returned from Tucson.  Mersedes and I have aches and pains and NO serious injuries.  We do not have a car any more.  Talk about evaluating an activity–think about what you would do without a car (or if you don’t have one, what you would do with one).  I had to.

I’m an evaluator.

I want to know if something makes a difference; if the change is for the better; if it has value, merit, worth.

After all, the root of evaluation is value.

I haven’t answered individually the numerous comments that have been posted.  I just continue to write and see what happens.  I’m hoping that some of what I’ve said over what is now more than three years has 1) made sense; 2) made a difference; and 3) been worthwhile.  I also hope you, reader, have been able to use some of what you have read here.  I don’t know.

Someone is keeping track of my analytic measures; that’s wonderful.  Some blogs use that as a measure of making a difference; I don’t.  I look at what people say.  I read every comment even if I don’t respond.  A lot of folks say that the information has been interesting; that the blog is well written; that I should continue.  No one says how they use the material, or, for that matter, if they do.  So, reader, I have a challenge:

Post a comment about how you have used the information you have read here.  Post it next week when I won’t be blogging (see last week).  Let me know.  I’ll summarize the responses when I get back.  I won’t do this for very long–two, maybe three weeks; a month at most.  (When I previously posted a link to a quick on-line survey, I kept it open for only two weeks; not long enough for some folks.)

 

Other blog writers get comments not dissimilar to mine (I read a lot of blogs for ideas).  I don’t see that folks are actually giving the writer specific information on what difference the blog has made in their lives.  I must confess, I don’t let them know either.  So since this is a new year, and everyone is trying new behaviors, the new behavior I’m asking for here is this: tell me what difference this blog has made or is making.

On January 22, 23, and 24, a group of would-be evaluators will gather in Tucson, AZ at the Westin La Paloma Resort.

Even though Oregon State is a co-sponsor of this program, Oregon in winter (i.e., now) is not the land of sunshine, and since vitamin D is critical for everyone’s well-being, I chose Tucson for our capstone event.  Our able support person, Gretchen, chose the La Paloma, a wonderful site on the north side of Tucson.  So even if it is not warm, it will be sunny.  Why, we might even get to go swimming; if not swimming, certainly hiking.  There are a lot of places to hike around Tucson…in Sabino Canyon; near/around A Mountain (first-year U of A students get to whitewash or paint the A); Saguaro National Park; or maybe in one of the five (yes, five) mountain ranges surrounding Tucson.  (If you are interested in other hikes, look here.)

We will be meeting Tuesday afternoon, all day Wednesday, and Thursday morning.  Participants have spent the past 17 months participating in and learning about evaluation.  They have identified a project/program (either big P or little p), and they participated in a series of modules, webinars, and office hours on topics used every day in evaluating a project or program.  We anticipate over 20 attendees from the cohorts.  We have participants from five Extension program areas (Nutrition, Agriculture, Natural Resources, Family and Community Science, and 4-H), from ten western states (Oregon, Washington, California, Utah, Colorado, Idaho, New Mexico, Arizona, Wyoming, and Hawaii), and all levels of familiarity with evaluation (beginner to expert).

I’m the evaluation specialist in charge of the program content (big P); Jim Lindstrom (formerly of Washington State, currently University of Idaho) has been the professional development and technical specialist; and Gretchen Cuevas (OSU) has been our wonderful support person.  I’m using Patton’s Developmental Evaluation Model to evaluate this program.  Although some things were set at the beginning of the program (the topics for the modules and webinars, for example), other things were changed depending on feedback (readings, office hours).  Although we expect that participants will grow their knowledge of evaluation, we do not know what specific and measurable outcomes will result (hence, developmental).  We hope to run the program (available to Extension faculty in the Western Region) again in September 2013.  Our goal is to build evaluation capacity in the Western Extension Region.  Did we?

What have you listed as your goal(s) for 2013?

How is that goal related to evaluation?

One study suggests that you’re 10 times more likely to alter a behavior successfully (i.e., get rid of a “bad” behavior; adopt a “good” behavior) if you make a resolution than if you don’t.  That statement is evaluative; a good place to start.  10 times!  Wow.  Yet even that isn’t a guarantee you will be successful.

How can you increase the likelihood that you will be successful?

  1. Set specific goals.  Break the big goal into small steps; tie those small steps to a time line.  You want to read how many pages by when?  Write it down.  Keep track.
  2. Make it public.  Just like other intentions, if you tell someone, there is an increased likelihood you will follow through.  I put mine in my quarterly reports to my supervisors.
  3. Substitute “good” for “less than desirable”.  I know how hard it is to write (for example).  In the past I have scheduled, and this year I will again schedule and protect, a specified time to write those three articles that are sitting partly complete.  I’ve substituted “10:00 on Wednesdays and Fridays” for the vague “when I have a block of time I’ll get it done”.  The block of time never materializes.
  4. Keep track of progress.  I mentioned it in number 1; I’ll say it again: keep track; make a chart.  I’m going to get those manuscripts done by X date…my chart will reflect that (a minimal sketch of such a chart follows this list).
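Here is one way such a tracking chart might look, as a small Python/matplotlib sketch; the weeks, page counts, and goal below are invented numbers, not my actual progress:

```python
# A minimal progress chart: cumulative pages written vs. a target (all numbers hypothetical).
import matplotlib.pyplot as plt

weeks = [1, 2, 3, 4, 5, 6]            # weeks since the resolution was made
pages_written = [0, 3, 5, 9, 12, 16]  # cumulative pages at the end of each week
goal = 20                             # pages needed by the deadline

plt.plot(weeks, pages_written, marker="o", label="pages written")
plt.axhline(goal, linestyle="--", label="goal")
plt.xlabel("Week")
plt.ylabel("Cumulative pages")
plt.title("Manuscript progress toward the deadline")
plt.legend()
plt.show()
```

Seeing the gap between the line and the goal each week is the nudge; updating the chart takes only a few minutes.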

So are you going to

  1. Read something new to you (even if it is not new)?
  2. Write that manuscript from that presentation you made?
  3. Finish that manuscript you have started AND submit it for publication?
  4. Register for and watch a webinar on a topic you know little about?
  5. Explore a topic you find interesting?
  6. Something else?

Let me hear from you as to your resolutions; I’ll periodically give you an update.

 

And be grateful for the opportunity…gratitude is a powerful way to reinforce you and your goal setting.

 

Hanukkah ended last Saturday, two days after the December new moon.

Solstice happens Friday at 11:11 GMT (or the end of the world according to the Mayan calendar).

Christmas is next Tuesday.

Kwanzaa (a harvest festival that includes candles–light) starts the day after Christmas for seven days.

The twelfth day of Christmas occurs on January 6 (according to  some western Christian calendars).

All of these holidays are festivals of light…Enjoy.

And may 2013 bring you health, wealth, happiness, and the time to enjoy them.

 

(I’ll be gone next week, hence two posts this week.)

At the end of January, participants in an evaluation capacity building program I lead will provide highlights of the evaluations they completed for this program.  That the event happens to be in Tucson and I happen to be able to get out of the wet and dreary northwest is no accident.  The event will capstone WECT (Western [Region] Evaluation Capacity Training–say “west”) participants’ evaluations of the past 17 months.  Since each participant will be presenting their program and the evaluation they did of it, there will be a lot of data (hopefully).  The participants and those data could use (or not) a new and innovative take on data visualization.  Susan Kistler, AEA’s Executive Director, has blogged in AEA365 several times about data visualization.  Perhaps these reposts will help (a small charting sketch of my own follows them).

 

Susan Kistler says: “Colleagues, I wanted to return to this ongoing discussion. At this year’s conference (Evaluation ’12), I did a presentation on 25 low-cost/no-cost tech tools for data visualization and reporting. An outline of the tools covered and the slides may be accessed via the related aea365 post here http://aea365.org/blog/?p=7491. If you download the slides, each tool includes a link to access it, cost information, and in most cases supplementary notes and examples as needed.

A couple of the new ones that were favorites included wallwisher and poll everywhere. I also have on my to do list to explore both datawrapper and amCharts over the holidays.

But…am returning to you all to ask if there is anything out there that just makes you do your happy dance in terms of new low-cost, no-cost tools for data visualization and/or reporting.”  (This is a genuine request–if there is something out there, let Susan know.  You can comment on the blog, contact her through AEA (susan@eval.org), or let me know and I’ll forward it.)

Susan also says in Saturday’s (December 15, 2012) blog (and this would be very timely for WECT participants):

Enroll in the Free Knight Center’s Introduction to Infographics and Data Visualization: The course is online, and free, and will be offered between January 12 and February 23. According to the course information, we’ll learn the basics of:

“How to analyze and critique infographics and visualizations in newspapers, books, TV, etc., and how to propose alternatives that would improve them.

How to plan for data-based storytelling through charts, maps, and diagrams.

How to design infographics and visualizations that are not just attractive but, above all, informative, deep, and accurate.

The rules of graphic design and of interaction design, applied to infographics and visualizations.

Optional: How to use Adobe Illustrator to create infographics.”
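And here is the small charting sketch I mentioned above.  It uses Python and matplotlib rather than any of the tools Susan lists, and the participant counts are invented (only the program areas come from the WECT description), so treat it as one illustration of low-cost charting, not a report:

```python
# Participants by Extension program area -- counts are made up for illustration.
import matplotlib.pyplot as plt

areas = ["Nutrition", "Agriculture", "Natural Resources",
         "Family and Community Science", "4-H"]
participants = [5, 4, 6, 3, 4]  # hypothetical counts, not the actual WECT roster

plt.barh(areas, participants)
plt.xlabel("Number of participants")
plt.title("WECT participants by program area (illustrative data)")
plt.tight_layout()
plt.show()
```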

 

What do I know that they don’t know?
What do they know that I don’t know?
What do all of us need to know that few of us know?

These three questions have buzzed around my head for a while in various formats.

When I attend a conference, I wonder.

When I conduct a program, I wonder, again.

When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.

Thinking about these questions, I had these ideas:

  • I see the first question relating to capacity building;
  • The second question relating to engagement; and
  • The third question (which builds on the first two) relating to cultural competence.

After all, aren’t both of these (capacity building and engagement) related to a “foreign country” and a different culture?

How does all this relate to evaluation?  Read on…

Premise:  Evaluation is an everyday activity.  You evaluate every day, all the time; you call it making decisions.  Every time you make a decision, you are building capacity in your ability to evaluate.  Sure, some of those decisions may need to be revised.  Sure, some of those decisions may just yield “negative” results.  Even so, you are building capacity.  AND you share that knowledge–with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store.  That is building capacity.  Building capacity can be systematic, organized, sequential.  Sometimes formal, scheduled, deliberate.  It is sharing “What do I know that they don’t know?” (in the hope that they too will know it and use it).

Premise:  Everyone knows something.  In knowing something, evaluation happens–because people made decisions about what is important and what is not.  To really engage (not just do outreach, which much of Extension does), one needs to “do as” the group that is being engaged.  To do anything else (“doing to” or “doing with”) is simply outreach, and little or no knowledge is exchanged.  That doesn’t mean that knowledge isn’t distributed; Extension has been doing that for years.  It just means that the assumption (and you know what assumptions do) is that only the expert can distribute knowledge.  Who is to say that the group (target audience, participants) isn’t expert in at least part of what is being communicated?  It probably is.  It is the idea that…they know something that I don’t know (and I would benefit from knowing).

Premise:  Everything, everyone is connected.  Being prepared is the best way to learn something.  Being prepared by understanding culture (I’m not talking only about the intersection of race and gender; I’m talking about all the stereotypes you carry with you all the time) reinforces connections.  Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threats.  And that is an evaluative task.  Think about it.  I think it captures the “What do all of us need to know that few of us know?” question.