NOTE: This was written last week. I didn’t have time to post. Enjoy.

 

Methodology (aka implementation, monitoring, and delivery) is important. What good is it if you just gather the first findings that come to mind? Being rigorous here is just as important as when you are planning and modeling the program. So I’ve searched the last six years of blog posts and gathered some of them for you. They are all about the survey, a form of methodology. Survey is a methodology that is often used by Extension, as it is easy to use. However, organizing the survey, getting the survey back, and dealing with non-response are problematic (another post, another time).

The previous posts are organized by date from the oldest to the most recent:

 

2010/02/10

2010/02/23

2010/04/09

2010/08/25

2012/08/09

2012/10/12

2013/03/13

2014/03/25

2014/04/15

2014/05/19

2015/06/29

2015/07/24

2015/12/07

2016/04/15

2016/04/21 (today’s post isn’t hyperlinked)

Just a few words on surveys today: A colleague asked about an evaluation survey for a recent conference. It will be an online survey, probably using the University system, Qualtrics. My colleague jotted down a few ideas. It occurred to me that this book (by Ellen Taylor-Powell and Marcus Renner) would be useful. Page ten of the book asks what type of information is needed and wanted, and it lists five types of possible information (a rough sketch of how these might look as survey sections follows the list):

  1. Participant reaction (some measure of satisfaction);
  2. Teaching and facilitation (strengths and weaknesses of the presenter, who may (or may not) change the next time);
  3. Outcomes (what difference/benefits/intentions did the participant experience);
  4. Future programming (other educational needs/desires); and
  5. Participant background (who is attending and who isn’t can be answered here).
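
To make that concrete, here is a rough sketch (mine, not the booklet’s) of how those five types might translate into sections of an online conference survey like the one my colleague is building. The question wording is purely illustrative.

```python
# Illustrative only: one way to organize a conference evaluation survey
# around the five types of information listed above.
survey_sections = {
    "participant_reaction": [
        "Overall, how satisfied were you with the conference? (1-5)",
    ],
    "teaching_and_facilitation": [
        "How clear was the presenter? (1-5)",
        "What should the presenter keep, and what should change?",
    ],
    "outcomes": [
        "What will you do differently as a result of this conference?",
    ],
    "future_programming": [
        "What topics would you like addressed next year?",
    ],
    "participant_background": [
        "What is your role (e.g., educator, volunteer, administrator)?",
    ],
}

# Print the draft outline, section by section.
for section, questions in survey_sections.items():
    print(section)
    for q in questions:
        print("  -", q)
```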

Thinking through these five categories made all the difference for my colleague. (Evaluation was a new area for them.) I had forgotten how useful this booklet is for people being exposed to evaluation, and to surveys, for the first time. I recommend it.

As promised last week, this week’s post is (briefly) about implementing, monitoring, and delivering an evaluation.

Implementation. To implement an evaluation, one needs to have a plan, often called a protocol.  Typically, this is a step-by-step list of what you will do to present the program to your target audience.  In presenting your program to your target audience, you will also include a step-by-step list of how you will gather evaluation information (data).  What is important about the plan is that it be specific enough to be replicated by other interested parties.  When a plan is developed, there is typically a specific design behind each type of data to be collected.  For example, specific knowledge change is often measured by a pretest-posttest design; behavioral change is often measured with a repeated measures design.  Campbell and Stanley, in their classic book, Experimental and quasi-experimental designs for research, present a wealth of information about designs that is useful in evaluation (as well as research).
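
As a small illustration of the first of those designs, here is a minimal sketch of analyzing pretest-posttest knowledge change with a paired t-test. The scores are made up, and it assumes each participant’s pretest and posttest scores are matched in order.

```python
from scipy import stats

# Hypothetical matched knowledge scores (same participants, same order).
pretest = [55, 60, 48, 72, 65, 58, 70, 62]
posttest = [68, 75, 60, 80, 70, 66, 78, 71]

# Paired t-test: did knowledge change from pretest to posttest?
result = stats.ttest_rel(posttest, pretest)
mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)

print(f"Mean gain: {mean_gain:.1f} points, "
      f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```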

There are numerous designs which will help develop the plan for the implementation of the program AND the evaluation.

Monitoring. Simply put, monitoring is watching to see if what you said would happen actually does.  Some people think of monitoring as being watched; although it may feel that way, it is being watched with a plan.  When I first finished my doctorate and became an evaluator, I conceptualized evaluation simply as process, progress, product. This helped stakeholders understand what evaluation was all about.  The monitoring part of evaluation was answered when I asked, “Are we making progress?  Are we where we said we would be at the time we said we would be there?”  This is really important because, as Jonny Morell points out in his book, evaluations sometimes don’t go as planned, even with the best monitoring system.
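
A simple way to ask “are we where we said we would be?” is to write down the milestones from the protocol and check them against the calendar. Here is a hypothetical sketch; the milestones and dates are invented.

```python
from datetime import date

# Hypothetical milestones from an implementation plan:
# (description, planned completion date, done?)
milestones = [
    ("Recruit participants",    date(2016, 2, 1),  True),
    ("Deliver first workshop",  date(2016, 3, 15), True),
    ("Collect pretest data",    date(2016, 3, 15), False),
    ("Deliver second workshop", date(2016, 5, 1),  False),
]

today = date(2016, 4, 21)

for name, planned, done in milestones:
    if done:
        status = "done"
    elif planned < today:
        status = "BEHIND SCHEDULE"  # we said we'd be here by now, and we aren't
    else:
        status = "upcoming"
    print(f"{name:<25} planned {planned}  ->  {status}")
```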

Delivering.  Delivering is the nuts and bolts of what you are going to do.  It addresses the who, what, where, when, how, and why of the implementation plan.  All of these questions interrelate; for example, if you do not identify who will conduct the evaluation, the evaluation is often “squeezed in” at the end of a program because it is required.

In addition to answering these questions when delivering the evaluation, one thinks about models, or evaluation approaches.  Stufflebeam, Madaus, and Kellaghan (in Evaluation models: Viewpoints on educational and human services evaluation) discuss various approaches and state that the approach the evaluator uses provides a framework for conducting the evaluation as well as for presenting and using its results.