As promised last week, this week's post is (briefly) about implementation, monitoring, and delivering evaluation.

Implementation. To implement an evaluation, one needs to have a plan, often called a protocol. Typically, this is a step-by-step list of what you will do to present the program to your target audience, along with a step-by-step list of how you will gather evaluation information (data). What is important about the plan is that it be specific enough to be replicated by other interested parties. When a plan is developed, there is typically a specific design behind each type of data to be collected. For example, specific knowledge change is often measured by a pretest-posttest design; behavioral change is often measured with a repeated measures design. Campbell and Stanley, in their classic book, Experimental and quasi-experimental designs for research, present a wealth of information about designs that is useful in evaluation (as well as research).

There are numerous designs that will help you develop the plan for implementing both the program AND the evaluation.
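
To make the pretest-posttest idea concrete, here is a minimal sketch in Python (assuming SciPy is available; the scores are hypothetical, not from any actual program) of how knowledge change from such a design might be summarized with a paired comparison.

```python
# A minimal sketch of summarizing a pretest-posttest design.
# The scores below are hypothetical; in practice they would come
# from the instruments specified in the evaluation protocol.
from scipy import stats

pretest  = [12, 15, 9, 14, 10, 11, 13, 8]    # knowledge scores before the program
posttest = [16, 18, 12, 17, 14, 13, 15, 11]  # the same participants after the program

# Mean change is the simplest summary of knowledge gain.
changes = [post - pre for pre, post in zip(pretest, posttest)]
mean_change = sum(changes) / len(changes)

# A paired t-test asks whether the gain is larger than chance alone would produce.
t_stat, p_value = stats.ttest_rel(posttest, pretest)

print(f"Mean change: {mean_change:.2f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```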

Monitoring. Simply put, monitoring is watching to see if what you said would happen actually does. Some people think of monitoring as being watched; and while it may seem that way, it is being watched with a plan. When I first finished my doctorate and became an evaluator, I conceptualized evaluation simply as process, progress, product. This helped stakeholders understand what evaluation was all about. The monitoring part of evaluation was answered when I asked, “Are we making progress? Are we where we said we would be at the time we said we would be there?” This is really important because, as Jonny Morell points out in his book, evaluations don’t always go as planned, even with the best monitoring system.
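
As one way of picturing “being watched with a plan,” here is a small Python sketch that compares where the plan said we would be with where we actually are; the milestone names and dates are hypothetical, made up purely for illustration.

```python
# A small sketch of monitoring: comparing where we said we would be
# with where we actually are. Milestones and dates are hypothetical.
from datetime import date

planned = {
    "recruit participants":  date(2012, 3, 1),
    "deliver workshop 1":    date(2012, 4, 15),
    "collect posttest data": date(2012, 6, 1),
}

completed = {
    "recruit participants":  date(2012, 3, 5),
    "deliver workshop 1":    date(2012, 4, 14),
    # "collect posttest data" has not happened yet
}

today = date(2012, 5, 20)

for milestone, due in planned.items():
    done = completed.get(milestone)
    if done is not None:
        status = "on time" if done <= due else f"late by {(done - due).days} days"
    elif today > due:
        status = "overdue"
    else:
        status = "not yet due"
    print(f"{milestone}: {status}")
```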

Delivering. Delivering is the nuts and bolts of what you are going to do. It addresses the who, what, where, when, how, and why of the implementation plan. All of these questions interrelate; for example, if you do not identify who will conduct the evaluation, the evaluation is often “squeezed in” at the end of a program because it is required.
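
One informal way to keep those six questions from slipping is a simple checklist. The Python sketch below uses hypothetical placeholder answers and flags any question left unanswered, such as the missing “who” that leads to a squeezed-in evaluation.

```python
# A simple sketch of a delivery checklist for the six planning questions.
# The answers are hypothetical placeholders; an unanswered question (None)
# is exactly the gap that leads to evaluation being "squeezed in" at the end.
delivery_plan = {
    "who":   None,                      # who will conduct the evaluation?
    "what":  "pretest-posttest knowledge survey",
    "where": "county extension office",
    "when":  "final session of the program",
    "how":   "paper survey, entered into a spreadsheet",
    "why":   "funder requires evidence of knowledge change",
}

unanswered = [q for q, answer in delivery_plan.items() if not answer]
if unanswered:
    print("Unanswered planning questions:", ", ".join(unanswered))
else:
    print("All six planning questions are answered.")
```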

In addition to answering these questions when delivering the evaluation, one thinks about the models, or evaluation approaches. Stufflebeam, Madaus, and Kellaghan (in Evaluation models: Viewpoints on educational and human services evaluation) discuss various approaches and state that the approach used by the evaluator will provide a framework for conducting an evaluation as well as presenting and using the evaluation results.


2 thoughts on “Implementation, monitoring, and delivering evaluation”

  1. Salaam,

    You wrote:

    Campbell and Stanley, in their classic book, Experimental and quasi-experimental designs for research, present a wealth of information about designs that is useful in evaluation (as well as research).

    What do you mean by “useful” and “as well as research” in the paragraph above?

    Best

    Moein

  2. Hi Mohammed,
    Some scholars make a clear distinction among research, evaluation, and evaluation research. Because evaluation draws its tool set from social science research, it is important, I think, to recognize that many evaluation tools come from other sources. Campbell and Stanley originally wrote their book as a chapter in Gage’s “Handbook of Research on Teaching”, published in 1963, looking at designs that are used in educational research, the validity of those designs, and threats to valid inference. They define an experiment as “…that portion of research in which variables are manipulated and their effects upon other variables [are] observed.” These designs have over time proved useful in determining the value, or merit and worth, of a program as well as establishing causal links.
