Recently, I talked about how evaluations have changed.

They are still familiar, yet they are different.

I have talked about formative and summative evaluation. (Thank you, Michael Scriven.)

Those are two of the seven evaluation types.

The other five are:

  1. Process Evaluation
  2. Outcome Evaluation
  3. Economic Evaluation
  4. Impact Evaluation
  5. Goals-based Evaluation

Yes, this discussion was from another blog.

So let’s discuss these other evaluations (which the author says you need to know to have an effective monitoring and evaluation system).

Choosing the evaluation for your program depends on where you are in the development of your program. If you are in the conceptualization phase there is one evaluation to use; the implementation phase uses others; and the end of the project will use yet a different evaluation or evaluations.

Going through them may help.

Conceptualization phase.

Formative evaluation typically is conducted during the development or improvement phase. By preventing waste and identifying potential areas of concern, formative evaluation increases the chances of success. It helps improve the program. Formative evaluation is often conducted more than once. It is usually contrasted with summative evaluation. For more information, see ; in fact, see it for all these evaluations, except as noted.

Implementation phase.

Process Evaluation usually refers to an evaluation of the treatment that focuses entirely on variables between the input and output data. It can also refer to the process component of an evaluation. Process evaluation occurs during the implementation phase.

Outcome Evaluation is often called “payoff evaluation.” Outcomes are often effects observed during the treatment. We would be wise to distinguish between immediate outcomes, middle (or end-of-treatment) outcomes, and long-term outcomes. Outcome evaluation occurs during the implementation phase.

Economic Evaluation is also known as cost-benefit (or benefit-cost) analysis or cost-effectiveness analysis. For a detailed description of these types of evaluation see  or . More and more, program designers are asked to do more with fewer resources and want to know how efficient the program is.
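The two metrics named above can be illustrated with a small sketch. All of the figures below are invented for the example; a real economic evaluation requires carefully estimated program costs and monetized (or directly measured) outcomes.

```python
# Hypothetical illustration of two common economic-evaluation metrics.
# The dollar amounts and participant counts are made up for the sketch.

def benefit_cost_ratio(total_benefits, total_costs):
    """Benefits divided by costs; a ratio above 1 suggests a net benefit."""
    return total_benefits / total_costs

def cost_effectiveness_ratio(total_costs, units_of_outcome):
    """Cost per unit of outcome (e.g., dollars per participant reaching the goal)."""
    return total_costs / units_of_outcome

# A program costing $50,000 that yields $120,000 in monetized benefits
# and helps 80 participants reach the target outcome:
bcr = benefit_cost_ratio(120_000, 50_000)    # 2.4
cer = cost_effectiveness_ratio(50_000, 80)   # 625.0 dollars per participant
print(bcr, cer)
```

The benefit-cost ratio needs every benefit expressed in dollars; the cost-effectiveness ratio avoids that by leaving the outcome in its natural units, which is often why evaluators choose one over the other.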

Project closure (end) phase.

Impact Evaluation is an evaluation that focuses on outcomes. It occurs at the end of the project. Although it is desirable to measure long-term impact, there is often no funding available for that evaluation, so impact evaluations often back off from long-term impact for lack of funding.

Summative Evaluation is conducted after the completion of the program, usually for the benefit of some external audience or funding agency. It should not be confused with outcome evaluation (an evaluation focused on outcomes rather than on process).

Goal-based Evaluation is any type of evaluation based on the goals and objectives of the program. It is done at the end of a program that is not ongoing. It often involves SMART objectives (Specific, Measurable, Attainable, Relevant, and Timely).

So what are you going to use to evaluate your program? You have a choice.







You know the old saying about when you assume.

I’ve talked about assumptions here and here. (AEA365 talks about them here.)

Each of those times I was talking about assumptions, though not necessarily from the perspective of today’s post.

I still find that making assumptions is a mistake as well as a cognitive bias. And it does…

Today, though, I want to talk about assumptions that evaluators can make, and in today’s climate, that is dangerous.

So, let me start with an example.

Professional Development.

AEA365 shares some insights into evaluating professional development.

The authors cite Thomas Guskey (1, 2). I didn’t know Thomas Guskey. I went looking.

Turns out, Donald Kirkpatrick (1924-2014) was the inspiration for Thomas Guskey’s five-level evaluation model.

Kirkpatrick has four levels in his model (reaction, learning, behavior, results). I’ve talked about them before here and here. I won’t go into them again.

Guskey has added a fifth level. In the middle.

He talks about participant reaction (level 1) and participant learning (level 2) (like Kirkpatrick).

His third level is different. Here he talks about organization support and change.

Then he adds two additional levels that correspond to Kirkpatrick’s model (levels 3 and 4). He adds participant use of new knowledge and skills (Kirkpatrick’s behavior; Guskey’s level 4) and participant learning outcomes (Kirkpatrick’s results; Guskey’s level 5).
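The correspondence described above can be sketched as a small mapping. The labels are paraphrased from the discussion here, not quoted from Guskey or Kirkpatrick directly.

```python
# A sketch of how Guskey's five levels line up with Kirkpatrick's four,
# as described in the text above. Level 3 is Guskey's addition and has
# no Kirkpatrick analogue.
GUSKEY_LEVELS = {
    1: ("Participant reaction", "Kirkpatrick level 1: reaction"),
    2: ("Participant learning", "Kirkpatrick level 2: learning"),
    3: ("Organization support and change", None),  # new in Guskey's model
    4: ("Participant use of new knowledge and skills", "Kirkpatrick level 3: behavior"),
    5: ("Participant learning outcomes", "Kirkpatrick level 4: results"),
}

for level, (label, kirkpatrick) in GUSKEY_LEVELS.items():
    print(f"Guskey {level}: {label} <- {kirkpatrick or 'no Kirkpatrick analogue'}")
```

Laying the levels out this way makes the structural point visible: Guskey did not replace Kirkpatrick’s model, he inserted an organizational level into the middle of it.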


People don’t want something truly new, they want the familiar done differently.

OK. I got this idea from a blog post on sushi, well, actually “California Roll”.

Made me think. Evaluation is a service; that service is familiar; over the years it has been done differently.

That moves the profession along–like language drift, only evaluation drift.

It is valuable to know formative/summative. (Thank you, Michael Scriven.)

It is also valuable to know that evaluation wouldn’t be where it is today if you didn’t understand that concept and how it applies to what you are doing with your evaluation.

So evaluation is like sushi (California Roll). Evaluation takes what is familiar and repackages it into something that will advance the profession.