Key Evaluation Questions.

My friend and colleague Patricia Rogers shared the BetterEvaluation discussion of Key Evaluation Questions. The opening paragraphs were enough to get me reading. She writes:

Key Evaluation Questions (KEQs) are the high-level questions that an evaluation is designed to answer. Having an agreed set of KEQs makes it easier to decide what data to collect, how to analyze it, and how to report it.

KEQs should be developed by considering the type of evaluation being done, its intended users, its intended uses (purposes), and the evaluative criteria being used.

Our BetterEvaluation task page ‘Specify Key Evaluation Questions’ runs through some of the main considerations to keep in mind when developing your KEQs, and gives some useful examples and resources to help you.

Choosing Key Evaluation Questions.

How do we know we’re making a difference?
Are the tactics we are using appropriate?
What are the meaningful signs of progress?
How does impact relate?

In the graphic above (for which I am grateful, although I could not find a source), the closest these steps come to asking “key evaluation questions” is the step “define evaluation questions”.

I want to reiterate: KEQs are NOT the specific questions that are asked in an interview or a questionnaire.

So if you have done all the steps listed above, who defines the evaluation questions?

Or does it (as Rogers suggests) depend on the type of evaluation being done? (To be fair, she also suggests that the questions make it easier to decide what data to collect, how to analyze it, and how to report it.)

Now I’ve talked about the type of evaluation being done before.

Perhaps the type of evaluation is the key here.

 

There are seven types of evaluation: formative, process, outcome, economic, impact, goal-based, and summative. Each of those seven types addresses a specific activity; each has a specific purpose. So it would make sense that the questions asked are different. Does each of those types of evaluation need to answer the questions above?

If we are looking to improve the program (formative, usually), do we really need to know if we are making a difference?
If the question is how (process) the program was delivered, what do we really need to know? We do need to know whether the tactics (approaches) we are using are appropriate.
For any of the other purposes, we do need to know if a difference occurs. And the only way to know what difference we made is to ask the participants.

Post Script

Oh, since I last posted, I was able to secure a copy of the third edition of Levin’s book on economic evaluation. Henry Levin wrote it with Patrick J. McEwan, Clive Belfield, A. Brooks Bowden, and Robert Shand. It is worth a read, as much has changed since the second edition, specifically with “questions of efficiency and cost-effectiveness.”

 
