Complexity.

I predict a bright future for complexity. Have you ever considered how complicated things can get, what with one thing always leading to another?

~ E. B. White, Quo Vadimus? or The Case for the Bicycle (Garden City Publishing, 1946)

 

Everything is connected.

One thing does lead to another. And connections between them can be drawn. So let’s connect the dots.

In 2002, as AEA president, I chose as the theme of the meeting “Evaluation: A Systematic Process that Reforms Systems.”

Evaluation doesn’t stand in isolation; it is not something that is added on at the end as an afterthought.

Many program planners see evaluation that way, unfortunately: only as an add-on, at the end.

Contrary to many people’s belief, evaluators need to be included at the outset of the program. They also need to be included at each stage thereafter (Program Implementation, Program Monitoring, and Program Delivery; Data Management and Data Analysis; Program Evaluation Utilization).

Systems Concepts.

Shortly after Evaluation 2002 (in 2004), the Systems Evaluation Topical Interest Group was formed.

In 2007, AEA published “Systems Concepts in Evaluation: An Expert Anthology,” edited by Bob Williams and Iraj Imam (who died as this volume was going to press).

This volume does an excellent job of demonstrating how evaluation and systems concepts are related.

It connects the dots.

In that volume, Gerald Midgley writes about “the intellectual development of [the] systems field, how this has influenced practice and importantly the relevance for all this to evaluators and evaluation.” It is the pivotal chapter (according to the editors).

While it is possible to trace ideas about holistic thinking back to the ancient Greeks, systems thinking is probably best attributed to the ideas of von Bertalanffy [Bertalanffy, L. von. (1950). The theory of open systems in physics and biology. Science, 111, 23–29.]

I would argue that the complexity concept goes back to at least Alexander von Humboldt (1769–1859), way before von Bertalanffy. He was an intrepid explorer and helped create modern environmentalism. Although environmentalism is a complex word, it really describes a system. With connections. And complexity.

Suffice it to say, there are no easy answers to the problems faced by professionals today. Complex. Complicated. And one thing leading to another.

my two cents.

molly.

Key Evaluation Questions.

My friend and colleague Patricia Rogers shared the BetterEvaluation discussion of Key Evaluation Questions. The opening paragraphs were enough to get me reading. She says:

Key Evaluation Questions (KEQs) are the high-level questions that an evaluation is designed to answer. Having an agreed set of KEQs makes it easier to decide what data to collect, how to analyze it, and how to report it.

KEQs should be developed by considering the type of evaluation being done, its intended users, its intended uses (purposes), and the evaluative criteria being used.

Our BetterEvaluation task page ‘Specify Key Evaluation Questions’ runs through some of the main considerations to keep in mind when developing your KEQs, and gives some useful examples and resources to help you.

Choosing Key Evaluation Questions.

How do we know we’re making a difference?
Are the tactics we are using appropriate?
What are the meaningful signs of progress?
How does impact relate?

In the graphic above (for which I am grateful, although I could not find a source), the closest these steps come to asking “key evaluation questions” is the step “define evaluation questions.”

I want to reiterate: KEQs are NOT the specific questions that are asked in an interview or a questionnaire.

So if you have done all the steps listed above, who defines the evaluation questions?

Or does it (as Rogers suggests) depend on the type of evaluation being done? (To be fair, she also suggests that the questions make it easier to decide what data to collect, how to analyze it, and how to report it.)

Now I’ve talked about the type of evaluation being done before.

Perhaps the type of evaluation is the key here.

There are seven types of evaluation (formative, process, outcome, economic, impact, goal-based, and summative). Each of those seven types addresses a specific activity; each has a specific purpose. So it would make sense that the questions that are asked are different. Does each of those types of evaluation need to answer the above questions?

If we are looking to improve the program (formative, usually), do we really need to know if we are making a difference?
If the question is “how” (process) the program was delivered, what do we really need to know? We do need to know whether the tactics (approaches) we are using are appropriate.
For any of the other purposes, we do need to know if a difference occurs. And the only way to know what difference we made is to ask the participants.

Post Script

Oh, since I last posted, I was able to secure a copy of the third edition of Levin’s book on economic evaluation. Henry Levin wrote it with Patrick J. McEwan, Clive Belfield, A. Brooks Bowden, and Robert Shand. It is worth a read (much has changed since the second edition, specifically with “questions of efficiency and cost-effectiveness”).

 

Assumptions.

Assumptions.

You know the old saying about when you assume.

I’ve talked about assumptions here and here. (AEA365 talks about them here.)

Each of those times I was talking about assumptions, though not necessarily from the perspective of today’s post.

I still find that making assumptions is a mistake as well as a cognitive bias. And it does…

Today, though, I want to talk about assumptions that evaluators can make, and in today’s climate, that is dangerous.

So, let me start with an example.

Evaluation is political. I am reminded of that fact when I least expect it.

In yesterday’s AEA365 post, I am reminded that social justice and political activity may be (probably are) linked, probably sharing many common traits.

In that post the author lists some of the principles she used recently:

  1. Evaluation is a political activity.
  2. Knowledge is culturally, socially, and temporally contingent.
  3. Knowledge should be a resource of and for the people who create, hold, and share it.
  4. There are multiple ways of knowing (and some ways are privileged over others).

Evaluation is a trans-discipline, drawing from many, many other ways of thinking. We know that politics (or anything political) is socially constructed. We know that ‘doing to’ is inadequate because ‘doing with’ and ‘doing as’ are ways of sharing knowledge. (I would strive for ‘doing as.’) We also know that there are multiple ways of knowing.

(See Belenky, Clinchy, Goldberger, and Tarule, Women’s Ways of Knowing, Basic Books, 1986, as one.)

OR

(See Gilligan, In a Different Voice, Harvard University Press, 1982, among others.)

How do evaluation, social justice, and politics relate?

What if you do not bring representation of the participant groups to the table?

What if they are not asked to be at the table, or asked for their opinion?

What if you do not ask the questions that need to be asked of that group?

To whom ARE your questions being addressed?

Is that equitable?

Being equitable is one aspect of social justice. There are others.

Evaluation needs to be equitable.

 

I will be in Atlanta next week at the American Evaluation Association conference.

Maybe I’ll see you there!

my two cents.

molly.

Process is the “how”.

Recently, reminded of the fact that process is the “how,” I had the opportunity to help develop a Vision statement and a Mission statement.

The person who was facilitating the session provided the group with clear guidelines.

The Vision statement, defined as “the desired future condition,” describes what will happen in 2-5 years (i.e., what will change?). We defined the change occurring (i.e., in the environment, the economy, the people). The group also identified what future conditions would be possible. We would write the vision statement so that it would happen within 2-5 years, be practical, be measurable, and be realistic. OK…

And be short…because that is what vision statements are.

The Mission statement (once the Vision statement was written and accepted) defined HOW we would get to the vision. This reminded me of process, something that is important in evaluation. So I went to my thesaurus to find out what that source said about process. Scriven to the rescue, again.

Process Evaluation

Scriven, in his Evaluation Thesaurus, defines process as the activity that occurs “…between the input and the output, between the start and finish.” Sounds like “how” to me. Process relates to process evaluation. I suggest you read the section on process evaluation on page 277 of the above-mentioned source.

Process evaluation rarely functions as the sole evaluation tool because of weak connections between “output quantity and quality”. Process evaluations will probably not generalize to other situations.

However, PROCESS evaluation “…must be looked at as part of any comprehensive evaluation, not as a substitute for inspection of outcomes…” The factors include “the legality of the process, the morality, the enjoyability, the truth of any claims involved, the implementation…, and whatever clues…” that can be provided.

Describing “how” something is to be done is not easy. It is neither output nor outcome. Process is HOW something will be accomplished if you have specific inputs. It happens between the inputs and the outputs.

To me, the group needs to read about process evaluation when crafting the mission statement in order to get to the HOW.

my two cents.

molly.

Sheila Robinson has an interesting post which she titled “Outputs are for programs. Outcomes are for people.” Sounds like a logic model to me.

Evaluating something (a strategic plan, an administrative model, a range management program) can be problematic. Especially if all you do is count. So: “Do you want to count?” OR “Do you want to determine what difference you made?” I think it all relates to outputs and outcomes.

Logic model

The model below explains the difference between outputs and outcomes.

(I tried to find a link on the University of Wisconsin website and UNFORTUNATELY it is no longer there…go figure. Thanks to Sheila, I found this link, which talks about outputs and outcomes.) I think this model makes clear the difference between Outputs (activities and participation) and Outcomes-Impact (learning, behavior, and conditions).

“It always seems impossible until it’s done.” ~ Nelson Mandela

How many times have you shaken your head in wonder? in confusion? in disbelief?

Regularly throughout your life, perhaps. (Now if you are a wonder kid, you probably have just ignored the impossible and moved on to something else.) Most of us will have been in awe; uncertainty; incredulity. Most of us will always look at that which seems impossible and then be amazed when it is done. (Mandela nelson mandela 1 was such a remarkable man who had such amazing insights.) Continue reading