Alternative facts.

Never. Never has evaluation been questioned with the label of “alternative facts.”

Over the years, I have been very aware that evaluation is a political activity.

I have talked about evaluation being political (here, and here, and here, and here).

But is it? Is it just another way of making the answer be what we want it to be? A form of alternative fact?

I’ve been an evaluator for a long time. I want to make a difference to the people who experience my programs (or the programs for which I’m consulting as an external evaluator). The thought that I might be presenting “alternative facts” is troublesome.

Did I really determine that outcome? Or is the outcome bogus? Liars use statistics, you know. (This is a paraphrase of a quote that Mark Twain attributed to Benjamin Disraeli.)

Big news brings out the fakers. But are evaluation results “big news”? Or…do people not want to hear what is actually happening, what the outcome really is?

Reminds me of 1984 (George Orwell): War is peace. Freedom is slavery. Ignorance is strength (the English Socialist Party, a.k.a. INGSOC). Kevin Siers added, in his cartoon of Sean Spicer, “2017 is 1984.” Two contradictory ideas held as correct at the same time.

Statistics.

Statistics is a tool that evaluators use on a regular basis. It allows evaluators to tease apart various aspects of a program: the “who,” the “what,” the “when,” maybe even the “why.” Statistics can certainly help determine if I made a difference. But how I see statistics may not be how you see them, interpret them, use them. Two people can look at the same set of statistics and disagree. Is that an example of alternative facts?
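That disagreement can be concrete: the very same data can support two honest, conflicting summaries. A tiny sketch (the numbers are invented purely for illustration):

```python
from statistics import mean, median

# Hypothetical score changes for eight program participants
# (made-up numbers; one outlier improved enormously).
score_changes = [1, 0, 2, -1, 1, 0, 2, 25]

print(f"mean change:   {mean(score_changes):.2f}")  # 3.75 -- "the program worked!"
print(f"median change: {median(score_changes)}")    # 1.0  -- "barely moved the needle"
```

Both numbers are facts; the disagreement lives in the interpretation, not the arithmetic.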

Bias.

Everyone comes to any program with preconceived bias. You, the evaluator, want to see a difference. Preferably a statistically significant difference, not just practical significance (although that would be nice as well).

Even if you are dealing with qualitative data, and not with quantitative data yielding statistics, you come to the program with bias. Objectivity is not an option. You wouldn’t be doing the program if you didn’t think that the program will make a difference. Yet the individuals who funded the program (or who otherwise receive the final report) can, and do, refuse to accept the report as it is written. That is not what they want to see/hear/read. Does that make the report alternative facts? Or is that bias speaking without acknowledging the bias?

Perhaps Kierkegaard is right.

There are only two ways you can be fooled. One is to believe what isn’t true; the other is to refuse to believe what is true.

 

my .

molly.

Filed Under (program evaluation) by Molly on 14-06-2017

A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty.

~~Winston Churchill

 

“A pessimist is one who makes difficulties of his opportunities and an optimist is one who makes opportunities of his difficulties.”

~~Harry S. Truman

Two sides of the same coin? A different way to say the same thing?

Pessimist, optimist, realist?

So do you see the glass half empty, or half full?

OR are you a realist being able to refill the glass (with ice and a couple of shots of your favorite beverage)?

Evaluation is a field which includes all sorts of folks.

The optimist says that they can find a way.

The pessimist says that it isn’t likely.

The realist says it is possible and probable.

Working it out

One way to work it out is to use a logic model.

Another is to use a  theory of change.

Keep in mind that you might be wrong even if you apply either or both of the above.

(I remember a major professor of mine saying that the theory might be wrong; it was.)

So sometimes, using the tools of evaluation may not get you where you want to be.

There are many approaches to learning something. For example, you can test a hypothesis. (Bill Nye the Science Guy quips that a hypothesis is literally an “idea below,” hypo- meaning under.) First, develop a plan; then test it by gathering data. You analyze the data. And yes, there is some support for your hypothesis.
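That plan-gather-analyze loop can be sketched in a few lines. All numbers here are invented, and a permutation test simply stands in for whatever analysis actually fits your design:

```python
import random
from statistics import mean

# Hypothetical scores for a program's control and treatment groups
# (made-up numbers for illustration only).
control = [61, 58, 64, 60, 57, 63, 59, 62]
treatment = [68, 71, 65, 70, 66, 72, 69, 67]

observed = mean(treatment) - mean(control)

# Permutation test: if the program made no difference, the group labels
# are interchangeable, so shuffle them and see how often a difference
# this large appears by chance alone.
random.seed(1)
pooled = control + treatment
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[len(control):]) - mean(pooled[:len(control)])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed difference = {observed}, p = {p_value}")
```

A small p-value is “some support for your hypothesis”; it is not proof, and it says nothing about whether the difference matters practically.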

Or you see an occurrence that repeats itself under various conditions. You find that the emergent idea is dominant. So you control the situation and see if the idea emerges once again. It does!

I’m sure you can use trial and error; you can guess what the outcome will be; or you can follow what your parent did (it worked for them, so it should work for you).

If you apply the scientific method to learning, you will usually test the hypothesis.

You will deal with humans at some point. The human situation will be your guide.

But what if…

The people you are working with/for want it their way?

What if the end result is really a power-and-control issue and not one of transparent findings (good or bad)?

How do you, the evaluator, address the implied (or actual) power and control?

Where do Standard III (Propriety) and Guiding Principles D and E (Respect for People and Responsibilities for General and Public Welfare, respectively) enter into the discussion?

Out of your hands, you say? NO. Not really.

You do have a responsibility.

And an obligation.

You will have an opportunity.

Does that make you a pessimist? Or an optimist? Or a realist? Only you will decide.

my  .

molly.

 

 

Filed Under (program evaluation) by Molly on 08-06-2017

Love. Revisited.

Love is the most radically subversive activism of all, the only thing that ever changed anyone.

~~Ann Voskamp 

 

Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle. As with all matters of the heart, you’ll know when you find it.

~~Steve Jobs

 

Both of these quotes speak to me. So, let me tell you a story. And then how they relate to each other.

Two stories.

I fell in love with a four-year-old autistic boy a long time ago. I realized early on that stability and as much certainty as possible were important to him. I arranged to care for him (I was working in a hospital at the time) whenever I was working (stability). I don’t know what happened the two days I didn’t work; I just knew that he needed to “catch up” to where he had been before (as much certainty as possible). Slowly, oh, ever so slowly, he recovered from his illness and he “settled down.” I like to think he trusted me; trusted me to be there; be there for him. And I was (even though I looked after other people). Then one day he wasn’t there anymore.

I was bereft. I mourned the loss of that delicate child. I could only hope that he “made it”. I do not know.

That was when I realized that I could make a difference in the lives of emotionally disturbed children. I was in love. I would do great work “saving” emotionally disturbed children. (NOT)

Fast forward many years. I was recently out of graduate school. It was spring. And spring had arrived in a burst of color and fragrance. Of course I took advantage of this opportunity and paused; I had an epiphany. I clearly saw three items:  do good work; be a good friend; grow spiritually/personally.  I realized  that the work I was doing filled a large part of my life (although I strived for a balance). I was satisfied. I was in love. Again. And even though it wasn’t with emotionally disturbed children, I was making a difference. 

I had found my passion. I didn’t settle. I found what mattered to me.

 

Today.

Today (many more years since the epiphany and the love of my life) I find myself wanting to make a difference.  Still.

I certainly can do it with evaluation. And do. Does that program have value? Did it make a difference in the lives of the target audience? I look around at the world, the country, and wonder how can I make a difference; how can I be an activist (subversive or not); how will that change anything? Yes, I believe love trumps hate. Yes, I believe in making a difference. Yes, I believe in doing good work (or as Steve Jobs said, great work).

Does that mean I need to work at this more OR does it mean I need to give myself permission to walk away from the struggle. To pause. To enjoy the roses now that they are in bloom?

 

my .

molly.

 

Filed Under (program evaluation) by Molly on 31-05-2017

Evaluations.

Recently, I talked about how evaluations have changed.

They are still familiar yet they are different.

I have talked about formative and summative evaluation. (Thank you, Michael Scriven).

Those are two of the seven types of evaluation.

The other five are:

  1. Process Evaluation
  2. Outcome Evaluation
  3. Economic Evaluation
  4. Impact Evaluation
  5. Goals-based Evaluation

Yes, this discussion was from another blog.

So let’s discuss these other evaluations (which the author says you need to know to have an effective monitoring and evaluation system).

Choosing the evaluation for your program depends on where you are in the development of your program. If you are in the conceptualization phase there is one evaluation to use; the implementation phase uses others; and the end of the project will use yet a different evaluation or evaluations.

Going through them may help.

Conceptualization phase.

Formative evaluation typically is conducted during the development or improvement phase. By preventing waste and identifying potential areas of concern, formative evaluation increases the chances of success. It helps improve the program. Formative evaluation is often conducted more than once. It is usually contrasted with summative evaluation. For more information, see ; in fact, see it for all these evaluations, except as noted.

Implementation phase.

Process Evaluation usually refers to an evaluation of the treatment that focuses entirely on variables between input and output data. It can also refer to the process component of the evaluation. Process evaluation occurs during the implementation phase.

Outcome Evaluation is often called “payoff evaluation.” Outcomes are the effects observed during treatment. We would be wise to distinguish among immediate outcomes, middle (or end-of-treatment) outcomes, and long-term outcomes. Outcome evaluation occurs during the implementation phase.

Economic Evaluation is also known as cost-benefit (or benefit-cost) analysis or cost-effectiveness analysis. For a detailed description of these types of evaluation, see  OR . More and more, program designers are asked to do more with fewer resources and want to know how efficient the program is.

Project closure (end) phase.

Impact Evaluation is an evaluation that focuses on outcomes. It occurs at the end of the project. Although measuring long-term impact is desirable, there is often no funding available for that evaluation, so the impact evaluation often backs off from long-term impact to what can be measured when the project ends.

Summative Evaluation is conducted after the completion of the program, usually for the benefit of some external audience or funding agency. It should not be confused with outcome evaluation (an evaluation focused on outcomes rather than on process).

Goal-based Evaluation is any type of evaluation based on the goals and objectives of the program. It is done at the end of a program that is not ongoing. It often involves SMART objectives (Specific, Measurable, Attainable, Relevant, and Timely).
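One way to keep the phase-to-evaluation pairing straight is a simple lookup table. This sketch is my own summary of the groupings above, not a standard taxonomy:

```python
# Which evaluation types fit which program phase, per the discussion above.
# (My own rough grouping for illustration, not an authoritative list.)
EVALUATIONS_BY_PHASE = {
    "conceptualization": ["formative"],
    "implementation": ["process", "outcome", "economic"],
    "closure": ["impact", "summative", "goal-based"],
}

def evaluations_for(phase: str) -> list[str]:
    """Return the evaluation types suggested for a given program phase."""
    return EVALUATIONS_BY_PHASE.get(phase.lower(), [])

print(evaluations_for("Implementation"))  # ['process', 'outcome', 'economic']
```

In practice a real program often mixes these; the table is a starting point, not a rule.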

So what are you going to use to evaluate your program? You have a choice.

my .

molly.

 

 

Filed Under (program evaluation, program planning) by Molly on 16-05-2017

Assumptions.

Assumptions.

You know the old saying about when you assume.

I’ve talked about assumptions here and here. (AEA365 talks about them here.)

Each of those times I was talking about assumptions, though not necessarily from the perspective of today’s post.

I still find that making assumptions is a mistake as well as a cognitive bias. And it does… .

Today, though, I want to talk about assumptions that evaluators can make, and in today’s climate, that is dangerous.

So, let me start with an example.

Filed Under (Methodology, program evaluation) by Molly on 08-05-2017

Professional Development.

AEA365 shares some insights into its use in evaluating professional development.

The authors cite Thomas Guskey (1, 2). I didn’t know Thomas Guskey. I went looking.

Turns out, Donald Kirkpatrick (1924-2014) was the inspiration for Thomas Guskey’s five-level evaluation model.

Kirkpatrick has four levels in his model (reaction, learning, behavior, results). I’ve talked about them before here and here. I won’t go into them again.

Guskey has added a fifth level. In the middle.

He talks about participant reaction (level 1) and participant learning  (level 2) (like Kirkpatrick).

His third level is different. Here he talks about organization support and change.

Then he adds two additional levels that are representative of Kirkpatrick’s model (level 3 and 4). He adds participant use of new knowledge and skills (Kirkpatrick’s behavior; Guskey’s level 4) and participant learning outcomes (Kirkpatrick’s results; Guskey’s level 5).
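The mapping described above can be laid out side by side. This is my own summary of the pairing as the post describes it, not a table from either author:

```python
# Guskey's five levels alongside their Kirkpatrick counterparts, per the
# description above. Level 3 pairs with None because it is Guskey's addition.
GUSKEY_TO_KIRKPATRICK = {
    1: ("participant reaction", "reaction"),
    2: ("participant learning", "learning"),
    3: ("organization support and change", None),  # Guskey's new middle level
    4: ("participant use of new knowledge and skills", "behavior"),
    5: ("participant learning outcomes", "results"),
}

for level, (guskey, kirkpatrick) in GUSKEY_TO_KIRKPATRICK.items():
    counterpart = kirkpatrick or "no Kirkpatrick counterpart"
    print(f"Guskey level {level}: {guskey} ({counterpart})")
```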

Filed Under (program evaluation) by Molly on 02-05-2017

Differently.

People don’t want something truly new, they want the familiar done differently.

OK. I got this idea from a blog post on sushi, well, actually “California Roll”.

Made me think. Evaluation is a service; that service is familiar; over the years it has been done differently.

That moves the profession along–like language drift, only evaluation drift.

It is valuable to know formative/summative. (Thank you, Michael Scriven.)

It is also valuable to know that evaluation wouldn’t be where it is today if you didn’t understand that concept and how it applies to what you are doing with your evaluation.

So evaluation is like sushi (California Roll). Evaluation takes what is familiar and repackages it into something that will advance the profession.

Filed Under (program evaluation) by Molly on 24-04-2017

Engagement

Engagement is about evaluation.

I read a lot of blogs.

One blog said: “Development programs have to prove that they have had a strong and positive impact.”

Sounds like engagement to me.

(And you can’t have engagement without outreach.)

And outreach and engagement often take place beyond the walls of the academy. In community.

What is community?

So I went looking.

Not a definition in Scriven’s book.

Did find a book called Methods for Community-Based Participatory Research for Health, edited by Barbara Israel, Eugenia Eng, Amy J. Schulz, and Edith A. Parker.

The book can be a resource for students, practitioners, researchers, and community members who use CBPR. Probably is.

You would think that CBPR would have a definition of community.

 


Filed Under (program evaluation) by Molly on 17-04-2017

Communication.

Connection. Communication. How important is it that you communicate; that you connect?

In reading over some of the comments I have received through this blog, I came upon this partial quote. (Partial because I didn’t report all of it; the remainder is not relevant.)

“I personally…think (blogging) as a one way channel to transfer any information you have over the web.”

Certainly, transferring information about evaluation from me to you, the reader, is this person’s view of blogging.

There has been a lot in the press (among others) over the last several years about avoiding “blue light” and connecting to real people. People with whom you are friendly; they might even be your friends. (I’m not talking about Facebook.) I’m talking about connections; communications. Talking to people face to face. Real connections. Real communications.

Bonding

Professor Peter Cohen  says (in talking about addiction) “…that human beings have a deep need to bond and form connections. It’s how we get our satisfaction. If we can’t connect with each other, we will connect with anything we can find…He says we should stop talking about ‘addiction’ altogether, and instead call it ‘bonding.’” Bonding. It relates to connections; to communication.


Filed Under (program evaluation) by Molly on 05-04-2017

Innovation, again, leads to two thoughts for today:

  1. The first, from the first Monday video from Scott Reed: Do something. Try anything.
  2. The other, from Harold Jarche, who cites the book Only Humans Need Apply, about automation and intelligent machines.

This does relate to evaluation. Just wait. Patiently.

Where would evaluation be if evaluators didn’t question? Didn’t try anything or something? Evaluators would still be thinking separately; in silos. Would any of the current approaches be available? Would evaluation as a field be where it is today? Not if evaluators didn’t do something; try anything; innovate. Fortunately, evaluators do something.