Cartoons

 

Chris Lysy draws cartoons.

Evaluation and research cartoons: http://blogs.oregonstate.edu/programevaluation/files/2014/06/evaluation-and-project-working.jpg

 

http://blogs.oregonstate.edu/programevaluation/files/2014/06/research-v.-evaluation.jpg  http://blogs.oregonstate.edu/programevaluation/files/2014/06/I-have-evidence-cartoon.png

Logic Model cartoons: http://i2.wp.com/freshspectrum.com/wp-content/uploads/2014/03/Too-complex-for-logic-and-evidence.jpg

Presentation cartoons.

 

Data cartoons: http://i0.wp.com/freshspectrum.com/wp-content/uploads/2013/09/wpid-Photo-Sep-27-2013-152-PM1.jpg

More Cartoons

He has offered an alternative to presenting survey data, with a wonderful cartoon for it:

Survey results are in. Who's ready to spend the next hour looking at poorly formatted pie charts?

He is a wonderful resource. Use him. You can contact him through his blog, fresh spectrum.

my two cents.

molly.

 

 

Sep 14

Decisions

How do we make decisions when we think none of the choices are good?

(Thank you for this thought, Plexus Institute.)

No, I’m not talking about the current political situation in the US. I’m talking about evaluation.

The lead for this email post was “Fixing the frame alters more than the view”.

Art Markman makes this comment (the “how do we make decisions…” comment) here. He says “If you dislike every choice you’ve got, you’ll look for one to reject rather than one to prefer—subtle difference, big consequences.” He based this opinion on research, saying that the rejection mind-set allows us to focus on negative information about options and fixate on the one with the smallest downside.

Sep 07
Filed Under (program evaluation) by Molly on 07-09-2016

Making a difference

I wrote a blog about making a difference. Many people have read the original post recently, and there have been many comments about it and the follow-up posts. Most people have made supportive comments. For example:

  1. “I think you’re on the right track – being consistent about adding fresh content and trying to make it meaningful for your audience.”–Kevin;
  2. “Mr. Schaefer is taking stock of his blog–a good thing to do for a blog that has been posted for a while. So although he lists four innovations, he asks the reader to “…be the judge if it made a difference in your life, your outlook, and your business.”– Ưu điểm của máy lọc nước nano;
  3. “Yes, your posts were made sense and a difference. If you think that your doing able to help others, keep going and do the best.”– Samin Sadat;
  4. “Its refreshing to see an academic even pose the question “does this blog make a difference’. Success for You.”– Raizaldi; and
  5. “You are getting the comments and that eventually means that yes this blog is making a difference out there. Keep the good work up.”– Himanshu.

Less than a supportive comment

Some people have made a less than supportive comment. For example:

  1. “Wow this pretty outdated by 2016 standards..any updates to the post?”–Dan Tanduro (admittedly, this comment refers to a post I did not link above although linked here).

Some other comments

Some people have made comments that do not relate to content yet are relevant. For example:

  1. “Hello, I have some knowledge of blogspot, but you can teach how to make the blog more faster and enough to our visits. I Think WordPress is better than blogspot, but is only my opinion…”– John Smith; and
  2.  “It’s interesting how careers cross paths, while I am not directly connected to the world of qualitative research, I have found myself trying to understand and integrate it into my daily workload more and more.” –Steinway

Responses

Making a difference. I will keep writing. Making a difference needs to be measured. I keep in mind that stories (comments) are data with a soul.

Less than a supportive comment. What is outdated? I need specific comments to which to respond, please. Also, the post being referred to is from April 2012…over four years ago.

Some other comments. I can’t teach how to make a blog faster, for I know nothing about blogspot. I only know a little about WordPress. Stories are data with a soul–important to remember when dealing with qualitative data.

my two cents.

molly.

 

 

AEA365 is honoring living evaluators for Labor Day (Monday, September 5, 2016).

Some of the living evaluators I know (Jim Altschuld, Tom Chapel, Michael Patton, Karen Kirkhart, Mel Mark, Lois-Ellin Datta, Bob Stake); some of them I don’t know (Norma Martinez-Rubin, Nora F. Murphy, Ruth P. Saunders, Art Hernandez, Debra Joy Perez). One I’m not sure of at all (Mariana Enriquez). Over the next two weeks, AEA365 is hosting a recognition of living evaluator luminaries.

The wonderful thing is that this gives me an opportunity to check out those I don’t know; to read about how others see them, what makes them special. I know that the relationships that develop over the years are dear, very dear.

I also know that the contributions that these folks have made to evaluation cannot be captured in 450 words (although we try). They are living giants, legends if you will.

These living evaluators have helped move the field to where it is today. Documenting their contributions to evaluation enriches the field. We remember them fondly.

If you don’t know them, look for them at AEA ’16 in Atlanta. Check out their professional development sessions or their other contributions (paper, poster, round-table, books, etc.). Many of them have been significant contributors to AEA; some have only been with AEA since the early part of this century. All have made a meaningful contribution to AEA.

Many evaluators could be mentioned and are not. Sheila B. Robinson suggests that “…we recognize that many, many evaluators could and should be honored as well as the 13 we feature this time, and we hope to offer another invitation next year for those who would like to contribute a post, so look for that around this time next year, and sign up!”

Evaluators honored

James W. Altschuld

Thomas J. Chapel

Norma Martinez-Rubin

Michael Quinn Patton

Nora F. Murphy

Ruth P. Saunders

Art Hernandez

Karen Kirkhart

Mel Mark

Lois-Ellin Datta

Debra Joy Perez

Bob Stake

Mariana Enriquez (photo not known/found)

my two cents.

molly.

Sheila Robinson has an interesting post which she titled “Outputs are for programs. Outcomes are for people.”  Sounds like a logic model to me.

Evaluating something (a strategic plan, an administrative model, a range management program) can be problematic. Especially if all you do is count. So “Do you want to count?” OR “Do you want to determine what difference you made?” I think it all relates to outputs and outcomes.

 

Logic model

 

The model below explains the difference between outputs and outcomes.

(I tried to find a link on the University of Wisconsin website and UNFORTUNATELY it is no longer there…go figure. Thanks to Sheila, I found this link, which talks about outputs and outcomes.) I think this model makes clear the difference between Outputs (activities and participation) and Outcomes-Impact (learning, behavior, and conditions).
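The outputs/outcomes distinction can be sketched in code. This is purely my own illustration (the class, field names, and example entries are hypothetical, not an official logic model API); only the category labels come from the model described above.

```python
# Illustrative sketch of the logic model categories the post describes.
# All names here are mine; the categories follow the model's labels.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    outputs_activities: list = field(default_factory=list)     # what the program does
    outputs_participation: list = field(default_factory=list)  # who it reaches
    outcomes_learning: list = field(default_factory=list)      # short-term change
    outcomes_behavior: list = field(default_factory=list)      # medium-term change
    outcomes_conditions: list = field(default_factory=list)    # long-term impact

    def counts_only(self) -> dict:
        """'Do you want to count?' -- counting only ever sees the outputs."""
        return {"activities": len(self.outputs_activities),
                "participants": len(self.outputs_participation)}

lm = LogicModel(outputs_activities=["workshop"],
                outputs_participation=["participant A", "participant B"],
                outcomes_behavior=["adopted a new practice"])
print(lm.counts_only())  # → {'activities': 1, 'participants': 2}
```

Notice that `counts_only` never touches the outcome fields: the behavior change is invisible to a count, which is the post's point about counting versus determining the difference made.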

Aug 19
Filed Under (criteria, program evaluation) by Molly on 19-08-2016

Probable? Maybe. Making a difference is always possible.

The Oxford English Dictionary defines possible as capable of being (may/can exist, be done, or happen). It defines probable as worthy of acceptance, believable.

Ray Bradbury: “I define science fiction as the art of the possible. Fantasy is the art of the impossible.”

Somebody asked me what was the difference between science fiction and fantasy. Certainly the simple approach is that science fiction deals with the possible (if you can think it, it can happen). Fantasy deals with monsters, fairies, goblins, and other mythical creatures, i.e., majic and majical creatures.

(Disclaimer: I personally believe in majic; much of fantasy deals with magic.) I love the Arthurian legend (it could be fantasy; it has endured for so long it is believable). It is full of majic. I especially like the Marion Zimmer Bradley book, The Mists of Avalon. (I find the feminist perspective refreshing.)

Is fantasy always impossible as Bradbury suggests, or is it just improbable?  (Do the rules of physics apply?) This takes me back to Bradbury’s quote and evaluation after the minor digression. Bradbury also says that “Science fiction, again, is the history of ideas, and they’re always ideas that work themselves out and become real and happen in the world.” Not unlike evaluation. Evaluation works itself out and becomes real and happens. Usually.

Evaluation and the possible.

Often, I am invited to be the evaluator of record after the program has started. I sigh. Then I have a lot of work to do. I must teach folks that evaluation is not an “add on” activity. I must also teach the folks how to identify the difference the program made. Then there is the issue of outputs (activities, participants) vs. outcomes (learning, behavior, conditions). Many principal investigators want to count differences pre-post.

Does the “how many” provide a picture of what difference the program made? If you start with no or few participants and you end with many participants, have you made a difference? Yes, it is possible to count. Counts often meet reporting requirements. They are possible. So is documenting the change in knowledge, behavior, and conditions. It takes more work and more money. It is possible. Will you get to world peace? Probably not. Even if you can think it. World peace may be probable; it may not be possible (at least in my lifetime).

my two cents.

molly.

 

Aug 12
Filed Under (Methodology, program evaluation) by Molly on 12-08-2016

AEA365 ran a blog on vulnerability recently (August 5, 2016). It cited the TED talk by Brené Brown on the same topic. Although I really enjoyed the talk (I haven’t met a TED talk I didn’t like), it was more than her discussion of vulnerability that I enjoyed (although I certainly enjoyed learning that vulnerability is the birthplace of joy and that connection is why we are here).

She talked about story and its relationship to qualitative data. She says that she is a qualitative researcher and she collects stories. She says that “stories are just data with a soul”. That made a lot of sense to me.

See, I’ve been struggling to figure out how to turn the story into a meaningful outcome without reducing it to a number. (I do not have an answer yet. If any of you have any ideas, let me know.) She says (quoting a former research professor) that if you cannot measure it, it does not exist. If it doesn’t exist, then is whatever you are studying a figment of your imagination? So is there a way to capture a story and aggregate that story with other similar stories to get an outcome WITHOUT REDUCING IT TO A NUMBER? Given that stories are often messy, often complicated, and rich in what they tell the researcher, it occurred to me that stories are more than themes and content analysis. Stories are “data with a soul”.

Qualitative Data

Yet in any book on qualitative data analysis you will see that there is confusion in the analysis process. Is it the analysis of qualitative data OR is it the qualitative analysis of data? Where do you put the modifier “qualitative”? To understand the distinction, a 2×2 visual might be helpful. (Adapted from Bernard, H. R. & Ryan, G. W. (1996). Qualitative data, quantitative analysis. Cultural Anthropology Methods Journal, 8(1), 9–11. Copyright © 1996 Sage Publications.)

                         Qualitative data    Quantitative data
Qualitative analysis            A                    B
Quantitative analysis           C                    D

We are doing data analysis in all four quadrants. We are analyzing and capturing the deeper meaning of the data in cell A. Yes, we are analyzing data in the other cells (B, C, and D), just not capturing the deeper meaning of those data. Cell D is the quantitative analysis of quantitative data; Cell B is the qualitative analysis of quantitative data; and Cell C is the quantitative analysis of qualitative data. So the question becomes “Do you want deeper meaning from your data?” or “Do you want a number from your data?” (I’m still working on relating this to story!)
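The four cells can be written out as a small lookup. This is a minimal sketch of my own (the dictionary, function, and example labels are hypothetical, not from Bernard & Ryan); it simply encodes the cell definitions from the paragraph above.

```python
# The Bernard & Ryan 2x2 as a lookup: key is (kind of analysis, kind of data).
# All names here are illustrative, not an established API.
QUADRANTS = {
    ("qualitative", "qualitative"): "A: deeper meaning (e.g., interpretive reading of stories)",
    ("qualitative", "quantitative"): "B: qualitative analysis of quantitative data",
    ("quantitative", "qualitative"): "C: quantitative analysis of qualitative data (e.g., word counts)",
    ("quantitative", "quantitative"): "D: statistical analysis of numeric data",
}

def classify(analysis: str, data: str) -> str:
    """Look up the 2x2 cell for an (analysis, data) pairing."""
    return QUADRANTS[(analysis.lower(), data.lower())]

# Counting words in interview transcripts is cell C, not cell A:
print(classify("quantitative", "qualitative"))
```

The lookup makes the post's point concrete: the same qualitative data (stories) can land in cell A or cell C, depending entirely on the kind of analysis you choose.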

It all depends on what you want when you analyze your data. If you want to reduce the data to a number, focus on cells B, C, and D. If you want deeper meaning, focus on cell A. What you want (and how you interpret the data) is where personal and situational biases occur. No, you cannot be the “objective and dispassionate” scientist. That doesn’t happen in today’s world (probably ever, though I can only speak of today’s world). Everyone has biases, and they rear their heads (perhaps ugly heads) when least expected.

You have to try. Regardless.

my two cents.

molly.

 

 

Aug 05
Filed Under (criteria, program evaluation) by Molly on 05-08-2016

To handle yourself, use your head; to handle others, use your heart. ~~Eleanor Roosevelt

This quote is often attributed to Eleanor Roosevelt (1884-1962); some sites attribute it to Donald Anderson Laird (1897-1969), a psychologist and author (no photo found). That attribution is probably more accurate. I’m not sure that the origin of the saying is really important. It may be enough to keep in mind the saying itself. (I know–how does this relate to evaluation? Trust me, it does.)

Before I was an evaluator, I was a child therapist (I also treated young women). I learned many skills as a therapist that have served me well as an evaluator. Skills like listening, standing up for yourself, looking at alternatives. Which leads me to this saying. I had to “handle” others all the time, at the same time I had to “handle” myself. I could not “blow up” when reprimanded. I could not become discouraged when someone (the client, the funder) criticized me. I had to learn to laugh when the joke was on me. I had to keep my spirits up when things went wrong. I had to keep cool in emergencies. I had to learn to tune out gossip and negative comments from others. This was a hard time for me. I tend to be passionate when I have an opinion; I have/had opinions (often).

As an evaluator, I am still passionate. Once my evaluation “on” button is pushed, it is hard to turn it off. Yet I still have to handle people. This morning, for example, I met with a fellow faculty member. I had to listen. I had to look for (and at) alternatives. I “handled” with my head; remember, I am passionate about evaluation. I provided her with alternatives and followed through with those alternatives. I handled with my heart.

When others are involved (and in evaluation there are always others), they must be handled with care, with the heart. It goes back to the standards (propriety) and the guiding principles (integrity/honesty, direct respect for people, and responsibilities for general and public welfare). In the current times, it is especially important to have direct respect for people. All people. (Regardless of race, ethnicity, religion, gender identity, sex, national origin, veteran status, and disability.) To be honest and have integrity. One way to make sure you have integrity is to handle with your heart.

 

Jul 29
Filed Under (program evaluation) by Molly on 29-07-2016

Recently, I read a Washington Post article on innovation. The WP interviewed Calestous Juma (see below), author of the July 2016 book, “Innovation and Its Enemies: Why People Resist New Technologies.” The book was published by Oxford University Press (prestigious, to be sure). Priced at $29.95 plus an estimated s/h of $5.50, it sounds like a good purchase. There is quite a bit of information about the book and the author on the Oxford University Press site. This prompted me to think about what has changed in evaluation (not just technology) over the last 30+ years. First, though, I want to talk about the article.

Article by Juma.

Calestous Juma (Courtesy of Harvard)

Juma says that “people don’t fear innovation simply because the technology is new, but because innovation often means losing a piece of their identity or lifestyle.” He goes on to say that “Innovation can also separate people from nature or their sense of purpose.” He argues that these two things are fundamental to the human experience. I have talked about sense of purpose previously. I wonder if nature is part of purpose or if a sense of purpose comes from a person’s nature?

Jul 22

Thinking. We do it all the time (hopefully). It is crucial to making even the smallest decisions (what to wear, what to eat), and bigger decisions (where to go, what to do). Given this challenging time, even news watchers would be advised to use evaluative and critical thinking.  Especially since evaluation is an everyday activity.

This graphic was provided by WNYC. (There are other graphics; use your search engine to find them.) It makes good sense to me and applies to almost every newscast (even those without a shooter!).