Filed Under (criteria, Methodology, program evaluation) by Molly on 10-11-2016

Trustworthiness. An interesting topic.

Today is November 9, 2016. An auspicious day, to be sure. (No, I’m not going to rant about November 8, 2016; I’ll just post this and move on with my life.) Keep trustworthiness in mind, I remind myself.

I had the interesting opportunity to review a paper recently that talked about trustworthiness. This caused me much thought, as I was troubled by what was written. I decided to go to my source, “Naturalistic Inquiry.” Given that the paper used a qualitative design, employed a case study method, and talked about trustworthiness, I wanted to find out more. This book was written by two of my longtime evaluation guides, Yvonna Lincoln and Egon Guba. (Lincoln’s name may be familiar to you from the Sage Handbook of Qualitative Research, which she co-edited with Norman Denzin.)


On page 218, they talk about trustworthiness and the conventional criteria for it (internal validity, external validity, reliability, and objectivity), as well as the questions underlying those criteria.

They talk about how the criteria formulated by conventional inquirers are not appropriate for naturalistic inquiry. Guba (1981a) offers four new terms because they have “…a better fit with naturalistic epistemology.” These four terms, and the conventional criteria they propose to replace, are: credibility (internal validity), transferability (external validity), dependability (reliability), and confirmability (objectivity). Read the rest of this entry »




Chris Lysy draws cartoons.

Evaluation and research cartoons.

Logic Model cartoons.

Presentation cartoons.

Data cartoons.

More cartoons.

He has offered an alternative to presenting survey data. He has a wonderful cartoon for this.

Survey results are in. Who's ready to spend the next hour looking at poorly formatted pie charts?

He is a wonderful resource. Use him. You can contact him through his blog, fresh spectrum.

my two cents.




Filed Under (program evaluation) by Molly on 31-08-2016

AEA365 is honoring living evaluators for Labor Day (Monday, September 5, 2016).

Some of the living evaluators I know (Jim Altschuld, Tom Chapel, Michael Patton, Karen Kirkhart, Mel Mark, Lois-Ellin Datta, Bob Stake); some I don’t know (Norma Martinez-Rubin, Nora F. Murphy, Ruth P. Saunders, Art Hernandez, Debra Joy Perez); and one I’m not sure of at all (Mariana Enriquez). Over the next two weeks, AEA365 is hosting a recognition of living evaluator luminaries.

The wonderful thing is that this gives me an opportunity to check out those I don’t know; to read about how others see them and what makes them special. I know that the relationships that develop over the years are dear, very dear.

I also know that the contributions that these folks have made to evaluation cannot be captured in 450 words (although we try). They are living giants, legends if you will.

These living evaluators have helped move the field to where it is today. Documenting their contributions to evaluation enriches the field. We remember them fondly.

If you don’t know them, look for them at AEA ’16 in Atlanta. Check out their professional development sessions or their other contributions (papers, posters, round tables, books, etc.). Many of them have been significant contributors to AEA; some have only been with AEA since the early part of this century. All have made a meaningful contribution to AEA.

Many evaluators could be mentioned and are not. Sheila B. Robinson suggests that “…we recognize that many, many evaluators could and should be honored as well as the 13 we feature this time, and we hope to offer another invitation next year for those who would like to contribute a post, so look for that around this time next year, and sign up!”

Evaluators honored

James W. Altschuld

Thomas J. Chapel

Norma Martinez-Rubin

Michael Quinn Patton

Nora F. Murphy

Ruth P. Saunders

Art Hernandez

Karen Kirkhart

Mel Mark

Lois-Ellin Datta

Debra Joy Perez

Bob Stake

Mariana Enriquez (photo not known/found)

my two cents.


Filed Under (Methodology, program evaluation) by Molly on 25-01-2016

Alan Rickman quote

Alan Rickman died this month. He was an actor of my generation; one who provided me with much entertainment. I am sad. Then I saw this quote on the power of stories. How stories explain. How stories can educate. How stories can help reduce bias. And I am reminded how stories are evaluative.

Dick Krueger did a professional development session (then called a “pre-session”) many years ago. It seems relevant now. Of course, I couldn’t find my notes (which were significant), so I did an online search, using “Dick Krueger and stories” as my search terms. I was successful! (See link.) When I went to the link, he had a whole section on story and storytelling. What I remember most about that session is what he has listed under “How to Analyze the Story,” specifically the four points he lists under problems with credibility:

  • Authenticity – Truth
  • Accuracy – Memory Problems
  • Representativeness and Sampling
  • Generalizability / Transferability

The next time you tell a story think of it in evaluative terms. And check out what Dick Krueger has to say. Read the rest of this entry »

Filed Under (program evaluation) by Molly on 24-07-2015

The survey is a valuable evaluation tool, especially in the world of electronic media. Surveys allow individuals to gather data (both qualitative and quantitative) easily and relatively inexpensively. When I want information about surveys, I turn to the 4th edition of the Dillman book (Dillman, Smyth, & Christian, 2014*). Dillman has advocated the “Tailored Design Method” for a long time. (I first became aware of his method, which he then called the “Total Design Method,” in his 1978 first edition, a thin, 320-page volume [as opposed to the 509-page fourth edition].)

Today I want to talk about the “Tailored Design Method” (originally known as the Total Design Method).

In the 4th edition, Dillman et al. say that “…in order to minimize total survey error, surveyors have to customize or tailor their survey designs to their particular situations.” They are quick to point out (through various examples) that the same procedures won’t work for all surveys. The “Tailored Design Method” refers to customizing survey procedures for each separate survey, based upon the topic of the survey and the audience being surveyed, as well as the resources available and the timeline in use. In his first edition, Dillman indicated that the TDM would produce a response rate of 75% for mail surveys and that an 80%-90% response rate was possible for telephone surveys. Although I cannot easily find the same numbers in the 4th edition, I can provide an example (from the 4th edition, pages 21-22) where the response rate is 77% after a combination of mail and email contacts over one month’s time. They used five contacts of both hard and electronic copy.
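The response-rate arithmetic here can be made concrete with a short sketch. To be clear, these are not Dillman’s data; the sample size and per-contact return counts below are invented for illustration, chosen so the cumulative rate lands at a 77% figure like the one reported:

```python
# Hypothetical example: cumulative response rate across five contacts.
# The counts below are illustrative, not from Dillman et al.
sample_size = 500
returns_by_contact = [150, 95, 70, 45, 25]  # completed surveys after each contact

cumulative = 0
for wave, returns in enumerate(returns_by_contact, start=1):
    cumulative += returns
    rate = cumulative / sample_size
    print(f"After contact {wave}: {cumulative} returns ({rate:.0%})")
# Final line prints: After contact 5: 385 returns (77%)
```

The point of the sketch is simply that each additional contact adds a smaller increment of returns, which is why multiple tailored contacts matter.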

This is impressive. (Most surveys that I and the people I work with conduct have response rates below 50%.) Dillman et al. indicate that there are three fundamental considerations in using the TDM. They are:

  1. Reducing four sources of survey error–coverage, sampling, nonresponse, and measurement;
  2. Developing a set of survey procedures that interact and work together to encourage all sample members to respond; and
  3. Taking into consideration elements such as survey sponsorship, nature of survey population, and the content of the survey questions.

The use of a social exchange perspective suggests that respondent behavior is motivated by the return that the behavior is expected to bring (and usually does bring). This perspective affects the decisions made regarding coverage and sampling, the way questions are written and questionnaires are constructed, and how contacts are designed to produce the intended sample.

If you don’t have a copy of this book (yes, there are other survey books out there) on your desk, get one! It is well worth the cost ($95.00, Wiley; $79.42, Amazon).

* Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Hoboken, NJ: John Wiley & Sons.

my two cents.


Filed Under (program evaluation) by Molly on 28-01-2015

I am an evaluator, a charter member of the American Evaluation Association, and a former member of its forerunner organization, Evaluation Network. When you push my “on” button, I can talk evaluation until (a lot of metaphors could be used here); and often do. (I can also talk about other things with equal passion, though not professionally.) When my evaluation button is pushed, or, for that matter, most of the time, I wonder what difference I am making. In this case, I wonder what difference I am making with this blog.

One of my readers (I have more than I ever imagined) suggested that I develop an “online” survey that I can include regularly in my posts. I thought that was a good idea. I thought I’d go one better and have it be a part of the blog. Then I would tabulate the findings (if there are any 🙂 ). Just so you know, I DO read all the comments; I get at least six daily. I often do not comment on those, however.

So, reader, here is the making a difference survey. This link will (should) take you to SurveyMonkey and the survey. Below, I’ve listed the questions that are in the survey.

Check all that apply.

Reading this blog makes a difference to me by:

  1. _____ Giving me a voice to follow
  2. _____ Providing interesting content
  3. _____ Providing content I can use in my work
  4. _____ Providing dependable posts
  5. _____ Providing me with information to share
  6. _____ Building my skills in evaluation
  7. _____ Showing me that there are others in the world concerned with similar things
  8. _____ Offering me good reading about an interesting weekly topic
  9. _____ Offering me content of value to me
  10. _____ Other. Please specify in comment

Please complete the survey.
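Once responses come in, tabulating a check-all-that-apply question like this one is straightforward. Here is a minimal sketch; the response data are invented for illustration, not actual reader responses:

```python
from collections import Counter

# Hypothetical check-all-that-apply responses: each respondent's list holds
# the item numbers they checked (invented data, for illustration only).
responses = [
    [2, 3, 6],
    [2, 5],
    [3, 6, 7, 9],
    [1, 2, 3],
]

# Count how many respondents checked each item.
counts = Counter(item for checked in responses for item in checked)
n = len(responses)
for item, count in counts.most_common():
    print(f"Item {item}: checked by {count} of {n} respondents ({count / n:.0%})")
```

Note that for check-all-that-apply items, percentages are computed per respondent (not per check), so the columns will not sum to 100%.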

my two cents.




Filed Under (program evaluation) by Molly on 15-01-2015

A reader commented that “blogging is like doing case studies.” That made me think about the similarities and differences. Since case study is a well-known qualitative method used in evaluation with small samples, I think this view is valid.

Read the rest of this entry »

Filed Under (Methodology, program evaluation) by Molly on 09-12-2014

Recently, I got a copy of Marvin Alkin’s book, Evaluation Roots (his first edition; eventually, I will get the second edition).

In Chapter Two, he and Tina Christie talk about an evaluation theory tree and present the idea graphically (albeit in draft form).

Think of your typical tree with three strong branches (no leaves) and two roots. Using this metaphor, the authors explain the development of evaluation theory as it appears in western (read: global north) societies.

As you can see, the roots are “accountability and control” (positivist paradigm?) and social inquiry (post-positivist paradigm?). Read the rest of this entry »

Personal and situational biases are forms of cognitive bias, and we all have cognitive biases.

When I did my dissertation on personal and situational biases, I was talking about cognitive bias (only I didn’t know it, then).

According to Wikipedia, the term cognitive bias was introduced in 1972 (I defended my dissertation in 1983) by two psychologists, Daniel Kahneman and Amos Tversky.

Then, I hypothesized that previous research experience (naive or sophisticated) and exposure to expected project outcomes (positive, mixed, negative) would affect participants and make a difference in how they coded data. (It did.) The Sadler article, which talked about intuitive data processing, was the basis for this inquiry. Now, many years later, I am encountering cognitive bias again. Sadler says that “…some biases can be traced to a particular background knowledge…” (or possibly, I think, a lack of knowledge), “…prior experience, emotional makeup or world view.” (This, I think, falls under what Tversky and Kahneman call human judgment, which differs from rational choice theory, often given that label.) Read the rest of this entry »

Filed Under (program evaluation) by Molly on 30-09-2014

Recently, I drafted a paper about capacity building; I’ll be presenting it at the 2014 AEA conference. The example on which I was reporting was regional and voluntary; it took dedication, a commitment from participants. During the drafting of that paper, I had to think about the parts of the program: what would be necessary for individuals who were interested in evaluation but did not have a degree. I went back to the competencies listed in the AJE article (March 2005) that I cited in a previous post. I found it interesting to see that the choices I made (after consulting with evaluation colleagues) were listed among the competencies identified by Stevahn et al., yet they list so much more. So the question that occurs to me is: to be competent, to build institutional evaluation capacity, are all of those needed? Read the rest of this entry »