Filed Under (criteria, Methodology, program evaluation) by Molly on 10-11-2016

Trustworthiness. An interesting topic.

Today is November 9, 2016. An auspicious day, to be sure. (No, I’m not going to rant about November 8, 2016; just post this and move on with my living.) Keep trustworthiness in mind, I remind myself.

I had the interesting opportunity to review a paper recently that talked about trustworthiness. It caused me much thought, as I was troubled by what was written. I decided to go to my source, “Naturalistic Inquiry.” Given that the paper used a qualitative design, employed a case study method, and talked about trustworthiness, I wanted to find out more. The book was written by two of my long-time evaluation guides, Yvonna Lincoln and Egon Guba. (Lincoln’s name may be familiar to you from the Sage Handbook of Qualitative Research, which she co-edited with Norman Denzin.)

Trustworthiness

On page 218, they talk about trustworthiness and about the conventional criteria for trustworthiness (internal validity, external validity, reliability, and objectivity), along with the questions underlying those criteria.

They talk about how the criteria formulated by conventional inquirers are not appropriate for naturalistic inquiry. Guba (1981a) offers four new terms because they have “…a better fit with naturalistic epistemology.” These four terms, and the terms they propose to replace, are:

Read the rest of this entry »

Filed Under (criteria, program evaluation) by Molly on 14-09-2016

Decisions

How do we make decisions when we think none of the choices are good?

(Thank you for this thought, Plexus Institute.)

No, I’m not talking about the current political situation in the US. I’m talking about evaluation.

The lead for this email post was “Fixing the frame alters more than the view.”

Art Markman makes this comment (the “how do we make decisions…” comment) here. He says, “If you dislike every choice you’ve got, you’ll look for one to reject rather than one to prefer—subtle difference, big consequences.” He bases this opinion on research, saying that the rejection mind-set allows us to focus on negative information about options and fixate on the one with the smallest downside.

Read the rest of this entry »

Filed Under (program evaluation) by Molly on 07-01-2016

What do you do with your idea?

Do you hold on to it? Give it away? Share it?

The idea is the most important thing in the world of blogs, which are a form of social media. The idea is the one characteristic that distinguishes a person. Traditionally, we tend to protect our ideas with our lives. That is why patents, trademarks (®, ™), and copyrights (©) exist.

Read the rest of this entry »

Filed Under (Methodology, program evaluation) by Molly on 07-12-2015

The OSU Extension Service conference started today (#OSUExtCon). There are concurrent sessions, plenary sessions, workshops, Twitter feeds (Jeff Hino is tweeting), tours, receptions, and meal gatherings. There are lots of activities, and they cover four days. But I want to talk about conference evaluation.

The thought occurs to me: “What difference is this making?” Ever the evaluator, I realize that the selection of sessions will be different next year (it was different last year), so I wonder how valuable it is to evaluate the concurrent sessions. Given that time doesn’t stand still (fortunately {or not, depending}), the plenary sessions will also be different. Basically, the conference this year will be different from the conference the next time. Yes, it will be valuable for the presenters to have feedback on what they have done, and it will be useful for conference planners to have feedback on various aspects of the conference. I still have to ask, “Did it make a difference?”

A long-time colleague of mine (formerly at Pennsylvania State University), Nancy Ellen Kiernan, proposed a method of evaluating conferences that I think is important to keep and use. She suggested the use of the “Listening Post” as an evaluation method. She says, “The ‘Listening Posts’ consisted of a group of volunteer conference participants who agreed beforehand to ‘post’ themselves in the meeting rooms, corridors, and break rooms and record what conferees told them about the conference as it unfolded [Not unlike Twitter, but with value; parenthetical added]. Employing listening posts is an informal yet structured way to get feedback at a conference or workshop without making participants use pencil and paper.” She put it in “Tipsheet #5” and published the method in the Journal of Extension (JoE), the peer-reviewed monthly online publication.

Quoting from the abstract of the JoE article, “Extension agents often ask, “Isn’t there an informal but somewhat structured way to get feedback at a conference or workshop without using a survey?” This article describes the use of ‘Listening Posts’ and the author gives a number of practical tips for putting this qualitative strategy to use. Benefits include: quality feedback, high participation and enthusiastic support from conferees and the chance to build program ownership among conference workers. Deficits: could exclude very shy persons or result in information most salient to participants.”

I’ve used this method. It works. It does solicit information about what difference the conference made, not whether the participants liked or didn’t like the conference. (This is often what is asked in the evaluation.) Nancy Ellen suggests that the listening post collectors ask the following questions:

  1. “What did you think of the idea of …this conference?” and
  2. “What is one idea or suggestion that you found useful for your professional work?” (the value/difference question)
  3. Then, she suggests, have the participant tell you anything else about the conference that is important for us to know.

Make sure the data collectors are distinctive. Make sure they do not ask any additional questions. The results will be interesting.

Filed Under (program evaluation) by Molly on 30-07-2015

Ignorance is a choice.

Not knowing may be “easier”; you know, less confusing, less intimidating, less fearful, less embarrassing.

I remember when I first asked the question, “Is it easier not knowing?” What I was asking was, “By choosing not to know, did I really make a choice, or was it a default position?” Because if you consciously avoid knowing, do you really not know, or are you just ignoring the obvious? Perhaps it goes back to the saying common on social media today: “Great people talk about ideas; average people talk about things; small people talk about other people” (which is a variation of what Eleanor Roosevelt said).

Read the rest of this entry »

Filed Under (criteria, program evaluation) by Molly on 10-04-2015

Can there be inappropriate use of evaluation studies?

Jody Fitzpatrick¹ and her co-authors Jim Sanders and Blaine Worthen, in Program Evaluation: Alternative Approaches and Practical Guidelines (2011), provide several examples of inappropriate evaluation use. Before they give the examples, they share some wise words from Nick Smith². Nick says there are two broad categories of reasons for declining to conduct an evaluation: “1) when the evaluation could harm the field of evaluation, or 2) when it would fail to support the social good.” Fitzpatrick, Sanders, and Worthen (2011) go on to say that “these problems may arise when it is likely that the ultimate quality of the evaluation will be questionable, major clients would be alienated or misled concerning what evaluation can do, resources will be inadequate, or ethical principles would be violated” (p. 265).

The examples provided are

  1. Evaluation would produce trivial information;
  2. Evaluation results will not be used;
  3. Evaluation cannot yield useful, valid information;
  4. Type of evaluation is premature for the stage of the program; and
  5. Propriety of evaluation is doubtful.

When I study these examples (there may be others; I’m quoting Fitzpatrick, Sanders, and Worthen, 2011), I find that they occur often in the published literature. As a reviewer, I find “show and tell” evaluations of little value because they produce trivial information. They report a study that has limited or insufficient impact and little or no potential for continuation. The cost of conducting a formal evaluation would easily outweigh the value (merit or worth) of the program, if monetized, and would yield little information useful to others in the field. The intention might be good; the product is less than ideal.

Read the rest of this entry »

Filed Under (Methodology, program evaluation) by Molly on 03-03-2015

This is a link to an editorial in Basic and Applied Social Psychology. It says that authors in the journal are no longer allowed to use inferential statistics.

“What?”, you ask. “Does that have anything to do with evaluation?” Yes and no. Most of my readers will not publish there. They will publish in evaluation journals (of which there are many) or, if they are Extension professionals, in the Journal of Extension. And as far as I know, BASP is the only journal that has established an outright ban on inferential statistics. So evaluation journals and JoE still accept inferential statistics.

Still–if one journal can ban the use, can others?

What exactly does that mean–no inferential statistics? The journal editors define this ban by saying “…the null hypothesis significance testing procedure is invalid and thus authors would be not required to perform it.” That means that authors will remove all references to p-values, t-values, F-values, or any statements about significant differences (or the lack thereof) prior to publication. The editors go on to discuss the use of confidence intervals (no) and Bayesian methods (case-by-case) and what inferential statistical procedures are required by the journal.

Read the rest of this entry »
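In practice, a manuscript stripped of significance tests leans on descriptive statistics instead. As a rough sketch (the data and group names here are hypothetical, not from the editorial), here is how one might report group means and a standardized effect size such as Cohen’s d rather than a p-value:

```python
# Descriptive statistics that remain reportable under a ban on
# null-hypothesis significance testing: group means, standard
# deviations, and a standardized effect size (Cohen's d).
# The groups and scores below are invented for illustration.
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

control = [3.1, 2.8, 3.5, 3.0, 2.9]      # hypothetical ratings
treatment = [3.9, 4.1, 3.6, 4.0, 3.8]    # hypothetical ratings

print(f"treatment: M = {mean(treatment):.2f}, SD = {stdev(treatment):.2f}")
print(f"control:   M = {mean(control):.2f}, SD = {stdev(control):.2f}")
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```

Nothing here tests a null hypothesis; the reader sees the size of the difference and judges its importance directly.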

Filed Under (program evaluation) by Molly on 11-02-2015

I don’t know what to write today for this week’s post. I turn to my book shelf and randomly choose a book. Alas, I get distracted and don’t remember what I’m about. Mama said there would be days like this…I’ve got writer’s block (fortunately, it is not contagious). (Thank you, Calvin.) There is also an interesting (to me, at least, because I learned a new word–thrisis: a crisis of the thirties) blog post on this very topic (here).

So this is what I decided rather than trying to refocus. In the past 48 hours I’ve had the following discussions that relate to evaluation and evaluative thinking.

  1. In a faculty meeting yesterday, there was a discussion of student needs that occur during the students’ matriculation in a program of study. Perhaps it should include assets in addition to needs, as students often don’t know what they don’t know and cannot identify needs.
  2. A faculty member wanted to establish the validity and reliability of a survey being constructed. Do I review the survey, provide a reference for survey development, OR give a reference for validity and reliability (a measurement text)? Or all of the above?
  3. Two virtual focus group transcripts for a qualitative evaluation appear to have gone missing. How much effect will those missing focus groups have on the evaluation? Will notes taken during the sessions be sufficient?
  4. A candidate for an assistant professor position came to campus and gave a research presentation on the right hand (as opposed to the left hand). [Euphemisms for the talk content to protect confidentiality.] Why even study the right hand when the left hand is what is being assessed?
  5. Reading over a professional development proposal dealing with what is, what could be, and what should be. Are the questions being asked really addressing the question of gaps?

Read the rest of this entry »

Filed Under (program evaluation) by Molly on 15-01-2015

A reader made the comment that “blogging is like doing case studies.” That made me think about the similarities and differences. Since the case study is a well-known qualitative method used in evaluation with small samples, I think this view is valid.

Read the rest of this entry »

This will be short.

I showed a revised version of Alkin’s Evaluation Theory Tree in last week’s post. It had leaves. It looked like this:

It was taken from the second edition of Alkin’s book.

I have had two comments about this tree.

  1. There are few women represented in the tree. (True, especially in the draft version; in the version above there are more.)
  2. I was reminded of the Fred Carden and Marvin C. Alkin article in the Journal of Multidisciplinary Evaluation, 8(17), January 2012. (There the tree has still more leaves, and the global south is represented.)

Read the rest of this entry »