
Erma Bombeck said, “You have to love a nation that celebrates its independence every July 4th not with a parade of guns, tanks, and soldiers, who file by the White House in a show of strength and muscle, but with family picnics, where kids throw frisbees, potato salad gets iffy, and the flies die from happiness. You may think you’ve overeaten, but it’s patriotism.”

I heard this quote on my way back from Sunriver, OR, on Splendid Table, an American Public Media show I don’t get to listen to very often; it has wonderful tidbits of information, not necessarily evaluative. Since I had just celebrated July 4th, this quote was most apropos! I also heard snippets of a broadcast (probably on NPR) that talked about patriotism/being patriotic. For me, tradition is patriotic. You know: blueberry pie on the 4th of July; potato salad; pasta; and of course, fireworks (unless the fire danger is extreme [like it was in Sunriver], and then all you can hope is that people will be VERY, VERY careful!).

So what do you think makes for patriotism? What do you do to be patriotic? Certainly, for me, it wouldn’t be the 4th of July without blueberry pie and my “redwhiteblue” t-shirt. I don’t need fireworks or potato salad… 🙂 What makes this celebratory for me is the fact that I am assured freedom from want, freedom of worship, freedom from fear, and freedom of speech, and I realize that they are only as free as I make them.

Franklin Delano Roosevelt said it clearly in his speech to Congress, January 6, 1941: “In the future days, which we seek to make secure, we look forward to a world founded upon four essential human freedoms.

The first is freedom of speech and expression — everywhere in the world.

The second is freedom of every person to worship God in his (sic) own way — everywhere in the world.

The third is freedom from want — which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants — everywhere in the world.

The fourth is freedom from fear — which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbor — anywhere in the world…”

This is an exercise in evaluative thinking. What do you think (about patriotism)? What criteria do you use to think this?

my two cents.

molly.

Thinking for yourself is a key competency for evaluators. Scriven says that critical thinking is “the name of an approach to or a subject within the curriculum that might equally well be called ‘evaluative thinking…’”

Certainly, one of the skills I taught my daughters from an early age is to evaluate experiences both qualitatively and quantitatively. They got so good at this exercise, they often preempted me with their reports. They learned early that critical thinking is evaluative, and that critical doesn’t mean being negative; rather, it means being thoughtful or analytical. Scriven goes on to say, “The result of critical thinking is in fact often to provide better support for a position under consideration or to create and support a new position.” I usually asked my girls to evaluate an experience to determine if we would do that experience (or want to do it) again. Recently, I had the opportunity to do just that. My younger daughter had not been to the Ringling Museum in Sarasota, FL; my older daughter had (she went to college in FL). She agreed, after she took me, that we needed to go as a family. We did. We all agreed that it was worth the price of admission. An example of critical thinking: we provided support for a position under consideration.

Could we have done this without the ability to think critically? Maybe. Could we have come to an agreement that it was worth seeing more than once without this ability? Probably not. Since the premise of this blog is that evaluation is something that everyone (whether they know it or not) does every day, would it follow that critical thinking is done every day? Probably. Yet, I wonder: do you need this skill to get out of bed? To decide what to eat for breakfast? To develop the content of a blog? Do I need analysis and/or thoughtfulness to develop the content of a blog? It may help. Often, the content is whatever happens to catch my attention or stick in my craw the day I start my blog. Yet, I wonder…

Evaluation is an activity that requires thoughtfulness and analysis: thoughtfulness in planning and implementing; analysis in implementing and data examination; both in final report preparation and presentation. This is a skill that all evaluators need. It is not acquired as a function of birth; rather, it is learned through application. But people may not have all the information they need. Can people (evaluators) be critical thinkers if they are not informed? Can people (evaluators) be thoughtful and analytical if they are not informed? Or just impassioned? Does information just cloud the thoughtfulness and analysis? Something to ponder…


my two cents.

molly.

Chris Lysy, at Fresh Spectrum, had a guest contributor, Rakesh Mohan, in his most recent blog.

Rakesh says “…evaluators forget that evaluation is inherently political because it involves making judgment about prioritization, distribution, and use of resources.”

I agree that evaluators can make judgments about prioritization, distribution, and resource use. I wonder if making judgments is built into the role of evaluator; is it even taught to the nascent evaluator? I also wonder if the Principal Investigator (PI) has much to say about the judgments. What if the evaluator interprets the findings one way and the PI doesn’t agree? Is that political? Or not? Does the PI have the final say about what the outcomes mean (the prioritization, distribution, and resource use)? Does the evaluator make recommendations, or does the evaluator only draw conclusions? Then where do comments on the prioritization, the distribution, and the resource use come into the discussion? Are they recommendations or are they conclusions?

I decided I would see what my library says about politics: Scriven’s Thesaurus* talks about the politics of evaluation; Fitzpatrick, Sanders, and Worthen* have a chapter on “Political, Interpersonal, and Ethical Issues in Evaluation” (chapter 3); Rossi, Lipsey, and Freeman* have a section on political context (pp. 18-20) and a section on political process (pp. 381-393) that includes policy and policy implications. The 1982 Cronbach* volume (Designing Evaluations of Educational and Social Programs) has a brief discussion (of multiple perspectives), and the classic 1980 volume, Toward Reform of Program Evaluation, also addresses the topic*. Lest I neglect to include those authors who subscribe to the naturalistic approaches, Guba and Lincoln* talk about the politics of evaluation (pp. 295-299) in their 1981 volume, Effective Evaluation. The political aspects of evaluation have been part of the field for a long time.

So–because politics has been and continues to be part of evaluation, perhaps what Mohan says is relevant. When I look at Scriven’s comments in the Thesaurus, the comment that stands out is, “Better education for the citizen about–and in–evaluation, may be the best route to improvement, short of a political leader with the charisma to persuade us of anything and the brains to persuade us to improve our critical thinking.” Since the likelihood that we will see such a political leader is slim, perhaps education is the best approach. And as Mohan says, invite them to the conference. (After all, education comes in all sizes and experiences.) Perhaps then policy makers, politicians, press, and public will be able to understand and make a difference BECAUSE OF EVALUATION!


*Scriven, M. (1991). Evaluation thesaurus. Newbury Park, CA: Sage.

*Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

*Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.

*Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass Inc. Publishers.

*Cronbach, L. J. et al. (1980). Toward reform of program evaluation. San Francisco, CA: Jossey-Bass Inc. Publishers.

*Guba, E. G. & Lincoln, Y. S. (1981). Effective evaluation. San Francisco, CA: Jossey-Bass Inc. Publishers.


About two years ago, I conducted a 17-month hybrid evaluation preparation program for the Western Region Extension Service faculty. There were over 30 individuals involved. I was the evaluation expert; Jim Lindstrom (who was at WSU at the time) was the cheerleader, the encourager, the professional development person. I really couldn’t have done it without him. (Thank you, Jim.) Now, to maximize this program and make it available to others who were not able to participate, I’ve been asked to explore an option for creating an on-line version of the WECT (say “west”) program. It would be offered through the OSU Professional and Continuing Education (PACE) venue. To that end, I am calling on those of you who participated in the original program (and any other readers) to provide me with feedback on the following:

  1. What was useful?
  2. What needed to be added?
  3. What could be more in depth?
  4. What could be deleted?
  5. Other comments?

Please be as specific as possible.

I can go to the competency literature (of which there is a lot) and redevelop WECT from those guidelines. (For more information on competencies, see: King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22(2), 229-247.) Or I could use the Canadian system as a foundation. (For more information, see this link.)

I doubt that I can develop an on-line version that would cover (or do justice to) all those competencies.

So I turn to you, my readers. Let me know what you think.

my two cents.

molly.

Can there be inappropriate use of evaluation studies?

Jody Fitzpatrick and her co-authors Jim Sanders and Blaine Worthen, in Program Evaluation: Alternative Approaches and Practical Guidelines (2011), provide several examples of inappropriate evaluation use. Before they give the examples, they share some wise words from Nick Smith. Nick says there are two broad categories of reasons for declining to conduct an evaluation: “1) when the evaluation could harm the field of evaluation, or 2) when it would fail to support the social good.” Fitzpatrick, Sanders, and Worthen (2011) go on to say that “these problems may arise when it is likely that the ultimate quality of the evaluation will be questionable, major clients would be alienated or misled concerning what evaluation can do, resources will be inadequate, or ethical principles would be violated” (p. 265).

The examples provided are:

  1. Evaluation would produce trivial information;
  2. Evaluation results will not be used;
  3. Evaluation cannot yield useful, valid information;
  4. Type of evaluation is premature for the stage of the program; and
  5. Propriety of evaluation is doubtful.

When I study these examples (there may be others; I’m quoting Fitzpatrick, Sanders, and Worthen, 2011), I find that they are examples often found in the published literature. As a reviewer, I find “show and tell” evaluations of little value because they produce trivial information. They report a study that has limited or insufficient impact and little or no potential for continuation. The cost of conducting a formal evaluation would easily outweigh the value, if monetized (merit or worth), of the program and would yield little information useful for others in the field. The intention might be good; the product is less than ideal.

Today is the middle of Spring Break at Oregon State University.

What did you do today that involved thinking evaluatively?

Did you decide to go to work?

Did you decide to go to the beach?

Did you decide you were sick?

Did you decide you would work in the yard/garden?

Did you decide to stop and smell the roses?

How many of you are planning on attending the American Evaluation Association (AEA) conference in Chicago this November? AEA just closed its call for proposals on Monday, March 16. Hopefully, you were able to submit prior to the deadline. Notifications of acceptance will be announced in July. It is a lot of work to review those proposals, schedule them, and make sure that there is a balance of topics and presentation types across the week.

I hope anyone (everyone) interested in program evaluation and all the evaluation permutations (of which there are many) will make an effort to attend. I plan to be there.

AEA is my professional home. The first meeting I attended was in 1981 in Austin, Texas. I was a graduate student; several of us drove from Tucson to Austin. (Let me tell you, West Texas is quite an experience; certainly a bucket-list opportunity.) That meeting was a combined meeting of the Evaluation Research Society and the Evaluation Network. It had about 200 attendees. Quite a difference from meetings experienced in the 21st century. AEA (the name and the organization) became official with the vote of the membership in 1986. Who would have thought that AEA would be the leading evaluation association in the country, possibly in the world? The membership page says that there are members from 60 foreign countries. I have met marvelous folks there. I count some of my best friends as AEA members. Certainly the landscape of attendees has changed regularly over the years. For a founding member, that evolution has been interesting to watch. As a board member and as a past president (among other roles), being part of the organizational change has been exciting. I urge you to attend; I urge you to get involved.

Hope to see you in Chicago in November.


If you haven’t taken my survey, please do. It is found here.

my two cents.

molly.

A recent blog (not mine) talked about the client’s evaluation use. The author says that she feels “…successful…if the client is using the data…” This statement made me stop, pause, and think about data use. The author continues with a comment about the difference between “…facilitating the client’s understanding of the data in order to create plans and telling the client exactly what the data means and what to do with it.”

I work with Extension professionals who may or may not understand the methodology, the data analysis, or the results. How does one communicate with Extension professionals who may be experts in their content area (cereal crops, nutrition, aging, invasive species) and know little about the survey on which they worked? Is my best guess (not knowing the content area) a good guess? Do Extension professionals really use the evaluation findings? If I suggest that the findings could say this, or suggest that the findings could say that, am I preventing a learning opportunity from happening?

This is a link to an editorial in Basic and Applied Social Psychology. It says that inferential statistics are no longer allowed by authors in the journal.

“What?”, you ask. Does that have anything to do with evaluation? Yes and no. Most of my readers will not publish there. They will publish in evaluation journals (of which there are many) or, if they are Extension professionals, in the Journal of Extension. And as far as I know, BASP is the only journal that has established an outright ban on inferential statistics. So evaluation journals and JoE still accept inferential statistics.

Still–if one journal can ban the use, can others?

What exactly does that mean–no inferential statistics? The journal editors define this ban as “…the null hypothesis significance testing procedure is invalid and thus authors would not be required to perform it.” That means that authors will remove all references to p-values, t-values, F-values, or any statements about significant difference (or lack thereof) prior to publication. The editors go on to discuss the use of confidence intervals (no) and Bayesian methods (case-by-case) and what inferential statistical procedures are required by the journal.
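To make the ban concrete, here is a minimal sketch of what a compliant report might look like in practice: descriptive statistics and a standardized effect size (Cohen's d) in place of a t-test p-value. The data, group names, and helper function below are invented for illustration; they are not from the editorial or any actual study.

```python
# Instead of reporting "t(10) = ..., p < .05", an author would report group
# means, standard deviations, and an effect size such as Cohen's d.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled (sample) standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

treatment = [12, 14, 11, 15, 13, 16]   # hypothetical scores
control   = [10, 11,  9, 12, 10, 11]

print(f"treatment: M = {mean(treatment):.1f}, SD = {stdev(treatment):.2f}")
print(f"control:   M = {mean(control):.1f}, SD = {stdev(control):.2f}")
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```

Nothing here tests a null hypothesis; the reader (or a meta-analyst) judges the size of the difference directly, which is roughly the shift the editors are asking for.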

I don’t know what to write for this week’s post. I turn to my book shelf and randomly choose a book. Alas, I get distracted and don’t remember what I’m about. Mama said there would be days like this… I’ve got writer’s block (fortunately, it is not contagious). (Thank you, Calvin.) There is also an interesting blog on this very topic (here); interesting to me at least, because I learned a new word–thrisis: a crisis of the thirties.

So, rather than trying to refocus, this is what I decided. In the past 48 hours I’ve had the following discussions that relate to evaluation and evaluative thinking.

  1. In a faculty meeting yesterday, there was a discussion of student needs that arise during the students’ matriculation in a program of study. Perhaps it should include assets in addition to needs, as students often don’t know what they don’t know and cannot identify needs.
  2. A faculty member wanted to validate and establish the reliability of a survey being constructed. Do I review the survey, provide the reference for survey development, OR give a reference for validity and reliability (a measurement text)? Or all of the above?
  3. Two virtual focus group transcripts for a qualitative evaluation appear to have gone missing. How much effect will those missing focus groups have on the evaluation? Will notes taken during the sessions be sufficient?
  4. A candidate for an assistant professor position came to campus and gave a research presentation on the right hand (as opposed to the left hand). [Euphemisms for the talk content to protect confidentiality.] Why even study the right hand when the left hand is what is being assessed?
  5. Reading over a professional development proposal dealing with what is, what could be, and what should be: are the questions being asked really addressing the question of gaps?
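Item 2 touches on establishing reliability for a survey under construction. One common internal-consistency check a measurement text would point to is Cronbach's alpha. The sketch below is purely illustrative: the function and the Likert-type responses are hypothetical, not from the faculty member's actual survey.

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.
    item_scores: one inner list per survey item, aligned by respondent."""
    k = len(item_scores)
    sum_item_var = sum(variance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # each respondent's total
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical data: 3 Likert-type items answered by 5 respondents.
items = [
    [4, 3, 5, 2, 4],
    [4, 4, 5, 2, 3],
    [3, 3, 4, 2, 4],
]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")  # → 0.90
```

Alpha speaks only to reliability (consistency), not validity; validating the survey's content would still require the kind of review and references the item asks about.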
