I just got back from a road trip across Southern Alabama with my younger daughter. We started from Birmingham and drove a very circuitous route ending in Mobile and the surrounding areas, then returned to Birmingham for her to start her second year at Birmingham-Southern College.

As we traveled, I read a book by Bill McKibben (one of many) called Oil and Honey: The Education of an Unlikely Activist. It is a memoir, a personal recounting of the early years of this decade, which corresponded with the years my older daughter was in college (2011-2014). I met Bill McKibben, who is credited with starting the non-profit 350.org in 2008 and is currently listed as “senior adviser and co-founder”. He is a passionate, soft-spoken man who believes that the world is on a short fuse. He really seems to believe that there is a better way to have a future. He, like Gandhi, is taking a stand. Oil and Honey puts into action Gandhi’s saying about being the change you want to see. As the subtitle indicates, McKibben is an unlikely activist. He is a self-described non-leader who led and advises the global effort to increase awareness of climate change/chaos. When your belief is on the line, you do what has to be done.

Evaluators are the same way. When your belief is on the line, you do what has to be done. And, hopefully, in the process you are the change that you want to see in the world. But know that it cannot happen one pipeline at a time. The fossil fuel industry has too much money. So what do you do? You start a campaign. That is what 350.org has done: “There are currently fossil fuel divestment campaigns at 308 colleges and universities, 105 cities and states, and 6 religious institutions.” (Wikipedia, 350.org) (Scroll down to the heading “Fossil Fuel Divestment” to see the complete discussion.) Those are clear numbers, hard data for consumption. (Unfortunately, the divestment campaign at OSU failed.)

So I see the question as one of impact, though not specifically world peace (my ultimate impact). If there is no planet on which to work for world peace, there is no need for world peace. Evaluators can help. They can look at data critically. They can read the numbers. They can gather the words. This may be the best place for the use of pictures (they are, after all, worth 1000 words). Perhaps by combining efforts, the outcome will be an impact that benefits all humanity and builds a tomorrow for the babies born today.

my two cents.

molly.

 

I keep getting comments about my posts. Does this blog make a difference?

I want to say thank you to all who read it.


 

I want to say thank you to all who follow this blog.


Mostly, I am continually amazed that people find what I have to say interesting enough to come back.

So: Thank you. For reading. For following. For coming back.

I think that is making a difference.

my two cents.

molly.

P.S. See you in two weeks!

The use of the term impact is problematic, as I see it. If you (or any evaluator) are going to have an impact, if your program is going to have an impact, if you are going to do anything other than focus on the outcomes, how will you know? Scriven, in his Thesaurus, says an impact evaluation is an evaluation which focuses on outcomes rather than process, progress (delivery), or implementation. (Is that an example of using the word to define the word?) Is an impact evaluation the same as an evaluation which captures the outcomes?

Ignorance is a choice.

Not knowing may be “easier”; you know, less confusing, less intimidating, less fearful, less embarrassing.

I remember when I first asked the question, “Is it easier not knowing?” What I was asking was “By choosing to not know, did I really make a choice, or was it a default position?” Because if you consciously avoid knowing, do you really not know, or are you just ignoring the obvious? Perhaps it goes back to the saying common on social media today: “Great people talk about ideas; average people talk about things; small people talk about other people” (which is a variation of what Eleanor Roosevelt said).

The use of a survey is a valuable evaluation tool, especially in the world of electronic media. The survey allows individuals to gather data (both qualitative and quantitative) easily and relatively inexpensively. When I want information about surveys, I turn to the 4th edition of the Dillman book (Dillman, Smyth, & Christian, 2014*). Dillman has advocated the “Tailored Design Method” for a long time. (I first became aware of his method, which he called the “Total Design Method,” in his 1978 first edition, a thin, 320-page volume [as opposed to the 509-page fourth edition].)

Today I want to talk about the “Tailored Design Method” (originally known as the “Total Design Method”).

In the 4th edition, Dillman et al. say that “…in order to minimize total survey error, surveyors have to customize or tailor their survey designs to their particular situations.” They are quick to point out (through various examples) that the same procedures won’t work for all surveys. The “Tailored Design Method” refers to customizing survey procedures for each separate survey. It is based upon the topic of the survey and the audience being surveyed, as well as the resources available and the timeline in use. In his first edition, Dillman indicated that the TDM (then the Total Design Method) would produce a response rate of 75% for mail surveys and that a response rate of 80%-90% was possible for telephone surveys. Although I cannot easily find the same numbers in the 4th edition, I can provide an example (from the 4th edition, pages 21-22) where the response rate was 77% after combined mail and email contact over one month. They used five contacts of both hard and electronic copy.

This is impressive. (Most surveys that I and the people I work with conduct have a response rate of less than 50%.) Dillman et al. indicate that there are three fundamental considerations in using the TDM. They are:

  1. Reducing four sources of survey error–coverage, sampling, nonresponse, and measurement;
  2. Developing a set of survey procedures that interact and work together to encourage all sample members to respond; and
  3. Taking into consideration elements such as survey sponsorship, the nature of the survey population, and the content of the survey questions.
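
Since those response-rate figures carry much of the argument here, a quick sketch of the arithmetic may help. This is a minimal illustration in Python; the numbers are made up for illustration only (they are not from the book), chosen simply to mirror the kind of 77% rate Dillman et al. report.

```python
def response_rate(completed: int, eligible_sampled: int) -> float:
    """Basic response rate: completed questionnaires divided by eligible sample members."""
    if eligible_sampled <= 0:
        raise ValueError("eligible_sampled must be positive")
    return completed / eligible_sampled

# Hypothetical numbers, for illustration only: 770 completed questionnaires
# out of 1,000 eligible sample members gives a 77% response rate, the kind
# of figure reported for the combined mail-and-email contact example.
print(f"{response_rate(770, 1000):.0%}")  # prints 77%
```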

The use of a social exchange perspective suggests that respondent behavior is motivated by the return that the behavior is expected to bring, and usually does bring. This perspective affects the decisions made regarding coverage and sampling, the way questions are written and questionnaires are constructed, and how contacts are made to produce the intended sample.

If you don’t have a copy of this book (yes, there are other survey books out there) on your desk, get one! It is well worth the cost ($95.00, Wiley; $79.42, Amazon).

* Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

my two cents.

molly.

“Fate is chance; destiny is choice.”

Went looking for who said that originally so that I could give credit. Found this as the closest saying: “Destiny is no matter of chance. It is a matter of choice: It is not a thing to be waited for, it is a thing to be achieved.”

William Jennings Bryan

 

Evaluation is like destiny. There are many choices to make. How do you choose? What do you choose?

Would you listen to the dictates of the Principal Investigator even if you knew there were other, perhaps better, ways to evaluate the program?

What about collecting data? Are you collecting it because it would be “nice”? OR are you collecting it because you will use the data to answer a question?

What tools do you use to make your choices? What resources do you use?

I’m really curious. It is summer, and although I have a list (long, to be sure) of reading, I wonder what else is out there, specifically relating to making choices. (And yes, I could use my search engine; I’d rather hear from my readers!)

Let me know. PLEASE!

my two cents.

molly.


Erma Bombeck said, “You have to love a nation that celebrates its independence every July 4th not with a parade of guns, tanks, and soldiers, who file by the White House in a show of strength and muscle, but with family picnics, where kids throw frisbees, potato salad gets iffy, and the flies die from happiness. You may think you’ve overeaten, but it’s patriotism.”

I heard this quote on my way back from Sunriver, OR, on Splendid Table, an American Public Media show I don’t get to listen to very often and which has wonderful tidbits of information, not necessarily evaluative. Since I had just celebrated July 4th, this quote was most apropos! I also heard snippets of a broadcast (probably on NPR) that talked about patriotism/being patriotic. For me, tradition is patriotic. You know: blueberry pie on the 4th of July; potato salad; pasta; and, of course, fireworks (unless the fire danger is extreme [like it was in Sunriver], and then all you can hope is that people will be VERY VERY careful!).

So what do you think makes for patriotism? What do you do to be patriotic? Certainly, for me, it wouldn’t be 4th of July without blueberry pie and my “redwhiteblue” t-shirt. I don’t need fireworks or potato salad… 🙂 What makes this celebratory for me is the fact that I am assured freedom from want, freedom of worship, freedom from fear, and freedom of speech, and I realize that they are only as free as I make them.

Franklin Delano Roosevelt said it clearly in his speech to Congress on January 6, 1941: “In the future days, which we seek to make secure, we look forward to a world founded upon four essential human freedoms.

The first is freedom of speech and expression — everywhere in the world.

The second is freedom of every person to worship God in his (sic) own way — everywhere in the world.

The third is freedom from want — which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants — everywhere in the world.

The fourth is freedom from fear — which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbor– anywhere in the world…”

This is an exercise in evaluative thinking. What do you think (about patriotism)? What criteria do you use to think this?

my two cents.

molly.

Knowledge is personal!

A while ago I read a blog by Harold Jarche. He was talking about knowledge management (the field in which he works). That field  makes the claim that knowledge can be transferred; he makes the claim that knowledge cannot be transferred.  He goes on to say that we can share (transfer) information; we can share data; we cannot share knowledge. I say once we share the information, the other person has the choice to make that shared information part of her/his knowledge or not. Stories help individuals see (albeit, briefly) others’ knowledge.

Now, puzzling over the phrase “Knowledge is personal,” I would say, “The only thing ‘they’ can’t take away from you is knowledge.” (The corollary to that is “They may take your car, your house, your life; they cannot take your knowledge!”)

So I am reminded, when I remember that knowledge is personal and cannot be taken away from you, that there are evaluation movements and models which are established to empower people with knowledge, specifically evaluation knowledge. I must wonder, then, if by sharing the information, we are sharing knowledge? If people are really empowered? To be sure, we share information (in this case about how to plan, implement, analyze, and report an evaluation). Is that sharing knowledge?

Fetterman (and Wandersman, in their 2005 Guilford Press volume*) says that “empowerment evaluation is committed to contributing to knowledge creation”. (Yes, they are citing Lentz, et al., 2005*, and Nonaka & Takeuchi, 1995*, just to be transparent.) So I wonder, if knowledge is personal and known only to the individual, how can “they” say that empowerment evaluation is contributing to knowledge creation? Is it because knowledge is personal and every individual creates her/his own knowledge through that experience? Or does empowerment evaluation contribute NOT to knowledge creation but to information creation? (NOTE: This is not a criticism of empowerment evaluation, only an example, using empowerment evaluation, of the dissonance I’m experiencing; in fact, Fetterman defines empowerment evaluation as “the use of evaluation concepts, techniques, and findings to foster improvement and self-determination”. It is only later in the cited volume that the statement about knowledge creation appears.)

Given that knowledge is personal, it would make sense that knowledge is implicit, and implicit knowledge requires interpretation to make sense of it. Hence, stories, because stories can help share implicit knowledge. As each individual seeks information to become knowledge, that same individual makes that information into knowledge, and that knowledge is implicit. Jarche says, “As each person seeks information, makes sense of it through reflection and articulation, and then shares it through conversation…” I would add, “and shared as information”.

Keep that in mind the next time you want to measure knowledge as part of KASA on a survey.

my two cents.

molly.

  1. * Fetterman, D. M. & Wandersman, A. (Eds.) (2005). Empowerment evaluation principles in practice. New York: Guilford Press.
  2. Lentz, B. E., Imm, P. S., Yost, J. B., Johnson, N. P., Barron, C., Lindberg, M. S., & Treistman, J. (2005). In D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in practice. New York: Guilford Press.
  3. Nonaka, I., & Takeuchi, K. (1995). The knowledge-creating company. New York: Oxford University Press.

Thinking for yourself is a key competency for evaluators. Scriven says that critical thinking is “The name of an approach to or a subject within the curriculum that might equally well be called ‘evaluative thinking…’”.

Certainly, one of the skills I taught my daughters from an early age is to evaluate experiences both qualitatively and quantitatively. They got so good at this exercise, they often preempted me with their reports. They learned early that critical thinking is evaluative, that critical doesn’t mean being negative; rather, it means being thoughtful or analytical. Scriven goes on to say, “The result of critical thinking is in fact often to provide better support for a position under consideration or to create and support a new position.” I usually asked my girls to evaluate an experience to determine if we would do that experience (or want to do it) again. Recently, I had the opportunity to do just that. My younger daughter had not been to the Ringling Museum in Sarasota, FL; my older daughter had (she went to college in FL). She agreed, after she took me, that we needed to go as a family. We did. We all agreed that it was worth the price of admission. An example of critical thinking–where we provided support for a position under consideration.

Could we have done this without the ability to critically think? Maybe. Could we have come to an agreement that it was worth seeing more than once without this ability? Probably not. Since the premise of this blog is that evaluation is something that everyone (whether they know it or not) does every day, then would it follow that critical thinking is done every day? Probably. Yet, I wonder if you need this skill to get out of bed? To decide what to eat for breakfast? To develop the content of a blog? Do I need analysis and/or thoughtfulness to develop the content of a blog? It may help. Often, the content is whatever happens to catch my attention or stick in my craw the day I start my blog. Yet, I wonder…

Evaluation is an activity that requires thoughtfulness and analysis. Thoughtfulness in planning and implementing; analysis in implementing and data examination. Both in final report preparation and presentation. This is a skill that all evaluators need. It is not acquired as a function of birth; rather, it is taught through application. But people may not have all the information they need. Can people (evaluators) be critical thinkers if they are not informed? Can people (evaluators) be thoughtful and analytical if they are not informed? Or just impassioned? Does information just cloud the thoughtfulness and analysis? Something to ponder…

 

my two cents.

molly.

Chris Lysy, at Fresh Spectrum, had a guest contributor, Rakesh Mohan, in his most recent blog post.

Rakesh says “…evaluators forget that evaluation is inherently political because it involves making judgment about prioritization, distribution, and use of resources.”

I agree that evaluators can make judgements about prioritization, distribution, and resource use. I wonder if making judgements is built in to the role of evaluator; is it even taught to the nascent evaluator? I also wonder if the Principal Investigator (PI) has much to say about the judgements. What if the evaluator interprets the findings one way and the PI doesn’t agree? Is that political? Or not? Does the PI have the final say about what the outcomes mean (the prioritization, distribution, and resource use)? Does the evaluator make recommendations, or does the evaluator only draw conclusions? Then where do comments on the prioritization, the distribution, the resource use come into the discussion? Are they recommendations or are they conclusions?

I decided I would see what my library says about politics: Scriven’s Thesaurus* talks about the politics of evaluation; Fitzpatrick, Sanders, and Worthen* have a chapter on “Political, Interpersonal, and Ethical Issues in Evaluation” (chapter 3); Rossi, Lipsey, and Freeman* have a section on political context (pp. 18-20) and a section on political process (pp. 381-393) that includes policy and policy implications. The 1982 Cronbach* volume (Designing Evaluations of Educational and Social Programs) has a brief discussion (of multiple perspectives), and the classic 1980 volume, Toward Reform of Program Evaluation, also addresses the topic*. Lest I neglect to include those authors who ascribe to the naturalistic approaches, Guba and Lincoln talk about the politics of evaluation (pp. 295-299) in their 1981 volume, Effective Evaluation*. The political aspects of evaluation have been part of the field for a long time.

So–because politics has been and continues to be part of evaluation, perhaps what Mohan says is relevant. When I look at Scriven’s comments in the Thesaurus, the comment that stands out is, “Better education for the citizen about–and in–evaluation, may be the best route to improvement, short of a political leader with the charisma to persuade us of anything and the brains to persuade us to improve our critical thinking.” Since the likelihood that we will see a political leader to persuade us is slim, perhaps education is the best approach. And like Mohan says, invite them to the conference. (After all, education comes in all sizes and experiences.) Perhaps then policy makers, politicians, press, and public will be able to understand and make a difference BECAUSE OF EVALUATION!

 

*Scriven, M. (1991). Evaluation thesaurus. Newbury Park, CA: Sage.

*Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

*Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.) Thousand Oaks, CA: Sage.

*Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass Inc. Publishers.

*Cronbach, L. J. et al. (1980). Toward reform of program evaluation. San Francisco, CA: Jossey-Bass Inc. Publishers.

*Guba, E. G. & Lincoln, Y. S. (1981). Effective evaluation. San Francisco, CA: Jossey-Bass Inc. Publishers.