I came across this quote from Viktor Frankl today (thanks to a colleague)
“…everything can be taken from a man (sic) but one thing: the last of the human freedoms – to choose one’s attitude in any given set of circumstances, to choose one’s own way.” Viktor Frankl (Man’s Search for Meaning – p.104)
I realized that, especially at this time of year, attitude is everything–good, bad, indifferent–the choice is always yours.
How we choose to approach anything depends upon our previous experiences–what I call personal and situational bias. Sadler* has three classifications for these biases. He calls them value inertias (unwanted distorting influences which reflect background experience), ethical compromises (actions for which one is personally culpable), and cognitive limitations (not knowing, for whatever reason).
When we approach an evaluation, our attitude leads the way. If we are reluctant, if we are resistant, if we are excited, if we are uncertain, all these approaches reflect where we’ve been, what we’ve seen, what we have learned, what we have done (or not). We can make a choice how to proceed.
The American Evaluation Association (AEA) has a long history of supporting difference. That value is embedded in the guiding principles, two of which directly address supporting differences.
AEA also has developed a Cultural Competence statement. In it, AEA affirms that “A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation.”
Both of these documents provide a foundation for the work we do as evaluators, and both relate to our personal and situational biases. Considering them as we make our choice about attitude will help minimize the biases we bring to our evaluation work. The evaluative question from all this–When have your personal and situational biases interfered with your work in evaluation?
Attitude is always there–and it can change. It is your choice.
Sadler, D. R. (1981). Intuitive data processing as a potential source of bias in naturalistic evaluations. Educational Evaluation and Policy Analysis, 3, 25-31.
Those of you who read this blog know a little about evaluation. Perhaps you’d like to know more? Perhaps not…
I think it would be valuable to know who was instrumental in developing the profession to the point it is today; hence, a little history. This will be fun even for those of you who don't like history–it is a matching game. Some of these folks have been mentioned in previous posts. I'll post the keyed responses next week.
Directions: Match the name with the evaluation contribution. I've included photos so you can put a face with a name and a contribution.
A. Michael Scriven 1. Empowerment Evaluation
B. Michael Quinn Patton 2. Mixed Methods
C. Blaine Worthen 3. Naturalistic Inquiry
D. David Fetterman 4. CIPP
E. Thomas Schwandt 5. Formative/Summative
F. Jennifer Greene 6. Needs Assessment
G. James W. Altschuld 7. Developmental Evaluation
H. Ernie House 8. Case study
I. Yvonna Lincoln 9. Fourth Generation Evaluation
J. Egon Guba 10. Evaluation Capacity Building
K. Lee J. Cronbach 11. Evaluation Research
L. W. James Popham 12. Teacher Evaluation
M. Peter H. Rossi 13. Logic Models
N. Hallie Preskill 14. Educational Evaluation
O. Ellen Taylor-Powell 15. Foundations of Program Evaluation
P. Robert Stake 16. Toward Reform of Program Evaluation
Q. Dan Stufflebeam 17. Participatory Evaluation
R. Jason Millman 18. Evaluation and Policy
S. Will Shadish 19. Evaluation and epistemology
T. Laura Leviton 20. Evaluation Certification
There are others, more recent, who have made contributions. These represent the folks who did seminal work that built the profession, along with some more recent thinkers. Have fun.
Statistically significant is a term that is often bandied about. What does it really mean? Why is it important?
First–why is it important?
It is important because it helps the evaluator make decisions based on the data gathered.
That makes sense–evaluators have to make decisions so that the findings can be used. If there isn't some way to set the findings apart from the vast morass of information, then it is only background noise. So those of us who do analysis have learned to look at the probability level (written as a "p" value such as p=0.05). The "p" value helps us determine how likely it is that a result occurred by chance–not necessarily whether the result is important.
Second–what does that number really mean?
Probability level answers the question–how likely is it that this (fill in the blank here) happened by chance alone? When evaluators look at probability levels, we want really small numbers. Small numbers say that the likelihood this change occurred purely by chance is really low. So a small number like p=0.05 means there is only a 5% chance of seeing a change this large if nothing real were going on–put another way, we can be about 95% confident the change is not just chance. You can convert a p value to that confidence level by subtracting it from 1 (1-0.05=0.95, or 95%).
Convention has it that for something to be statistically significant, the p value must be no larger than 0.05. This convention comes from academic research. Smaller numbers aren't necessarily better; they just indicate even stronger evidence that the change did not occur by chance. There are software programs (StatXact, for example) that can compute the exact probability, so you may see values like 0.047.
Exploratory research (as opposed to confirmatory) may accept a higher p value, such as p=0.10, as evidence that a trend is moving in the desired direction. Some evaluators let the key stakeholders determine whether a given probability level, for example 0.062, indicates importance. Some would argue that 94 times out of 100 is not that much different from 95 times out of 100.
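To make the idea of "could this happen by chance?" concrete, here is a small sketch in Python. The scenario and numbers are invented for illustration: suppose 16 of 20 program participants improved, and we ask how often pure chance (a 50/50 coin flip for each person) would produce 16 or more improvements.

```python
from math import comb

def one_sided_p_value(successes, n):
    """Probability of seeing `successes` or more out of `n`
    when each outcome is a pure 50/50 coin flip (chance alone)."""
    total_outcomes = 2 ** n
    extreme_outcomes = sum(comb(n, k) for k in range(successes, n + 1))
    return extreme_outcomes / total_outcomes

# Hypothetical example: 16 of 20 participants improved.
p = one_sided_p_value(16, 20)
print(f"p = {p:.4f}")               # p = 0.0059
print(f"confidence = {1 - p:.1%}")  # confidence = 99.4%

# Because p is well below the conventional 0.05 cutoff, we would
# call this change statistically significant.
```

The same subtraction mentioned above (1 minus the p value) gives the confidence that the change did not occur by chance.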
There are three topics I want to touch on today.
In reverse order:
Evaluation use: I neglected to mention Michael Quinn Patton's book on evaluation use. Patton advocated for use before almost anyone else. The title of his book is Utilization-Focused Evaluation. The 4th edition is available from the publisher (Sage) or from Amazon (and if I knew how to insert links to those sites, I'd do it…another lesson…).
Systems diagrams: I had the opportunity last week to work with a group of Extension faculty all involved in Watershed Education (called the WE Team). This was an exciting experience for me. I helped them visualize what their concept of the WE Team looked like using the systems tool of drawing a systems diagram. This is an exercise whereby individuals or small groups quickly draw a visualization of a system (in this case the WE Team). This is not art; it is not realistic; it is only a representation from one perspective.
This is a useful tool for evaluators because it can help evaluators see where there are opportunities for evaluation; where there are opportunities for leverage; and where there might be resistance to change (force fields). It also helps evaluators see relationships and feedback loops. I have done workshops on using systems tools in evaluating multi-site systems (of which a systems diagram is one tool) with Andrea Hegedus for the American Evaluation Association. Although this isn't the diagram the WE Team created, it is an example of what a systems diagram could look like. I used the software called Inspiration to create the WE Team diagram. Inspiration has a free 30-day download and it is inexpensive (the download for V. 9 is $69.00).
Focus group participant composition.
The composition of focus groups is very important if you want to get data that you can use AND that answers your study question(s). Focus groups tend to be homogeneous, with variations to allow for differing opinions. Since the purpose of the focus group is to elicit in-depth opinions, it is important to compose the group with similar demographics (depending on your topic).
Comfort and use drive the composition. More on this later.
Welcome back! For those of you new to this blog–I post every Tuesday, rain or shine…at least I have for the past 6 weeks…:) I guess that is MY new year’s resolution–write here every week; on Tuesdays…now to today’s post…
What one thing are you going to learn this year about evaluation?
Something about survey design?
OR logic modeling?
OR program planning?
OR focus groups?
OR…(fill in the blank and let me know…)
A colleague of mine asked me the other day about focus groups.
Specifically, the question was, “What makes a good focus group question?”
I went to Dick Krueger and Mary Anne Casey's book (Focus Groups, 3rd ed. Sage Publications, 2000). On page 40, they have a section called "Qualities of Good Questions". These make sense. They say: Good questions…
Let’s explore these a bit.
Before you convene your focus group, have several individuals (3 – 5) who are similar to, but not included in, your target audience review the focus group questions. It is always a good idea to pilot any question you use to gather data.
Ellen Taylor-Powell (at University of Wisconsin Extension) has a Quick Tips sheet on focus groups for more information. To access it go to: http://www.uwex.edu/ces/pdande/resources/pdf/Tipsheet5.pdf