Hello, readers. This week I’m doing something different with this blog. This week, and the third week of each month from now on, I’ll be posting a column called Timely Topic. This will be a post on a topic that someone (that means you, reader) has suggested; a topic that has been buzzing around in conversations; a topic that has relevance to evaluation. This all came about because a colleague at another land-grant institution is concerned about the dearth of evaluation skills among Extension colleagues. (Although this comment makes me wonder to whom this colleague is talking, that question is content for another post, another day.) So, thinking about how to get core evaluation information out to more folks, I decided to devote one post a month to TIMELY TOPICS. Today’s post is about “THINKING CAREFULLY”.
Recently, I’ve been asked to review a statistics textbook for my department. This particular book uses a program that is available on everyone’s computer. The text has some important points to make, and today’s post reflects one of them: thinking carefully about using statistics.
As an evaluator–if only the evaluator of your own programs–you must think critically about the “…context of the data, the source of the data, the method used in data collection, the conclusions reached, and the practical implications” (Triola, 2010, p. 18). The author posits that before one can understand general methods of using sample data, make inferences about populations, understand sampling and surveys, grasp the key measures that characterize data, or apply valid statistical methods, one must be able to recognize the misuse of statistics.
I’m sure all of you have heard the quote, “Figures don’t lie; liars figure,” which is attributed to Mark Twain. I’ve always heard the quote as “Statistics lie and liars use statistics.” Statistics CAN lie. Liars CAN use statistics. That is where thinking carefully comes in: determining whether the statistical conclusions being presented are seriously flawed.
As evaluators, we have a responsibility (according to the AEA guiding principles) to conduct systematic, data-based inquiry; provide competent performance; display honesty and integrity…of the entire evaluation process; respect the security, dignity, and self-worth of all respondents; and consider the diversity of the general and public interests and values. This demands that we think carefully about the reporting of data. Triola cautions, “Do not use voluntary response sample data for making conclusions about a population.” How often have you used data from individuals who decided for themselves (self-selected) whether to participate in your survey? THINK CAREFULLY about your sample. These data cannot be generalized beyond your respondents because of the bias introduced by self-selection.
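To see how much damage self-selection can do, here is a minimal simulation sketch in Python. The satisfaction scores and response rates are invented for illustration: if unhappy participants answer the survey at a higher rate, the survey average lands well below the true population average.

```python
import random

random.seed(7)

# Hypothetical population: satisfaction scores 1-5, true mean near 3.
population = [random.randint(1, 5) for _ in range(10_000)]

# Self-selection: suppose unhappy people (scores 1-2) are three times
# as likely to return the survey as everyone else (30% vs. 10%).
respondents = [
    score for score in population
    if random.random() < (0.30 if score <= 2 else 0.10)
]

pop_mean = sum(population) / len(population)
sample_mean = sum(respondents) / len(respondents)

# The voluntary-response mean understates satisfaction substantially.
print(f"population mean {pop_mean:.2f}, voluntary-response mean {sample_mean:.2f}")
```

Nothing in the survey instrument itself is flawed here; the bias comes entirely from who chose to respond, which is exactly why voluntary response data cannot be generalized.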
Other examples of the misuse of statistics include small or unrepresentative samples, loaded survey questions, misleading graphs, reporting percentages without the underlying counts, and confusing correlation with causation.
When reporting statistics gathered from your evaluation, THINK CAREFULLY.
Statistically significant is a term that is often bandied about. What does it really mean? Why is it important?
First–why is it important?
It is important because it helps the evaluator make decisions based on the data gathered.
That makes sense–evaluators have to make decisions so that the findings can be used. If there isn’t some way to set the findings apart from the vast morass of information, they are only background noise. So those of us who do analysis have learned to look at the probability level (written as a “p” value, such as p=0.05). The “p” value helps us judge whether a finding is likely to be real rather than a fluke of chance; it does not tell us whether the finding is important.
Second–what does that number really mean?
Probability level answers the question–could this (fill in the blank here) have happened by chance? If chance alone would produce a result like this, say, 95 times out of 100, then it is probably not a real change. When evaluators look at probability levels, we want really small numbers. Small numbers say that the likelihood of this change occurring by chance alone is really low. So a small number like 0.05 means that if there were no real change, a result like this would turn up only about 5 times in 100. It is tempting to flip that around (100 – 5 = 95) and say there is a 95% chance the change is real, but strictly speaking the p value only tells you how rare your result would be under chance alone, not the probability that your conclusion is true.
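One concrete way to see what a p value measures is to simulate chance directly. Here is a hedged sketch in Python; the workshop numbers are invented for illustration. Suppose 16 of 20 participants improved, and we ask: if each participant were just as likely to improve as not (a coin flip), how often would chance alone produce a result at least that extreme?

```python
import random

random.seed(42)

observed = 16   # participants who improved (invented example)
n = 20          # total participants
trials = 100_000

# Count how often pure chance (a fair 50/50 "improve or not" flip per
# participant) yields at least as many improvers as we observed.
extreme = sum(
    1
    for _ in range(trials)
    if sum(random.random() < 0.5 for _ in range(n)) >= observed
)

p_value = extreme / trials
print(f"Estimated p value: {p_value:.4f}")
```

The estimate comes out well under 0.01: chance alone almost never produces 16 or more improvers out of 20, which is why we would call the observed change statistically significant.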
Convention has it that for something to be statistically significant, the p value must be 0.05 or smaller. This convention comes from academic research. Smaller numbers aren’t necessarily better; they indicate stronger evidence that the change is not due to chance, not a bigger or more important change. There are software programs (StatXact, for example) that can compute exact probabilities, so you may see numbers like 0.047.
Exploratory research (as opposed to confirmatory research) may use a looser threshold, such as p=0.10, on the grounds that a trend moving in the desired direction is worth following up. Some evaluators let the key stakeholders decide whether a given probability level, for example 0.062, indicates importance. Some would argue that odds of 94 in 100 against chance are not that much different from 95 in 100.
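Programs like StatXact are commercial, but for a simple yes/no outcome you can compute an exact probability yourself with the binomial distribution. A minimal sketch in Python (the counts are invented for illustration):

```python
from math import comb

def exact_binomial_p(successes: int, n: int, chance: float = 0.5) -> float:
    """Exact one-sided p value: the probability of seeing at least
    `successes` out of `n` by chance alone."""
    return sum(
        comb(n, k) * chance**k * (1 - chance) ** (n - k)
        for k in range(successes, n + 1)
    )

# e.g., 16 of 20 participants improved
p = exact_binomial_p(16, 20)
print(f"Exact p value: {p:.4f}")  # prints 0.0059
```

An exact calculation like this is why you see unrounded values such as 0.047 instead of the tidy conventional cutoffs.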