Filed Under (Methodology, program evaluation) by Molly on 03-03-2015

This is a link to an editorial in Basic and Applied Social Psychology. It says that authors in the journal are no longer allowed to use inferential statistics.

“What?”, you ask. Does that have anything to do with evaluation? Yes and no. Most of my readers will not publish there. They will publish in evaluation journals (of which there are many) or, if they are Extension professionals, in the Journal of Extension. And as far as I know, BASP is the only journal that has established an outright ban on inferential statistics. So evaluation journals and JoE still accept inferential statistics.

Still–if one journal can ban their use, can’t others?

What exactly does that mean–no inferential statistics? The journal editors define this ban as “…the null hypothesis significance testing procedure is invalid and thus authors would not be required to perform it.” That means that authors will remove all references to p-values, t-values, F-values, or any statements about significant difference (or the lack thereof) prior to publication. The editors go on to discuss the use of confidence intervals (No) and Bayesian methods (case-by-case) and what inferential statistical procedures are required by the journal.


In a recent post, I said that 30 was the rule of thumb, i.e., 30 cases was the minimum needed in a group to run inferential statistics and get meaningful results. How do I know, a colleague asked? (Specifically, “Would you say more about how it takes approximately 30 cases to get meaningful results, or a good place to find out more about that?”) When I was in graduate school, a classmate (who was into theoretical mathematics) showed me the mathematical formula behind this rule of thumb. Of course I don’t remember the formula, only the result. So I went looking for the explanation, and I found this site. Although my classmate went into the details of the chi-square distribution and the formula computations, this article doesn’t do that. It even provides an Excel demo for calculating sample size and verifying the rule of thumb. I am so relieved that there is another source besides my memory.
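The rule of 30 is usually justified by the central limit theorem: by about n = 30, the distribution of sample means is close to normal even when the population itself is skewed. A quick simulation (a sketch using only Python's standard library; the exponential population and the 5,000 repetitions are arbitrary choices for illustration) shows the idea:

```python
import random
import statistics

random.seed(42)

def sample_mean(n):
    # Mean of n draws from a skewed (exponential) population with mean 1.0.
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# Repeat many times: the means of n=30 samples pile up symmetrically
# around the population mean, even though the population is skewed.
means = [sample_mean(30) for _ in range(5000)]

print(round(statistics.fmean(means), 2))   # near the population mean of 1.0
print(round(statistics.stdev(means), 2))   # near 1/sqrt(30), about 0.18
```

With a much smaller n (try 5 instead of 30), the same histogram of means stays visibly skewed, which is why inferential tests built on normality get shaky below the rule-of-thumb threshold.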


New Topic:


Filed Under (Data Analysis, program evaluation) by Molly on 19-05-2014

Had a comment a while back on analyzing survey data…hmm…that is a quandary, as most surveys are done online (see SurveyMonkey, among others).

If you want to reach a large audience (because the population from which you sampled is large), you will probably use an online survey. The online survey companies will tabulate the data for you. I can’t guarantee that the tabulations you get will be what you want, or will tell you what you want to know. Typically (in my experience), you can get an Excel file which can be imported into a software program so you can run your own analyses, separate from the online analyses.
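As one illustration (a sketch, not a recipe: the column name and category labels below are made up, and in practice you would point pandas at the downloaded Excel or CSV export rather than an in-memory string), importing the export lets you re-tabulate the data however you like:

```python
import io

import pandas as pd

# Stand-in for the survey tool's export; with a real file you would use
# pd.read_csv("survey_export.csv") or pd.read_excel("survey_export.xlsx").
raw = io.StringIO(
    "respondent,satisfaction\n"
    "1,Agree\n"
    "2,Disagree\n"
    "3,Agree\n"
    "4,Agree\n"
)
df = pd.read_csv(raw)

# A tabulation the hosted tool may not give you in the form you want:
counts = df["satisfaction"].value_counts()
print(counts["Agree"])     # 3
print(counts["Disagree"])  # 1
```

From a DataFrame like this you can also cross-tabulate against demographics (`pd.crosstab`) or hand the columns to a statistics package, none of which the canned online summaries reliably provide.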

Filed Under (Data Analysis) by Molly on 26-06-2013

The question of the week is:

What statistical test do I use when I have pre/post reflective questions?

First, what is a reflective question?

Ask says: “A reflective question is a question that requires an individual to think about their knowledge or information, before giving a response. A reflective question is mostly used to gain knowledge about an individual’s personal life.”

I assume (and we have talked about assumptions before) that these items were scaled to some hierarchy, like a lot to a little, and a number assigned to each. Since the questions are pre/post, they are “matched” and can be compared using a comparison test for dependent (paired) samples, like a paired t-test or a Wilcoxon signed-rank test. However, if the questions are truly nominal (i.e., “know” and “not know”) and in response to some prompt and DO NOT have a keyed response (like specific knowledge questions), then even though the same person answered the pre questions and the post questions there really isn’t established dependence.
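If the items really are matched, the paired comparison described above might look like this in Python with scipy (the 1-to-5 self-ratings for ten respondents are invented for illustration):

```python
from scipy import stats

# Hypothetical 1-5 self-ratings from the same ten people, before and after.
pre  = [2, 3, 2, 4, 3, 3, 2, 4, 3, 2]
post = [4, 4, 3, 5, 4, 4, 3, 5, 4, 3]

# Paired t-test: treats the ratings as interval-level data.
t_res = stats.ttest_rel(post, pre)

# Wilcoxon signed-rank: the ordinal (non-parametric) alternative.
w_res = stats.wilcoxon(post, pre)

print(round(t_res.pvalue, 4), round(w_res.pvalue, 4))
```

Both tests use the within-person differences, which is exactly the dependence that pre/post matching buys you; an unpaired test on the same numbers would throw that information away.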

If the data are nominal, then a chi-square test would be the best approach because it will tell you if there is a difference between what was expected and what was actually observed (responded). On a pre/post reflective question, one would expect that respondents would “know” some of the information before the intervention, say 50-50, and that after the intervention the split would shift to, say, 80 “know” to 20 “not know”. A chi-square test gives you the probability that the distribution on the post occurred by chance. SPSS will run this test; find it under the non-parametric tests.
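Using the hypothetical 80/20 post-test split from the paragraph above against an expected 50/50 (100 respondents assumed), the same goodness-of-fit test SPSS runs can be sketched in Python with scipy:

```python
from scipy.stats import chisquare

# Post-test counts: 80 "know", 20 "not know", out of 100 respondents,
# tested against an expected even 50/50 split.
observed = [80, 20]
expected = [50, 50]

result = chisquare(observed, f_exp=expected)
print(result.statistic)  # (80-50)^2/50 + (20-50)^2/50 = 36.0
print(result.pvalue < 0.001)  # True: very unlikely to be chance
```

With a chi-square statistic of 36 on 1 degree of freedom, the p-value is far below any conventional threshold, so the post-test shift away from 50-50 would not plausibly be chance.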