Filed Under (program evaluation) by Molly on 24-07-2015

The survey is a valuable evaluation tool, especially in the world of electronic media. It allows individuals to gather data (both qualitative and quantitative) easily and relatively inexpensively. When I want information about surveys, I turn to the 4th edition of the Dillman book (Dillman, Smyth, & Christian, 2014*). Dillman has advocated the “Tailored Design Method” for a long time. (I first became aware of his method, which he then called the “Total Design Method,” in his 1978 first edition, a thin, 320-page volume [as opposed to the 509-page fourth edition].)

Today I want to talk about the “Tailored Design Method” (originally known as the Total Design Method).

In the 4th edition, Dillman et al. say that “…in order to minimize total survey error, surveyors have to customize or tailor their survey designs to their particular situations.” They are quick to point out (through various examples) that the same procedures won’t work for all surveys. The “Tailored Design Method” refers to customizing the survey procedures for each separate survey. It is based upon the topic of the survey and the audience being surveyed, as well as the resources available and the time-line in use. In his first edition, Dillman indicated that the method would produce a response rate of 75% for mail surveys, and that an 80%-90% response rate is possible for telephone surveys. Although I cannot easily find the same numbers in the 4th edition, I can provide an example (from the 4th edition, pages 21-22) where the response rate was 77% after combined mail and email contact over one month’s time. They used five contacts, in both hard and electronic copy.

This is impressive. (Most surveys that I and the people I work with conduct have a response rate of less than 50%.) Dillman et al. indicate that there are three fundamental considerations in using the TDM. They are:

  1. Reducing four sources of survey error–coverage, sampling, nonresponse, and measurement;
  2. Developing a set of survey procedures that interact and work together to encourage all sample members to respond; and
  3. Taking into consideration elements such as survey sponsorship, nature of survey population, and the content of the survey questions.

The use of a social exchange perspective suggests that respondent behavior is motivated by the return that the behavior is expected to bring, and usually does bring. This perspective affects the decisions made regarding coverage and sampling, shapes the way questions are written and questionnaires are constructed, and determines how contacts will produce the intended sample.

If you don’t have a copy of this book (yes, there are other survey books out there) on your desk, get one! It is well worth the cost ($95.00, Wiley; $79.42, Amazon).

* Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Hoboken, NJ: John Wiley & Sons.

my two cents.

molly.

Filed Under (Methodology, program evaluation) by Molly on 03-03-2015

This is a link to an editorial in Basic and Applied Social Psychology. It says that authors may no longer use inferential statistics in the journal.

“What?”, you ask. Does that have anything to do with evaluation? Yes and no. Most of my readers will not publish here. They will publish in evaluation journals (of which there are many) or, if they are Extension professionals, in the Journal of Extension. And as far as I know, BASP is the only journal that has established an outright ban on inferential statistics. So evaluation journals and JoE still accept inferential statistics.

Still–if one journal can ban the use, can others?

What exactly does that mean–no inferential statistics? The journal editors define this ban as “…the null hypothesis significance testing procedure is invalid and thus authors would be not required to perform it.” That means that authors will remove all references to p-values, t-values, F-values, or any reference to statements about significant difference (or lack thereof) prior to publication. The editors go on to discuss the use of confidence intervals (no) and Bayesian methods (case-by-case) and what inferential statistical procedures are required by the journal.

Filed Under (Data Analysis, program evaluation) by Molly on 19-05-2014

I had a comment a while back on analyzing survey data…hmm…that is a quandary, as most surveys are done online (see SurveyMonkey, among others).

If you want to reach a large audience (because the population from which you sampled is large), you will probably use an online survey. The online survey companies will tabulate the data for you. I can’t guarantee that the tabulations you get will be what you want, or will tell you what you want to know. Typically (in my experience), you can get an Excel file which can be imported into a software program so you can run your own analyses, separate from the online analyses.
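A minimal sketch of that do-it-yourself route, using pandas. The column name is made up for illustration; in practice you would load the survey tool’s actual export (e.g., `pd.read_excel("survey_export.xlsx")`), but a tiny stand-in frame keeps the sketch self-contained:

```python
import pandas as pd

# In practice you would load the tool's export, e.g.:
#   df = pd.read_excel("survey_export.xlsx")   # hypothetical file name
# A tiny stand-in frame keeps this sketch runnable on its own.
df = pd.DataFrame({"q1_satisfaction": [5, 4, 4, 3, 5, None, 4]})

# Your own frequency table for one question, counting missing
# answers too -- independent of the tool's built-in tabulations.
counts = df["q1_satisfaction"].value_counts(dropna=False)
print(counts)
```

From there you can run whatever analyses you want, not just the ones the survey company offers.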

Filed Under (Data Analysis, program evaluation) by Molly on 07-11-2013

I had a topic all ready to write about, then I got sick. I’m sitting here typing this, trying to remember what that topic was, to no avail. That topic went the way of much of my recent memory; another day, perhaps.

I do remember the conversation with my daughter about correlation.  She had a correlation of .3 something with a probability of 0.011 and didn’t understand what that meant.  We had a long discussion of causation and attribution and correlation.
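The arithmetic behind a result like that can be sketched quickly. Her sample size wasn’t part of our conversation, so the n = 70 below is an assumption chosen to reproduce numbers close to hers; the point is that a modest correlation can still be statistically significant, even though r = .3 means only about 9% of the variance is shared:

```python
from math import sqrt
from scipy import stats

r, n = 0.3, 70  # r from the conversation; n is a hypothetical sample size

# Standard t-transform of a Pearson correlation, with n - 2 degrees
# of freedom, then a two-tailed p-value.
t = r * sqrt(n - 2) / sqrt(1 - r**2)
p = 2 * stats.t.sf(t, df=n - 2)
print(round(t, 3), round(p, 3))
```

With those assumed numbers the p-value lands near 0.011, while r-squared (0.09) reminds you how little of the variation the relationship actually explains, which is exactly the practical-versus-statistical-significance point below.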

We had another long conversation about practical vs. statistical significance, something her statistics professor isn’t teaching. She isn’t learning about data management in her statistics class, either. Having dealt with both qualitative and quantitative data for a long time, I have come to realize that data management needs to be understood long before you memorize the formulas for the various statistical tests you wish to perform. What if the flood happens?

So today I’m telling you about data management as I understand it, because the flood did actually happen and, fortunately, I didn’t lose my data. I had a data dictionary.

Data dictionary. The first step in data management is a data dictionary. There are other names for this, which escape me right now; know that a hard copy of how and what you have coded is critical. Yes, make a backup copy on your hard drive, but also keep a hard copy, because the flood might happen. (It is raining right now, and it is Oregon in November.)

Take a hard copy of your survey, evaluation form, or qualitative data coding sheet and mark on it what every code notation you used means. I’d show you an example of what I do, only my examples are at the office and I am home sick without my files. No, I don’t use cards any more for my data (I did once…most of you won’t remember that time…), but I do make a hard copy with clear notations. I find myself doing that with other things to make sure I code the response the same way. That is what a data dictionary allows you to do–check yourself.

Then I run a frequencies and percentages analysis. I use SPSS (because that is what I learned first). I look for outliers, variables that are miscoded, and system-generated missing data that isn’t missing. I look for any anomaly in the data, any human error (i.e., my error). Then I fix it. Then I run my analyses.
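If you don’t have SPSS, the same screening pass can be sketched in pandas. The responses below are made up: a 1-to-5 scale where 55 is a miscoded value (a slipped keystroke) and one answer is genuinely missing:

```python
import pandas as pd

# Hypothetical responses on a 1-5 scale; 55 is a miscoded value,
# and None is a genuinely missing answer.
raw = pd.Series([1, 2, 55, 3, None, 5, 2], name="q3")

# The frequencies pass: every value, including missing, gets counted,
# so anomalies stand out immediately.
print(raw.value_counts(dropna=False))

# Flag anything outside the legal 1-5 range so it can be fixed
# (against the hard copy!) before any analyses are run.
miscoded = raw[(raw < 1) | (raw > 5)]
print(miscoded)
```

Fix what the screening turns up, then (and only then) run the real analyses.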

There are probably more steps than I’ve covered today.  These are the first steps that absolutely must be done BEFORE you do any analyses.  Then you have a good chance of keeping your data safe.

Filed Under (Data Analysis) by Molly on 26-06-2013

The question of the week is:

What statistical test do I use when I have pre/post reflective questions?

First, what is a reflective question?

Ask says: “A reflective question is a question that requires an individual to think about their knowledge or information, before giving a response. A reflective question is mostly used to gain knowledge about an individual’s personal life.”

I assume (and we have talked about assumptions before) that these items were scaled to some hierarchy, like a lot to a little, and a number assigned to each. Since the questions are pre/post, they are “matched” and can be compared using a comparison test of dependence, like a paired t-test or a Wilcoxon signed-rank test. However, if the questions are truly nominal (i.e., “know” and “not know”) and in response to some prompt, and DO NOT have a keyed response (like specific knowledge questions), then even though the same person answered the pre questions and the post questions, there really isn’t established dependence.
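The matched case can be sketched as follows, assuming scaled items. The ratings are invented for illustration: ten respondents, each with a pre and a post score on a 1-to-5 scale:

```python
from scipy import stats

# Hypothetical matched pre/post ratings (1 = a little ... 5 = a lot)
# from the same ten respondents.
pre  = [2, 3, 2, 1, 3, 2, 4, 2, 3, 2]
post = [4, 4, 3, 3, 4, 3, 5, 4, 4, 3]

t_stat, p_t = stats.ttest_rel(pre, post)  # paired t-test
w_stat, p_w = stats.wilcoxon(pre, post)   # Wilcoxon signed-rank test
print(p_t, p_w)
```

The t-test treats the scale as interval; the Wilcoxon only assumes the differences can be ranked, which is often the safer bet for this kind of rating data.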

If the data are nominal, then using a chi-square test would be the best approach, because it will tell you if there is a difference between what was expected and what was actually observed (the responses). On a pre/post reflective question, one would expect that the respondents would “know” some information before the intervention, say 50-50, and after the intervention that split would shift to, say, 80 “know” to 20 “not know”. A chi-square test gives you the probability that the post distribution occurred by chance. SPSS will run this test; find it under the non-parametric tests.
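Using exactly those illustrative numbers, the chi-square goodness-of-fit test can be sketched in a couple of lines (this is the scipy equivalent of the SPSS non-parametric test mentioned above):

```python
from scipy.stats import chisquare

# Observed post-test counts: 80 "know", 20 "not know" (100 respondents),
# tested against the expected 50-50 pre-test split.
result = chisquare([80, 20], f_exp=[50, 50])
print(result.statistic, result.pvalue)  # statistic is 36.0; p is far below .001
```

With a chi-square statistic that large on one degree of freedom, a shift from 50-50 to 80-20 is extremely unlikely to have occurred by chance.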