One response I got for last week’s query was about on-line survey services.  Are they reliable?  Are they economical?  What are the design limitations?  What are the question format limitations?

Yes.  Depends.  Some.  Not many.

Let me take the easy question first:  Are they economical?

Depends.  Weigh the cost of postage for a paper survey (both out and back) against the time it takes to enter the questions into the system, and the cost of the system against the length of the survey.  These are the things to consider.

Because most people have access to email today, using an on-line survey service is often the easiest and most economical way to distribute an evaluation survey.  Most institutional review boards view an on-line survey like a mail survey and typically grant a waiver of documentation of informed consent.  The consent document is the entry screen, and an agree-to-participate question is often included on that screen.

Are they valid and reliable?

Yes, but…The old adage “Garbage in, garbage out” applies here.  Like a paper survey, an internet survey is only as good as its questions.  Don Dillman, in the third edition of “Internet, Mail, and Mixed-Mode Surveys” (co-authored with Jolene D. Smyth and Leah Melani Christian), talks about question development.  Since he wrote the book (literally), I use this resource a lot!

What are the design limitations?

Some limitations apply…Each on-line survey service is different.  The most common service is Survey Monkey (www.surveymonkey.com).  The introduction to Survey Monkey says, “Create and publish online surveys in minutes, and view results graphically and in real time.”  The basic account with Survey Monkey is free.  It has limitations: 10 questions per survey, 15 question formats, and 100 responses.  You can upgrade to the Pro or Unlimited plan for a subscription fee ($19.95/month or $200/year, respectively).  There are others.  A search using “survey services” returns many options, such as Zoomerang or InstantSurvey.

What are the question format limitations?

Not many–both open-ended and closed-ended questions can be asked.  Survey Monkey has 15 different formats from which to choose (see below).  There may be others, but this list covers most formats.

  • Multiple Choice (Only one Answer)
  • Multiple Choice (Multiple Answers)
  • Matrix of Choices (Only one Answer per Row)
  • Matrix of Choices (Multiple Answers per Row)
  • Matrix of Drop-down Menus
  • Rating Scale
  • Single Textbox
  • Multiple Textboxes
  • Comment/Essay Box
  • Numerical Textboxes
  • Demographic Information (US)
  • Demographic Information (International)
  • Date and/or Time
  • Image
  • Descriptive Text

Oregon State University has an in-house service sponsored by the College of Business (BSG–Business Survey Groups).  OSU also has an institutional account with Student Voice, an on-line service initially designed for learning assessment, which I have found useful for evaluations.  Check your institution for the options available.  For your next evaluation that involves a survey, think electronically.

I have been writing this blog since December 2009.  That seems like forever from my perspective.

I write.  I post.  I wait.  Nothing.

Oh, occasionally, I receive an email (which is wonderful and welcome) and early on I received a few comments (that was great).


Recently, nothing.

I know it is summer–and in Extension that is the time for fairs and camps, and that means everyone is busy.

Yet, I know that learning happens all the time and you have some amazing experiences that can teach.  So, my good readers: What evaluation question have you had this week?  Any question related to evaluating what you are doing is welcome.  Let me hear from you.  You can email me (molly.engle@oregonstate.edu) or you can post a comment (see comment link below).

It occurs to me, as I have mentioned before (see July 13, 2010), that data management is the least likely part of evaluation to be taught.

Research design (from which evaluation borrows heavily), methodology (the tools for data collection), and report writing are all courses in which Extension professionals could have participated.  Data management is not typically taught as a course.

So what’s a body to do???

Well, you could make it up as you go along.  You could ask some senior member how they do it.  You could explore how a major software package (like SPSS) manages data.

I think there are several parts to managing data that everyone conducting evaluations needs to address:

  • Organize data sequentially–applying an order will help in the long run.
  • Create a data dictionary or a data code book.
  • Develop a database–use what you know.
  • Save the hard copy in a secure place–I know many folks who kept their dissertation data in the freezer in case of fire or flood.

Organize data.

I suggest that, as the evaluations are returned to you, the evaluator, they be numbered sequentially, 1 – n.  This sequential number can also serve as the identifying number, protecting the individual’s confidentiality.  The identifying number is typically the first column in the database.
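If you keep an electronic log of returned surveys, this numbering step is easy to automate.  Here is a minimal sketch in Python; the names and variables are hypothetical, and the roster linking names to IDs would be kept separate and secure:

```python
# Assign sequential IDs (1-n) to surveys in the order they were returned.
# Only the ID goes into the database; the roster linking names to IDs
# is hypothetical and would be stored separately and securely.
returned_surveys = ["Anderson", "Baker", "Chen"]  # order received

roster = {name: i for i, name in enumerate(returned_surveys, start=1)}
print(roster)  # {'Anderson': 1, 'Baker': 2, 'Chen': 3}
```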

Data dictionary.

This is a hard-copy record of the variables and how they are coded.  It includes any idiosyncrasies that occur as the data are being coded, so that a stranger could code your data easily.  An idiosyncrasy that often occurs is for a participant to split the difference between two numbers on a scale.  You must decide how that is coded.  You must also note how you coded it the first time, and the next time, and the next time.  If you alternate between coding high and coding low, you need to note that.
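Although the data dictionary itself is a hard-copy record, it can help to mirror it electronically alongside the database.  A minimal sketch of what entries might look like, with hypothetical variable names and coding rules:

```python
# Hypothetical data dictionary: one entry per variable, recording
# exactly how raw responses were coded, idiosyncrasies included.
data_dictionary = {
    "id": "Sequential number assigned as surveys were returned (1-n).",
    "q1_sat": ("Satisfaction, 1 = strongly disagree ... 5 = strongly agree.  "
               "Split-the-difference marks are always coded to the HIGHER value."),
    "q2_sessions": "Number of sessions attended; blank coded 99 = missing.",
}

for variable, rule in data_dictionary.items():
    print(f"{variable}: {rule}")
```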

Database.

Most folks these days have Excel on their computers.  A few have a specific data analysis software program.  An Excel file can be imported into most of those programs.  Use it.  It is easy.  If you know how, you can even run frequencies and percentages in Excel.  These are the first analyses you conduct.  Use the rows for cases (individual participants) and the columns for variables.  Name each variable in an identifiable manner AND PUT THAT IDENTIFICATION IN THE DATA DICTIONARY!
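If your spreadsheet is eventually exported for analysis, those first-pass frequencies and percentages take only a few lines of code.  A minimal sketch in Python, assuming the Excel file was saved as a CSV named survey.csv (a hypothetical name) with rows as cases and columns as variables:

```python
import csv
from collections import Counter

# Rows are cases (individual participants); columns are variables,
# with "id" as the first column.
with open("survey.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Frequencies and percentages for one hypothetical variable, "q1_sat".
counts = Counter(row["q1_sat"] for row in rows)
total = sum(counts.values())
for value, n in sorted(counts.items()):
    print(f"{value}: {n} ({100 * n / total:.1f}%)")
```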

Data security.

I don’t necessarily advocate storing your data in the freezer, although I certainly did when I did my dissertation in the days before laptops and personal computers.  Make sure the data are secured with a password–not only does that protect the data from most nosy people, it makes the IRB happy and helps ensure confidentiality.

Oh, and one other point when talking about data.  The word “data” is plural and takes a plural verb; therefore–data are.  I know the spoken convention is to treat “data” as singular; in writing, it is plural.  A good habit to develop.

Rensis Likert was a sociologist at the University of Michigan.  He is credited with developing the Likert scale.

Before I say a few words about the scale and subsequently the item (two different entities), I want to clarify how to say his name:

Likert (who died in 1981) pronounced his name LICK-urt (short i), as in to lick something.  Most people mispronounce it.  I hope he is resting easy…

Likert scales and Likert items are two different things.

A Likert scale is a multi-item instrument composed of items asking for opinions (attitudes) on an agreement–disagreement continuum.  The several items have response levels arranged horizontally.  The response levels are anchored with sequential integers as well as with words that assume equal intervals.  These words–strongly disagree, somewhat disagree, neither agree nor disagree, somewhat agree, strongly agree–are symmetrical around a neutral middle point.  Likert always measured attitude by agreement or disagreement; today the methodology is applied to other domains.

A Likert item is one of many that has response levels arranged horizontally and anchored with consecutive integers that are more or less evenly spaced, bivalent, and symmetrical about a neutral middle.  If it doesn’t have these characteristics, it is not a Likert item–some authors would say that without these characteristics, the item is not even a Likert-type item.  For example, an item asking how often you do a certain behavior, with a scale of “never,” “sometimes,” “average,” “often,” and “very often,” would not be a Likert item.  Some writers would consider it a Likert-type item.  If the middle point “average” is omitted, it would still be considered a Likert-type item.
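The scale/item distinction is easiest to see when you score responses: each item is coded with its integer anchor, and the Likert scale is the sum (or mean) across the items.  A minimal sketch, with hypothetical items and one respondent’s answers:

```python
# Five-point agreement anchors, coded 1-5: bivalent, evenly spaced,
# and symmetrical about the neutral middle (3).
ANCHORS = {
    "strongly disagree": 1,
    "somewhat disagree": 2,
    "neither agree nor disagree": 3,
    "somewhat agree": 4,
    "strongly agree": 5,
}

# One respondent's answers to three Likert items (hypothetical).
responses = ["somewhat agree", "strongly agree", "neither agree nor disagree"]

item_scores = [ANCHORS[r] for r in responses]  # individual Likert ITEMS
scale_score = sum(item_scores)                 # the Likert SCALE score

print(item_scores, scale_score)  # [4, 5, 3] 12
```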

Referring to ANY ordered-category item as Likert-type is a misconception.  Unless an item has response levels arranged horizontally, anchored with consecutive integers, anchored with words that connote even spacing, and is bivalent, it is only an ordered-category item, or sometimes a visual analog scale or a semantic differential scale.  More on visual analog scales and semantic differential scales another time.