…that there is a difference between a Likert item and a Likert scale?**

Did you know that a Likert item was developed by Rensis Likert, a psychometrician and an educator? 

And that the item was developed to have the individual indicate a level of agreement or disagreement with a specific phenomenon?

And did you know that most of the studies on Likert items use five or seven points on the item? (Although sometimes a four- or six-point scale is used, and that is called a forced-choice approach, because you really want an opinion, not a middle ground, also called a neutral ground.)

And that the choices on an odd-numbered item usually include some variation on the following theme: “Strongly disagree”, “Disagree”, “Neither agree nor disagree”, “Agree”, “Strongly agree”?

And if you did, why do you still write scales, call them Likert, and ask for information using a scale that goes from “Not at all” to “A little extent” to “Some extent” to “Great extent”?  Those responses are not even remotely equidistant from each other (that is, they do not have equal intervals between response options), and equidistance is a key property of a Likert item.

And why aren’t you using a visual analog scale to get at the degree of whatever phenomenon is being measured, instead of an item for which the points on the scale are NOT equidistant? (For more information on a visual analog scale, see a brief description here or Dillman’s book.)

I sure hope Rensis Likert isn’t rolling over in his grave (he died in 1981 at the age of 78).

Extension professionals use surveys as the primary method for data gathering.  The choice of a survey is a defensible one.  However, the format of the survey, the question content, and the question construction must also be defensible.  Even though psychometric properties (including internal consistency, validity, and other statistics) may have been computed, if the basic underlying assumptions are violated, no psychometric properties will compensate for a poorly designed instrument, an instrument that is not defensible.
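
As an illustration of the internal consistency piece: the statistic most often reported for it is Cronbach’s alpha.  Here is a minimal sketch of how that number is computed, assuming responses are stored as a respondents-by-items array; the function and the data are mine, invented for illustration, and computing alpha this way (or in any package) still tells you nothing about whether the items were well written.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items array of item scores."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                         # number of items in the scale
    item_vars = responses.var(axis=0, ddof=1)      # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering three 5-point items (made-up numbers)
data = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 1],
])
print(round(cronbach_alpha(data), 2))
```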

All Extension professionals who choose to use surveys to evaluate their target audiences need to have scale development as a personal competency.  So take it upon yourself to learn about the guidelines for scale development (yes, there are books written on the subject!).

<><><><><>

**A Likert scale is the SUM of responses on several Likert items.  A Likert item is just one 4-, 5-, 6-, or 7-point single statement asking for an opinion.
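
To make that footnote concrete, here is a minimal sketch of the difference, with invented item wording and made-up responses: each Likert item yields one number per respondent, and the Likert scale score is the sum of those item responses.

```python
# One respondent's answers to four 5-point Likert items
# (1 = Strongly disagree ... 5 = Strongly agree); the items are invented examples.
likert_items = {
    "I feel confident planning an evaluation.": 4,
    "I know where to find evaluation resources.": 5,
    "I can write a defensible survey item.": 3,
    "I use evaluation findings in my programs.": 4,
}

# Each value above is a single Likert ITEM response;
# the Likert SCALE score is the sum of those item responses.
likert_scale_score = sum(likert_items.values())
print(likert_scale_score)  # 16 out of a possible 20
```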

Reference:  DeVellis, R. F. (1991).  Scale development:  Theory and applications. Newbury Park, CA: Sage Publications. Note:  there is a newer edition.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009).  Internet, mail, and mixed-mode surveys:  The tailored design method. (3rd ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Hi everyone–it is the third week in April and time for a TIMELY TOPIC!  (I was out of town last week.)

Recently, I was asked: Why should I plan my evaluation strategy in the program planning stage? Isn’t it good enough to just ask participants if they are satisfied with the program?

Good question.  This is the usual scenario:  You have something to say to your community.  The topic has research support and is timely.  You think it would make a really good new program (or a revision of a current program).  So you plan the program. 

Do you plan the evaluation at the same time? The keyed response is YES.  The usual response is something like, “Are you kidding?”  No, not kidding.  When you plan your program is the time to plan your evaluation.

Unfortunately, my experience is that many (most) faculty, when planning or revising a program, fail to think about evaluating that program at the planning stage.  Yet it is at the planning stage that you can clearly and effectively identify what you think will happen and what will indicate that your program has made a difference.  Remember, the evaluative question isn’t, “Did the participants like the program?”  The evaluative question is, “What difference did my program make in the lives of the participants, and, if possible, in the economic, environmental, and social conditions in which they live?”  That is the question you need to ask yourself when you plan your program.  It also happens to be the evaluative question for the long-term outcomes in a logic model.

If you ask this question before you implement your program, you may find that you cannot gather data to answer it.  This allows you to look at what change (or changes) you can measure.  Can you measure changes in behavior?  This answers the question, “What difference did this program make in the way the participants act in the context presented in the program?”  Or perhaps, “What change occurred in what the participants know about the program topic?”  These are the evaluative questions for the short- and intermediate-term outcomes in a logic model.  (As an aside, there are evaluative questions that can be asked at every stage of a logic model.)

By thinking about and planning for evaluation at the PROGRAM PLANNING STAGE, you avoid an evaluation that gives you data that cannot be used to support your program.  A program you can defend with good evaluation data is a program that has staying power.  You also avoid having to retrofit your evaluation to your program.  Retrofits, though often possible, may miss important data that could only be gathered by thinking of your outcomes ahead of the implementation.

Years ago (back when we beat on hollow logs), evaluations typically asked questions that measured participant satisfaction.  You probably still want to know if participants are satisfied with your program.  Satisfaction questionnaires may be necessary, but they are no longer sufficient; they do not answer the evaluative question, “What difference did this program make?”

Last week, I spoke about “how to” questions and applying them to program planning, evaluation design, evaluation implementation, data gathering, data analysis, report writing, and dissemination.  I only covered the first four of those topics.  This week, I’ll give you my favorite resources for data analysis.

This list is more difficult to assemble.  This is typically where the knowledge links break down and interest is lost.  The thinking goes something like this: I’ve conducted my program, I’ve implemented the evaluation, now what do I do?  I know my program is a good program, so why do I need to do anything else?

YOU need to understand your findings.  YOU need to be able to look at the data and rigorously defend your program to stakeholders.  Stakeholders need to get the story of your success in short, clear messages.  And YOU need to be able to use the findings in ways that will benefit your program in the long run.

Remember the list from last week?  The RESOURCES for EVALUATION list?  The one that says:

1.  Contact your evaluation specialist.

2.  Listen to stakeholders–that means including them in the planning.

3.  Read.

Good.  This list still applies, especially the read part.  Here are the readings for data analysis.

First, it is important to know that there are two kinds of data: qualitative (words) and quantitative (numbers).  (As an aside, many folks think words that describe are quantitative data; they are still words even if you give them numbers for coding purposes, so treat them like words, not numbers.)
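
A quick sketch of that aside, using invented responses: numeric codes are just a labeling convenience, so summarize coded words with counts of themes, not averages of codes.

```python
from collections import Counter

# Invented open-ended responses to "What did you gain from this program?"
responses = ["knowledge", "confidence", "knowledge",
             "new contacts", "confidence", "knowledge"]

# Giving the words numbers for coding purposes does not make them quantitative...
codes = {"knowledge": 1, "confidence": 2, "new contacts": 3}
coded = [codes[r] for r in responses]

# ...so report frequencies of the themes (words), not a mean of the codes (numbers).
print(Counter(responses))        # sensible: how often each theme appears
print(sum(coded) / len(coded))   # not meaningful: an "average theme"
```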

  • Qualitative data analysis. When I needed to learn about what to do with qualitative data, I was given Miles and Huberman’s book.  (Sadly, both authors are deceased so there will not be a forthcoming revision of their 2nd edition, although the book is still available.)

Citation: Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. (2nd ed.). Thousand Oaks, CA: Sage Publications.

Fortunately, there are newer options, which may be as good.  I will confess, I haven’t read them cover to cover at this point (although they are on my to-be-read pile).

Citation:  Saldaña, J. (2009). The coding manual for qualitative researchers. Los Angeles, CA: Sage.

Bernard, H. R. & Ryan, G. W. (2010).  Analyzing qualitative data. Los Angeles, CA: Sage.

If you don’t feel like tackling one of these resources, Ellen Taylor-Powell has written a short piece  (12 pages in PDF format) on qualitative data analysis.

There are software programs for qualitative data analysis that may be helpful (Ethnograph, NUD*IST, and others).  Most people I know prefer to code manually; even if you use a software program, you will need to do a lot of coding manually first.

  • Quantitative data analysis. Quantitative data analysis is just as complicated as qualitative data analysis.  There are numerous statistics books that explain what analyses need to be conducted.  My current favorite is a book by Neil Salkind.

Citation: Salkind, N. J. (2004).  Statistics for people who (think they) hate statistics. (2nd ed.). Thousand Oaks, CA: Sage Publications.

NOTE:  there is a 4th ed. with a 2011 copyright available. He also has a version of this text that features Excel 2007.  I like Chapter 20 (The Ten Commandments of Data Collection) a lot.  He doesn’t talk about the methodology; he talks about logistics.  Considering the logistics of data collection is really important.

Also, you need to become familiar with a quantitative data analysis software program–like SPSS, SAS, or even Excel.  One copy goes a long way–you can share the cost and share the program–as long as only one person is using it at a time.  Excel is a program that comes with Microsoft Office.  Each of these has tutorials to help you.
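
If you want a feel for what those tutorials will walk you through, here is a minimal sketch of the same kind of descriptive summary in Python with pandas (a free option alongside the packages above); the scores and the column name are made up for illustration.

```python
import pandas as pd

# Made-up responses to a single 5-point item; the column name is invented.
df = pd.DataFrame({"satisfaction": [4, 5, 3, 4, 5, 2, 4, 4, 5, 3]})

# The basic descriptive summary SPSS, SAS, or Excel would also give you:
print(df["satisfaction"].describe())                   # n, mean, std, min, quartiles, max
print(df["satisfaction"].value_counts().sort_index())  # frequency of each response option
```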

A part of my position is to build evaluation capacity.  This has many facets–individual, team, institutional.

One way I’ve always seen to build capacity is knowing where to find the answers to the “how to” questions.  Those “how to” questions apply to program planning, evaluation design, evaluation implementation, data gathering, data analysis, report writing, and dissemination.  Today I want to give you resources to build your toolbox.  These resources build capacity only if you use them.

RESOURCES for EVALUATION

1.  Contact your evaluation specialist.

2.  Listen to stakeholders–that means including them in the planning.

3.  Read.

If you don’t know what to read to give you information about a particular part of your evaluation, see resource Number 1 above.  For those of you who do not have the luxury of an evaluation specialist, I’m providing some reading resources below (some of which I’ve mentioned in previous blogs).

1.  For program planning (aka program development):  Ellen Taylor-Powell’s web site at the University of Wisconsin Extension.  Her web site is rich with information about program planning, program development, and logic models.

2.  For evaluation design and implementation:  Jody Fitzpatrick’s book.

Citation:  Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines.  (3rd ed.).  Boston: Pearson Education, Inc.

3.  For evaluation methods:  the right resource depends on the method you want to use for data gathering (these books cover data-gathering methods, not evaluation design).

  • For needs assessment, the books by Altschuld and Witkin (there are two).

(Yes, needs assessment is an evaluation activity).

Citation:  Witkin, B. R., & Altschuld, J. W. (1995).  Planning and conducting needs assessments: A practical guide. Thousand Oaks, CA:  Sage Publications.

Citation:  Altschuld, J. W., & Witkin, B. R. (2000).  From needs assessment to action: Transforming needs into solution strategies. Thousand Oaks, CA:  Sage Publications, Inc.

  • For survey design:  Don Dillman’s book.

Citation:  Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009).  Internet, mail, and mixed-mode surveys:  The tailored design method.  (3rd ed.).  Hoboken, NJ: John Wiley & Sons, Inc.

  • For focus groups:  Dick Krueger’s book.

Citation:  Krueger, R. A., & Casey, M. A. (2000).  Focus groups:  A practical guide for applied research. (3rd ed.).  Thousand Oaks, CA: Sage Publications, Inc.

  • For case study:  Robert Yin’s classic OR Bob Brinkerhoff’s book.

Citation:  Yin, R. K. (2009). Case study research: Design and methods. (4th ed.). Thousand Oaks, CA: Sage, Inc.

Citation:  Brinkerhoff, R. O. (2003).  The success case method:  Find out quickly what’s working and what’s not. San Francisco:  Berrett-Koehler Publishers, Inc.

  • For multiple case studies:  Bob Stake’s book.

Citation:  Stake, R. E. (2006).  Multiple case study analysis. New York: The Guilford Press.

Since this post is about capacity building, here is a resource for evaluation capacity building:

Hallie Preskill and Darlene Russ-Eft’s book.

Citation:  Preskill, H., & Russ-Eft, D. (2005).  Building evaluation capacity: 72 activities for teaching and training. Thousand Oaks, CA: Sage Publications.

I’ll cover reading resources for data analysis, report writing, and dissemination another time.