On February 1 at 12:00 p.m. PT, I will be holding my annual virtual tea party.  This is something I’ve been doing since February of 1993.  I was in Minnesota, and the winter was very cold; although not as bleak as winter in Oregon, I was missing my friends who did not live near me.  I had a tea party for the folks who were local and wanted to think that those who were not local were enjoying the tea party as well.  So I created a virtual tea party.  At that time, the internet was not widely available; all of this was done in hard copy (to this day, I have one or two friends who do not have internet…sigh…).  Today, the internet makes the tea party truly virtual.  Well, the invitation is virtual; you have to have a real cup of tea wherever you are.
[Image: Virtual Tea Time 2014]

How is this evaluative?  Gandhi says that only you can be the change you want to see…this is one way you can make a difference.  How will you know?

I know because my list of invitees has grown exponentially.  Some of them share the invitation; they pass it on.  I started with a dozen or so friends.  Now my address list is over three pages long, including my daughters and the daughters of my friends (and maybe sons, too, for that matter…).

Other ways to know:  Design an evaluation plan; develop a logic model; create a metric or rubric.  Report the difference.  This might be a good place to use an approach other than a survey or Likert scale.  Think about it.

Evaluation models abound.

A model is a set of plans.

Educational evaluation models are plans that could “lead to more effective evaluations” (Popham, 1993, p. 23).  Popham (1993) goes on to say that little or no thought was given to making any new evaluation model distinct from the others, so that sorting models into categories yields categories that “fail to satisfy…without overlap” (p. 24).  Popham employs five categories:

  1. Goal-attainment models;
  2. Judgmental models emphasizing inputs;
  3. Judgmental models emphasizing outputs;
  4. Decision-facilitation models; and
  5. Naturalistic models.

I want to acquaint you with one of the naturalistic models, the connoisseurship model.  (I hope y’all recognize the work of Guba and Lincoln in the evolution of naturalistic models; if not, I have listed several sources below.)  Elliot Eisner drew upon his experience as an art educator and used art criticism as the basis for this model.  His approach relies on educational connoisseurship and educational criticism.  Connoisseurship focuses on complex entities (think art, wine, chocolate); criticism is a form which “discerns the qualities of an event or object” (Popham, 1993, p. 43) and puts into words that which has been experienced.  This verbal presentation allows those of us who do not possess the critic’s expertise to understand what was perceived.  Eisner held that design is all about relationships, and that relationships are necessary both for the creative process and for thinking about the creative process.  He proposed “that experienced experts, like critics of the arts, bring their expertise to bear on evaluating the quality of programs…” (Fitzpatrick, Sanders, & Worthen, 2004).  He proposed an artistic paradigm (rather than a scientific one) as a supplement to other forms of inquiry.  It is from this view that connoisseurship derives: connoisseurship is the art of appreciation, the relationships between and among the qualities of the evaluand.

Elliot Eisner died January 10, 2014; he was 81. He was the Lee Jacks Professor of Education at Stanford Graduate School of Education.  He advanced the role of arts in education and used arts as models for improving educational practice in other fields.  His contribution to evaluation was significant.

Resources:

Eisner, E. W. (1975). The perceptive eye: Toward the reformation of educational evaluation. Occasional Papers of the Stanford Evaluation Consortium. Stanford, CA: Stanford University Press.

Eisner, E. W. (1991a). Taking a second look: Educational connoisseurship revisited. In M. W. McLaughlin & D. C. Phillips (Eds.), Evaluation and education: At quarter century. Chicago: University of Chicago Press.

Eisner, E. W. (1991b). The enlightened eye: Qualitative inquiry and the enhancement of educational practice. New York: Macmillan.

Eisner, E. W., & Peshkin, A. (Eds.) (1990). Qualitative inquiry in education. New York: Teachers College Press.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston, MA: Pearson.

Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches. San Francisco: Jossey-Bass.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications.

Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications.

Popham, W. J. (1993). Educational evaluation (3rd ed.). Boston, MA: Allyn and Bacon.

 


 

A new calendar year…2014…where has the time gone?

It might be useful for those of you who are interested in evaluation to review a list of evaluation conferences offered around the world this year.  Sarah Baughman cited the list offered by Better Evaluation.  You could spend all year coming and going.  What a way to see the world.  That certainly has evaluative opportunities.

A beginning question for 2014.

One question I am asked by people new to evaluation is, “How often do I need to conduct an evaluation?  How much budget/time/resources do I allot for evaluation?”  My evaluative answer is, “It all depends.”

For new faculty who want to know if their programs are working (not impact, just working), identify your most important program and evaluate it.  Next year, do another program, and so on.  If you want to know impact, you will need to wait at least three years, maybe five, although some programs could show impact after one year.  (We are not talking world peace here, only whether the program made a difference; does it have merit, value, worth?)

For executive directors, my “it depends” answer is still important.  They have different needs than program planners and those who implement programs.  My friend Stan says executive directors need to know:  What is the problem?  What caused the problem?  How do I solve the problem (in two sentences or less)?  Executive directors don’t have a lot of time to devote to evaluation; yet they need to know.

For people who are continuing a program of long standing, I would suggest you answer the question that is most pressing.  (It all depends…)

I think these categories mostly cover everybody.  If you can think of other situations, let me know.  I’ll tell you what I think.