Intention to change

I’ve talked before about intention to change and how stating that intention out loud, and to others, makes a difference. This piece of advice is showing up in some unexpected places, and here. If you state your goal, there is a higher likelihood that you will be successful. That makes sense. If you confess publicly (or even to a priest), you are more likely to do the penance and make the change. What I find interesting is that this is so clearly evaluation: What difference did the intervention make? How does that difference relate to the merit, worth, and value of the program?

Lent started March 5. That is 40 days of discipline–giving up or taking on. That is a program. What difference will it make? Can you go 40 days without chocolate?

New Topic:

I got my last comment in November 2013. I miss comments. Sure, most of them were “check out this other web site.” Still, there were some substantive comments, and I’ve read and archived those. My IT person doesn’t know what the impetus for this sudden stop was. Perhaps Google changed its search engine optimization code and my key words are no longer near the top. So I don’t know if what I write is meaningful, is worthwhile, or is resonating with you, the reader, in any way. I have been blogging now for over four years…this is no easy task. Comments and/or questions would be helpful and would give me some direction.

New Topic:

Chris Lysy draws cartoons in his blog. This week he blogged about logic models. He included only logic models that are drawn with boxes. What if the logic model is circular? How would it be different? Can it still lead to outcomes? Non-linear thinkers/cultures would say so. How would you draw it? Given that mind mapping may also be a model, how do the two relate?

Have a nice weekend. The sun is shining again!

I’ve been reading about models lately; models that have been developed, models that are being used today, models that may be used tomorrow.

Webster’s Seventh New Collegiate Dictionary has almost two inches on models. I think my favorite definition is the fifth one: an example for imitation or emulation. It seems to be the most relevant to evaluation. What do evaluators do if not imitate or emulate others?

To that end, I went looking for evaluation models. Jim Popham’s book has a chapter on models (Chapter 2, “Alternative approaches to educational evaluation”). Fitzpatrick, Sanders, and Worthen have numerous chapters on “approaches” (what Popham calls models). (I wonder if this is just semantics?)

Models have appeared in other blogs (not called models, though). In Life in Perpetual Beta, Harold Jarche provides this view of how organizations have evolved and calls them forms. (The image below is credited to David Ronfeldt.)

[Image: David Ronfeldt’s TIMN framework (tribes, institutions, markets, networks)]

(Looks like a model to me. I wonder what evaluators could make of this.)

The reading is interesting because it is flexible; it approaches the “if it works, use it” paradigm, the one I use regularly.

I’ll just list the models Popham uses and discuss them over the next several weeks. (FYI: both Popham and Fitzpatrick et al. talk about the overlap of models.) Why is a discussion of models important, you may ask? I’ll quote Stufflebeam: “The study of alternative evaluation approaches is important for professionalizing program evaluation and for its scientific advancement and operation” (2001, p. 9).

Popham lists the following models:

  • Goal-Attainment models
  • Judgmental models emphasizing inputs
  • Judgmental models emphasizing outputs
  • Decision-Facilitation models
  • Naturalistic models

Popham does say that the model classification could have been done a different way. You will see that in the Fitzpatrick, Sanders, and Worthen volume, where they talk about the following approaches:

  • Expertise-oriented approaches
  • Consumer-oriented approaches
  • Program-oriented approaches
  • Decision-oriented approaches
  • Participant-oriented approaches

They have a nice table that does a comparative analysis of alternative approaches (Table 10.1, pp. 249-251).

Interesting reading.

References

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

Popham, W. J. (1993). Educational Evaluation (3rd ed.). Boston, MA: Allyn and Bacon.

Stufflebeam, D. L. (2001). Evaluation models. New Directions for Evaluation, 89. San Francisco, CA: Jossey-Bass.

When Elliot Eisner died in January, I wrote a post on his work as I understood it.

I may have mentioned naturalistic models in that post; if not, I should have labeled his work as such.

Today, I’ll talk some more about those models.

These models are often described as qualitative. Egon Guba (who died in 2008) and Yvonna Lincoln (distinguished professor of higher education at Texas A&M University) talk about qualitative inquiry in their 1981 book, Effective Evaluation (it has a long subtitle; see the full citation below). They indicate that there are two factors on which constraints can be imposed: 1) antecedent variables and 2) possible outcomes, with the first impinging on the evaluation at its outset and the second referring to the possible consequences of the program. They propose a 2×2 figure that contrasts naturalistic inquiry with scientific inquiry depending on which constraints are imposed.

Besides Eisner’s model, Robert Stake and David Fetterman have developed models that fit this category. Stake’s model is called responsive evaluation, and Fetterman talks about ethnographic evaluation. Stake’s work is described in Standards-Based & Responsive Evaluation (2004); Fetterman has a volume called Ethnography: Step-by-Step (2010).

Stake contended that evaluators needed to be more responsive to the issues associated with the program, and that in being responsive, measurement precision would be decreased. He argued that an evaluation (and he is talking about educational program evaluation) would be responsive if it “orients more directly to program activities than to program intents; responds to audience requirements for information and if the different value perspectives present are referred to in reporting the success and failure of the program” (as cited in Popham, 1993, p. 42). He indicates that human instruments (observers and judges) will be the data-gathering approaches. Stake views responsive evaluation as “informal, flexible, subjective, and based on evolving audience concerns” (Popham, 1993, p. 43). He indicates that this approach is based on anthropology as opposed to psychology.

More on Fetterman’s ethnography model later.

References:

Fetterman, D. M. (2010). Ethnography: Step-by-step. Applied Social Research Methods Series, 17. Los Angeles, CA: Sage Publications.

Popham, W. J. (1993). Educational Evaluation (3rd ed.). Boston, MA: Allyn and Bacon.

Stake, R. E. (1975). Evaluating the arts in education: a responsive approach. Columbus, OH: Charles E. Merrill.

Stake, R. E. (2004). Standards-based & responsive evaluation. Thousand Oaks, CA: Sage Publications.

Evaluation models abound.

Models are sets of plans.

Educational evaluation models are plans that could “lead to more effective evaluations” (Popham, 1993, p. 23). Popham (1993) goes on to say that little or no thought was given to making any new evaluation model distinct from other models, so that in sorting models into categories, the categories “fail to satisfy…without overlap” (p. 24). Popham employs five categories:

  1. Goal-attainment models;
  2. Judgmental models emphasizing inputs;
  3. Judgmental models emphasizing outputs;
  4. Decision-facilitation models; and
  5. Naturalistic models

I want to acquaint you with one of the naturalistic models, the connoisseurship model. (I hope y’all recognize the work of Guba and Lincoln in the evolution of naturalistic models; if not, I have listed several sources below.) Elliot Eisner drew upon his experience as an art educator and used art criticism as the basis for this model. His approach relies on educational connoisseurship and educational criticism. Connoisseurship focuses on complex entities (think art, wine, chocolate); criticism is a form which “discerns the qualities of an event or object” (Popham, 1993, p. 43) and puts into words that which has been experienced. This verbal presentation allows those of us who do not possess the critic’s expertise to understand what was perceived. Eisner advocated that design is all about relationships, and that relationships are necessary for the creative process and for thinking about the creative process. He proposed “that experienced experts, like critics of the arts, bring their expertise to bear on evaluating the quality of programs…” (Fitzpatrick, Sanders, & Worthen, 2004). He proposed an artistic paradigm (rather than a scientific one) as a supplement to other forms of inquiry. It is from this view that connoisseurship derives: connoisseurship is the art of appreciation, the relationships between/among the qualities of the evaluand.

Elliot Eisner died January 10, 2014; he was 81. He was the Lee Jacks Professor of Education at Stanford Graduate School of Education.  He advanced the role of arts in education and used arts as models for improving educational practice in other fields.  His contribution to evaluation was significant.

Resources:

Eisner, E. W. (1975). The perceptive eye:  Toward the reformation of educational evaluation.  Occasional Papers of the Stanford Evaluation Consortium.  Stanford, CA: Stanford University Press.

Eisner, E. W. (1991a). Taking a second look: Educational connoisseurship revisited.  In Evaluation and education: At quarter century, ed. M. W. McLaughlin & D. C. Phillips.  Chicago: University of Chicago Press.

Eisner, E. W. (1991b). The enlightened eye: Qualitative inquiry and the enhancement of educational practice. New York: Macmillan.

Eisner, E. W., & Peshkin, A. (Eds.) (1990). Qualitative inquiry in education. New York: Teachers College Press.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston, MA: Pearson.

Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches.  San Francisco: Jossey-Bass.

Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic Inquiry. Newbury Park, CA: Sage Publications.

Patton, M. Q. (2002).  Qualitative research & evaluation methods. 3rd ed. Thousand Oaks, CA: Sage Publications.

Popham, W. J. (1993). Educational evaluation. 3rd ed. Boston, MA: Allyn and Bacon.

 


 

On January 22, 23, and 24, a group of would-be evaluators will gather in Tucson, AZ, at the Westin La Paloma Resort.

Even though Oregon State is a co-sponsor of this program, Oregon in winter (i.e., now) is not the land of sunshine, and since Vitamin D is critical for everyone’s well-being, I chose Tucson for our capstone event. Our able support person, Gretchen, chose the La Paloma, a wonderful site on the north side of Tucson. So even if it is not warm, it will be sunny. Why, we might even get to go swimming; if not swimming, certainly hiking. There are a lot of places to hike around Tucson…in Sabino Canyon; near/around A Mountain (first-year U of A students get to whitewash or paint the A); Saguaro National Park; or maybe in one of the five (yes, five) mountain ranges surrounding Tucson. (If you are interested in other hikes, look here.)

We will be meeting Tuesday afternoon, all day Wednesday, and Thursday morning. Participants have spent the past 17 months participating in and learning about evaluation. They have identified a project/program (either big P or little p), and they have participated in a series of modules, webinars, and office hours on topics used every day in evaluating a project or program. We anticipate over 20 attendees from the cohorts. We have participants from five Extension program areas (Nutrition, Agriculture, Natural Resources, Family and Community Science, and 4-H), from ten western states (Oregon, Washington, California, Utah, Colorado, Idaho, New Mexico, Arizona, Wyoming, and Hawaii), and with all levels of familiarity with evaluation (beginner to expert).

I’m the evaluation specialist in charge of the program content (big P); Jim Lindstrom (formerly of Washington State, currently University of Idaho) has been the professional development and technical specialist; and Gretchen Cuevas (OSU) has been our wonderful support person. I’m using Patton’s developmental evaluation model to evaluate this program. Although some things were set at the beginning of the program (the topics for the modules and webinars, for example), other things were changed depending on feedback (readings, office hours). Although we expect that participants will grow their knowledge of evaluation, we do not know what specific and measurable outcomes will result (hence, developmental). We hope to run the program (available to Extension faculty in the Western Region) again in September 2013. Our goal is to build evaluation capacity in the Western Extension Region. Did we?

As promised last week, this week’s post is (briefly) about implementing, monitoring, and delivering an evaluation.

Implementation. To implement an evaluation, one needs to have a plan, often called a protocol. Typically, this is a step-by-step list of what you will do to present the program to your target audience. In presenting your program to your target audience, you will also include a step-by-step list of how you will gather evaluation information (data). What is important about the plan is that it be specific enough to be replicated by other interested parties. When a plan is developed, there is typically a specific design behind each type of data to be collected. For example, specific knowledge change is often measured with a pretest-posttest design; behavioral change is often measured with a repeated-measures design. Campbell and Stanley, in their classic book, Experimental and quasi-experimental designs for research, present a wealth of information about designs that is useful in evaluation (as well as research).
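
To make the design idea concrete, here is a minimal sketch (not from Campbell and Stanley) of how a simple one-group pretest-posttest comparison might be analyzed once the data are in hand. The scores are invented, and the use of a paired t-test is my assumption for illustration; a real protocol would specify the instrument, the sample, and the analysis up front.

    # A minimal pretest-posttest sketch; all scores are invented for illustration.
    from statistics import mean
    from scipy import stats  # assumes SciPy is installed

    pretest  = [12, 15, 11, 14, 13, 10, 16, 12]   # knowledge scores before the program
    posttest = [16, 18, 13, 17, 15, 14, 19, 15]   # the same participants afterward

    # Average gain from pre to post.
    gains = [post - pre for pre, post in zip(pretest, posttest)]
    print(f"Mean gain: {mean(gains):.2f} points")

    # A paired t-test asks whether the change is larger than chance alone would suggest.
    t_stat, p_value = stats.ttest_rel(posttest, pretest)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")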

There are numerous designs that will help you develop the plan for the implementation of the program AND the evaluation.

Monitoring. Simply put, monitoring is watching to see if what you said would happen actually does. Some people think of monitoring as being watched; it may feel that way, but it is being watched with a plan. When I first finished my doctorate and became an evaluator, I conceptualized evaluation simply as process, progress, product. This helped stakeholders understand what evaluation was all about. The monitoring part of evaluation was answered when I asked, “Are we making progress? Are we where we said we would be at the time we said we would be there?” This is really important because, as Jonny Morell points out in his book, evaluations don’t always go as planned, even with the best monitoring system.
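
To make the progress question concrete, here is a small sketch of a monitoring check: compare where the plan said you would be with where you actually are. Every milestone and date below is hypothetical.

    # A hypothetical monitoring check: planned milestones versus what actually happened.
    from datetime import date

    planned = {
        "recruit participants": date(2014, 1, 15),
        "deliver workshop":     date(2014, 2, 1),
        "collect post-surveys": date(2014, 3, 1),
    }
    actual = {
        "recruit participants": date(2014, 1, 20),
        "deliver workshop":     date(2014, 2, 1),
        # post-surveys not yet collected
    }

    today = date(2014, 3, 7)
    for milestone, plan_date in planned.items():
        done = actual.get(milestone)
        if done is None:
            status = "overdue" if today > plan_date else "not yet due"
        elif done <= plan_date:
            status = "done on time"
        else:
            status = f"done {(done - plan_date).days} days late"
        print(f"{milestone}: planned {plan_date}, {status}")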

Delivering. Delivering is the nuts and bolts of what you are going to do. It addresses the who, what, where, when, how, and why of the implementation plan. All of these questions interrelate; for example, if you do not identify who will conduct the evaluation, the evaluation is often “squeezed in” at the end of a program because it is required.
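
One way to keep those questions from being answered at the last minute is to write them down as part of the plan itself. Here is a hypothetical sketch of such a record; every entry is invented and would be filled in for your own program.

    # A hypothetical delivery checklist answering who/what/where/when/how/why up front.
    delivery_plan = {
        "who":   "program assistant administers surveys; evaluator analyzes the data",
        "what":  "pretest-posttest knowledge survey plus follow-up interviews",
        "where": "at each workshop site",
        "when":  "first and last sessions; interviews one month after the program",
        "how":   "paper surveys entered into a spreadsheet; interviews recorded and summarized",
        "why":   "to document knowledge change and how participants use what they learned",
    }
    for question, answer in delivery_plan.items():
        print(f"{question}: {answer}")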

In addition to answering these questions when delivering the evaluation, one thinks about the models, or evaluation approaches. Stufflebeam, Madaus, and Kellaghan (in Evaluation models: Viewpoints on educational and human services evaluation) discuss various approaches and state that the approach used by the evaluator will provide a framework for conducting an evaluation as well as for presenting and using the evaluation results.