Over the last several months, the Local Arrangements Working Group (LAWG) has been blogging at AEA365. One of the ways evaluators can get ready for the upcoming annual conference is to read what the LAWG has to say about it. This year, the conference is once again in Denver; AEA was last in Denver in 2008. Be forewarned: Denver is the Mile High City. The air is rarefied and very dry. It may snow!

The LAWG has a lot to say about the conference, and there are a lot of links in these posts that are worth checking. For those who have not been to AEA before, or for those who have recently embraced evaluation, these posts are a wealth of information, as is the AEA website.

I will be presenting at two sessions this year–one on blogging (duh…) and one on capacity building. I see them as related. I will also (like last year) be assisting with a professional development session (number 25) with my longtime friend and colleague, Jim Altschuld. The session runs Wednesday, October 15, 2014, from 8:00am to 3:00pm MT and is titled Practical Ways to Link Needs Assessment (NA) with Asset/Capacity Building. (Just a little advertisement 🙂 ) It will draw from his new book, Bridging the Gap Between Asset/Capacity Building and Needs Assessment.

I just finished a chapter on needs assessment in the public sector–you know, that part of the work environment that provides a service to the community and receives part of its funding from county/state/federal governments. Most of you know I've been an academic for at least 31 years, maybe more (depending on when you start the clock). In that time I've worked as an internal evaluator, a program planner, and a classroom teacher, and most of what I've done has had an evaluative component. (I actually earned my doctorate in program evaluation when most people in evaluation came from other disciplines.) During that time I've worked on many programs/projects in a variety of settings (individual classroom, community, state, and country). I find it really puzzling that evaluators will take on an evaluation without having a firm foundation on which to base it. (I know I have done this; I can offer all manner of excuses, only not here.)

If I had been invited to participate in the evaluation at the beginning of the program, at the conceptualization stage, I would have asked whether a needs assessment had been done and what the outcome of that assessment was. Was there really a lack (i.e., a need), or was this "need" contrived to do something else (bring in grant money, further a career, make a stakeholder happy, etc.)?

Within the last 24 hours I’ve had two experiences that remind me of how tenuous our connection is to others.

  1. Yesterday, I was at the library to return several books and pick up a hold. As I went to check out using the digitally connected self-checkout station, I got an "out of service" message. Not thinking much of it, as I had received that message before, I moved to another machine–and got the same message! So I went to the main desk. There was a person in front of me who was taking a lot of time. Turns out it wasn't her; it was the internet (or intranet, I don't know which). There was no connection! After several minutes, a paper system was implemented, and I was assured that the book would be listed by that evening. That the library had a backup system impressed me; I've often wondered what would happen if the electricity went out for a long period of time, since the card catalogs are no longer available.
  2. Also yesterday, I received a phone call on my office land line (!), a rare occurrence these days. On the other end was a longtime friend and colleague. We are working feverishly on finishing an NDE volume. We have an August 22 deadline, and I will be out of town taking my youngest daughter to college. Family trumps everything. He was calling because the gardeners at his condo had cut the cable to his internet, television, and, most importantly, his wi-fi. He couldn't Skype me (our usual form of communication)! He didn't expect resumption of service until the next day. (He went back online August 20 at 9:47am PT–he lives in the Eastern Time Zone.)

Many of you have numerous lists for summer reading (NY Times, NPR, Goodreads, Amazon, others…). My question is: what are you reading to further your knowledge about evaluation? Perhaps you are; perhaps you're not. So I'm going to give you one more list 🙂 …yes, it is evaluative.

If you want something light: Regression to the Mean by Ernest R. House. It is a novel. It is about evaluation. It explains what evaluators do from a political perspective.

If you want something qualitative: Qualitative Data Analysis by Matthew B. Miles, A. Michael Huberman, and Johnny Saldaña. It is the new 3rd edition, which Sage (the publisher) commissioned–a good thing, too, as both Miles and Huberman are no longer able to do a revision. My new go-to book.

If you want something on needs assessment: Bridging the Gap Between Asset/Capacity Building and Needs Assessment by James W. Altschuld. Most needs assessments start with what is lacking (i.e., needed); this book proposes that an assessment start with what is present (assets), build from there, and meet needs in the process.

If you want something on higher education: College (Un)bound by Jeff Selingo. The state of higher education and some viable alternatives, by a contributing editor at the Chronicle of Higher Education. Yes, it is evaluative.

Most of these I’ve mentioned before. I’ve read the above. I recommend them.


How do you approach evaluation?

Are you the expert?

Do you work in partnership?

Are you one of the group?

To which question did you answer yes?

If you are the expert and know the most (not everything–no one knows everything [although teenagers think they do]), you are probably "doing to". Extension has been "doing to" for most of its existence.

What makes a blog engaging?

We know that blogs and blogging reach out to community members–those who have subscribed as well as those who find a post through a search engine.

Do the various forms of accessing the blog make a difference in whether the reader is engaged?

This is not a casual question, dear readers. I will be presenting a poster at the Engagement Scholarship Consortium in October (which will be held in Edmonton, Alberta), and I want to know. I want to be able to present to the various audiences at that meeting what my readers think. I realize that reading evaluation blogs may yield a response different from reading blogs about food, or sustainability, or food sustainability, or climate chaos, or parenthood, or some other topic. Still, there are enough evaluation blogs populating the internet that I think there is some interest. I think my readers are engaged.

When Elliot Eisner died in January, I wrote a post on his work as I understood it.

I may have mentioned naturalistic models; if not, I should have labeled his work as such.

Today, I’ll talk some more about those models.

These models are often described as qualitative. Egon Guba (who died in 2008) and Yvonna Lincoln (distinguished professor of higher education at Texas A&M University) discuss qualitative inquiry in their 1981 book, Effective Evaluation (it has a long subtitle; see the references below). They indicate that there are two factors on which constraints can be imposed: 1) antecedent variables and 2) possible outcomes, the first impinging on the evaluation at its outset and the second referring to the possible consequences of the program. They propose a 2×2 figure that contrasts naturalistic inquiry with scientific inquiry depending on which constraints are imposed.

Besides Eisner's model, Robert Stake and David Fetterman have developed models that fit this category. Stake's model is called responsive evaluation, and Fetterman talks about ethnographic evaluation. Stake's work is described in Standards-Based & Responsive Evaluation (2004); Fetterman has a volume called Ethnography: Step-by-Step (2010).

Stake contended that evaluators needed to be more responsive to the issues associated with the program, and that in being responsive, measurement precision would be decreased. He argued that an evaluation (and he is talking about educational program evaluation) would be responsive if it "orients more directly to program activities than to program intents; responds to audience requirements for information and if the different value perspectives present are referred to in reporting the success and failure of the program" (as cited in Popham, 1993, p. 42). He indicates that human instruments (observers and judges) will be the data-gathering approaches. Stake views responsive evaluation as "informal, flexible, subjective, and based on evolving audience concerns" (Popham, 1993, p. 43). He indicates that this approach is based on anthropology as opposed to psychology.

More on Fetterman’s ethnography model later.

References:

Fetterman, D. M. (2010). Ethnography: Step-by-step. Applied Social Research Methods Series, 17. Los Angeles, CA: Sage Publications.

Popham, W. J. (1993). Educational Evaluation (3rd ed.). Boston, MA: Allyn and Bacon.

Stake, R. E. (1975). Evaluating the arts in education: A responsive approach. Columbus, OH: Charles E. Merrill.

Stake, R. E. (2004). Standards-based & responsive evaluation. Thousand Oaks, CA: Sage Publications.

Evaluation models abound.

Models are a set of plans.

Educational evaluation models are plans that could "lead to more effective evaluations" (Popham, 1993, p. 23). Popham (1993) goes on to say that there was little or no thought given to making a new evaluation model distinct from other models, so that in sorting models into categories, the categories "fail to satisfy…without overlap" (p. 24). Popham employs five categories:

  1. Goal-attainment models;
  2. Judgmental models emphasizing inputs;
  3. Judgmental models emphasizing outputs;
  4. Decision-facilitation models; and
  5. Naturalistic models.

I want to acquaint you with one of the naturalistic models, the connoisseurship model. (I hope y'all recognize the work of Guba and Lincoln in the evolution of naturalistic models; if not, I have listed several sources below.) Elliot Eisner drew upon his experience as an art educator and used art criticism as the basis for this model. His approach relies on educational connoisseurship and educational criticism. Connoisseurship focuses on complex entities (think art, wine, chocolate); criticism is a form which "discerns the qualities of an event or object" (Popham, 1993, p. 43) and puts into words that which has been experienced. This verbal presentation allows those of us who do not possess the critic's expertise to understand what was perceived. Eisner advocated that design is all about relationships, and that relationships are necessary for the creative process and for thinking about the creative process. He proposed "that experienced experts, like critics of the arts, bring their expertise to bear on evaluating the quality of programs…" (Fitzpatrick, Sanders, & Worthen, 2004). He proposed an artistic paradigm (rather than a scientific one) as a supplement to other forms of inquiry. It is from this view that connoisseurship derives–connoisseurship is the art of appreciation, attending to the relationships between/among the qualities of the evaluand.

Elliot Eisner died January 10, 2014; he was 81. He was the Lee Jacks Professor of Education at the Stanford Graduate School of Education. He advanced the role of the arts in education and used the arts as models for improving educational practice in other fields. His contribution to evaluation was significant.

Resources:

Eisner, E. W. (1975). The perceptive eye:  Toward the reformation of educational evaluation.  Occasional Papers of the Stanford Evaluation Consortium.  Stanford, CA: Stanford University Press.

Eisner, E. W. (1991a). Taking a second look: Educational connoisseurship revisited.  In Evaluation and education: At quarter century, ed. M. W. McLaughlin & D. C. Phillips.  Chicago: University of Chicago Press.

Eisner, E. W. (1991b). The enlightened eye: Qualitative inquiry and the enhancement of educational practice. New York: Macmillan.

Eisner, E. W., & Peshkin, A. (Eds.) (1990). Qualitative inquiry in education. New York: Teachers College Press.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston, MA: Pearson.

Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches.  San Francisco: Jossey-Bass.

Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic Inquiry. Newbury Park, CA: Sage Publications.

Patton, M. Q. (2002).  Qualitative research & evaluation methods. 3rd ed. Thousand Oaks, CA: Sage Publications.

Popham, W. J. (1993). Educational evaluation. 3rd ed. Boston, MA: Allyn and Bacon.


Did you know that there are at least 11 winter holidays besides Christmas? Many of them are related to light or the return of light.

One needs evaluation tools to determine merit or worth–to evaluate a holiday's value to you. For me, any that return light are important. So for me, there is Hanukkah (and its eight candles), Solstice (and bonfires and yule logs), Christmas (and Advent wreaths with five candles), and Kwanzaa (and the kinara's seven candles). Sometimes Diwali falls late enough in November to be included (it is the ancient Hindu festival of lights and, like Hanukkah, a movable feast).

I have celebrations for Hanukkah (I have several menorahs), for Solstice (I have two special candelabra that hold 12 candles–a mini-bonfire to be sure), for Advent/Christmas (I make a wreath each year), and for Kwanzaa (a handmade kinara). And foods for each celebration as well. Because I live in a multicultural household, it is important that everyone understand that no holiday is more important than any other–all talk about returning light (literal or figurative). Sometimes the holidays overlap–Hanukkah, Solstice, and Christmas all in the same week…phew, I'm exhausted just thinking about it. Sometimes it seems hard to keep them separate–then I realize that returning the light is not separate; it is light returning. It is an evaluative task.

So welcome the newborn sun/son…the light returns. Evaluation continues.

Happy Holidays…all of them!

I’m taking two weeks holiday–will see you in the new year.

Variables.

We all know about independent variables and dependent variables. You probably even learned about moderator variables, control variables, and intervening variables. But have you heard of confounding variables? These are variables over which you have no (or very little) control. They present as a positive or negative correlation with both the dependent and the independent variable, and this spurious relationship plays havoc with analyses, program outcomes, and logic models. You see them often in social programs.

Ever encounter one? (Let me know.) Need an example? Here is one a colleague provided. There was a program developed to assist children removed from their biological mothers (even though the courts typically favor mothers), to improve the children's choices and chances of success. The program included training of key stakeholders (judges, social services, potential foster parents). The confounding variable that wasn't taken into account was the sudden appearance of the biological father. Judges assumed that he was no longer present (and most of the time he wasn't); social services established fostering without taking his presence into consideration; potential foster parents were not alerted in their training to the possibility. Needless to say, the program failed. When biological fathers appeared (as often happened), the program had no control over the effect they had. Fathers had not been included in the program's equation.
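The mechanism behind a spurious correlation can be sketched in a small simulation. Everything here is hypothetical–the variable names, the effect sizes, the confounder itself have nothing to do with the fostering program; the sketch only shows how a confounder Z that drives both X and Y makes them look correlated even though neither affects the other, and how the apparent relationship vanishes once Z is controlled for.

```python
import random
import statistics

random.seed(42)
n = 10_000

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

def residuals(dep, ctrl):
    """Remove the part of `dep` explained by the confounder `ctrl`
    (simple least-squares fit)."""
    mc, md = statistics.fmean(ctrl), statistics.fmean(dep)
    slope = (sum((c - mc) * (d - md) for c, d in zip(ctrl, dep))
             / sum((c - mc) ** 2 for c in ctrl))
    return [d - (md + slope * (c - mc)) for c, d in zip(ctrl, dep)]

# Hypothetical confounder Z drives both the "treatment" X and the
# outcome Y; X has no direct effect on Y at all.
z = [random.gauss(0, 1) for _ in range(n)]
x = [0.8 * zi + random.gauss(0, 0.5) for zi in z]
y = [0.8 * zi + random.gauss(0, 0.5) for zi in z]

raw_r = corr(x, y)                                  # looks like a strong relationship
partial_r = corr(residuals(x, z), residuals(y, z))  # near zero once Z is controlled
```

An evaluation that never measures Z (the "biological father" variable, in the example above) sees only `raw_r` and concludes the program drives the outcome.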

Reviews.

Recently, I was asked to review a grant proposal; the award would amount to several hundred thousand dollars (in today's economy, no small change). The PI's passion came through in the proposal's text. However, the PI and the PI's colleagues did some major lumping in the text that confounded the proposed outcomes. I didn't see how what was being proposed would result in what was said to happen. This is an evaluative task: I was charged with evaluating the proposal on technical merit, possibility of impact (certainly not world peace), and achievability. The proposal was lofty and meant well. The likelihood that it would accomplish what it proposed was unclear, despite the PI's passion. When reviewing a proposal, it is important to think big picture as well as small picture. Most proposals will not be sustainable after the end of funding. Will the proposed project really be able to make an impact?

Conversations.

I attended a meeting recently that focused on various aspects of diversity. (Now, among the confounding factors here is what one means by diversity: is it only the intersection of gender and race/ethnicity? Or something bigger, more?) One of the presenters talked about how, just by entering into the conversation, the participants would be changed. I wondered: how can that change be measured? How would you know that a change took place? Any ideas? Let me know.

Focus groups.

A colleague asked whether a focus group could be conducted via email. I had never heard of such a thing (virtual, yes; email, no). Dick Krueger and Mary Ann Casey talk only about electronic reporting in the 4th edition of their focus group book. If I go to Wikipedia (keep in mind it is a wiki…), there is a discussion of online focus groups, but nothing about email focus groups. So I ask you, readers: is it a focus group if it is conducted by email?