At a loss for what to write, I once again went to one of my favorite books, Michael Scriven’s Evaluation Thesaurus. This time when I opened the volume at random, I came upon the entry for meta-evaluation. This is a worthy topic, and one that isn’t addressed often. So this week I’ll talk about meta-evaluation and quote Scriven as I do.

First, what is meta-evaluation? It is the evaluation of evaluations (and “indirectly, the evaluation of evaluators”). Scriven suggests the application of an evaluation-specific checklist or a Key Evaluation Checklist (KEC) (p. 228). Although this approach can be used to evaluate one’s own work, the results of self-evaluation are typically unreliable, which suggests (if one can afford it) engaging an independent evaluator to conduct a meta-evaluation of your evaluations.

Scriven then goes on to make the following key points:

  • Meta-evaluation is the professional imperative of evaluation;
  • Meta-evaluation can be done formatively or summatively or both; and
  • Use the KEC to generate a new evaluation OR apply the checklist to the original evaluation as a product.

He lists the parts of a KEC involved in a meta-evaluation; the process includes 13 steps (pp. 230-231).
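
To make the “checklist applied to the original evaluation as a product” idea concrete, here is a minimal sketch in Python. The checkpoint names are placeholders of my own, not Scriven’s actual KEC items (those appear on pp. 230-231); the point is only the mechanics of scoring a finished evaluation against an explicit list of criteria.

```python
# Hypothetical checklist items -- placeholders, NOT Scriven's actual KEC checkpoints.
CHECKPOINTS = [
    "Background and context described",
    "Evaluand (the program) clearly identified",
    "Consumers and stakeholders identified",
    "Values and criteria of merit made explicit",
    "Conclusions supported by the evidence presented",
]

def meta_evaluate(judgments: dict) -> dict:
    """Score an existing evaluation (treated as a product) against each checkpoint.

    `judgments` maps checkpoint names to True/False calls made by the
    meta-evaluator after reading the original evaluation report.
    """
    met = [c for c in CHECKPOINTS if judgments.get(c, False)]
    missed = [c for c in CHECKPOINTS if c not in met]
    return {"met": met, "missed": missed, "coverage": len(met) / len(CHECKPOINTS)}

# Example: judgments about a fictitious completed evaluation report.
result = meta_evaluate({
    "Background and context described": True,
    "Evaluand (the program) clearly identified": True,
    "Values and criteria of merit made explicit": False,
    "Conclusions supported by the evidence presented": True,
})
print(f"Coverage: {result['coverage']:.0%}")
print("Missed checkpoints:", result["missed"])
```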

He gives the following reference:

Stufflebeam, D. (1981). Meta-evaluation: Concepts, standards, and uses. In R. Berk (Ed.), Educational evaluation methodology: The state of the art. Baltimore, MD: Johns Hopkins.

 

This is a link to an editorial in Basic and Applied Social Psychology. It says that authors in the journal are no longer allowed to use inferential statistics.

“What?” you ask. Does that have anything to do with evaluation? Yes and no. Most of my readers will not publish here. They will publish in evaluation journals (of which there are many) or, if they are Extension professionals, in the Journal of Extension. And as far as I know, BASP is the only journal which has established an outright ban on inferential statistics. So evaluation journals and JoE still accept inferential statistics.

Still–if one journal can ban their use, can others?

What exactly does that mean–no inferential statistics? The journal editors define this ban as “…the null hypothesis significance testing procedure is invalid and thus authors would be not required to perform it.” That means that authors will remove all references to p-values, t-values, F-values, or any statements about significant differences (or the lack thereof) prior to publication. The editors go on to discuss the use of confidence intervals (no), Bayesian methods (case-by-case), and whether any inferential statistical procedures are required by the journal.
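
If a report can no longer lean on p-values, the obvious fallback is to describe the data and the size of any differences. Below is a minimal sketch (in Python, with made-up numbers) of what such a descriptives-only summary might look like: group means, standard deviations, and a standardized effect size such as Cohen’s d. This is my illustration, not a prescription from the BASP editors.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference (pooled SD) -- a descriptive effect size."""
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical program scores for two groups -- illustration only.
treatment = [4.2, 3.8, 4.5, 4.9, 4.1, 3.7]
comparison = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0]

print(f"Treatment:  M = {np.mean(treatment):.2f}, SD = {np.std(treatment, ddof=1):.2f}")
print(f"Comparison: M = {np.mean(comparison):.2f}, SD = {np.std(comparison, ddof=1):.2f}")
print(f"Cohen's d = {cohens_d(treatment, comparison):.2f}")  # magnitude, not a p-value
```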

I had a comment a while back about analyzing survey data…hmm…that is a quandary, as most surveys are done online (see SurveyMonkey, among others).

If you want to reach a large audience (because the population from which you sampled is large), you will probably use an online survey. The online survey companies will tabulate the data for you. I can’t guarantee that the tabulations you get will be what you want, or will tell you what you want to know. Typically (in my experience), you can get an Excel file which can be imported into a software program so you can run your own analyses, separate from the online ones.
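
As an illustration, here is a short Python sketch of that workflow, assuming you have exported the responses to an Excel file. The file and column names are hypothetical and would need to match whatever your survey tool actually produces.

```python
import pandas as pd  # reading .xlsx files also requires an engine such as openpyxl

# Hypothetical export -- adjust the file and column names to your survey tool's output.
responses = pd.read_excel("survey_export.xlsx")

# Frequency table for one closed-ended item.
counts = responses["q1_satisfaction"].value_counts(dropna=False)
percents = (responses["q1_satisfaction"]
            .value_counts(normalize=True, dropna=False)
            .mul(100)
            .round(1))
print(pd.DataFrame({"n": counts, "%": percents}))

# A cross-tabulation the canned online report may not give you.
print(pd.crosstab(responses["county"], responses["q1_satisfaction"]))
```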

On February 1 at 12:00 pm PT, I will be holding my annual virtual tea party. This is something I’ve been doing since February of 1993. I was in Minnesota and the winter was very cold, and although not as bleak as winter in Oregon, I was missing my friends who did not live near me. I had a tea party for the folks who were local and wanted to think that those who were not local were enjoying the tea party as well. So I created a virtual tea party. At that time, the internet was not available; all this was done in hard copy (to this day, I have one or two friends who do not have internet…sigh…). Today, the internet makes the tea party truly virtual–well, the invitation is; you have to have a real cup of tea wherever you are.
Virtual Tea Time 2014

 

How is this evaluative?  Gandhi says that only you can be the change you want to see…this is one way you can make a difference.  How will you know?

I know because my list of invitees has grown exponentially. And some of them share the invitation. They pass it on. I started with a dozen or so friends. Now my address list is over three pages long, including my daughters and the daughters of my friends (and maybe sons, too, for that matter…).

Other ways:  Design an evaluation plan; develop a logic model; create a metric/rubric.  Report the difference.  This might be a good place for using an approach other than a survey or Likert scale.  Think about it.

A colleague asks for advice on handling evaluation stories so that they don’t get brushed aside as mere anecdotes. She goes on to say of the AEA365 blog post she read, “I read the steps to take (hot tips), but don’t know enough about evaluation, perhaps, to understand how to apply them.” Her question raises an interesting topic. Much of what Extension does can be captured in stories (i.e., qualitative data) rather than in numbers (i.e., quantitative data). Dick Krueger, former Professor and Evaluation Leader (read: specialist) at the University of Minnesota, has done a lot of work in the area of using stories as evaluation. Today’s post summarizes his work.

 

At the outset, Dick asks the following question:  What is the value of stories?  He provides these three answers:

  1. Stories make information easier to remember
  2. Stories make information more believable
  3. Stories can tap into emotions.

There are all types of stories.  The type we are interested in for evaluation purposes are organizational stories.  Organizational stories can do the following things for an organization:

  1. Depict culture
  2. Promote core values
  3. Transmit and reinforce the culture
  4. Provide instruction to employees
  5. Motivate, inspire, and encourage

He suggests six common types of organizational stories:

  1. Hero stories  (someone in the organization who has done something beyond the normal range of achievement)
  2. Success stories (highlight organizational successes)
  3. Lessons learned stories (what major mistakes and triumphs teach the organization)
  4. “How it works around here” stories (highlight core organizational values reflected in actual practice)
  5. “Sacred bundle” stories (a collection of stories that together depict the culture of an organization; core philosophies)
  6. Training and orientation stories (assist new employees in understanding how the organization works)

To use stories as evaluation, the evaluator needs to consider how stories might be used. That is, do they depict how people experience the program? Do they reveal program outcomes? Do they offer insights into program processes?

You (as the evaluator) need to think about how the story fits into the evaluation design (think logic model; program planning). Ask yourself these questions: Should you use stories alone? Should you use stories that lead into other forms of inquiry? Should you use stories that augment or illustrate results from other forms of inquiry?

You need to establish criteria for stories. Rigor can be applied to stories even though the data are narrative. Criteria include the following: Is the story authentic–is it truthful? Is the story verifiable–is there a trail of evidence back to the source of the story? Is there a need to consider confidentiality? What was the original intent–the purpose behind the original telling? And finally, what does the story represent–other people or locations?

You will need a plan for capturing the stories. Ask yourself these questions: Do you need help capturing the stories? What strategy will you use for collecting the stories? How will you ensure documentation and record keeping? (Sequence the questions; write them down; note the type–set-up, conversational, etc.) You will also need a plan for analyzing and reporting the stories, as you, the evaluator, are responsible for finding meaning.
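
One low-tech way to keep that documentation and record keeping honest is to give every story a standard record that carries these criteria along with the text. The sketch below (Python; the field names are mine, not Krueger’s) shows one possible shape for such a record.

```python
from dataclasses import dataclass, field

@dataclass
class StoryRecord:
    """One captured evaluation story, with the screening criteria noted above.

    The field names are illustrative, not a standard schema.
    """
    storyteller: str             # who told it (or a code, if confidentiality applies)
    collected_by: str            # who captured it, and how (interview, written, etc.)
    date_collected: str
    text: str                    # the story itself, as close to verbatim as possible
    original_intent: str         # purpose behind the original telling
    evidence_trail: str          # how the story can be traced back to its source
    confidential: bool = False   # does reporting require masking names or places?
    represents: str = ""         # who or what the story is taken to represent
    tags: list = field(default_factory=list)  # e.g., "success", "lessons learned"

# Example record -- fictitious content, for illustration only.
story = StoryRecord(
    storyteller="Participant 07",
    collected_by="Evaluator, semi-structured interview",
    date_collected="2015-03-12",
    text="After the workshop I changed how I test soil before planting...",
    original_intent="Told in response to 'What did you do differently?'",
    evidence_trail="Audio recording and signed consent form on file",
    confidential=True,
    represents="Small-acreage producers in the county program",
    tags=["success", "behavior change"],
)
print(story.tags, story.confidential)
```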

 

Last week I spoke about thinking like an evaluator by identifying the evaluative questions that you face daily.  They are endless…Yet, doing this is hard, like any new behavior.  Remember when you first learned to ride a bicycle?  You had to practice before you got your balance.  You had to practice a lot.  The same is true for identifying the evaluative questions you face daily.

So you practice, maybe.  You try to think evaluatively.  Something happens along the way; or perhaps you don’t even get to thinking about those evaluative questions.  That something that interferes with thinking or doing is resistance.  Resistance is a Freudian concept that means that you directly or indirectly refuse to change your behavior.   You don’t look for evaluative questions.  You don’t articulate the criteria for value.  Resistance usually occurs with anxiety about a new and strange situation.  A lot of folks are anxious about evaluation–they personalize the process.  And unless it is personnel evaluation, it is never about you.  It is all about the program and the participants in that program.

What is interesting (to me at least) is that there is resistance at many different levels–the evaluator, the participant, the stakeholder  (which may include the other two levels as well).  Resistance may be active or passive.  Resistance may be overt or covert.  I’ve often viewed resistance as a 2×2 diagram.   The rows are active or passive; the columns are overt or covert.  So combining labels, resistance can be active overt, active covert, passive overt, passive covert.  Now I know this is an artificial and  socially constructed idea and may be totally erroneous.  This approach helps me to make sense out of what I see when I go to meetings to help a content team develop their program and try to introduce (or not) evaluation in the process.  I imagine you have seen examples of these types of resistance–maybe you’ve even demonstrated them.  If so, then you are in good company–most people have demonstrated all of these types of resistance.

I bring up the topic of resistance now for two reasons.

1) Because I’ve just started a 17-month-long evaluation capacity building program with 38 participants.  Some of those participants were there because they were told to be there, and they let me know their feelings about participating–what kind of resistance could they demonstrate?  Some of those participants are there because they are curious and want to know–what kind of resistance could that be?  Some of the participants just sat there–what kind of resistance could that be?  Some of the participants did anything else while sitting in the program–what kind of resistance could that be? and

2) Because I will be delivering a paper on resistance and evaluation at the annual American Evaluation Association meeting in November.  This is helping me organize my thoughts.

I would welcome your thoughts on this complex topic.

I was talking with a colleague about evaluation capacity building (see last week’s post) and the question was raised about thinking like an evaluator.  Got me thinking about the socialization of professions and what has to happen to build a critical mass of like-minded people.

Certainly, preparatory programs in academia, conducted by experts (people who have worked in the field a long time–or at least longer than you), start the process.  Professional development helps–you know, attending meetings where evaluators meet (like the upcoming AEA conference, U.S. regional affiliates [there are many and they have conferences and meetings, too], and international organizations [increasing in number–which also host conferences and professional development sessions]–let me know if you want to know more about these opportunities).  Reading new and timely literature on evaluation provides insights into the language.  AND looking at the evaluative questions in everyday activities.  Questions such as:  What criteria?  What standards?  Which values?  What worth?  Which decisions?

The socialization of evaluators happens because people who are interested in being evaluators look for the evaluation questions in everything they do.  Sometimes, looking for the evaluative question is easy and second nature–like choosing a can of corn at the grocery store; sometimes it is hard and demands collaboration–like deciding on the effectiveness of an educational program.

My recommendation is to start with easy things–corn, chocolate chip cookies, wine, tomatoes; then move to harder things with more variables–what to wear when and where, or whether to include one group or another.  The choices you make will all depend upon what criteria are set, what standards have been agreed upon, and what value you place on the outcome or what decision you make.

The socialization process is like a puzzle, something that takes a while to complete, something that is different for everyone, yet ultimately the same.  The socialization is not unlike evaluation…pieces fitting together–criteria, standards, values, decisions.  Asking the evaluative questions  is an ongoing fluid process…it will become second nature with practice.

 

I started this post the third week in July.  Technical difficulties prevented me from completing the post.  Hopefully, those difficulties are now in the past.

A colleague asked me what we can do when we can’t measure actual behavior change in our evaluations.  Most evaluations can capture knowledge change (short term outcomes); some evaluations can capture behavior change (intermediate or medium term outcomes); very few can capture condition change (long term outcomes, often called impacts–though not by me).  I thought about that.  Intention to change behavior can be measured.  Confidence (self-efficacy) to change behavior can be measured.  For me, all evaluations need to address those two points.

Paul Mazmanian, Associate Dean for Continuing Professional Development and Evaluation Studies at Virginia Commonwealth University, has studied changing practice patterns for several years.  One study, conducted in 1998, reported that “…physicians in both study and control groups were significantly more likely to change (47% vs. 7%, p < .001) if they indicated intent to change immediately following the lecture” (Academic Medicine. 1998; 73:882-886).  Mazmanian and his co-authors say in their conclusions that “successful change in practice may depend less on clinical and barriers information than on other factors that influence physicians’ performance.  To further develop the commitment-to-change strategy in measuring effects of planned change, it is important to isolate and learn the powers of individual components of the strategy as well as their collective influence on physicians’ clinical behavior.”

 

What are the implications for Extension and other complex organizations?  It makes sense to extrapolate from this information from the continuing medical education literature.  Physicians are adults; most of Extension’s audience are adults.  If behavior change is highly predictable from stated intention to change gathered “immediately following the lecture” (i.e., the continuing education program), then soliciting stated intention to change from participants in Extension programs immediately following program delivery would increase the likelihood of behavior change.  One of the outcomes Extension wants to see is change in behavior (medium term outcomes).  Measuring those behavior changes directly (through observation, or some other method) is often outside the resources available.  Measuring intended behavior changes is within the scope of Extension resources.  Using a time frame (such as 6 months) helps bound the anticipated behavior change.  In addition, intention to change can be coupled with confidence to implement the behavior change to provide the evaluator with information about the effect of the program.  The desired effect is high confidence to change and willingness to implement the change within the specified time frame.  If Extension professionals find that result, then it would be safe to say that the program is successful.
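
For what it’s worth, here is a minimal Python sketch of how that coupled measure might be tabulated from post-program survey data. The item names, scales, and cutoffs are hypothetical, not a prescribed instrument.

```python
import pandas as pd

# Hypothetical post-program survey items -- names and scales are illustrative only.
#   intend_change: "Do you intend to change this practice within 6 months?" (yes/no)
#   confidence:    "How confident are you that you can make this change?" (1-5 scale)
post = pd.DataFrame({
    "participant": ["P01", "P02", "P03", "P04", "P05", "P06"],
    "intend_change": ["yes", "yes", "no", "yes", "yes", "no"],
    "confidence": [5, 4, 2, 3, 5, 1],
})

committed = post["intend_change"].eq("yes")
confident = post["confidence"] >= 4          # treat 4-5 as "high confidence"

likely_changers = post[committed & confident]
rate = len(likely_changers) / len(post) * 100

print(f"{rate:.0f}% of participants report both intent to change within 6 months "
      f"and high confidence that they can do it.")
print(likely_changers[["participant", "confidence"]])
```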

REFERENCES

1.  Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P.  (1998).  Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change.  Academic Medicine, 73(8), 882-886.

2.  Mazmanian, P. E., & Mazmanian, P. M.  (1999).  Commitment to change: Theoretical foundations, methods, and outcomes.  The Journal of Continuing Education in the Health Professions, 19, 200-207.

3.  Mazmanian, P. E., Johnson, R. E., Zhang, A., Boothby, J., & Yeatts, E. J.  (2001).  Effects of a signature on rates of change: A randomized controlled trial involving continuing medical education and the commitment-to-change model.  Academic Medicine, 76(6), 642-646.

 

Hopefully, the technical difficulties with images are no longer a problem and I will be able to post the answers to the history quiz as well as the post I had hoped to publish last week.  So, as promised, here are the answers to the quiz I posted the week of July 5.  The keyed responses are in BOLD.

1.  Michael Quinn Patton, author of Utilization-Focused Evaluation, the new book Developmental Evaluation, and the classic Qualitative Evaluation and Research Methods.

2.  Michael Scriven is best known for his concept of formative and summative evaluation. He has also advocated that evaluation is a transdiscipline.  He is the author of the Evaluation Thesaurus.

3. Hallie Preskill is the co-author (with Darlene Russ-Eft) of Evaluation Capacity Building.

4. Robert E. Stake has advanced work in case study and is the author of the books Multiple Case Study Analysis and The Art of Case Study Research.

5. David M. Fetterman is best known for his advocacy of empowerment evaluation and the book of that name, Foundations of Empowerment Evaluation.

6. Daniel Stufflebeam developed the CIPP (context, input, process, product) model, which is discussed in the book Evaluation Models.

7. James W. Altschuld is the go-to person for needs assessment.  He is the editor of the Needs Assessment Kit (or everything you wanted to know about needs assessment and didn’t know where to find the answer).  He is also the co-author, with Belle Ruth Witkin, of two needs assessment books.

8. Jennifer C. Greene is the current President of the American Evaluation Association and the author of a book on mixed methods.

9. Ernest R. House is a leader in the work of evaluation policy and is the author of an evaluation novel, Regression to the Mean.

10. Lee J. Cronbach is a pioneer in educational evaluation and the reform of that practice.  He co-authored, with several associates, the book Toward Reform of Program Evaluation.

11.  Ellen Taylor-Powell, the former Evaluation Specialist at the University of Wisconsin Extension Service, is credited with developing the logic model later adopted by the USDA for use by the Extension Service.  To go to the UWEX site, click on the words “logic model”.

12. Yvonna Lincoln, with her husband Egon Guba (see below), co-authored the book Naturalistic Inquiry.  She is currently the co-editor (with Norman K. Denzin) of the Handbook of Qualitative Research.

13.  Egon Guba, with his wife Yvonna Lincoln, is the co-author of Fourth Generation Evaluation.

14. Blaine Worthen has championed certification for evaluators.  He, with Jody L. Fitzpatrick and James R. Sanders, co-authored Program Evaluation: Alternative Approaches and Practical Guidelines.

15.  Thomas A. Schwandt, a philosopher at heart who started as an auditor, has written extensively on evaluation ethics. He is also the co-author (with Edward S. Halpern) of Linking Auditing and Metaevaluation.

16.  Peter H. Rossi, co-author with Howard E. Freeman and Mark W. Lipsey of Evaluation: A Systematic Approach, is a pioneer in evaluation research.

17. W. James Popham, a leader in educational evaluation, authored the volume Educational Evaluation.

18. Jason Millman was a pioneer of teacher evaluation and author of Handbook of Teacher Evaluation.

19.  William R. Shadish co-edited (with Laura C. Leviton and Thomas Cook) Foundations of Program Evaluation: Theories of Practice.  His work in theories of evaluation practice earned him the Paul F. Lazarsfeld Award for Evaluation Theory from the American Evaluation Association in 1994.

20.  Laura C. Leviton, co-editor (with Will Shadish and Tom Cook–see above) of Foundations of Program Evaluation: Theories of Practice, has pioneered work in participatory evaluation.

 

 

Although I’ve only listed 20 leaders, movers and shakers, in the evaluation field, there are others who also deserve mention:  John Owen, Deb Rog, Mark Lipsey, Mel Mark, Jonathan Morell, Midge Smith, Lois-Ellin Datta, Patricia Rogers, Sue Funnell, Jean King, Laurie Stevahn, John McLaughlin, Michael Morris, Nick Smith, Don Dillman, and Karen Kirkhart, among others.

If you want to meet the movers and shakers, I suggest you attend the American Evaluation Association annual meeting.  In 2011, it will be held in Anaheim, CA, November 2-5; professional development sessions are being offered October 31, November 1 and 2, and also November 6.  More conference information can be found here.

 

 

Those of you who read this blog know a little about evaluation.  Perhaps you’d like to know more?  Perhaps not…

I think it would be valuable to know who was instrumental in developing the profession to the point it is today; hence, a little history.  This will be fun for those of you who don’t like history.  It will be a matching game.  Some of these folks have been mentioned in previous posts.  I’ll post the keyed responses next week.

Directions:  Match the name with the evaluation contribution.  I’ve included photos so you know who is who and whom you can put with a name and a contribution.

[Photos 1-20 appeared here.]

 

 

A.  Michael Scriven                1.  Empowerment Evaluation

B.  Michael Quinn Patton     2.  Mixed Methods

C.  Blaine Worthen                 3.  Naturalistic Inquiry

D.  David Fetterman              4.  CIPP

E.  Thomas Schwandt            5. Formative/Summative

F.  Jennifer Greene                  6. Needs Assessment

G.  James W. Altschuld          7.  Developmental Evaluation

H.  Ernie House                          8.  Case study

I.   Yvonna Lincoln                    9.  Fourth Generation Evaluation

J.  Egon Guba                            10. Evaluation Capacity Building

K.  Lee J. Cronbach                   11.  Evaluation Research

L.  W. James Popham               12.  Teacher Evaluation

M.  Peter H. Rossi                       13.  Logic Models

N.  Hallie Preskill                       14.  Educational Evaluation

O.  Ellen Taylor-Powell            15.  Foundations of Program Evaluation

P.  Robert Stake                           16. Toward Reform of Program Evaluation

Q.  Dan Stufflebeam                  17. Participatory Evaluation

R.  Jason Millman                      18. Evaluation and Policy

S.  Will Shadish                           19. Evaluation and epistemology

T.  Laura Leviton                        20. Evaluation Certification

 

There are others, more recent, who have made contributions.  These represent the folks who did the seminal work that built the profession, along with some more recent thinkers.  Have fun.