I don't know what to write today for this week's post. I turn to my bookshelf and randomly choose a book. Alas, I get distracted and don't remember what I'm about. Mama said there would be days like this…I've got writer's block (fortunately, it is not contagious). (Thank you, Calvin.) There is also an interesting (to me at least, because I learned a new word, thrisis: a crisis of the thirties) blog on this very topic (here).

So, rather than trying to refocus, this is what I decided to do. In the past 48 hours I've had the following discussions that relate to evaluation and evaluative thinking.

  1. In a faculty meeting yesterday, there was a discussion of student needs that arise during students' matriculation in a program of study. Perhaps it should include assets in addition to needs, as students often don't know what they don't know and cannot identify their needs.
  2. A faculty member wanted to validate and establish the reliability of a survey being constructed. Do I review the survey, provide a reference for survey development, OR give a reference for validity and reliability (a measurement text)? Or all of the above?
  3. Two virtual focus group transcripts for a qualitative evaluation appear to have gone missing. How much effect will those missing transcripts have on the evaluation? Will notes taken during the sessions be sufficient?
  4. A candidate for an assistant professor position came to campus and gave a research presentation on the right hand (as opposed to the left hand). [Euphemisms for the talk content, to protect confidentiality.] Why even study the right hand when the left hand is what is being assessed?
  5. I read over a professional development proposal dealing with what is, what could be, and what should be. Are the questions being asked really addressing the question of gaps?


Having just read Harold Jarche's April 27, 2014 blog, making sense of the network era, about personal knowledge mastery (PKM), I am once again reminded about the challenge of evaluation. I am often asked, "Do you have a form I could use about…?" My nutrition and exercise questions notwithstanding (I do have notebooks of those), this makes evaluation sound like it is routine, standardized, or prepackaged rather than individualized, customized, or specific. For me, evaluation is about the exceptions to the rule; how the evaluation this week may have similarities to something I've done before (after all this time, I would hope so…), yet is so different: unique, specific.

You can't expect to find a pre-made form for your individual program (unless, of course, you are replicating a previously established program). Evaluations are unique, and the evaluation approach needs to match that unique program's specialness. Whether the evaluation uses a survey, a focus group, or an observation (or any other data-gathering approach), that approach to gathering data needs to focus on the evaluation question you want answered. You can start with "What difference did the program make?" Only you, the evaluator, can determine if you have enough resources to conduct the evaluation and answer the specific questions that follow from "What difference did the program make?" You probably do not have enough resources to determine if the program led your target audience to world peace; you might have enough resources to determine if the intention to do something different is there. You probably have enough resources to decide how to use your findings. It is so important that the findings be used; use may be how world peace is accomplished.

There are a few commonalities in data collection; those are the demographics, the data that tell you what your target audience looks like: things like gender, age, marital status, education level, SES, and probably a few other things depending on the program. Make sure that when you ask for demographic information, a "choose not to answer" option is provided in the survey. Sometimes you have to ask; observations don't always provide the answer. Be sure to include demographics in your survey, as most journals want to know what the target audience looked like.
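If it helps to see one laid out, here is a minimal sketch in Python of a demographics block that always offers a "Prefer not to answer" option. The items and response categories are illustrative assumptions only; adapt them to your program and your reporting needs.

```python
# A minimal sketch of a demographics block for a program survey.
# The items and response categories below are illustrative assumptions,
# not a standard; every item offers "Prefer not to answer".

DEMOGRAPHIC_ITEMS = {
    "What is your age group?": [
        "18-24", "25-34", "35-44", "45-54", "55-64", "65 or older",
        "Prefer not to answer",
    ],
    "What is your gender?": [
        "Woman", "Man", "Another identity", "Prefer not to answer",
    ],
    "What is the highest level of education you have completed?": [
        "Less than high school", "High school or GED", "Some college",
        "Associate or bachelor's degree", "Graduate degree",
        "Prefer not to answer",
    ],
}


def print_survey_block(items):
    """Print the demographics block so it can be pasted into a survey draft."""
    for number, (question, options) in enumerate(items.items(), start=1):
        print(f"{number}. {question}")
        for option in options:
            print(f"   [ ] {option}")
        print()


if __name__ == "__main__":
    print_survey_block(DEMOGRAPHIC_ITEMS)
```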

Readers, what makes your evaluations different, unique, special? I’d like to hear about that. Oh and while you are at it…like and share this post, if you do.


Variables.

We all know about independent variables and dependent variables. You probably even learned about moderator variables, control variables, and intervening variables. Have you heard of confounding variables? They are variables over which you have no (or very little) control. They show up as a positive or negative correlation with both the dependent and the independent variable. This spurious relationship plays havoc with analyses, program outcomes, and logic models. You see them often in social programs.
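To make the idea concrete, here is a minimal simulation sketch in Python (invented numbers, not data from any real program) in which an unmeasured confounder makes a program look related to an outcome the program never actually affected.

```python
# A minimal simulation (invented data) showing how an unmeasured confounder
# can create a spurious correlation between program participation and an
# outcome, even when the program itself has no effect at all.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

confounder = rng.normal(size=n)            # an unmeasured background factor
program = confounder + rng.normal(size=n)  # participation driven partly by the confounder
outcome = confounder + rng.normal(size=n)  # outcome driven by the confounder, NOT the program

r_spurious = np.corrcoef(program, outcome)[0, 1]
print(f"Correlation between program and outcome: {r_spurious:.2f}")  # clearly nonzero

# Holding the confounder constant makes the apparent program effect vanish.
resid_program = program - confounder
resid_outcome = outcome - confounder
r_adjusted = np.corrcoef(resid_program, resid_outcome)[0, 1]
print(f"Correlation with the confounder removed:  {r_adjusted:.2f}")  # near zero
```

The point of the sketch: measure the confounder if you possibly can, because leaving it out of the equation leaves it in the data.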

Ever encounter one? (Let me know.) Need an example? Here is one a colleague provided. There was a program developed to assist children removed from their biological mothers (even though the courts typically favor mothers) in improving the children's choices and chances of success. The program included training of key stakeholders (including judges, social services, and potential foster parents). The confounding variable that wasn't taken into account was the sudden appearance of the biological father. Judges assumed that he was no longer present (and most of the time he wasn't); social services established fostering without taking into consideration the presence of the biological father; potential foster parents were not alerted in their training to the possibility. Needless to say, the program failed. When biological fathers appeared (as often happened), the program had no control over the effect they had. Fathers had not been included in the program's equation.

Reviews.

Recently, I was asked to review a grant proposal; the award would be several hundred thousand dollars (and in today's economy, no small change). The PI's passion came through in the proposal's text. However, the PI and the PI's colleagues did some major lumping in the text that confounded the proposed outcomes. I didn't see how what was being proposed would result in what was said to happen. This is an evaluative task. I was charged with evaluating the proposal on technical merit, possibility of impact (certainly not world peace), and achievability. The proposal was lofty and well meant. The likelihood that it would accomplish what it proposed was unclear, despite the PI's passion. When reviewing a proposal, it is important to think big picture as well as small picture. Most proposals will not be sustainable after the end of funding. Will the proposed project be able to really make an impact (and I'm not talking here about world peace)?

Conversations.

I attended a meeting recently that focused on various aspects of diversity. (Now, among the confounding issues here is what one means by diversity: is it only the intersection of gender and race/ethnicity? Or something bigger, something more?) One of the presenters talked about how, just by entering into the conversation, the participants would be changed. I wondered, how can that change be measured? How would you know that a change took place? Any ideas? Let me know.

Focus groups.

A colleague asked whether a focus group could be conducted via email. I had never heard of such a thing (virtual, yes; email, no). Dick Krueger and Mary Ann Casey only talk about electronic reporting in the 4th edition of their focus group book. If I go to Wikipedia (keep in mind it is a wiki…), there is a discussion of online focus groups. Nothing is offered about email focus groups. So I ask you, readers, is it a focus group if it is conducted by email?


I'm about to start a large-scale project, one that will be primarily qualitative (it may end up being a mixed methods study; time will tell); I'm in the planning stages with the PI now. I've done qualitative studies before (how could I not, with all the time I've been an evaluator?). My go-to book for qualitative data analysis has always been Miles and Huberman (although my volume is black). This is their second edition, published in 1994. I loved that book for a variety of reasons: 1) it provided me with a road map to process qualitative data; 2) it offered the reader an appendix for choosing a qualitative software program (now out of date); and 3) it was systematic and detailed in its description of display. I was very saddened to learn that both authors had died and there would not be a third edition. Imagine my delight when I got the Sage flier for a third edition! Of course I ordered it. I also discovered that Saldana (the new third author on the third edition) has written another book on qualitative data that he cites a lot in this third edition (The Coding Manual for Qualitative Researchers), and I ordered that volume as well.

Saldana, in the third edition, talks a lot about data display, one of the three factors that qualitative researchers must keep in mind. The other two are data condensation and conclusion drawing/verification. In its description, Sage Publications says, "The Third Edition's presentation of the fundamentals of research design and data management is followed by five distinct methods of analysis: exploring, describing, ordering, explaining, and predicting." These five chapters are the heart of the book (in my thinking); that is not to say that the rest of the book doesn't have gems as well, because it does. The chapter on "Writing About Qualitative Research" and the appendix are two. The appendix (this time) is "An Annotated Bibliography of Qualitative Research Resources", which lists at least 32 different classifications of references that would be helpful to all manner of qualitative researchers. Because it is annotated, the bibliography provides a one-sentence summary of the substance of each book. A find, to be sure. Check out the third edition.
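For readers who like to tinker, here is a minimal, made-up sketch of one kind of display in the spirit of those the book describes, a case-by-theme matrix, built with the pandas library; the sites, themes, and quotes are invented for illustration only.

```python
# A minimal, made-up sketch of a case-by-theme matrix display built from
# coded interview segments (in the spirit of the displays Miles, Huberman,
# and Saldana describe; the data below are invented).
import pandas as pd

# Invented coded segments: (case, theme, condensed quote)
coded_segments = [
    ("Site A", "Barriers", "no time to attend trainings"),
    ("Site A", "Supports", "county agent checks in weekly"),
    ("Site B", "Barriers", "transportation is unreliable"),
    ("Site B", "Barriers", "child care falls through"),
    ("Site B", "Supports", "peer mentor keeps me going"),
]

df = pd.DataFrame(coded_segments, columns=["case", "theme", "segment"])

# Condense each cell to the segments filed under that case/theme pairing.
matrix = (
    df.groupby(["case", "theme"])["segment"]
      .apply(lambda s: "; ".join(s))
      .unstack(fill_value="")
)
print(matrix)
```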

I will be attending a professional development session with Mr. Saldana next week.  It will be a treat to meet him and hear what he has to say about qualitative data.  I’m taking the two books with me…I’ll write more on this topic when I return.  (I won’t be posting next week).


I have a few thoughts about causation, which I will get to in a bit…first, though, I want to give my answers to the post last week.

I had listed the following and wondered if you thought they were a design, a method, or an approach. (I had also asked which of the 5Cs was being addressed, clarity or consistency.) Here is what I think about the design/method/approach question.

Case study is a method used when gathering qualitative data, that is, words as opposed to numbers.  Bob Stake, Robert Brinkerhoff, Robert Yin, and others have written extensively on this method.

Pretest-posttest Control Group is (according to Campbell and Stanley, 1963) an example of a true experimental design if a control group is used (pp. 8 and 13). NOTE: if only one group is used (according to Campbell and Stanley, 1963), the pretest-posttest is considered a pre-experimental design (pp. 7 and 8); still, it is a design.
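As an aside, here is a minimal sketch (with invented scores) of one common way to analyze data from a pretest-posttest control group design: compare the gain scores of the two groups. It is an illustration under those assumptions, not the only defensible analysis.

```python
# A minimal sketch (invented data) of analyzing a pretest-posttest control
# group design by comparing gain scores across the two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical knowledge scores before and after a program.
treat_pre = rng.normal(50, 10, 40)
treat_post = rng.normal(58, 10, 40)      # treatment group improves
control_pre = rng.normal(50, 10, 40)
control_post = rng.normal(51, 10, 40)    # control group barely moves

treat_gain = treat_post - treat_pre
control_gain = control_post - control_pre

t, p = stats.ttest_ind(treat_gain, control_gain)
print(f"Mean gain (treatment): {treat_gain.mean():.1f}")
print(f"Mean gain (control):   {control_gain.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```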

Ethnography is a method for gathering qualitative data, often used in evaluation by those with training in anthropology. David Fetterman is one such person who has written on this topic.

Interpretive is an adjective used to describe the approach one uses in an inquiry (whether that inquiry is as an evaluator or a researcher) and can be traced back to the sociologists Max Weber and Wilhelm Dilthey in the latter part of the 19th century.

Naturalistic is an adjective used to describe an approach with a diversity of constructions and is a function of "…what the investigator does…" (Lincoln and Guba, 1985, p. 8).

Randomized Controlled Trial (RCT) is the "gold standard" of clinical trials, now being touted as the be-all and end-all of experimental design; its proponents advocate the use of RCTs in all inquiry, as they provide the investigator with evidence that X (not Y) caused Z.

Quasi-Experimental is a term used by Campbell and Stanley (1963) to denote a design where random assignment cannot be accomplished for ethical or practical reasons; random assignment is often contrasted with random selection for survey purposes.

Qualitative is an adjective used to describe an approach (as in qualitative inquiry), a type of data (as in qualitative data), or methods (as in qualitative methods). I think of qualitative as an approach which includes many methods.

Focus Group is a method of gathering qualitative data through the use of specific, structured interviews in the form of questions; it is also an adjective for defining the type of interviews or the type of study being conducted (Krueger & Casey, 2009, p. 2).

Needs Assessment is a method for determining priorities for the allocation of resources and actions to reduce the gap between the existing and the desired.

I'm sure there are other answers to the terms listed above; these are mine. I've gotten one response (from Simon Hearn at BetterEvaluation). If I get others, I'll aggregate them and share them with you. (Simon can check his answers against this post.)

Now causation, and I pose another question: If evaluation (remember the root word here is value) is determining whether a program (intervention, policy, product, etc.) made a difference, and determining the merit or worth (i.e., value) of that program (intervention, policy, product, etc.), how certain are you that your program (intervention, policy, product, etc.) caused the outcome? Chris Lysy and Jane Davidson have developed several cartoons that address this topic. They are worth the time to read.

I came across this quote from Viktor Frankl today (thanks to a colleague):

“…everything can be taken from a man (sic) but one thing: the last of the human freedoms – to choose one’s attitude in any given set of circumstances, to choose one’s own way.” Viktor Frankl (Man’s Search for Meaning – p.104)

I realized that, especially at this time of year, attitude is everything: good, bad, or indifferent, the choice is always yours.

How we choose to approach anything depends upon our previous experiences, what I call personal and situational bias. Sadler* has three classifications for these biases. He calls them value inertias (unwanted distorting influences which reflect background experience), ethical compromises (actions for which one is personally culpable), and cognitive limitations (not knowing, for whatever reason).

When we approach an evaluation, our attitude leads the way. If we are reluctant, if we are resistant, if we are excited, if we are uncertain, all these approaches reflect where we've been, what we've seen, what we have learned, what we have done (or not). We can make a choice about how to proceed.

The American Evaluation Association (AEA) has a long history of supporting difference. That value is embedded in the guiding principles. The two principles which address supporting differences are:

  • Respect for People:  Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
  • Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

AEA also has developed a Cultural Competence statement.  In it, AEA affirms that “A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation.”

Both of these documents provide a foundation for the work we do as evaluators and relate to our personal and situational biases. Considering them as we enter into the choice we make about attitude will help minimize the biases we bring to our evaluation work. The evaluative question from all this: When have your personal and situational biases interfered with your work in evaluation?

Attitude is always there–and it can change.  It is your choice.


Sadler, D. R. (1981). Intuitive data processing as a potential source of bias in naturalistic evaluations. Educational Evaluation and Policy Analysis, 3, 25-31.

Those of you who read this blog know a little about evaluation.  Perhaps you’d like to know more?  Perhaps not…

I think it would be valuable to know who was instrumental in developing the profession to the point it is today; hence, a little history. This will be fun even for those of you who don't like history: it will be a matching game. Some of these folks have been mentioned in previous posts. I'll post the keyed responses next week.

Directions: Match the name with the evaluation contribution. I've included photos so you know who is who and can put a face with a name and a contribution.

[Photos 1 through 20 of the evaluators appeared here.]


A.  Michael Scriven                1.  Empowerment Evaluation

B.  Michael Quinn Patton     2.  Mixed Methods

C.  Blaine Worthen                 3.  Naturalistic Inquiry

D.  David Fetterman              4.  CIPP

E.  Thomas Schwandt            5. Formative/Summative

F.  Jennifer Greene                  6. Needs Assessment

G.  James W. Altschuld          7.  Developmental Evaluation

H.  Ernie House                          8.  Case study

I.   Yvonna Lincoln                    9.  Fourth Generation Evaluation

J.  Egon Guba                            10. Evaluation Capacity Building

K.  Lee J. Cronbach                   11.  Evaluation Research

L.  W. James Popham               12.  Teacher Evaluation

M.  Peter H. Rossi                       13.  Logic Models

N.  Hallie Preskill                       14.  Educational Evaluation

O.  Ellen Taylor-Powell            15.  Foundations of Program Evaluation

P.  Robert Stake                           16. Toward Reform of Program Evaluation

Q.  Dan Stufflebeam                  17. Participatory Evaluation

R.  Jason Millman                      18. Evaluation and Policy

S.  Will Shadish                           19. Evaluation and epistemology

T.  Laura Leviton                        20. Evaluation Certification


There are others, more recent, who have made contributions. These represent the folks who did seminal work that built the profession, along with some more recent thinkers. Have fun.

Statistically significant is a term that is often bandied about. What does it really mean? Why is it important?

First–why is it important?

It is important because it helps the evaluator make decisions based on the data gathered.

That makes sense: evaluators have to make decisions so that the findings can be used. If there isn't some way to set the findings apart from the vast morass of information, then they are only background noise. So those of us who do analysis have learned to look at the probability level (written as a "p" value, such as p=0.05). The "p" value helps us determine whether a result is likely to be real, not necessarily whether it is important.

Second–what does that number really mean?

Probability level asks: could this (fill in the blank here) have happened by chance? If a result this size could easily occur by chance, say 95 times out of 100, then it is probably not a real change. When evaluators look at probability levels, we want really small numbers. A small number says that, if only chance were at work, a result this large would be very unlikely. So a really small number (like 0.05) means that chance alone would produce a result like this only about 5 times in 100. Loosely, you can convert a p value to a percentage by subtracting it from 100 (100 - 5 = 95, the percentage of the time that chance alone would not produce a result this large).

Convention has it that for something to be statistically significant, the p value must be no larger than 0.05. This convention comes from academic research. Smaller numbers aren't necessarily better; they just mean the result would be even less likely to occur by chance alone. There are software programs (Statxact, for example) that can compute the exact probability, so you will see numbers like 0.047.

Exploratory research (as opposed to confirmatory) may use a higher p value, such as p=0.10. This suggests that the trend is moving in the desired direction. Some evaluators let the key stakeholders determine whether the probability level (p value) is at a level that indicates importance, for example, 0.062. Some would argue that 94 times out of 100 is not that much different from 95 times out of 100.
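If you want to see the "by chance" idea in action, here is a minimal sketch (with invented scores) of a permutation test: shuffle the group labels many times and count how often chance alone produces a difference as big as the one observed. It is an illustration, not a prescription for how to analyze your data.

```python
# A minimal sketch (invented data) of what a p value describes: how often a
# difference at least this large would appear if only chance were at work.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical post-program scores for participants and a comparison group.
program = np.array([72, 68, 75, 70, 74, 69, 77, 73])
comparison = np.array([65, 70, 66, 64, 69, 63, 68, 67])
observed_diff = program.mean() - comparison.mean()

# Permutation test: shuffle the group labels many times and see how big a
# difference "chance alone" produces.
pooled = np.concatenate([program, comparison])
n_program = len(program)
n_shuffles = 10_000
count = 0
for _ in range(n_shuffles):
    rng.shuffle(pooled)
    diff = pooled[:n_program].mean() - pooled[n_program:].mean()
    if abs(diff) >= abs(observed_diff):
        count += 1

p_value = count / n_shuffles
print(f"Observed difference: {observed_diff:.1f} points")
print(f"p = {p_value:.3f} (chance alone gives a gap this big about "
      f"{p_value * 100:.1f} times in 100 shuffles)")
```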


There are three topics on which I want to touch today.

  • Focus group participant composition
  • Systems diagrams
  • Evaluation report use

In reverse order:

Evaluation use: I neglected to mention Michael Quinn Patton's book on evaluation use. Patton has advocated for use since before most everyone else. The title of his book is Utilization-Focused Evaluation. The 4th edition is available from the publisher (Sage) or from Amazon (and if I knew how to insert links to those sites, I'd do it…another lesson…).

Systems diagrams: I had the opportunity last week to work with a group of Extension faculty all involved in Watershed Education (called the WE Team). This was an exciting experience for me. I helped them visualize what their concept of the WE Team looked like using the systems tool of drawing a systems diagram. This is an exercise whereby individuals or small groups quickly draw a visualization of a system (in this case the WE Team). This is not art; it is not realistic; it is only a representation from one perspective.

This is a useful tool for evaluators because it can help evaluators see where there are opportunities for evaluation, where there are opportunities for leverage, and where there might be resistance to change (force fields). It also helps evaluators see relationships and feedback loops. I have done workshops on using systems tools in evaluating multi-site systems (of which a systems diagram is one tool) with Andrea Hegedus for the American Evaluation Association. Although this isn't the diagram the WE Team created, it is an example of what a systems diagram could look like. I used the software called Inspiration to create the WE Team diagram. Inspiration has a free 30-day download, and it is inexpensive (the download for V. 9 is $69.00).
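If Inspiration isn't handy, one scripted alternative (my suggestion here, not the tool used for the WE Team diagram) is the open-source graphviz package; below is a minimal sketch of a generic systems diagram with one feedback loop. The elements are hypothetical, and it requires both the Graphviz software and the Python graphviz library to be installed.

```python
# A minimal sketch of a generic systems diagram scripted with the open-source
# graphviz package. The elements below are hypothetical, not the WE Team's
# actual diagram. Requires the Graphviz software plus the Python package.
from graphviz import Digraph

dot = Digraph("systems_diagram_sketch")

# Hypothetical elements of a multi-site education system.
for element in ["Campus team", "County educators", "Volunteers",
                "Landowners", "Water quality outcomes"]:
    dot.node(element)

# Relationships an evaluator might look for, including one feedback loop.
dot.edge("Campus team", "County educators", label="training")
dot.edge("County educators", "Volunteers", label="recruit and support")
dot.edge("Volunteers", "Landowners", label="outreach")
dot.edge("Landowners", "Water quality outcomes", label="practice change")
dot.edge("Water quality outcomes", "Campus team", label="monitoring data (feedback)")

# Writes systems_diagram_sketch.png to the working directory.
dot.render("systems_diagram_sketch", format="png", cleanup=True)
```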

Focus group participant composition.

The composition of focus groups is very important if you want to get data that you can use AND that answers your study question(s). Focus groups tend to be homogeneous, with variations to allow for differing opinions. Since the purpose of the focus group is to elicit in-depth opinions, it is important to compose the group with similar demographics (depending on your topic) in

  • age
  • occupation
  • use of program
  • gender
  • background

Comfort and use drive the composition. More on this later.

Welcome back! For those of you new to this blog: I post every Tuesday, rain or shine…at least I have for the past 6 weeks… :) I guess that is MY new year's resolution, to write here every week, on Tuesdays…now to today's post…

What one thing are you going to learn this year about evaluation?

Something about survey design?

OR logic modeling?

OR program planning?

OR focus groups?

OR…(fill in the blank and let me know…)

A colleague of mine asked me the other day about focus groups.

Specifically, the question was, “What makes a good focus group question?”

I went to Dick Krueger and Mary Anne Casey's book (Focus Groups, 3rd ed., Sage Publications, 2000). On page 40, they have a section called "Qualities of Good Questions". These make sense. They say: Good questions…


  1. …sound conversational.
  2. …use words participants would use.
  3. …are easy to say.
  4. …are clear.
  5. …are short.
  6. …are open-ended.
  7. …are one dimensional.
  8. …include good directions.

Let’s explore these a bit.

  1. Since focus groups are a social experience (albeit, a data gathering one), conversational questions help set an informal tone.
  2. If participants don’t/can’t understand your questions (because you use jargon, technical terms, etc.), you won’t get good information. Without good information, your focus group will not help answer your inquiry.
  3. You don’t want to stumble over the words, so avoid complicated sentences.
  4. Make sure your participants know what you are asking. Long introductions can be confusing, not clarifying. Messages may be mixed and thus interpreted in different ways. All this results in information that doesn't answer your inquiry.
  5. Like being clear, short questions tend to avoid ambiguity and yield good data.
  6. To quote Dick and Mary Anne, “Open-ended questions are a hallmark of focus group interviewing.” You want an opinion. You want an explanation. You want rich description. Yes/No doesn’t give you good data.
  7. You might think synonyms add richness to questioning; instead, synonyms confuse the participant. Confused participants yield ambiguous data. Avoid synonyms; keeping questions one-dimensional keeps them clear.
  8. Participants need clear instructions when asked to do something in the focus group. "Make a list" needs to have "on the piece of paper in front of you" added. A list in the participant's head may get lost, and you lose the data.

Before you convene your focus group, make sure you have several individuals (3 to 5) who are similar to, but not included in, your target audience review the focus group questions. It is always a good idea to pilot any question you use to gather data.

Ellen Taylor-Powell (at University of Wisconsin Extension) has a Quick Tips sheet on focus groups with more information. To access it, go to: http://www.uwex.edu/ces/pdande/resources/pdf/Tipsheet5.pdf