Complexity.

I predict a bright future for complexity. Have you ever considered how complicated things can get, what with one thing always leading to another?

—E. B. White, Quo Vadimus? or The Case for the Bicycle (Garden City Publishing, 1946)

 

Everything is connected.

One thing does lead to another. And connections between them can be drawn. So let’s connect the dots.

In 2002, as AEA president, I chose as the theme of the meeting Evaluation: A Systematic Process that Reforms Systems.

Evaluation doesn’t stand in isolation; it is not something that is added on at the end as an afterthought.

Many program planners see evaluation that way, unfortunately: only as an add-on, at the end.

Contrary to many people's belief, evaluators need to be included at the outset of the program. They also need to be included at each stage thereafter (program implementation, program monitoring, and program delivery; data management and data analysis; program evaluation utilization).

Systems Concepts.

Shortly after Evaluation 2002 (in 2004), the Systems Evaluation Topical Interest Group was formed.

In 2007, AEA published “Systems Concepts in Evaluation: An Expert Anthology,” edited by Bob Williams and Iraj Imam (who died as this volume was going to press). To order, see this link.

This volume does an excellent job of demonstrating how evaluation and systems concepts are related.

It connects the dots.

In that volume, Gerald Midgley writes about “the intellectual development of systems field, how this has influenced practice and importantly the relevance for all this to evaluators and evaluation”. It is the pivotal chapter (according to the editors).

While it is possible to trace ideas about holistic thinking back to the ancient Greeks, systems thinking is probably best attributed to von Bertalanffy [Bertalanffy, L. von. (1950). The theory of open systems in physics and biology. Science, 111, 23-29.]

I would argue that the complexity concept goes back at least to Alexander von Humboldt (1769–1859), well before von Bertalanffy. He was an intrepid explorer and is credited with creating modern environmentalism. Although environmentalism is a complex word, it really describes a system. With connections. And complexity.

Suffice it to say, there are no easy answers to the problems faced by professionals today. Complex. Complicated. And one thing leading to another.

my two cents.

Alternative facts.

Never, never, has evaluation been questioned with the label of “alternative facts.”

Over the years, I have been very aware that evaluation is a political activity.

I have talked about evaluation being political (here, and here, and here, and here ).

But is it? Is it just another way of making the answer be what we want it to be? A form of alternative fact?

I’ve been an evaluator for a long time. I want to make a difference to the people who experience my programs (or the programs for which I’m consulting as an external evaluator). The thought that I might be presenting “alternative facts” is troublesome.

Did I really determine that outcome? Or is the outcome bogus? Liars use statistics, you know. (This is a paraphrase of a quote that Mark Twain attributed to Benjamin Disraeli.)

Big news brings out the fakers. But are evaluation results “big news”? Or…do people not want to hear what is actually happening, what the outcome really is?

Reminds me of 1984 (George Orwell): War is peace. Freedom is slavery. Ignorance is strength (the English Socialist Party, aka INGSOC). Kevin Siers added, in his cartoon of Sean Spicer, “2017 is 1984.” Two contradictory ideas existing at the same time, both held as correct.

Statistics.

Statistics is a tool that evaluators use on a regular basis. It allows evaluators to tease apart various aspects of a program: the “who”, the “what”, the “when”, maybe even the “why”. Statistics can certainly help determine whether I made a difference. But how I see statistics may not be how you see them, interpret them, use them. Two people can look at the same set of statistics and say they do not agree. Is that an example of alternative facts?

Bias.

Everyone comes to any program with preconceived bias. You, the evaluator, want to see a difference. Preferably a statistically significant difference, not just a practically significant one (although that would be nice as well).

Even if you are dealing with qualitative data, and not with quantitative data yielding statistics, you come to the program with bias. Objectivity is not an option. You wouldn't be doing the program if you didn't think the program would make a difference. Yet the individuals who funded the program (or who in some other way receive the final report) can, and do, refuse to accept the report as it is written. It is not what they want to see/hear/read. Does that make the report alternative facts? Or is it bias speaking without acknowledging that bias?

Perhaps Kierkegaard is right.

There are only two ways you can be fooled: to believe what isn't true, or to refuse to believe what is true.

 

my two cents.

molly.

Cartoons

 

Chris Lysy draws cartoons.

Evaluation and research cartoons. http://blogs.oregonstate.edu/programevaluation/files/2014/06/evaluation-and-project-working.jpg

 

http://blogs.oregonstate.edu/programevaluation/files/2014/06/research-v.-evaluation.jpg  http://blogs.oregonstate.edu/programevaluation/files/2014/06/I-have-evidence-cartoon.png

Logic Model cartoons.   http://i2.wp.com/freshspectrum.com/wp-content/uploads/2014/03/Too-complex-for-logic-and-evidence.jpg

Presentation cartoons. (BS cartoon from freshspectrum.)

 

Data cartoons.  http://i0.wp.com/freshspectrum.com/wp-content/uploads/2013/09/wpid-Photo-Sep-27-2013-152-PM1.jpg

More Cartoons

He has offered an alternative to presenting survey data. He has a wonderful cartoon for this.

“Survey results are in. Who's ready to spend the next hour looking at poorly formatted pie charts?”

He is a wonderful resource. Use him. You can contact him through his blog, fresh spectrum.

my two cents.

molly.

 

 

Personal and situational bias are forms of cognitive bias and we all have cognitive bias.

When I did my dissertation on personal and situational biases, I was talking about cognitive bias (only I didn’t know it, then).

According to Wikipedia, the term cognitive bias was introduced in 1972 (I defended my dissertation in 1983) by two psychologists, Daniel Kahneman and Amos Tversky.

Then, I hypothesized that previous research experience (naive or sophisticated) and the effects of exposure to expected project outcomes (positive, mixed, negative) would affect the participant and make a difference in how the participant would code data. (It did.) The Sadler article, which talked about intuitive data processing, was the basis for this inquiry. Now, many years later, I am encountering cognitive bias again. Sadler says that “…some biases can be traced to a particular background knowledge…” (or possibly, I think, a lack of knowledge), “…prior experience, emotional makeup or world view”. (This, I think, falls under what Tversky and Kahneman call human judgment, which often differs from rational choice theory.)

Summer reading.

Many of you have numerous lists for summer reading (NY Times, NPR, Goodreads, Amazon, others…). My question is: what are you reading to further your knowledge about evaluation? Perhaps you are; perhaps you're not. So I'm going to give you one more list 🙂 …yes, it is evaluative.

If you want something light: Regression to the Mean by Ernest R. House. It is a novel. It is about evaluation. It explains what evaluators do from a political perspective.

If you want something qualitative: Qualitative Data Analysis by Matthew B. Miles, A. Michael Huberman, and Johnny Saldaña. It is the new 3rd edition, which Sage (the publisher) commissioned. A good thing, too, as both Miles and Huberman are no longer able to do a revision. My new go-to book.

If you want something on needs assessment: Bridging the Gap Between Asset/Capacity Building and Needs Assessment by James W. Altschuld. Most needs assessments start with what is lacking (i.e., what is needed); this book proposes that an assessment start with what is present (assets) and build from there, meeting needs in the process.

If you want something on higher education: College (Un)bound by Jeff Selingo. The state of higher education and some viable alternatives, by a contributing editor at the Chronicle of Higher Education. Yes, it is evaluative.

Most of these I’ve mentioned before. I’ve read the above. I recommend them.


In a recent post, I said that 30 was the rule of thumb, i.e., 30 cases was the minimum needed in a group to be able to run inferential statistics and get meaningful results.  How do I know, a colleague asked? (Specifically,  “Would you say more about how it takes approximately 30 cases to get meaningful results, or a good place to find out more about that?”) When I was in graduate school, a classmate (who was into theoretical mathematics) showed me the mathematical formula for this rule of thumb. Of course I don’t remember the formula, only the result. So I went looking for the explanation. I found this site. Although my classmate did go into the details of the chi-square distribution and the formula computations, this article doesn’t do that. It even provides an Excel Demo for calculating sample size and verifying this rule of thumb. I am so relieved that there is another source besides my memory.
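If you would rather see the idea than rely on my memory (or the Excel demo that site provides), here is a minimal sketch in Python. It is not that formula; it simply compares the t distribution's critical value to the normal's as the group size grows, which is one common way the 30-case rule of thumb is justified.

```python
# A minimal sketch: watch the t critical value approach the normal critical
# value as the sample size grows. Around n = 30 the two are nearly the same.
from scipy import stats

z_crit = stats.norm.ppf(0.975)  # two-tailed 95% critical value from the normal (about 1.96)

for n in (5, 10, 20, 30, 50, 100):
    t_crit = stats.t.ppf(0.975, df=n - 1)  # the t critical value for a group of size n
    print(f"n = {n:3d}   t critical = {t_crit:.3f}   gap vs. normal = {t_crit - z_crit:.3f}")

# By about n = 30 the gap has shrunk to roughly 0.08, which is one reason many
# texts treat 30 cases per group as "large enough" for the usual tests.
```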

 



Had a comment a while back on analyzing survey data…hmm…that is a quandary, as most surveys are done online (see SurveyMonkey, among others).

If you want to reach a large audience (because the population from which you sampled is large), you will probably use an online survey. The online survey companies will tabulate the data for you. I can't guarantee that the tabulations you get will be what you want, or will tell you what you want to know. Typically (in my experience), you can get an Excel file which can be imported into a software program so you can run your own analyses, separate from the online analyses.
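For those who work in Python rather than SPSS or Excel, here is a minimal sketch of that do-it-yourself step. The file name (survey_export.xlsx) and the question column (q1_satisfaction) are made up for illustration; substitute whatever your survey company actually gives you.

```python
# A minimal sketch: read the downloaded Excel export and run your own
# frequencies and percentages instead of relying on the survey site's tables.
import pandas as pd

df = pd.read_excel("survey_export.xlsx")   # hypothetical export file

counts = df["q1_satisfaction"].value_counts(dropna=False)                    # frequencies, blanks included
percents = df["q1_satisfaction"].value_counts(normalize=True, dropna=False) * 100

print(pd.DataFrame({"n": counts, "percent": percents.round(1)}))
```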

I had a topic all ready to write about then I got sick.  I’m sitting here typing this trying to remember what that topic was, to no avail. That topic went the way of much of my recent memory; another day, perhaps.

I do remember the conversation with my daughter about correlation.  She had a correlation of .3 something with a probability of 0.011 and didn’t understand what that meant.  We had a long discussion of causation and attribution and correlation.

We had another long conversation about practical v. statistical significance, something her statistics professor isn't teaching.  She isn't learning about data management in her statistics class either.  Having dealt with both qualitative and quantitative data for a long time, I have come to realize that data management needs to be understood long before you memorize the formulas for the various statistical tests you wish to perform.  What if the flood happens?
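For readers who like to see numbers, here is a minimal sketch of that correlation conversation in Python. The data are simulated (not my daughter's), so the exact values will differ from run to run, but the statistical-versus-practical point is the same.

```python
# A minimal sketch: a modest correlation can be "statistically significant"
# while explaining very little of the variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=70)              # 70 made-up cases
y = 0.3 * x + rng.normal(size=70)    # build in a modest relationship

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.3f}, r squared = {r * r:.2f}")

# A correlation of .3 means r squared is .09: the two variables share roughly
# 9% of their variance, even when the p-value is comfortably below .05.
# Whether that 9% matters to the program is the practical-significance
# question, and the p-value alone won't answer it.
```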

So today I'm telling you about data management as I understand it, because the flood did actually happen and, fortunately, I didn't lose my data.  I had a data dictionary.

Data dictionary.  The first step in data management is a data dictionary.  There are other names for this, which escape me right now…know that a hard copy of how and what you have coded is critical.  Yes, make a backup copy on your hard drive…and have a hard copy, because the flood might happen. (It is raining right now and it is Oregon in November.)

Take a hard copy of your survey, evaluation form, or qualitative data coding sheet and mark on it what every code notation you used means.  I'd show you an example of what I do, only my examples are at the office and I am home sick without my files.  No, I don't use cards any more for my data (I did once…most of you won't remember that time…), but I do make a hard copy with clear notations.  I find myself doing that with other things to make sure I code the response the same way.  That is what a data dictionary allows you to do: check yourself.
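If a concrete example would help, here is a minimal sketch of a data dictionary kept as a small Python mapping. The variable names and codes are invented; yours would mirror your own survey or coding sheet, and the printed copy is still the point.

```python
# A minimal, invented data dictionary: one entry per variable, spelling out
# exactly what each code means.
data_dictionary = {
    "id":        "Respondent identifier, assigned at data entry",
    "q1_role":   "Respondent role: 1 = participant, 2 = staff, 3 = volunteer",
    "q2_useful": "Program usefulness: 1 = not at all ... 5 = very useful, 9 = missing",
    "q3_open":   "Open-ended comment, coded with the qualitative codebook",
}

# Print it so it can go straight to the printer; the hard copy is what
# survives the flood.
for variable, meaning in data_dictionary.items():
    print(f"{variable:<10} {meaning}")
```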

Then I run a frequencies and percentages analysis.  I use SPSS (because that is what I learned first).  I look for outliers, variables that are miscoded, and system-generated missing data that isn't missing.  I look for any anomaly in the data, any human error (i.e., my error).  Then I fix it.  Then I run my analyses.
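The same first-pass checks can be sketched in Python/pandas for readers who don't use SPSS. The file name, variable name, and missing-data code here are assumptions for illustration, carried over from the invented data dictionary above.

```python
# A minimal sketch of the first pass: frequencies for every variable, then a
# check for codes the data dictionary does not allow.
import pandas as pd

df = pd.read_csv("survey_clean_me.csv")      # hypothetical raw data file

# Frequencies (with missing values shown) so outliers, miscodes, and
# not-really-missing data stand out.
for col in df.columns:
    print(f"\n{col}")
    print(df[col].value_counts(dropna=False))

# Flag rows whose codes fall outside what the data dictionary allows
# (using the hypothetical q2_useful variable, where 9 is the missing-data code).
allowed = {1, 2, 3, 4, 5, 9}
bad_rows = df[~df["q2_useful"].isin(allowed)]
print(f"\n{len(bad_rows)} rows with out-of-range codes in q2_useful")
```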

There are probably more steps than I’ve covered today.  These are the first steps that absolutely must be done BEFORE you do any analyses.  Then you have a good chance of keeping your data safe.

There has been quite a bit written about data visualization, a topic important to evaluators who want their findings used.  Michael Patton talks about evaluation use in his 4th edition of Utilization-Focused Evaluation.  He doesn't, however, list data visualization in the index, so he may talk about it somewhere; it isn't obvious.

The current issue of New Directions for Evaluation is devoted to data visualization, and it is the first part (implying, I hope, at least a part 2).  Tarek Azzam and Stephanie Evergreen are the guest editors.  This volume (the first on this topic in 15 years) sets the stage (chapter 1) and talks about quantitative and qualitative data visualization.  The last chapter talks about the tools that are available to the evaluator; there are many, and they are varied.  I cannot do them justice in this space; read about them in the NDE volume.  (If you are an AEA member, the volume is available online.)

freshspectrum, a blog by Chris Lysy, talks about INTERACTIVE data visualization with illustrations.

Stephanie Evergreen, the co-guest editor of the above NDE issue, also blogs, and in her October 2 post she talks about “Design for Federal Proposals (aka Design in a Black & White Environment)”.  More on data visualization.

The data visualizer that made the largest impact on me was Hans Rosling in his TED talks.  Certainly the software he uses makes the images engaging.  If he didn’t understand his data the way he does, he wouldn’t be able to do what he does.

Data visualization is everywhere.  There will be multiple sessions at the AEA conference next week.  If you can, check them out–get there early as they will fill quickly.

When I did my dissertation, there were several soon-to-be colleagues who were irate that I did a quantitative study on qualitative data.  (I was looking at cognitive bias, actually.)  I needed to reduce my qualitative data so that I could represent it quantitatively.  This approach to coding is called magnitude coding.  Magnitude coding is just one of the 25 first cycle coding methods that Johnny Saldaña (2013) talks about in his book, The Coding Manual for Qualitative Researchers (see pages 72-77).  (If you want to order it, which I recommend, go to Sage Publications.)  Miles and Huberman (1994) also address this topic.

So what is magnitude coding? It is a form of coding that “consists of and adds a supplemental alphanumeric or symbolic code or sub-code to an existing coded datum…to indicate its intensity, frequency, direction, presence, or evaluative content” (Saldaña, 2013, pp. 72-73).  It could also indicate the absence of the characteristic of interest.  Magnitude codes can be qualitative or quantitative and/or nominal.  These codes enhance the description of your data.

Saldaña provides multiple examples that cover many different approaches.  Magnitude codes can be words or abbreviations that suggest intensity or frequency, or codes can be numbers which do the same thing.  These codes can suggest direction (i.e., positive or negative, using arrows).  They can also use symbols like a plus (+) or a minus (−), or other symbols indicating the presence or absence of a characteristic.  One important factor for evaluators to consider is that magnitude coding can also suggest evaluative content, that is, did the content demonstrate merit, worth, value?  (Saldaña also talks about evaluation coding; see page 119.)

Saldaña gives an example of analysis showing a summary table.  Computer assisted qualitative data analysis software (CAQDAS) and Microsoft Excel can also provide summaries.  He notes that “it is very difficult to sidestep quantitative representation and suggestions of magnitude in any qualitative research” (Saldaña, 2013, p. 77).  We use quantitative phrases all the time: most, often, extremely, frequently, seldom, few, etc.  These words tend “to enhance the ‘approximate accuracy’ and texture of the prose” (Saldaña, 2013, p. 77).
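To make the idea concrete, here is a minimal sketch of magnitude coding and the kind of summary table just described, using invented interview codes and a direction sub-code (+, −, 0). A CAQDAS package or Excel would produce the same sort of table; this is only an illustration, not Saldaña's own example.

```python
# A minimal sketch of magnitude coding: each coded datum keeps its qualitative
# code and gains a magnitude sub-code for direction. All data are invented.
import pandas as pd

coded_data = pd.DataFrame(
    {
        "case":      ["P01", "P01", "P02", "P03", "P03"],
        "code":      ["workload", "support", "workload", "support", "outcomes"],
        "direction": ["-", "+", "-", "+", "0"],   # "+" positive, "-" negative, "0" neutral
    }
)

# A summary table: how often each code appears with each direction.
summary = pd.crosstab(coded_data["code"], coded_data["direction"])
print(summary)
```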

Making your qualitative data quantitative is only one approach to coding, an approach that is sometimes very necessary.