Sep
26

Cartoons

 

Chris Lysy draws cartoons.

Evaluation and research cartoons: http://blogs.oregonstate.edu/programevaluation/files/2014/06/evaluation-and-project-working.jpg

 

http://blogs.oregonstate.edu/programevaluation/files/2014/06/research-v.-evaluation.jpg  http://blogs.oregonstate.edu/programevaluation/files/2014/06/I-have-evidence-cartoon.png

Logic Model cartoons: http://i2.wp.com/freshspectrum.com/wp-content/uploads/2014/03/Too-complex-for-logic-and-evidence.jpg

Presentation cartoons (the “BS” cartoon from fresh spectrum).

 

Data cartoons: http://i0.wp.com/freshspectrum.com/wp-content/uploads/2013/09/wpid-Photo-Sep-27-2013-152-PM1.jpg

More Cartoons

He has offered an alternative to presenting survey data, and he has a wonderful cartoon for it:

Survey results are in. Who's ready to spend the next hour looking at poorly formatted pie charts?

He is a wonderful resource. Use him. You can contact him through his blog, fresh spectrum.

my two cents.

molly.

 

 

Apr
27
Filed Under (Methodology) by Molly on 27-04-2016

NOTE: This was written last week. I didn’t have time to post. Enjoy.

 

Methodology (aka implementation, monitoring, and delivery) is important. What good is it if you just gather the first findings that come to mind? Being rigorous here is just as important as when you are planning and modeling the program. So I’ve searched the last six years of blog posts and gathered some of them for you. They are all about the survey, a form of methodology. The survey is a methodology often used by Extension, as it is easy to use. However, organizing the survey, getting the survey back, and dealing with non-response are all problematic (another post, another time).

The previous posts are organized by date from the oldest to the most recent:

 

2010/02/10

2010/02/23

2010/04/09

2010/08/25

2012/08/09

2012/10/12

2013/03/13

2014/03/25

2014/04/15

2014/05/19

2015/06/29

2015/07/24

2015/12/07

2016/04/15

2016/04/21 (today’s post isn’t hyperlinked)

Just a few words on surveys today: A colleague asked about an evaluation survey for a recent conference. It will be an online survey, probably using the University system, Qualtrics. My colleague had jotted down a few ideas. The thought occurred to me that this book (by Ellen Taylor-Powell and Marcus Renner) would be useful. On page ten, the book asks what type of information is needed and wanted, and it lists five types of possible information:

  1. Participant reaction (some measure of satisfaction);
  2. Teaching and facilitation (strengths and weaknesses of the presenter, who may (or may not) change the next time);
  3. Outcomes (what difference/benefits/intentions did the participant experience);
  4. Future programming (other educational needs/desires); and
  5. Participant background (who is attending and who isn’t can be answered here).

Thinking through these five categories made all the difference for my colleague. (Evaluation was a new area.) I had forgotten about how useful this booklet is for people being exposed to evaluation for the first time and to surveys, as well. I recommend it.
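For readers who like to see structure made concrete, the five categories above can serve as a simple blueprint when drafting survey items. Here is a minimal Python sketch; the example questions are my own inventions for illustration, not items from the Taylor-Powell and Renner booklet:

```python
# A sketch of organizing draft survey items by the five information types.
# The placeholder questions below are invented, not taken from the booklet.
INFORMATION_TYPES = {
    "participant_reaction": "How satisfied were you with the conference overall?",
    "teaching_and_facilitation": "How clearly did the presenter explain the material?",
    "outcomes": "What will you do differently as a result of this session?",
    "future_programming": "What topics would you like to see offered next?",
    "participant_background": "What is your role in Extension?",
}

def draft_blueprint(types=INFORMATION_TYPES):
    """Return a simple question blueprint, one starter item per information type."""
    return [{"type": t, "question": q} for t, q in types.items()]

if __name__ == "__main__":
    for item in draft_blueprint():
        print(f"[{item['type']}] {item['question']}")
```

Walking a colleague through a skeleton like this, one category at a time, is essentially what the booklet does on paper.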

Jul
24
Filed Under (program evaluation) by Molly on 24-07-2015

The survey is a valuable evaluation tool, especially in the world of electronic media. It allows individuals to gather data (both qualitative and quantitative) easily and relatively inexpensively. When I want information about surveys, I turn to the 4th edition of the Dillman book (Dillman, Smyth, & Christian, 2014*). Dillman has advocated the “Tailored Design Method” for a long time. (I first became aware of his method, which he then called the “Total Design Method,” in his 1978 first edition, a thin, 320-page volume [as opposed to the 509-page fourth edition].)

Today I want to talk about the “Tailored Design Method” (originally known as the “Total Design Method”).

In the 4th edition, Dillman et al. say that “…in order to minimize total survey error, surveyors have to customize or tailor their survey designs to their particular situations.” They are quick to point out (through various examples) that the same procedures won’t work for all surveys. The “Tailored Design Method” refers to customizing survey procedures for each separate survey. It is based upon the topic of the survey and the audience being surveyed, as well as the resources available and the timeline in use. In his first edition, Dillman indicated that the TDM (then the Total Design Method) would produce a response rate of 75% for mail surveys, and that an 80%-90% response rate is possible for telephone surveys. Although I cannot easily find the same numbers in the 4th edition, I can provide an example (from the 4th edition, pages 21-22) where the response rate is 77% after a combination of mail and email contact over one month’s time. They used five contacts, both hard copy and electronic.

This is impressive. (Most surveys that I and the people I work with conduct have a response rate of less than 50%.) Dillman et al. indicate that there are three fundamental considerations in using the TDM. They are:

  1. Reducing four sources of survey error–coverage, sampling, nonresponse, and measurement;
  2. Developing a set of survey procedures that interact and work together to encourage all sample members to respond; and
  3. Taking into consideration elements such as survey sponsorship, nature of survey population, and the content of the survey questions.
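The response-rate arithmetic behind figures like the 77% in the Dillman example is simple enough to sketch in a few lines. This is a bare-bones illustration of my own; the formal response-rate definitions (e.g., AAPOR’s) distinguish many more respondent dispositions than this:

```python
def response_rate(completed, sampled, ineligible=0):
    """Simple response rate: completed surveys over eligible sample members.

    A rough sketch only; formal definitions break the sample into many
    more categories (refusals, non-contacts, unknown eligibility, etc.).
    """
    eligible = sampled - ineligible
    if eligible <= 0:
        raise ValueError("no eligible sample members")
    return completed / eligible

# Roughly the Dillman et al. mail+email example: 77 completes per 100 sampled.
rate = response_rate(completed=77, sampled=100)
print(f"{rate:.0%}")  # 77%
```

Against that benchmark, the sub-50% rates most of us see show how much room the TDM’s tailoring leaves to recover.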

The use of a social exchange perspective suggests that respondent behavior is motivated by the return that the behavior is expected to bring (and usually does). This perspective affects the decisions made regarding coverage and sampling, shapes the way questions are written and questionnaires are constructed, and determines how contacts will produce the intended sample.

If you don’t have a copy of this book (yes, there are other survey books out there) on your desk, get one! It is well worth the cost ($95.00, Wiley; $79.42, Amazon).

* Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

my two cents.

molly.

Feb
11
Filed Under (program evaluation) by Molly on 11-02-2015

I don’t know what to write for this week’s post. I turn to my bookshelf and randomly choose a book. Alas, I get distracted and don’t remember what I’m about. Mama said there would be days like this…I’ve got writer’s block (fortunately, it is not contagious). (Thank you, Calvin.) There is also an interesting blog on this very topic (here); interesting to me, at least, because I learned a new word: thrisis, a crisis of the thirties.

So, rather than trying to refocus, this is what I decided: in the past 48 hours I’ve had the following discussions that relate to evaluation and evaluative thinking.

  1. In a faculty meeting yesterday, there was a discussion of student needs that occur during the students’ matriculation in a program of study. Perhaps it should include assets in addition to needs, as students often don’t know what they don’t know and cannot identify needs.
  2. A faculty member wanted to validate and establish the reliability of a survey being constructed. Do I review the survey, provide a reference for survey development, or give a reference for validity and reliability (a measurement text)? Or all of the above?
  3. Two virtual focus group transcripts for a qualitative evaluation appear to have gone missing. How much effect will those missing focus groups have on the evaluation? Will notes taken during the sessions be sufficient?
  4. A candidate for an assistant professor position came to campus and gave a research presentation on the right hand (as opposed to the left hand). [Euphemisms for the talk content, to protect confidentiality.] Why even study the right hand when the left hand is what is being assessed?
  5. Reading over a professional development proposal dealing with what is, what could be, and what should be. Are the questions being asked really addressing the question of gaps?


Feb
04
Filed Under (program evaluation) by Molly on 04-02-2015

There has been a somewhat lengthy discussion regarding logic models on EvalTalk, an evaluation listserv sponsored by the American Evaluation Association. (Check out the listserv archives.) The discussion thread is called, in the subject line, “Logic model for the world?” It started on January 26, 2015. The most telling statement (at least to me) appeared January 30, 2015:

“The problem is not the instrument. All instruments can be mastered as a matter of technique. The problem is that logic models mistake the nature of evaluative knowledge – which is neither linear nor rational.” (Saville Kushner, EvalTalk, January 30, 2015).

The follow-up to this discussion talks about tools, specifically hammers (Bill Fear, EvalTalk, January 30, 2015). Fear says, “Logic is only a tool. It does not exist outside of the construction of the mind.”

Jan
28
Filed Under (program evaluation) by Molly on 28-01-2015

I am an evaluator, a charter member of the American Evaluation Association, and a former member of its forerunner organization, Evaluation Network. When you push my “on” button, I can talk evaluation until (a lot of metaphors could be used here); and often do. (I can also talk about other things with equal passion, though not professionally.) When my evaluation button is pushed or, for that matter, most of the time, I wonder what difference I am making. In this case, I wonder what difference I am making with this blog.

One of my readers (I have more than I ever imagined) suggested that I develop an “online” survey that I could include regularly in my posts. I thought that was a good idea. I thought I’d go one better and make it part of the blog. Then I will tabulate the findings (if there are any 🙂 ). Just so you know, I DO read all the comments; I get at least six daily. I often do not respond to them, however.

So, reader, here is the making a difference survey. The link will (should) take you to SurveyMonkey and the survey. Below, I’ve listed the questions that are in the survey.

Check all that apply.

Reading this blog makes a difference to me by:

  1. _____ Giving me a voice to follow
  2. _____ Providing interesting content
  3. _____ Providing content I can use in my work
  4. _____ Providing dependable posts
  5. _____ Providing me with information to share
  6. _____ Building my skills in evaluation
  7. _____ Showing me that there are others in the world concerned with similar things
  8. _____ Offering me good reading about an interesting weekly topic
  9. _____ Offering content of value to me
  10. _____ Other. Please specify in comment

Please complete the survey.
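If the findings do come in, tabulating a check-all-that-apply item like the one above takes only a few lines. Here is a sketch with invented responses (not real reader data); the important detail is that percentages for check-all items are computed against respondents, not against total checks:

```python
from collections import Counter

# Hypothetical raw responses: each respondent checked zero or more options
# from the "making a difference" list. These three records are invented.
responses = [
    ["Providing interesting content", "Building my skills in evaluation"],
    ["Providing interesting content"],
    ["Providing content I can use in my work", "Providing interesting content"],
]

def tabulate(checked_lists):
    """Count how many respondents checked each option.

    Returns {option: (count, share_of_respondents)}.
    """
    counts = Counter()
    for checks in checked_lists:
        counts.update(set(checks))  # set() guards against duplicate checks
    n = len(checked_lists)
    return {option: (count, count / n) for option, count in counts.items()}

for option, (count, pct) in sorted(tabulate(responses).items()):
    print(f"{option}: {count} ({pct:.0%})")
```

SurveyMonkey reports something similar automatically; a hand-rolled count like this is mostly useful when you want to slice the results your own way.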

my two cents.

molly.

 

 

Dec
29
Filed Under (Methodology, program evaluation) by Molly on 29-12-2014

I’ve talked about bias before (cognitive bias; personal and situational bias); I’ve probably talked about bias in surveys and sampling. Today I want to talk specifically about self-report bias…you know, the bias that exists when people answer questions themselves (as opposed to having their behavior observed).

First, what is self-report bias (often called self-response bias)? It is the bias that exists when people answer survey questions by themselves. Everyone has this bias; it is unavoidable. It can be seen as social desirability bias (what the respondent thinks the survey writer wants to hear); self-selection bias (a person decides to respond when invited, as opposed to not responding); and what I’m going to call a “clarity bias” (whether the respondent understands the survey content).

I’m finding more and more that the five Cs of good writing are applicable to all writing–fiction, non-fiction, scholarly, SURVEY. If the survey isn’t clear, the respondent isn’t going to be able to answer in a way that is meaningful. If the respondent cannot answer in a way that is meaningful, there will be no meaningful data. If the data are not meaningful, then the evaluation will not be able to tell you the value or merit or worth of the project being evaluated.

It is important to

  1. Pilot test the survey before sending it out to the target audience.
  2. Have naive readers read over the survey (different from pilot testing).
  3. Only ask one thing at a time in the questions.

I’m sure there are other things that would help minimize bias–let me know what other options you use.
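On point 3 (ask only one thing at a time), a naive automated screen can flag likely double-barreled questions before the pilot test. This is a rough heuristic of my own devising, not a substitute for pilot testing or naive readers:

```python
import re

def looks_double_barreled(question):
    """Flag questions that may join two asks with 'and'/'or'.

    A crude heuristic: it will miss many double-barreled items and flag
    some perfectly good ones, so treat hits as prompts for a human look.
    """
    return bool(re.search(r"\b(and|or)\b", question, flags=re.IGNORECASE))

# Invented draft items for illustration.
drafts = [
    "Was the workshop useful?",
    "Was the workshop useful and well organized?",
]
for q in drafts:
    flag = "CHECK" if looks_double_barreled(q) else "ok"
    print(f"{flag}: {q}")
```

A screen like this catches the obvious cases cheaply; the pilot test and naive readers catch the rest.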

Bottom line: Self-report bias is always part of evaluation that involves people; it can, however, be minimized.

New topic.

This is the time of year when one thinks about changes and how one will make them in the new year. Yet those changes often fall by the wayside, getting left in the dust (so to speak) of everyday life. One way I’ve kept those changes fresh is to follow how the new year presents itself. There is the calendar new year (on January 1); there is the lunar new year (this year on February 19, the year of the goat); there is the spring equinox (Norooz, the Persian new year); there is Rosh Hashanah (the Jewish new year, beginning on the evening of September 13); there is the Islamic new year, the Thai new year, the Ethiopian new year, and the list goes on. (What is your favorite new year and new year’s celebration?) By refreshing the year regularly, I can keep my “resolutions” alive all year. My wish for you is a prosperous and healthy new year. Welcome 2015.

 

my two cents.

molly.

Aug
20
Filed Under (Methodology, program evaluation) by Molly on 20-08-2014

Within the last 24 hours I’ve had two experiences that remind me of how tenuous our connection is to others.

  1. Yesterday, I was at the library to return several books and pick up a hold. As I went to check out using the digitally connected self-checkout station, I got an “out of service” message. Not thinking much of it, as I had received that message before, I moved to another machine. And got the same message! So I went to the main desk. There was a person in front of me; she was taking a lot of time. It turns out it wasn’t her; it was the internet (or intranet, I don’t know which). There was no connection! After several minutes, a paper system was implemented and I was assured that the book would be listed by that evening. That the library had a backup system impressed me; I’ve often wondered what would happen if the electricity went out for a long period of time, since the card catalogs are no longer available.
  2. Also yesterday, I received a phone call on my office land line (!), which is a rare occurrence these days. On the other end was a long-time friend and colleague. We are working feverishly on finishing an NDE volume. We have an August 22 deadline, and I will be out of town taking my youngest daughter to college. Family trumps everything. He was calling because the gardeners at his condo had cut the cable to his internet, television, and, most importantly, his wi-fi. He couldn’t Skype me (our usual form of communication)! He didn’t expect resumption of service until the next day (he went back online August 20 at 9:47am PT; he lives in the Eastern Time Zone).
Jul
23
Filed Under (Data Analysis, Methodology, program evaluation) by Molly on 23-07-2014

Many of you have numerous lists for summer reading (NY Times, NPR, Goodreads, Amazon, others…). My question is: what are you reading to further your knowledge about evaluation? Perhaps you are; perhaps you’re not. So I’m going to give you one more list 🙂 …yes, it is evaluative.

If you want something light: Regression to the Mean by Ernest R. House. It is a novel. It is about evaluation. It explains what evaluators do from a political perspective.

If you want something qualitative: Qualitative Data Analysis by Matthew B. Miles, A. Michael Huberman, and Johnny Saldana. It is the new 3rd edition, which Sage (the publisher) commissioned. A good thing, too, as both Miles and Huberman are no longer able to do a revision. My new go-to book.

If you want something on needs assessment: Bridging the Gap Between Asset/Capacity Building and Needs Assessment by James W. Altschuld. Most needs assessments start with what is lacking (i.e., needed); this book proposes that an assessment start with what is present (assets), build from there, and, in the process, meet needs.

If you want something on higher education: College (Un)bound by Jeff Selingo. The state of higher education and some viable alternatives, by a contributing editor at the Chronicle of Higher Education. Yes, it is evaluative.

Most of these I’ve mentioned before. I’ve read the above. I recommend them.


May
19
Filed Under (Data Analysis, program evaluation) by Molly on 19-05-2014

I had a comment a while back on analyzing survey data…hmm…that is a quandary, as most surveys are done online (see SurveyMonkey, among others).

If you want to reach a large audience (because your population from which you sampled is large), you will probably use an on-line survey. The on-line survey companies will tabulate the data for you. Can’t guarantee that the tabulations you get will be what you want, or will tell you want you want to know. Typically (in my experience), you can get an Excel file which can be imported into a soft ware program and you can run your own analyses, separate from the on line analyses. Read the rest of this entry »