Filed Under (Methodology) by Molly on 27-04-2016

NOTE: This was written last week. I didn’t have time to post. Enjoy.

 

Methodology, aka implementation, monitoring, and delivery, is important. What good is it if you just gather the first findings that come to mind? Being rigorous here is just as important as when you are planning and modeling the program. So I’ve searched the last six years of blog posts and gathered some of them for you. They are all about the survey, a form of methodology. The survey is a methodology that is often used by Extension, as it is easy to use. However, organizing the survey, getting the survey back, and dealing with non-response are problematic (another post, another time).

The previous posts are organized by date from the oldest to the most recent:

 

2010/02/10

2010/02/23

2010/04/09

2010/08/25

2012/08/09

2012/10/12

2013/03/13

2014/03/25

2014/04/15

2014/05/19

2015/06/29

2015/07/24

2015/12/07

2016/04/15

2016/04/21 (today’s post isn’t hyperlinked)

Just a few words on surveys today: A colleague asked about an evaluation survey for a recent conference. It will be an online survey, probably using the University system, Qualtrics. My colleague jotted down a few ideas. The thought occurred to me that this booklet (by Ellen Taylor-Powell and Marcus Renner) would be useful. Page ten asks for the type of information that is needed and wanted, and lists five types of possible information:

  1. Participant reaction (some measure of satisfaction);
  2. Teaching and facilitation (strengths and weaknesses of the presenter, who may (or may not) change the next time);
  3. Outcomes (what difference/benefits/intentions did the participant experience);
  4. Future programming (other educational needs/desires); and
  5. Participant background (who is attending and who isn’t can be answered here).
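The five categories can double as a checklist when drafting the instrument. Here is a minimal Python sketch; the example questions are my own invention, not from the booklet:

```python
# Organizing conference-evaluation items by the five information types.
# The questions below are hypothetical placeholders.
survey_plan = {
    "participant_reaction": ["How satisfied were you with the conference overall?"],
    "teaching_and_facilitation": ["What were the presenter's strengths and weaknesses?"],
    "outcomes": ["What do you intend to do differently as a result of attending?"],
    "future_programming": ["What topics would you like addressed next time?"],
    "participant_background": ["What is your role, and how did you hear about the conference?"],
}

# Mapping every planned item to one of the five types keeps the
# instrument from drifting into questions nobody will use.
for category, questions in survey_plan.items():
    print(f"{category}: {len(questions)} item(s)")
```

Any drafted question that does not fit one of the five buckets is a candidate for cutting.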

Thinking through these five categories made all the difference for my colleague. (Evaluation was a new area.) I had forgotten about how useful this booklet is for people being exposed to evaluation for the first time and to surveys, as well. I recommend it.

Apr
15

The WECT program was arbitrarily divided into four parts. Those “modules” are:

  • Program Planning and Logic Modeling;
  • Program Implementation, Monitoring, and Delivery;
  • Data Management and Analysis (divided into Qualitative data and Quantitative data); and
  • Program Evaluation Utilization

Read the rest of this entry »

Filed Under (Methodology, program evaluation) by Molly on 08-05-2015

Like many people, I find change hard. In fact, I really don’t like change. I think this is the result of a high school experience; one-third of my classmates left each year. (I was a military offspring; we changed assignments every three years.)

Yet, in today’s world change is probably the only constant. Does that make it fun? Not necessarily. Does that make it easy? Nope. Does that make it necessary? Yep.

Evaluators deal with change regularly. New programs are required; those must be evaluated. Old programs are revised; those must be evaluated. New approaches are developed and presented to the field. (When I first became an evaluator, there wasn’t a systems approach to evaluation; there wasn’t developmental evaluation; I could continue.) New technologies are available and must be used even if the old one wasn’t broken (even for those of us who are techno-peasants).

I just finished a major qualitative evaluation that involved real-time virtual focus groups. When I researched this topic, I found a lot of information about non-synchronous focus groups, focus groups using conferencing software, even synchronous focus groups without pictures. I didn’t find anything about using real-time synchronous virtual focus groups. Unfortunately, we didn’t have much money, even though there are services available. Read the rest of this entry »

Filed Under (program evaluation) by Molly on 30-09-2014

Recently, I drafted a paper about capacity building; I’ll be presenting it at the 2014 AEA conference. The example on which I was reporting was regional and voluntary; it took dedication, a commitment from participants. During the drafting of that paper, I had to think about the parts of the program: what would be necessary for individuals who were interested in evaluation and did not have a degree. I went back to the competencies listed in the AJE article (March 2005) that I cited in a previous post. I found it interesting to see that the choices I made (after consulting with evaluation colleagues) were listed in the competencies identified by Stevahn et al., yet they list so much more. So the question that occurs to me is: To be competent, to build institutional evaluation capacity, are all those needed? Read the rest of this entry »

Filed Under (Methodology, program evaluation) by Molly on 20-08-2014

Within the last 24 hours I’ve had two experiences that remind me of how tenuous our connection is to others.

  1. Yesterday, I was at the library to return several books and pick up a hold. As I went to check out using the digitally connected self-checkout station, I got an “out of service” message. Not thinking much of it, as I had received that message before, I moved to another machine. And got the same message! So I went to the main desk. There was a person in front of me; she was taking a lot of time. Turns out it wasn’t her; it was the internet (or intranet, I don’t know which). There was no connection! After several minutes, a paper system was implemented and I was assured that the book would be listed by this evening. That the library had a backup system impressed me; I’ve often wondered what would happen if the electricity went out for long periods of time, since the card catalogs are no longer available.
  2. Also, yesterday, I received a phone call on my office land line (!), which is a rare occurrence these days. On the other end was a long-time friend and colleague. We are working feverishly on finishing an NDE volume. We have an August 22 deadline and I will be out of town taking my youngest daughter to college. Family trumps everything. He was calling because the gardeners at his condo had cut the cable to his internet, television, and most importantly, his wi-fi. He couldn’t Skype me (our usual form of communication)! He didn’t expect resumption of service until the next day (August 20 at 9:47am PT he went back online–he lives in the Eastern Time Zone). Read the rest of this entry »

You implement a program.  You think it is effective; that it makes a difference; that it has merit and worth.  You develop a survey to determine the merit and worth of the program.  You send the survey out to the target audience, which is an intact population–that is, all of the participants are in the target audience for the survey.  You get less than a 40% response rate.  What does that mean?  Can you use the results to say that the participants saw merit in the program?  Do the results indicate that the program has value; that it made a difference, if only 40% let you know what they thought?
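One way to see why a 40% return limits what you can claim is to compute worst-case bounds: assume every nonrespondent was unfavorable, then assume every one was favorable. A minimal Python sketch, with hypothetical counts:

```python
def favorable_bounds(n_population, n_respondents, n_favorable):
    """Worst-case bounds on the share of ALL participants who saw
    merit in the program, given only the respondents' answers.
    Lower bound: every nonrespondent was unfavorable.
    Upper bound: every nonrespondent was favorable."""
    n_nonrespondents = n_population - n_respondents
    low = n_favorable / n_population
    high = (n_favorable + n_nonrespondents) / n_population
    return low, high

# Hypothetical: 100 participants, 40 respond, 36 rate the program favorably.
low, high = favorable_bounds(100, 40, 36)
print(f"True favorable share is somewhere between {low:.0%} and {high:.0%}")
```

Even when 90% of respondents are favorable, the bounds here run from 36% to 96%, which is exactly why nonresponse matters.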

I went looking for some insights on non-responses and non-responders.  Of course, I turned to Dillman (my go-to book for surveys).  His bottom line: “…sending reminders is an integral part of minimizing non-response error” (pg. 360).

Dillman (of course) has a few words of advice.  For example, on page 360, he says, “Actively seek means of using follow-up reminders in order to reduce non-response error.”  How do you not burden the target audience with reminders, which are “…the most powerful way of improving response rate…” (Dillman, pg. 360)?  When reminders are sent they need to be carefully worded and relate to the survey being sent.  Reminders stress the importance of the survey and the need for responding.

Dillman also says (on page 361) to “…provide all selected respondents with similar amounts and types of encouragement to respond.”  Since most of the time incentives are not an option for you the program person, you have to encourage the participants in other ways.  So we are back to reminders again.

To explore the topic of non-response further, there is a book (Groves, Robert M., Don A. Dillman, John Eltinge, and Roderick J. A. Little (eds.). 2002. Survey Nonresponse. New York: Wiley-Interscience) that deals with the topic.  I don’t have it on my shelf, so I can’t speak to it.  I found it while I was looking for information on this topic.

I also went online to EVALTALK and found this comment, which is relevant to evaluators attempting to determine if the program made a difference:  “Ideally you want your non-response percents to be small and relatively even-handed across items. If the number of nonresponds is large enough, it does raise questions as to what is going on for that particular item, for example, ambiguous wording or a controversial topic. Or, sometimes a respondent would rather not answer a question than respond negatively to it. What you do with such data depends on issues specific to your individual study.”  This comment was from Kathy Race of Race & Associates, Ltd., September 9, 2003.

A bottom line I would draw from all this is: respond.  If it was important to you to participate in the program, then it is important for you to provide feedback to the program implementation team/person.


Filed Under (program evaluation) by Molly on 29-08-2012

The topic of complexity has appeared several times over the last few weeks.  Brian Pittman wrote about it in an AEA365; Charles Gasper used it as a topic for his most recent blog.  Much food for thought, especially as it relates to the work evaluators do.

Simultaneously, Harold Jarche talks about connections.  To me, connections and complexity are two sides of the same coin.  Something which is complex typically has multiple parts.  Something which has multiple parts is connected to the other parts.  Certainly the work done by evaluators has multiple parts; certainly those parts are connected to each other.  The challenge we face is logically defending those connections and, in doing so, making explicit the parts.  Sound easy?  It’s not.

 

That’s why I stress modeling your project before you implement it.  If the project is modeled, often the model leads you to discover that what you thought would happen because of what you do, won’t.  You have time to fix the model, fix the program, and fix the evaluation protocol.  If your model is defensible and logical, you still may find out that the program doesn’t get you where you want to go.  Jonny Morell writes about this in his book, Evaluation in the Face of Uncertainty.  There are worse things than having to fix the program or fix the evaluation protocol before implementation.  Keep in mind that connections are key; complexity is everywhere.  Perhaps you’ll have an Aha! moment.

 

I’ll be on holiday and there will not be a post next week.  Last week was an odd week–an example of complexity and connections leading to unanticipated outcomes.

 

Dec
09

I’m involved in evaluating a program that is developing as it evolves.  There is some urgency to get predetermined, clear, and measurable outcomes to report to the administration.  Typically, I wouldn’t resist (see resistance post) this mandate; only this program doesn’t lend itself to this approach.  Because this program is developing as it is implemented, it can’t easily be rolled out to all 36 counties in Oregon at once, as much as administration would love to see that happen.  So what can we do?

We can document the principles that drive the program and use them to stage the implementation across the state.

We can identify the factors that tell us that the area is ready to implement the program (i.e., the readiness factors).

We can share lessons learned with key stakeholders in potential implementation areas.

These are the approaches that Michael Patton’s Developmental Evaluation advocates.  Michael says, “Developmental evaluation is designed to be congruent with and nurture developmental, emergent, innovative, and transformative processes.” I had the good fortune to talk with Michael about this program in light of these processes.  He indicated that identifying principles, not a model, supports developmental evaluation and a program in development.  By using underlying principles, we inform expansion.  Can these principles be coded…yes.  Are they outcome indicators…possibly.  Are they outcome indicators in the summative sense of the word?  Nope.  Not even close.  These principles, however, can help the program people roll out the next phase/wave of the program.

As an evaluator employing developmental evaluation, do I ignore what is happening on the ground–at each phase of the program implementation?  Not a chance.  I need to encourage the program people at that level to identify clear and measurable outcomes–because from those clear and measurable outcomes will come the principles needed for the next phase.  (This is a good example of the complexity concepts that Michael talks about in DE and that are the foundation for systems thinking.)  The readiness factors will also become clear when looking at individual sites.  From this view, we can learn a lot–we can apply what we have learned and, hopefully, avoid similar mistakes.  Will mistakes still occur?  Yes.  Is it important that those lessons are heeded; shared with administrators; and used to identify readiness factors when the program is going to be implemented in a new site?  Yes.  Is this process filled with ambiguity?  You bet.  No one said it would be easy to make a difference.

We are learning as we go–that is the developmental aspect of this evaluation and this program.

As promised last week, this week is (briefly) on implementation, monitoring, and delivering evaluation.

Implementation. To implement an evaluation, one needs to have a plan, often called a protocol.  Typically, this is a step-by-step list of what you will do to present the program to your target audience.  In presenting your program to your target audience, you will also include a step-by-step list of how you will gather evaluation information (data).  What is important about the plan is that it be specific enough to be replicated by other interested parties.  When a plan is developed, there is typically a specific design behind each type of data to be collected.  For example, specific knowledge change is often measured by a pretest-posttest design; behavioral change is often measured with a repeated measures design.  Campbell and Stanley, in their classic book, Experimental and quasi-experimental designs for research, present a wealth of information about designs that is useful in evaluation (as well as research).
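As a rough illustration of the pretest-posttest design mentioned above, here is a sketch that computes the mean gain and a paired t statistic from knowledge scores.  The numbers are invented, and a real analysis would also check the design’s assumptions:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic for a pretest-posttest design:
    the mean gain divided by the standard error of the gains."""
    gains = [b - a for a, b in zip(pre, post)]
    mean_gain = statistics.mean(gains)
    se = statistics.stdev(gains) / math.sqrt(len(gains))  # sample stdev, n-1
    return mean_gain, mean_gain / se

# Hypothetical knowledge scores for eight participants, before and after.
pre  = [52, 60, 45, 70, 58, 63, 49, 55]
post = [61, 66, 50, 78, 60, 72, 58, 62]
mean_gain, t = paired_t(pre, post)
print(f"mean gain = {mean_gain:.2f} points, paired t = {t:.2f}")
```

Pairing each participant with themselves is what distinguishes this from comparing two independent groups; each person serves as their own control.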

There are numerous designs which will help develop the plan for the implementation of the program AND the evaluation.

Monitoring. Simply put, monitoring is watching to see if what you said would happen actually does.  Some people think of monitoring as surveillance.  Although monitoring may seem like being watched, it is being watched with a plan.  When I first finished my doctorate and became an evaluator, I conceptualized evaluation simply as process, progress, product.  This helped stakeholders understand what evaluation was all about.  The monitoring part of evaluation was answered when I asked, “Are we making progress?  Are we where we said we would be at the time we said we would be there?”  This is really important because, as Jonny Morell points out in his book, evaluations don’t always go as planned, even with the best monitoring system.
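The “are we where we said we would be?” question can be made concrete with a simple milestone check.  A sketch, with made-up milestones and dates:

```python
from datetime import date

# Hypothetical milestones: what we said would happen, and by when.
# Each entry is (milestone, planned date, completed?).
plan = [
    ("IRB approval obtained", date(2012, 1, 15), True),
    ("Pretest administered", date(2012, 2, 1), True),
    ("Mid-program observation", date(2012, 3, 1), False),
    ("Posttest administered", date(2012, 5, 1), False),
]

def monitor(plan, today):
    """Flag milestones whose planned date has passed without completion."""
    return [name for name, due, done in plan if due <= today and not done]

overdue = monitor(plan, today=date(2012, 3, 15))
print("Behind plan on:", overdue)
```

Nothing fancy; the point is that monitoring presupposes a written plan to check against.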

Delivering.  Delivering is the nuts and bolts of what you are going to do.  It addresses the who, what, where, when, how, and why of the implementation plan.  All of these questions interrelate–for example, if you do not identify who will conduct the evaluation, often the evaluation is “squeezed in” at the end of a program because it is required.

In addition to answering these questions when delivering the evaluation, one thinks about the models, or evaluation approaches.  Stufflebeam, Madaus, and Kellaghan  (in Evaluation models:  Viewpoints on educational and human services evaluation) discuss various approaches and state that the approach used by the evaluator will provide a framework for conducting an evaluation as well as  presenting and using the evaluation results.