Filed Under (program evaluation) by Molly on 17-07-2015

“Fate is chance; destiny is choice.”

I went looking for who said that originally so that I could give credit. The closest I found was this: “Destiny is not a matter of chance; it is a matter of choice. It is not a thing to be waited for; it is a thing to be achieved.”

William Jennings Bryan


Evaluation is like destiny. There are many choices to make. How do you choose? What do you choose?

Would you listen to the dictates of the Principal Investigator even if you know there are other, perhaps better, ways to evaluate the program?

What about collecting data? Are you collecting it because it would be “nice”? Or are you collecting it because you will use the data to answer a question?

What tools do you use to make your choices? What resources do you use?

I’m really curious. It is summer, and although I have a (long, to be sure) reading list, I wonder what else is out there, specifically about making choices. (And yes, I could use my search engine; I’d rather hear from my readers!)

Let me know. PLEASE!

my two cents.

molly.

Filed Under (program evaluation) by Molly on 24-07-2012

AEA hosts EVALTALK, a free online listserv open to anyone, managed by the University of Alabama. This week, Felix Herzog posted the following question.


How much can/should an evaluation cost? 


This is a question I often get asked, especially by Extension faculty. It is especially relevant now that more Extension faculty are responding to requests for proposals (RFPs) or requests for contracts (RFCs) that call for an evaluation, and questions arise about how to budget for that evaluation. Felix compiled what he discovered, and I’ve listed it below. It is important to note that these figures cover not just the evaluator’s salary but all expenses related to evaluating the program: data collection instrument development, pilot testing, data entry, data management, data analysis, and report writing, along with the salaries of those who do those tasks. Felix thoughtfully provided references so that you can read the sources. He did note that the most useful citation (Rieder, 2011) is in German.

– The benefit of the evaluation should be at least as high as its cost (Rieder, 2011)

– “Rule of thumb”: 1-10% of the costs of a policy program (personal communication from an administrator)

– 5-7% of a program (Kellogg Foundation, p. 54)

– 1-15% of the total cost of a program (Rieder, 2011, 5 quotes in Table 5, p. 82)

– 0.5% of a program (EC, 2004, p. 32 ff.)

– Up to 10% of a program (EC, 2008, p. 47 ff.)
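To see what these rules of thumb mean in dollars, here is a minimal sketch; the $200,000 program cost is invented for the example, and the percentages are the ones listed above.

```python
# Hypothetical illustration: turning the percentage rules of thumb
# above into a dollar range for an evaluation budget.
def evaluation_budget(program_cost, low_pct, high_pct):
    """Return the (low, high) evaluation budget for a given program cost."""
    return (program_cost * low_pct / 100, program_cost * high_pct / 100)

program_cost = 200_000  # invented total program cost, in dollars

# Kellogg Foundation rule of thumb: 5-7% of the program
low, high = evaluation_budget(program_cost, 5, 7)
print(f"Kellogg rule: ${low:,.0f} to ${high:,.0f}")  # $10,000 to $14,000
```

The same function applied with 0.5 and 10 spans the full range of the sources above: anywhere from $1,000 to $20,000 for the same program, which is why it helps to know which rule your funder has in mind.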


REFERENCES

EC (2004). Evaluating EU activities: A practical guide for the Commission services. Brussels. http://ec.europa.eu/dgs/secretariat_general/evaluation/docs/eval_activities_en.pdf

EC (2008). EVALSED: The resource for the evaluation of socioeconomic development. Brussels. http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evalsed/downloads/guide2008_evalsed.pdf

Kellogg Foundation. (1984). W. K. Kellogg Foundation Evaluation Handbook. http://www.wkkf.org/~/media/62EF77BD5792454B807085B1AD044FE7.ashx

Rieder, S. (2011). Kosten von Evaluationen. LEGES 2011(1), 73-88.


Recognizing the value of your evaluation work, putting a dollar value on that work, and being able to communicate it all help build organizational capacity in evaluation.

Filed Under (program evaluation) by Molly on 02-09-2011

A colleague asked me what I considered an output in a statewide program we were discussing.  This is a really good example of assumptions and how they can blindside an individual–in this case, me.  Once I (figuratively) picked myself up, I proceeded to explain how this terminology applied to the program under discussion.  Once the meeting concluded, I realized that perhaps a bit of a refresher was in order.  Even the most seasoned evaluators can benefit from a reminder every so often.


So OK–inputs, outputs, outcomes.

As I’ve mentioned before, Ellen Taylor-Powell, former UWEX evaluation specialist, has a marvelous tutorial on logic modeling.  I recommend you go there for your own refresher.  What I offer you here is a (very) brief overview of these terms.

Logic models, whether linear or circular, are composed of various focus points.  Those focus points include (in addition to those mentioned in the title of this post) the situation, assumptions, and external factors.  Simply put, the situation is what is going on–the priorities, the needs, the problems that led to the program you are conducting–that is, program with a small p (we can talk about sub- and supra-models later).

Inputs are those resources you need to conduct the program. Typically, they are lumped into personnel, time, money, venue, equipment.  Personnel covers staff, volunteers, partners, any stakeholder.  Time is not just your time–also the time needed for implementation, evaluation, analysis, and reporting.  Money (speaks for itself).  Venue is where the program will be held.  Equipment is what stuff you will need–technology, materials, gear, etc.

Outputs are often classified into two parts: first, the participants (or target audience), and second, the activities that are conducted.  Typically (although not always), those activities are counted, and the counts are called bean counts.  In the example that started this post, we would be counting the number of students who graduated from high school; the number of students who matriculated to college (either 2- or 4-year); the number of students who transferred from 2-year to 4-year colleges; the number of students who completed college in 2 or 4 years; etc.  The bean count could also be the number of classes offered; the number of brochures distributed; the number of participants in the class; the number of (fill in the blank).  Outputs are necessary but not sufficient to determine whether a program is effective.  The field of evaluation started with determining bean counts and satisfaction.

Outcomes can be categorized as short term, medium/intermediate term, or long term.  Long term outcomes are often called impacts.  (There are those in the field who would classify impacts as something separate from an outcome–a discussion for another day.)  Whatever you choose to call the effects of your program, be consistent–don’t use the terms interchangeably; it confuses the reader.  What you are looking for as an outcome is change–in learning; in behavior; in conditions.  This change is measured in the target audience–individuals, groups, communities, etc.
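Putting these terms together, a logic model can be sketched as a simple data structure; the program, names, and numbers below are all invented for illustration.

```python
# A minimal sketch of the logic-model focus points described above,
# using a hypothetical college-readiness program as the example.
logic_model = {
    "situation": "low college matriculation rates in the county",
    "inputs": ["personnel", "time", "money", "venue", "equipment"],
    "outputs": {
        "participants": 250,      # students served (a bean count)
        "activities": {           # more bean counts
            "classes_offered": 12,
            "brochures_distributed": 1500,
        },
    },
    "outcomes": {
        "short_term": "change in learning",
        "medium_term": "change in behavior",
        "long_term": "change in conditions (often called impacts)",
    },
}

# The outputs are counts; the outcomes are changes. Only the outcomes
# can tell us whether the program is effective.
print(sorted(logic_model))
```

Note that the bean counts all live under outputs, while every entry under outcomes describes a change in the target audience, which mirrors the distinction drawn above.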

I’ll talk about assumptions and external factors another day.  Have a wonderful holiday weekend…the last vestiges of summer–think tomatoes, corn-on-the-cob, state fair, and a tall, cool drink.

Filed Under (program evaluation) by Molly on 11-01-2011

A faculty member asked me to provide evaluation support for a grant application.  Without hesitation, I agreed.

I went to the funder’s web site to review what was expected for an evaluation plan.  What was provided was their statement about why evaluation is important.

Although I agree with what is said in that discussion, I think we have a responsibility to go further.  Here is what I know.

Extension professionals evaluate programs because there needs to be some evidence that the inputs for the program–time, money, personnel, materials, facilities, etc.–are being used advantageously and effectively.  Yet there is more to the question “Why evaluate?” than accountability.  (Michael Patton talks about the various uses to which evaluation findings can be put; see his book on Utilization-Focused Evaluation.)  Programs are evaluated to determine whether people are satisfied, whether their expectations were met, and whether the program was effective in changing something.

This is what I think.  None of what is stated above addresses the “so what” part of “why evaluate.”  I think that answering this question (or attempting to) is a compelling reason to justify the effort of evaluating.  It is all very well and good to change people’s knowledge of a topic; it is all very well and good to change people’s behavior related to that topic; and it is all very well and good to have people intend to change (after all, stated intention to change is the best predictor of actual change).  Yet it isn’t enough.  Being able to answer the “so what” question gives you more information.  And doing that–asking and answering the “so what” question–makes evaluation an everyday activity.  And who knows?  It may even result in world peace.