Filed Under (program evaluation) by Molly on 24-07-2012

AEA hosts EVALTALK, a free online listserv open to anyone and managed by the University of Alabama.  This week, Felix Herzog posted the following question:

 

How much can/should an evaluation cost? 

 

This is a question I am often asked, especially by Extension faculty.  It is especially relevant now that more Extension faculty are responding to requests for proposals (RFPs) or requests for contracts (RFCs) that call for an evaluation, and the question of how to budget for that evaluation inevitably arises.  Felix compiled what he discovered, and I've listed it below.  It is important to note that the cost is not just the evaluator's salary; it includes all expenses related to evaluating the program--data collection instrument development, pilot testing, data entry, data management, data analysis, and report writing, as well as the salaries of those who do these tasks.  Felix thoughtfully provided references so that you can read the sources yourself.  He did note that the most useful citation (Rieder, 2011) is in German.

–           The benefit of the evaluation should be at least as high as its cost (Rieder, 2011)

–           "Rule of thumb": 1–10% of the cost of a policy program (personal communication from an administrator)

–           5–7% of a program (Kellogg Foundation, 1984, p. 54)

–           1–15% of the total cost of a program (Rieder, 2011; 5 quotes in Table 5, p. 82)

–           0.5% of a program (EC, 2004, p. 32 ff.)

–           Up to 10% of a program (EC, 2008, p. 47 ff.)
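As a rough illustration of how these rules of thumb play out in a budget, here is a minimal sketch in Python.  The percentage bands are taken from the sources listed above; the function name, the dictionary structure, and the example program cost are my own illustrative assumptions, not anything from those sources.

```python
# Hypothetical sketch: applying the rules of thumb above to a program budget.
# The percentage bands come from the sources cited in this post; everything
# else (names, structure, example numbers) is illustrative only.

RULES_OF_THUMB = {
    "administrator (personal comm.)": (0.01, 0.10),   # 1-10%
    "Kellogg Foundation (1984)": (0.05, 0.07),        # 5-7%
    "Rieder (2011)": (0.01, 0.15),                    # 1-15%
    "EC (2004)": (0.005, 0.005),                      # 0.5%
    "EC (2008)": (0.00, 0.10),                        # up to 10%
}

def evaluation_budget_range(program_cost, low_pct, high_pct):
    """Return the (low, high) dollar range for an evaluation budget."""
    return (program_cost * low_pct, program_cost * high_pct)

# Example: a $200,000 program under the Kellogg 5-7% rule of thumb
# yields an evaluation budget of roughly $10,000 to $14,000.
low, high = evaluation_budget_range(
    200_000, *RULES_OF_THUMB["Kellogg Foundation (1984)"]
)
```

Note how widely the bands disagree: for the same $200,000 program, the cited sources would support an evaluation budget anywhere from $1,000 (0.5%) to $30,000 (15%), which is exactly why the underlying cost drivers listed above matter more than any single percentage.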

 

REFERENCES

EC (2004). Evaluating EU activities: A practical guide for the Commission services. Brussels. http://ec.europa.eu/dgs/secretariat_general/evaluation/docs/eval_activities_en.pdf

EC (2008). EVALSED: The resource for the evaluation of socio-economic development. Brussels. http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evalsed/downloads/guide2008_evalsed.pdf

Kellogg Foundation. (1984). W. K. Kellogg Foundation Evaluation Handbook. http://www.wkkf.org/~/media/62EF77BD5792454B807085B1AD044FE7.ashx

Rieder, S. (2011). Kosten von Evaluationen [The costs of evaluations]. LEGES, 2011(1), 73–88.

 

Recognizing the value of your evaluation work, being able to put a dollar value to that work, and being able to communicate it helps build organizational capacity in evaluation.

Filed Under (program evaluation) by Molly on 18-07-2012

Bright ideas are often the result of "Aha" moments.  Aha moments are "the sudden understanding or grasp of a concept…an event that is typically rewarding and pleasurable.  Usually, the insights remain in our memory as lasting impressions." — Senior News Editor for Psych Central.

How often have you had an "Aha" moment while evaluating?  A colleague had one, maybe several, that made an impression on her.  Talk about building capacity--this did.  She has agreed to share that experience (the bright idea) soon.

Not only did it make an impression on her, her telling me made an impression on me.  I am once again reminded of how much I take evaluation for granted.  Because evaluation is an everyday activity, I often assume that people know what I'm talking about.  We all know what happens when we assume something….  I am also reminded how many people don't know what I consider basic evaluation information, like constructing a survey item (Got Dillman on your shelf yet?).

 

What is the symbol √ called?  No, it is not the square root sign--although that is its function.  "It's called a radical…because it gets at the root…the definition of radical is: of or going to the root or origin." --Guy McPherson

How radical are you?  How does that relate to evaluation, you wonder?  Telling truth to power is a radical concept (the definition here is a departure from the usual or traditional); one to which evaluators who hold integrity sacrosanct adhere.  (It is the third AEA guiding principle.)  Evaluators often, if they are doing their job right, have to speak truth to power--because the program wasn't effective, or it resulted in something different than what was planned, or it cost too much to replicate, or it just didn't work out.  Funders, supervisors, and program leaders need to know the truth as you found it.


"Those who seek to isolate will become isolated themselves." --Diederick Stoel.  This sage piece of advice is the lead for Jim Kirkpatrick's quick tip for evaluating training activities.  He says, "Attempting to isolate the impact of the formal training class at the start of the initiative is basically discounting and disrespecting the contributions of other factors…Instead of seeking to isolate the impact of your training, gather data on all of the factors that contributed to the success of the initiative, and give credit where credit is due. This way, your role is not simply to deliver training, but to create and orchestrate organizational success. This makes you a strategic business partner who contributes to your organization's competitive advantage and is therefore indispensable."  Extension faculty conduct a lot of trainings and want to take credit for the training's effectiveness.  It is important to recognize that there may be other factors at work--mitigating factors, intermediate factors, even confounding factors.  As much as Extension faculty want to isolate (i.e., take credit), it is important to share the credit.

Filed Under (program evaluation) by Molly on 12-07-2012

Harold Jarche says that "most learning happens informally on the job.  Formal instruction, or training, accounts for less than 20%, and some research shows it is about 5% of workplace learning." He divides learning into dependent, interdependent, and independent--that is, formal instruction like you get in school (dependent); social and collaborative learning like you get when you engage colleagues (interdependent); and learning supported by tools and information (independent).

As an evaluator, what do you do with that other 95%?  Do you read?  Tweet?  Talk to folks?  Just how do you learn more about evaluation?  I don't think there is one best way.  I think individuals need to look at what their strengths are (assets, if you will), where their passions lie, and where their questions occur (and those may or may not be needs--shift the paradigm, people).  Sometimes learning emerges from a place never before explored.  A good example: I've been charged with the evaluation of an organizational change.  Although I've looked at references on organizational change, and actually had a course in organizational behavior in graduate school, I hadn't really gone looking for answers…until this evaluation was assigned.  Then, at this year's AEA annual conference, one of the professional development sessions captured much of what I've been puzzling over--not that it will have answers, but maybe I'll learn something I can take back with me; something I could use; perhaps even something in that 95%.  This professional development session (informal and interdependent learning) will afford me an opportunity to learn content I haven't experienced.  I'd put it in the other 95%.

Social media falls into the category of the other 95%--it connects folks.  It provides information.  It builds community where one has not been before.  Can it take the place of formal education?  No, I don't think so.  Can it provide a source of information?  Possibly (it then becomes a matter of reliability).  My takeaway for today: explore other types of learning; share what you know.


Filed Under (program evaluation) by Molly on 05-07-2012


Yesterday was the 236th anniversary of US independence from England (and George III, in his infinite wisdom, is said to have remarked that nothing important happened…right…oh, all right, how WOULD he have known anything had happened several thousand miles away?).  And yes, I saw fireworks.  More importantly, though, I thought a lot about what independence means.  And then, because I'm posting here: what does independence mean for evaluation and evaluators?

In thinking about independence, I am reminded of intercultural communication and the contrast between individualism and collectivism.  To make this distinction clear, think "I-centered" vs. "We-centered".  Think western Europe and the US vs. Asia and Japan.  To me, individualism is reflective of independence, and collectivism is reflective of networks--systems, if you will.  When we talk about independence, the words "freedom" and "separate" and "unattached" are bandied about, and that certainly applies to the anniversary celebrated yesterday.  Yet, when I contrast it with collectivism and think of the words that are often used in that context ("interdependence", "group", "collaboration"), I become aware of other concepts.

Like, what is missing when we are independent?  What have we lost being independent?  What are we avoiding by being independent?  Think “Little Red Hen”.  And conversely, what have we gained by being collective, by collaborating, by connecting?  Think “Spock and Good of the Many”.

There is in AEA a topical interest group on "Independent Consulting".  This TIG is home to those evaluators who function outside of an institution and who have made their own organization; who work independently, on contract.  In their mission statement, they purport to "Foster a community of independent evaluators…"  So by being separate, are they missing community and need to foster that aspect?  They insist that they are "…great at networking", which doesn't sound very independent; it sounds almost collective.  A small example, and probably not the best.

I think about the way the western world is today: other than your children and/or spouse/significant other, are you connected to a community?  A network?  A group?  Not just in membership (like at church or a club); really connected (like in an extended family--whether of the heart or of the blood)?  Although the Independent Consulting TIG says they are great at networking, and some even work in groups, are they connected?  (Social media doesn't count.)  Is the "I" identity a product of being independent?  It certainly is a characteristic of individualism.  Can you measure the value, merit, or worth of the work you do by the level of independence you possess?  Do internal evaluators garner all the benefits of being connected?  (As an internal evaluator, I'm pretty independent, even though there is a critical mass of evaluators where I work.)

Although being an independent evaluator has its benefits--less bias, a different perspective (dare I say, more objective?)--is the distance created, the competition for position, and the risk taking worth the lack of relational harmony that can accompany relationships?  Is the US better off as its own country?  I'd say probably.  My musings only…what do you think?