Last week, I mentioned that I would address contribution analysis, an approach to exploring cause and effect. Although I had seen the topic appear several times over the last 3–4 years, I never pursued it. Recently, though, the issue has come to the forefront of many conversations. I hear Extension faculty saying that their program caused this outcome. That claim is implied when they come to ask how to write “good” impact statements, without acknowledging that the likelihood of actually having an impact is slim; long-term outcomes, maybe. Impact? Probably not. So finding a logical, defensible approach to discussing the lack of causality (as in the “A caused B” of randomized controlled trials) that is inherent in Extension programming is important. John Mayne, an independent advisor on public sector performance, writes articulately on this topic (citations are listed below).

The article I read, and on which this blog entry is based, was written in 2008. Mayne has been writing on this topic since 1999, when he was with the Canadian Office of the Auditor General. For him, the question became critical when the use of randomized controlled trials (RCTs) was not appropriate, yet program performance still needed to be addressed.

In that article, referenced below, he details six iterative steps in contribution analysis:

  1. Set out the attribution problem to be addressed;
  2. Develop a theory of change and risks to that theory of change;
  3. Gather the existing evidence on the theory of change;
  4. Assemble and assess the contribution story, and challenges to that story;
  5. Seek out additional evidence; and
  6. Revise and strengthen the contribution story.

He loops step six back to step four (the iterative process).

By exploring the contribution the program is making to the observed results, one can address the attribution of the program to the desired results. He goes on to say (and since I’m quoting, I’m using the Canadian spellings), “Causality is inferred from the following evidence:

  1. The programme is based on a reasoned theory of change: the assumptions behind why the program is expected to work are sound, are plausible, and are agreed upon by at least some of the key players.
  2. The activities of the programme were implemented.
  3. The theory of change is verified by evidence: the chain of expected results occurred.
  4. Other factors influencing the programme were assessed and were either shown not to have made a significant contribution or, if they did, the relative contribution was recognised.”

He focuses on clearly defining the theory of change, modeling that theory of change, and revisiting it regularly across the life of the program.


REFERENCES:

Mayne, J. (1999).  Addressing attribution through contribution analysis: Using performance measures sensibly.  Available at: dsp-psd.pwgsc.gc.ca/Collection/FA3-31-1999E.pdf

Mayne, J. (2001).  Addressing attribution through contribution analysis: Using performance measures sensibly.  Canadian Journal of Program Evaluation, 16: 1–24.  Available at: http://www.evaluationcanada.ca/secure/16-1-001.pdf

Mayne, J. & Rist, R. (2006).  Studies are not enough: The necessary transformation of evaluation.  Canadian Journal of Program Evaluation, 21: 93–120.  Available at: http://www.evaluationcanada.ca/secure/21-3-093.pdf

Mayne, J. (2008).  Contribution analysis:  An approach to exploring cause and effect. Institutional Learning and Change Initiative, Brief 16.  Available at:  http://www.cgiar-ilac.org/files/publications/briefs/ILAC_Brief16_Contribution_Analysis.pdf

