Here’s a link to the handout for today’s ANZEA symposium session, entitled:
If a reported “outcome” is not caused by a programme, it is not an outcome at all; it’s a coincidence. Simply measuring variables that may or may not be causally related to a programme (i.e. that could just be coincidences – who knows?) tells you nothing about the quality or value of the programme, so it can’t be called outcome evaluation – it’s just measurement.
Isn’t causal attribution heinously expensive and almost never feasible, and doesn’t it require some form of experimental design? Not necessarily. In this interactive seminar, Jane will use case examples to illustrate eight strategies for inferring (or ruling out) causal links between programmes and suspected outcomes:

1. Ask those who have observed or experienced the causal effect.
2. Check whether the content of the intervention matches the nature of the outcome.
3. Look for distinctive effect patterns (the modus operandi method).
4. Check whether the timing of outcomes makes sense.
5. Examine the relationship between programme “dose” and “response”.
6. Use a comparison or control group.
7. Control statistically for extraneous variables.
8. Identify and check the causal mechanisms.

These strategies are outlined in Jane’s (2004) book, “Evaluation Methodology Basics: The nuts and bolts of sound evaluation” (Sage).
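For readers who like to see the quantitative strategies made concrete, here is a minimal sketch (not from the handout or the book) of how strategies (5) and (7) might look in practice. It simulates a hypothetical programme where participants with higher baseline skill also attend more sessions, then compares a naive dose–response estimate with one that statistically controls for baseline. All variable names (`baseline`, `dose`, `outcome`) are illustrative assumptions.

```python
# Sketch of strategies (5) dose-response and (7) statistical control,
# using simulated data for a hypothetical programme.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
baseline = rng.normal(50, 10, n)  # pre-programme skill score (assumed)
# Confounding: higher-baseline participants attend more sessions.
dose = np.clip(np.round((baseline - 30) / 5 + rng.normal(0, 1, n)), 1, 10)
# Assumed true programme effect: +1 point per session attended.
outcome = baseline + 1.0 * dose + rng.normal(0, 5, n)
df = pd.DataFrame({"baseline": baseline, "dose": dose, "outcome": outcome})

# Strategy (5): naive dose-response check -- does more exposure
# predict better outcomes? (Here it overstates the effect.)
naive = smf.ols("outcome ~ dose", data=df).fit()
# Strategy (7): control statistically for the extraneous variable.
# If the dose coefficient shrinks sharply once baseline is included,
# much of the apparent "outcome" was coincidence, not programme effect.
adjusted = smf.ols("outcome ~ dose + baseline", data=df).fit()

print(f"dose effect, naive:    {naive.params['dose']:.2f}")
print(f"dose effect, adjusted: {adjusted.params['dose']:.2f}")
```

In this simulation the naive estimate is inflated well above the true per-session effect, while the adjusted estimate recovers it, which is exactly the kind of check strategy (7) is meant to provide. None of this requires an experimental design; it only requires measuring the plausible extraneous variables.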