Experimental and quasi-experimental evaluations usually define a programme effect as the difference between (a) the actual outcome following a social programme and (b) an estimate of what the outcome would have been without the programme – the counterfactual outcome. (The latter might be a competing programme or some genre of “business as usual”.)
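In the potential-outcomes-style shorthand this definition is usually written in (my gloss, not Reichardt's notation), that is something like:

$$\text{effect} = Y_{\text{with programme}} - Y_{\text{without programme}},$$

where only the first term is observed and the second has to be estimated.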
It is also usually argued that qualitative or so-called “theory-based” approaches to evaluation are not counterfactual evaluations. Reichardt (2022) adds to a slowly accumulating body of work that challenges this and argues that any approach to evaluation can be understood in counterfactual terms.
Reichardt provides seven examples of evaluation approaches, quantitative and qualitative, and explains how a counterfactual analysis is relevant:
- Comparisons Across Participants. RCTs and friends. The comparison group is used to estimate the counterfactual. (Note: the comparison group is not the counterfactual. A comparison group is factual.)
- Before-After Comparisons. The baseline score is often treated as the counterfactual outcome, though it probably isn't a good one, e.g. because of regression to the mean (the toy simulation after this list illustrates the problem).
- What-If Assessments. Asking participants to reflect on a counterfactual like, “How would you have felt without the programme?” Participants provide the estimate of the counterfactual, and the evaluators use it to estimate the effect.
- Just-Tell-Me Assessments. Cites Copestake (2014): “If we are interested in finding out whether particular men, women or children are less hungry as a result of some action it seems common-sense just to ask them.” In this case participants may be construed as carrying out the “What-If” assessment of the previous point and using this to work out the programme effect themselves.
- Direct Observation. Simply seeing the causal effect rather than inferring it. The example given is tapping a car brake and seeing the effect. I'm not sure I buy this one, and neither does Reichardt. Whatever it is, I agree a counterfactual of some sort is needed (and inferred): you need a theory of what would have happened had you not tapped the brake.
- Theories-of-Change Assessments. Contribution analysis and realist evaluation are offered as examples. The gist is that, despite what proponents of these approaches claim, using a theory of change to work out whether the programme is responsible for or “contributes to” outcomes still means using that theory of change to think about the counterfactual. I've blogged elsewhere about realist evaluation and contribution analysis and their definitions of a causal effect.
- The Modus Operandi (MO) Method. The evaluator looks for traces or tell-tale signs that the programme worked. I'm not sure how this differs from theories-of-change assessments; maybe it doesn't. It sounds like potentially another way to evidence the causal chains in a theory of change.
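To make the first two of these concrete, here is a toy simulation (my own sketch in Python with made-up numbers, not anything from Reichardt's paper). It contrasts a randomised comparison across participants with a naive before-after comparison on a programme targeted at low baseline scorers, and shows how regression to the mean inflates the latter:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
true_effect = 3.0  # by construction, the programme raises scores by 3 points

# Each person has a stable underlying level; baseline and follow-up are noisy measurements of it.
true_level = rng.normal(50, 10, n)
baseline = true_level + rng.normal(0, 5, n)

# 1. Comparison across participants: randomise roughly half to the programme.
treated = rng.random(n) < 0.5
followup_rct = true_level + rng.normal(0, 5, n) + true_effect * treated
rct_estimate = followup_rct[treated].mean() - followup_rct[~treated].mean()

# 2. Before-after comparison: the programme is targeted at the lowest-scoring
#    fifth at baseline, everyone selected gets it, and there is no comparison group.
selected = baseline < np.quantile(baseline, 0.2)
followup_targeted = true_level + rng.normal(0, 5, n) + true_effect
before_after_estimate = followup_targeted[selected].mean() - baseline[selected].mean()

print(f"true effect:           {true_effect:.2f}")
print(f"RCT estimate:          {rct_estimate:.2f}")           # close to 3
print(f"before-after estimate: {before_after_estimate:.2f}")  # well above 3
```

The before-after estimate overshoots because participants were selected on a noisy low baseline, so their follow-up scores would have drifted upwards even without the programme – which is exactly why the baseline is a poor stand-in for the counterfactual outcome.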
The conclusion:
“I suspect there is no viable alternative to the counterfactual definition of an effect and that when the counterfactual definition is not given explicitly, it is being used implicitly. […] Of course, evaluators are free to use an alternative to the counterfactual definition of a program effect, if an adequate alternative can be found. But if an alternative definition is used, evaluators should explicitly describe that alternative definition and forthrightly demonstrate how their definition undergirds their methodology […].”
I like four of the seven as kinds of evidence for inferring the counterfactual outcome. I also propose a fifth: evaluator opinion.
- Comparisons Across Participants.
- Before-After Comparisons.
- What-If Assessments.
- Just-Tell-Me Assessments.
- Evaluator opinion.
The What-If and Just-Tell-Me assessments could involve subject experts rather than only beneficiaries of a programme, which would affect how those assessments are interpreted, particularly if the experts have a vested interest. To me, the Theories-of-Change Assessment in Reichardt's original could be carried out with the help of one or more of these five. They are all ways to justify causal links (mediating or intermediate variables), not just to evaluate outcomes, and so they help assess the validity of a theory of change – though readers may not find them all equally compelling, particularly the last.
References
Copestake, J. (2014). Credible impact evaluation in complex contexts: Confirmatory and exploratory approaches. Evaluation, 20(4), 412–427.
Reichardt, C. S. (2022). The Counterfactual Definition of a Program Effect. American Journal of Evaluation, 43(2), 158–174.