Challenging the theory-based/counterfactual binary

Randomised controlled trials (RCTs) are usually presented as an instance of the counterfactual kind of impact evaluation, and contribution analysis as an instance of the theory-based kind. For example, the UK’s Magenta Book recommends using theory-based evaluation if you can’t find a comparison group (HM Treasury, 2020, p. 47). The founding texts on contribution analysis present it as using a non-counterfactual approach to causation (e.g., Mayne, 2019, pp. 173–4). In December 2023, I gave a talk to the UK Evaluation Society (UKES) exploring what happens if we challenge this theory-based/counterfactual binary. This post summarises what I said.

Good experiments and quasi-experiments are theory-based

Theory is required to select the variables used in RCTs and quasi-experimental designs (QEDs); that is, to decide what data to gather and include in analyses. We need to peek inside the “black box” of a programme to work out what we are evaluating and how. These variables include outcomes, moderators, mediators, competing exposures, and, in QEDs, confounders.

This diagram shows a causal directed acyclic graph (causal DAG). Therapy is conjectured to lead to behavioural activation, which alleviates depression. Supportive friends are a “competing exposure” that can also alleviate depression. Reading self-help texts is a confounder: it predicts both whether someone seeks therapy and their outcomes. Since random allocation would have removed that arrow into therapy, this (made-up) model describes a quasi-experimental evaluation rather than an RCT.
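For readers who like to see structure as code, here is a rough sketch of the same DAG in Python using networkx. The variable names are my own shorthand for the diagram; the edges simply transcribe the conjectured relationships described above.

```python
import networkx as nx

# Hypothetical variable names transcribing the DAG described above
dag = nx.DiGraph()
dag.add_edges_from([
    ("self_help_reading", "therapy"),          # confounder -> exposure
    ("self_help_reading", "depression"),       # confounder -> outcome
    ("therapy", "behavioural_activation"),     # exposure -> mediator
    ("behavioural_activation", "depression"),  # mediator -> outcome
    ("supportive_friends", "depression"),      # competing exposure -> outcome
])

assert nx.is_directed_acyclic_graph(dag)

# Direct influences on the outcome, as encoded by the graph
print(sorted(dag.predecessors("depression")))
# ['behavioural_activation', 'self_help_reading', 'supportive_friends']
```

Writing the roles out explicitly like this is one way to make the variable-selection and adjustment decisions in a QED easier to scrutinise.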

In QEDs, many different model structures are compatible with the data. The slogan “correlation doesn’t imply causation” can be reformulated as: the models A causes B and B causes A are Markov equivalent if all we know is that A and B are correlated; that is, they imply exactly the same pattern of statistical associations. The number of Markov-equivalent models rises exponentially as the number of variables and statistical associations increases. Theory is needed to select between the models: the data alone cannot tell us.
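To make this concrete, here is a small simulation sketch (mine, not from the talk): a linear-Gaussian model in which A causes B and another in which B causes A are parameterised so that they imply the same joint distribution, and the observed correlation cannot distinguish them.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 200_000, 0.6

# Model 1: A -> B
a1 = rng.normal(size=n)
b1 = rho * a1 + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Model 2: B -> A, parameterised to give the same bivariate normal distribution
b2 = rng.normal(size=n)
a2 = rho * b2 + np.sqrt(1 - rho**2) * rng.normal(size=n)

print(round(np.corrcoef(a1, b1)[0, 1], 3))  # ~0.6
print(round(np.corrcoef(a2, b2)[0, 1], 3))  # ~0.6: the data cannot tell the arrows apart
```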

Although RCTs give us an unbiased estimate of the average treatment effect, that is, the average of individuals’ unmeasurable treatment effects (each person’s difference between their actual outcome and their counterfactual outcome), they cannot tell us what that difference represents. To do that, we need a theory of the ingredients and processes in the two programmes being compared; for example, what are the similarities and differences between CBT and humanistic counselling, or whatever “treatment as usual” is in practice.
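Here is a hedged simulation sketch (all numbers invented) of that parenthetical definition: each person has two potential outcomes, only one of which is ever measured, yet the difference in group means under randomisation recovers the average of those unmeasurable individual effects.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical potential outcomes: y0 under "treatment as usual", y1 under therapy
y0 = rng.normal(loc=10.0, scale=2.0, size=n)
y1 = y0 + rng.normal(loc=1.5, scale=1.0, size=n)   # individual effects vary

true_ate = np.mean(y1 - y0)   # average of the unmeasurable individual effects

# Randomisation: each person's counterfactual outcome remains unobserved
treated = rng.random(n) < 0.5
observed = np.where(treated, y1, y0)
estimated_ate = observed[treated].mean() - observed[~treated].mean()

print(round(true_ate, 2), round(estimated_ate, 2))
# The estimate is close to the truth, but says nothing about what the two
# conditions actually consisted of.
```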

Counterfactual evaluations do not need a comparison group

There is a long history of research on counterfactual reasoning in the absence of a control group. How should we determine the truth of a counterfactual such as “If Oswald had not killed Kennedy, no one else would have” (e.g., Adams, 1970)? Clearly we ponder statements like these without running violent RCTs. Another strand of research investigates how children, adults, and animals perform counterfactual reasoning in practice (e.g., Rafetseder et al., 2010). This research on non-experimental counterfactual reasoning includes literature in evaluation, for instance by White (2010) and more recently Reichardt (2022).

Halpern (2016) introduces a formal framework for defining causal relationships and estimating counterfactual outcomes, regardless of the sources of evidence. The diagram below illustrates the simplest version of the framework, in which causes and effects are binary. Equations annotating each node define the causal relationships and allow counterfactual outcomes to be inferred.

This is one of the models I used to illustrate counterfactual inference without a comparison group. In the factual situation, Alex was feeling down, spoke to their counsellor, and then felt better. In the counterfactual scenario, we explore what would have happened if Alex hadn’t spoken to their counsellor: they would have spoken to their friend instead and again felt better. Although there is no difference between the factual and counterfactual outcomes, Halpern’s framework allows us to infer that speaking to the counsellor was an actual cause of Alex feeling better.
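Here is a rough translation of that model into Python. The structural equations are my reading of the story above (friend = 1 − counsellor, better = max(counsellor, friend)), and the final contingency check is only an informal rendering of Halpern’s modified definition, not a general algorithm.

```python
# Assumed structural equations for the Alex example (binary variables)
def friend(counsellor):
    # Alex turns to a friend only if they don't speak to the counsellor
    return 1 - counsellor

def better(counsellor, friend_):
    # Either conversation is enough for Alex to feel better
    return max(counsellor, friend_)

def solve(counsellor, friend_override=None):
    f = friend(counsellor) if friend_override is None else friend_override
    return {"counsellor": counsellor, "friend": f, "better": better(counsellor, f)}

factual = solve(counsellor=1)         # {'counsellor': 1, 'friend': 0, 'better': 1}
counterfactual = solve(counsellor=0)  # {'counsellor': 0, 'friend': 1, 'better': 1}

# Plain "but-for" reasoning fails: the outcome is 1 either way.
# Halpern's (modified) definition also considers contingencies in which other
# variables are frozen at their actual values: hold friend at 0, then remove
# the counsellor conversation.
contingency = solve(counsellor=0, friend_override=factual["friend"])
print(contingency["better"])  # 0 -> speaking to the counsellor was an actual cause
```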

This provides a concrete illustration of why RCTs and QEDs are unnecessary for reasoning counterfactually. However, the causal model must obviously be correct for correct conclusions to be drawn, so one might reasonably ask what sorts of evidence we can use to build these models and persuade people they are true. There is also lively debate in the literature concerning what constitutes an “actual cause”, and the algorithms for determining this are fiddly to apply, even if we assume a true model of the causal relationships. I explored Halpern’s approach in a previous blog post.

Summary

The original definition of theory-based evaluation by Fitz-Gibbon and Morris (1975) encompassed the full range of approaches, qualitative and quantitative, including RCTs and QEDs. Many others follow in this tradition. For instance, Weiss (1997, p. 512) cites path analysis (a kind of structural equation model) as being “conceptually compatible with TBE [theory-based evaluation] and has been used by evaluators”. Chen and Rossi (1983) explain how theory-based (what they term theory-driven) RCTs that include well-chosen covariates (competing exposures) yield more precise estimates of effects (reducing the probability of Type II error), even though those covariates are not needed to control Type I error. Counterfactual queries do not need a comparison group. They do need a model of how facts came about that can be modified to predict the counterfactual outcome.
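As a quick illustration of Chen and Rossi’s point about precision, here is a simulation sketch with invented numbers: adjusting for a strongly prognostic competing exposure leaves the randomised treatment effect estimate unbiased but shrinks its standard error, making a real effect harder to miss.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2_000

# Hypothetical competing exposure that strongly predicts the outcome
friends = rng.normal(size=n)
treated = rng.integers(0, 2, size=n)          # randomised, so no confounding
outcome = 1.0 * treated + 2.0 * friends + rng.normal(size=n)

unadjusted = sm.OLS(outcome, sm.add_constant(treated.astype(float))).fit()
adjusted = sm.OLS(outcome, sm.add_constant(np.column_stack([treated, friends]))).fit()

# Both estimates of the treatment effect are unbiased; the adjusted one has a
# much smaller standard error, reducing the risk of a Type II error.
print(unadjusted.params[1], unadjusted.bse[1])
print(adjusted.params[1], adjusted.bse[1])
```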

Challenging the theory-based/counterfactual binary does not mean that all evaluations are the same. There can still obviously be variation in the strength of evidence used to develop and test theories and how well theories withstand those tests. However, taking a more nuanced view of the differences and similarities between approaches leads to better evaluations.

(If you found this post interesting, please do say hello and let me know!)

References

Adams, E. W. (1970). Subjunctive and Indicative Conditionals. Foundations of Language, 6, 89–94.

Chen, H.-T., & Rossi, P. H. (1983). Evaluating With Sense: The Theory-Driven Approach. Evaluation Review, 7(3), 283–302.

Fitz-Gibbon, C. T., & Morris, L. L. (1975). Theory-based evaluation. Evaluation Comment, 5(1), 1–4. Reprinted in Fitz-Gibbon, C. T., & Morris, L. L. (1996). Theory-based evaluation. Evaluation Practice, 17(2), 177–184.

Halpern, J. Y. (2016). Actual causality. MIT Press.

HM Treasury. (2020). Magenta Book.

Mayne, J. (2019). Revisiting contribution analysis. Canadian Journal of Program Evaluation, 34(2), 171–191.

Rafetseder, E., Cristi-Vargas, R., & Perner, J. (2010). Counterfactual reasoning: Developing a sense of “nearest possible world”. Child Development, 81(1), 376–389.

Reichardt, C. S. (2022). The Counterfactual Definition of a Program Effect. American Journal of Evaluation, 43(2), 158–174.

Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21(4), 501–524.

White, H. (2010). A contribution to current debates in impact evaluation. Evaluation, 16(2), 153–164.