“No rigorous distinction is possible…”

“No rigorous distinction is possible between theory-driven evaluation (science), TBE [theory-based evaluation], theory-anchored evaluation, realistic evaluation, contribution analysis, evaluation based on a logic model, and a theory of change. I shall here refer pragmatically only to TBE as an umbrella term for all evaluation strategies and approaches where a core task is to construct and clarify a set of assumptions about why and how an intervention works explicitly in the form of a written or graphic description of a causal chain of events (the program theory).

“The term program theory is habitual and should not be restricted to programs only because a similar logic can be applied in evaluations of policies, projects, and many kinds of interventions (Rogers, 2007). In a similar vein, the term “theory” has rough edges. A theory used in TBE may refer to something more or less explicit and articulate, more or less abstract and formal, and more or less stakeholder based versus anchored in general social science theory.”

– Dahler-Larsen, P. (2018, p. 9). Theory-Based Evaluation Meets Ambiguity: The Role of Janus Variables. American Journal of Evaluation, 39(1), 6–23.

People still read blogs!

Thanks very much to Thomas Aston (2024) for critical engagement in the Evaluation journal:

“… there were somewhat more thoughtful debates on the integration of experiments with theory-based evaluation. Of course, this is not a new discussion, but it reemerged, as I discussed in Randomista mania (Aston, 2023c), during a Kantar Public (2023) (now Verian) webinar on ensuring rigor in theory-based evaluation. In the United Kingdom, the Magenta Book guidance from HM Treasury (2020) includes a decision tree which implies, to some readers, that experimental designs cannot be theory-based. During the event, Alex Hurrell pointed out that theory-based evaluation and experimental methods are not necessarily irreconcilable. To this end, Andi Fugard (2023) wrote a blog arguing that “counterfactual” is not synonymous with “control group” and later conducted a thoughtful webinar for the UK Evaluation Society (2023) on challenging the theory-based counterfactual binary. In my view, Fugard is right that there is not a strict binary which implies that counterfactual approaches should not be theory-based. They have been moving in that direction for years (White, 2009). But perhaps the decision tree is less about the benefits of integrating theory into counterfactual approaches and more about the epistemic, practical, and ethical limits of experimental impact evaluation approaches and the importance of exploring alternative options when they are neither possible nor appropriate.”

I wish I shared Thomas’s optimism that the theory-based/counterfactual binary is already blurring. My reading is that the original 1975 definition of theory-based evaluation was inclusive, and still is for those in the theory-driven camp (e.g., Huey Chen’s work). But in many UK evaluation contexts, theory-based evaluation is treated as synonymous with contribution analysis, qualitative comparative analysis, and process tracing, applied to qualitative data; RCTs and QEDs are not allowed. There are notable exceptions.


Challenging the theory-based/counterfactual binary

Randomised controlled trials (RCTs) are an instance of the counterfactual kind of impact evaluation and contribution analysis of the theory-based kind. For example, the UK’s Magenta Book recommends using theory-based evaluation if you can’t find a comparison group (HM Treasury, 2020, p. 47). The founding texts on contribution analysis present it as using a non-counterfactual approach to causation (e.g., Mayne, 2019, pp. 173–4). In December 2023, I gave a talk to the UK Evaluation Society (UKES) exploring what happens if we challenge this theory-based/counterfactual binary. This post summarises what I said.

Good experiments and quasi-experiments are theory-based

Theory is required to select the variables used in RCTs and quasi-experimental designs (QEDs); that is, to decide what data to gather and include in analyses. We need to peek inside the “black box” of a programme to work out what we are evaluating and how. These variables include outcomes, moderators, mediators, competing exposures, and, in QEDs, confounders.

This diagram shows a causal directed acyclic graph (causal DAG). Therapy is conjectured to lead to behavioural activation, which alleviates depression. Supportive friends are a “competing exposure” that can also alleviate depression. Reading self-help texts is a confounder, since it predicts both whether someone seeks therapy and their outcomes, so this (made-up) model describes a quasi-experimental evaluation.
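Since the figure may not come through in every format, here is the same DAG written out as a minimal sketch in Python using networkx (my tooling choice, not something from the talk); the node names simply follow the description above.

```python
# A sketch of the causal DAG described above, using networkx (an assumption:
# the original post showed this as a figure, not code).
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("Reading self-help texts", "Therapy"),              # confounder -> exposure
    ("Reading self-help texts", "Depression relieved"),   # confounder -> outcome
    ("Therapy", "Behavioural activation"),                # exposure -> mediator
    ("Behavioural activation", "Depression relieved"),    # mediator -> outcome
    ("Supportive friends", "Depression relieved"),        # competing exposure -> outcome
])

assert nx.is_directed_acyclic_graph(dag)  # it is indeed a DAG
print(sorted(dag.edges()))
```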

In QEDs, many different model structures are compatible with the data. The slogan “correlation doesn’t imply causation” can be reformulated as: the models “A causes B” and “B causes A” are Markov equivalent if all we know is that A and B are correlated. The number of Markov-equivalent models rises exponentially as the number of variables and statistical associations increases. Theory is needed to select between the models: the data alone cannot tell us.
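To see the two-variable case concretely, here is a small simulated sketch (my illustration, not from the talk): for linear Gaussian models, “A causes B” and “B causes A” achieve exactly the same likelihood on the same data, so nothing in the data favours one over the other.

```python
# Markov equivalence in the two-variable case: "A causes B" and "B causes A"
# imply the same joint distribution for linear Gaussian models, so the data
# cannot tell them apart. (Simulated illustration.)
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
a = rng.normal(size=n)
b = 0.6 * a + rng.normal(size=n)  # data actually generated as A -> B

def normal_loglik(x):
    # Log-likelihood of x under a normal distribution with MLE mean and variance.
    return -0.5 * len(x) * (np.log(2 * np.pi * x.var()) + 1)

def model_loglik(cause, effect):
    # Maximised log-likelihood of the linear Gaussian model cause -> effect.
    slope = np.cov(cause, effect, ddof=0)[0, 1] / cause.var()
    residual = effect - effect.mean() - slope * (cause - cause.mean())
    return normal_loglik(cause) + normal_loglik(residual)

print(model_loglik(a, b))  # fit "A causes B"
print(model_loglik(b, a))  # fit "B causes A" -- the same, up to rounding
```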

Although RCTs give us an unbiased estimate of an average treatment effect – that is, the average of each individual’s unmeasurable treatment effect (the difference between their actual outcome and their counterfactual outcome) – they cannot tell us what that difference represents. To do that, we need a theory of the ingredients and processes in the two programmes being compared; for example, what the similarities and differences are between cognitive behavioural therapy (CBT) and humanistic counselling, or whatever “treatment as usual” is in practice.
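To make the “average of individual effects” point concrete, here is a small potential-outcomes simulation (my illustration, with made-up numbers): each person has two potential outcomes, only one of which is ever observed, yet the randomised difference in means recovers the average of the individual differences.

```python
# Potential-outcomes sketch: the ATE is the average of individual treatment
# effects, and a randomised difference in means estimates it even though no
# individual's counterfactual outcome is ever observed. (Made-up numbers.)
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Each person's two potential outcomes, e.g. a depression score under
# treatment as usual versus under therapy; individual effects vary.
y_control = rng.normal(10, 2, size=n)
y_treated = y_control - rng.normal(1.5, 1.0, size=n)

true_ate = (y_treated - y_control).mean()  # needs both outcomes: unobservable in practice

# Randomise, then observe only one potential outcome per person.
assigned = rng.random(n) < 0.5
observed = np.where(assigned, y_treated, y_control)

estimated_ate = observed[assigned].mean() - observed[~assigned].mean()
print(round(true_ate, 2), round(estimated_ate, 2))  # both close to -1.5
```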

Counterfactual evaluations do not need a comparison group

There is a long history of research on counterfactual reasoning in the absence of a control group. How should we determine the truth of a counterfactual such as “If Oswald had not killed Kennedy, no one else would have” (e.g., Adams, 1970)? Clearly we ponder statements like these without running violent RCTs. Another strand of research investigates how children, adults, and animals perform counterfactual reasoning in practice (e.g., Rafetseder et al., 2010). This research on non-experimental counterfactual reasoning includes literature in evaluation, for instance by White (2010) and more recently Reichardt (2022).

Halpern (2016) introduces a formal framework for defining causal relationships and estimating counterfactual outcomes, regardless of the sources of evidence. The diagram below illustrates the simplest version of the framework, in which causes and effects are binary. Equations annotating each node define the causal relationships and allow counterfactual outcomes to be inferred.

This is one of the models I used to illustrate counterfactual inference without a comparison group. In the factual situation, Alex was feeling down, spoke to their counsellor, and then felt better. In this counterfactual scenario, we are exploring what would have happened if Alex hadn’t spoken to their counsellor. In this case, they would have spoken to their friend instead and again felt better. Although there is no difference between the factual and counterfactual outcome, Halpern’s framework allows us to infer that Alex speaking to the counsellor was an actual cause of Alex feeling better.
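Here is a minimal sketch of that model as structural equations in Python. The variable names and equations are my reconstruction from the description above, and the final check follows the spirit of Halpern’s “modified” definition of actual causation (hold a witness variable at its actual value, then flip the candidate cause) rather than the full formal machinery.

```python
# The Alex example as binary structural equations (my reconstruction of the
# model described above, in the spirit of Halpern's framework).

def solve(feeling_down=True, do=None):
    """Evaluate the structural equations, optionally forcing some variables."""
    do = do or {}
    v = {"feeling_down": feeling_down}
    v["counsellor"] = do.get("counsellor", v["feeling_down"])  # speaks to counsellor if feeling down
    v["friend"] = do.get("friend", v["feeling_down"] and not v["counsellor"])  # otherwise turns to a friend
    v["feels_better"] = do.get("feels_better", v["counsellor"] or v["friend"])
    return v

factual = solve()                                # counsellor=True, friend=False, feels_better=True
no_counsellor = solve(do={"counsellor": False})  # the friend steps in; feels_better is still True

# Rough check of actual causation (Halpern's "modified" definition): hold the
# witness variable "friend" at its actual value and flip the candidate cause.
witnessed = solve(do={"counsellor": False, "friend": factual["friend"]})
print(factual["feels_better"], no_counsellor["feels_better"], witnessed["feels_better"])
# True True False: speaking to the counsellor is an actual cause of feeling better
```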

This provides a concrete illustration of why RCTs and QEDs are unnecessary for counterfactual reasoning; however, the causal model obviously must be correct for sound conclusions to be drawn, so one might reasonably ask what sorts of evidence we can use to build these models and persuade people they are true. There is also lively debate in the literature concerning what constitutes an “actual cause”, and the algorithms for determining this are fiddly to apply, even if we assume we have a true model of the causal relationships. I explored Halpern’s approach in a previous blog post.

Summary

The original definition of theory-based evaluation by Fitz-Gibbon and Morris (1975) included the full range of approaches, qualitative and quantitative – including RCTs and QEDs. Many others follow in the tradition. For instance, Weiss (1997, p. 512) cites path analysis (a kind of structural equation model) as being “conceptually compatible with TBE [theory-based evaluation] and has been used by evaluators”. Chen and Rossi (1983) explain how theory-based (what they term theory-driven) RCTs that include well-chosen covariates (competing exposures) yield more precise estimates of effects (reducing the probability of Type II error), even though those covariates are not needed to control Type I error. Counterfactual queries do not need a comparison group. They do need a model of how facts came about that can be modified to predict the counterfactual outcome.
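As a small numerical illustration of the covariate point (my simulation, not Chen and Rossi’s): adjusting a randomised comparison for a well-chosen prognostic covariate leaves the effect estimate unbiased but shrinks its standard error, which is what reduces the probability of a Type II error.

```python
# Covariate adjustment in a simulated RCT: the treatment estimate stays
# unbiased, but adding a prognostic covariate (a competing exposure) shrinks
# its standard error, i.e. reduces the chance of a Type II error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2_000
treat = rng.integers(0, 2, size=n)   # randomised 50/50
support = rng.normal(size=n)         # prognostic covariate, e.g. social support
outcome = 0.3 * treat + 1.0 * support + rng.normal(size=n)

unadjusted = sm.OLS(outcome, sm.add_constant(treat)).fit()
adjusted = sm.OLS(outcome, sm.add_constant(np.column_stack([treat, support]))).fit()

print(unadjusted.params[1], unadjusted.bse[1])  # effect ~0.3, larger standard error
print(adjusted.params[1], adjusted.bse[1])      # effect ~0.3, smaller standard error
```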

Challenging the theory-based/counterfactual binary does not mean that all evaluations are the same. There can still obviously be variation in the strength of evidence used to develop and test theories and how well theories withstand those tests. However, taking a more nuanced view of the differences and similarities between approaches leads to better evaluations.

(If you found this post interesting, please do say hello and let me know!)

References

Adams, E. W. (1970). Subjunctive and Indicative Conditionals. Foundations of Language, 6, 89–94.

Chen, H.-T., & Rossi, P. H. (1983). Evaluating With Sense: The Theory-Driven Approach. Evaluation Review, 7(3), 283–302.

Fitz-Gibbon, C. T., & Morris, L. L. (1975). Theory-based evaluation. Evaluation Comment, 5(1), 1–4. Reprinted in Fitz-Gibbon, C. T., & Morris, L. L. (1996). Theory-based evaluation. Evaluation Practice, 17(2), 177–184.

Halpern, J. Y. (2016). Actual Causality. MIT Press.

HM Treasury. (2020). Magenta Book.

Mayne, J. (2019). Revisiting contribution analysis. Canadian Journal of Program Evaluation, 34(2), 171–191.

Rafetseder, E., Cristi-Vargas, R., & Perner, J. (2010). Counterfactual reasoning: Developing a sense of “nearest possible world”. Child Development, 81(1), 376–389.

Reichardt, C. S. (2022). The Counterfactual Definition of a Program Effect. American Journal of Evaluation, 43(2), 158–174.

Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21(4), 501–524.

White, H. (2010). A contribution to current debates in impact evaluation. Evaluation, 16(2), 153–164.

Reclaiming the term “theory-based”

Excited to discover a trio of 2024 publications that use the broader conception of theory-based evaluation that includes trials and quasi-experiments, as the term was introduced by Fitz-Gibbon and Morris (1975) and used by, e.g., Chen and Rossi (1980), Coryn et al. (2011), Weiss (1997), Funnell and Rogers (2011), Chen (2015), and many others.

I hope the next Magenta Book update takes a more nuanced approach that includes RCTs and QEDs under the theory-based umbrella, alongside, e.g., QCA and uses of Bayes’ rule to reason about qualitative evidence.

The new:

Bonell, C., Melendez-Torres, G. J., & Warren, E. (2024). Realist trials and systematic reviews: Rigorous, useful evidence to inform health policy. Cambridge University Press.

Matta, C., Lindvall, J., & Ryve, A. (2024). The Mechanistic Rewards of Data and Theory Integration for Theory-Based Evaluation. American Journal of Evaluation, 45(1), 110–132.

Schmidt, R. (2024). A graphical method for causal program attribution in theory-based evaluation. Evaluation, online first.

Key older texts:

Chen, H.-T., & Rossi, P. H. (1980). The Multi-Goal, Theory-Driven Approach to Evaluation: A Model Linking Basic and Applied Social Science. Social Forces, 59, 106–122.

Chen, H.-T., & Rossi, P. H. (1983). Evaluating With Sense: The Theory-Driven Approach. Evaluation Review, 7(3), 283–302.

Chen, H. T. (2015). Practical program evaluation: Theory-driven evaluation and the integrated evaluation perspective (2nd edition). Sage Publications.

Coryn, C. L. S., Noakes, L. A., Westine, C. D., & Schröter, D. C. (2011). A systematic review of theory-driven evaluation practice from 1990 to 2009. American Journal of Evaluation, 32(2), 199–226.

Fitz-Gibbon, C. T., & Morris, L. L. (1975). Theory-based evaluation. Evaluation Comment, 5(1), 1–4. Reprinted in Fitz-Gibbon, C. T., & Morris, L. L. (1996). Theory-based evaluation. Evaluation Practice, 17(2), 177–184.

Funnell, S. C., & Rogers, P. J. (2011). Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. Jossey-Bass.

Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21(4), 501–524.

TBE QEDs

‘In TBE [theory-based evaluation] practice […] theory as represented is not specific enough to support causal conclusions in inference […]. For example, in contribution analysis “causal assumptions” refer to a “causal package” consisting of the program intervention and a set of contextual conditions that together may explain an observed change in the outcome […]. In realist evaluation, the causal mechanisms that are triggered by the intervention are specified in “configuration” with their context and the outcome. Often, however, the causal structure of the configuration is not clear […]. Moreover, the main TBE approaches to inference do not have standard practices, conventions, for treating bias in evidence […].

‘TBE practitioners may borrow from other methods to test theoretical assumptions […]. Sometimes TBE employs regression analysis or quasi-experimental propensity score matching in inference (our running example in this article of an actual TBE program evaluation does so).’

Schmidt, R. (2024). A graphical method for causal program attribution in theory-based evaluation. Evaluation, online first.
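Schmidt mentions propensity score matching as one of the quasi-experimental tools TBE sometimes borrows. For readers who have not met it, here is a minimal, generic sketch (simulated data, not Schmidt’s running example): estimate each unit’s probability of receiving the intervention from observed covariates, then compare treated units with untreated units that have similar estimated probabilities.

```python
# A generic propensity score matching sketch (simulated data, not Schmidt's
# running example): model the probability of treatment from covariates, then
# match each treated unit to the nearest untreated unit on that score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5_000
covariates = rng.normal(size=(n, 2))  # e.g. baseline need and prior support
prob_treated = 1 / (1 + np.exp(-(covariates @ np.array([0.8, -0.5]))))
treated = rng.random(n) < prob_treated  # self-selection into the programme
outcome = 1.0 + 0.5 * treated + covariates @ np.array([0.7, 0.3]) + rng.normal(size=n)

# Propensity scores from a logistic regression of treatment on covariates.
scores = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# Nearest-neighbour matching on the score, with replacement.
distances = np.abs(scores[~treated][None, :] - scores[treated][:, None])
matched_controls = outcome[~treated][distances.argmin(axis=1)]

naive = outcome[treated].mean() - outcome[~treated].mean()  # confounded comparison
matched = (outcome[treated] - matched_controls).mean()      # closer to the true 0.5
print(round(naive, 2), round(matched, 2))
```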


Is a theory of change different to a logic model? Depends

Evaluation purists: It’s only a Logic Model if it comes from the W. K. Kellogg Foundation region of Michigan, otherwise it’s just sparkling boxes and arrows.

Alternatively:

“A program theory is an explicit theory or model of how an intervention contributes to a set of specific outcomes through a series of intermediate results. The theory needs to include an explanation of how the program’s activities contribute to the results, not simply a list of activities followed by the results, with no explanation of how these are linked, apart from a mysterious arrow.” (Funnell & Rogers, 2011, p. 31)

“A program theory is usually displayed in a diagram called a logic model.” (Funnell & Rogers, 2011, p. 32)

“… sometimes the terms logic model and theory of change have been distinguished in particular ways that are different to the ways we are using the terms here. For example Heléne Clark, from ActKnowledge, and Andrea Anderson, from the Aspen Institute Roundtable on Community Change, who employ program theory extensively, have used these terms to make the distinction between ways of representing program theory that we have labeled pipeline logic models and outcomes chain logic models (Clark and Anderson, 2004). Mhairi Mackenzie and Avril Blamey (2005) have used theories of change to refer specifically to the type of logic model advocated by the Aspen Institute […]. The key message here is to define your terms carefully and ask others to do so as well. It cannot be assumed that you mean the same thing when you use the same term or that you mean something different when you use a different term.” (Funnell & Rogers, 2011, p. 26)

Funnell, S. C., & Rogers, P. J. (2011). Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. Jossey-Bass.

Ignorance of history in evaluation

“Despite occasional statements that program theory is a new approach, its roots go back more than fifty years. […] The history of program theory evaluation is not one of a steady increase in understanding. Instead, many of the key ideas have been well articulated and then ignored or forgotten in descriptions of the approach. It is not unusual to have statements that demonstrate a lack of knowledge of previous empirical and theoretical developments, such as a call for proposals from the Agency for Healthcare Research and Quality (2008) that claimed that “‘theory-based evaluation’ is a relatively new approach” (p. 14).”

Sue C. Funnell and Patricia J. Rogers (2011, pp. 15–16). Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. Jossey-Bass.

Degtiar & Rose (2023) – A Review of Generalizability and Transportability

“This article presents a framework for addressing external validity bias, including a synthesis of approaches for generalizability and transportability, and the assumptions they require, as well as tests for the heterogeneity of treatment effects and differences between study and target populations.”

References

Degtiar, I., & Rose, S. (2023). A Review of Generalizability and Transportability. Annual Review of Statistics and Its Application, 10(1), 501–524.

The kind of theory of theory-driven evaluation

“… the kind of theory we have in mind is not the global conceptual schemes of the grand theorists, but much more prosaic theories that are concerned with how human organizations work and how social problems are generated. It advances evaluation practice very little to adopt one or another of current global theories in attacking, say, the problem of juvenile delinquency, but it does help a great deal to understand the authority structure in schools and the mechanisms of peer group influence and parental discipline in designing and evaluating a program that is supposed to reduce disciplinary problems in schools. Nor are we advocating an approach that rests exclusively on proven theoretical schema that have received wide acclaim in published social science literatures. What we are strongly advocating is the necessity for theorizing, for constructing plausible and defensible models of how programs can be expected to work before evaluating them. Indeed the theory-driven perspective is closer to what econometricians call ‘model specification’ than are more complicated and more abstract and general theories.”

Chen, H.-T., & Rossi, P. H. (1983, p. 285). Evaluating With Sense: The Theory-Driven Approach. Evaluation Review, 7(3), 283–302.


Intervening mechanism evaluation

‘The intervening mechanism evaluation approach assesses whether the causal assumptions underlying a program are functioning as stakeholders had projected (Chen, 1990). […] It is not always labeled in the same way by those who apply it. Some evaluators have referred to it as “theory of change evaluation” (Connell, Kubisch, Schorr, & Weiss, 1995) or “theory-based evaluation” (Rogers, Hasci, Petrosino, & Huebner, 2000; Weiss, 1997).’

Chen, H. T. (2015, p. 312). Practical Program Evaluation: Theory-Driven Evaluation and the Integrated Evaluation Perspective. SAGE Publications Ltd.