Suppose we find that the probability of a successful programme outcome (out) depends on treatment (treat) and mediator (med) as per the Bayes network depicted in part 1 of the figure below. Suppose also that there are no other unmeasured variables. This model defines \(P(\mathit{out} | \mathit{treat}, \mathit{med})\), \(P(\mathit{med} | \mathit{treat})\), and \(P(\mathit{treat})\). The arrows denote these probabilistic relationships.

Interpreting the arrows as causal relations, all six models above are consistent with these conditional probabilities. Model 2 says that treatment and outcome are associated with each other because the mediator is a common cause. Model 3 says that the outcome causes treatment assignment. Model 4 says that the treatment causes both mediator and outcome, and that the outcome also causes the mediator. And so on. These six models are all members of the same Markov equivalence class (see Verma & Pearl, 1990).
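This equivalence can be checked numerically. Here is a minimal sketch (with made-up probabilities, and only the treat and med variables) showing that a chain factorised in one direction defines exactly the same joint distribution as the reverse factorisation obtained via Bayes' rule:

```python
# Two opposite causal orderings over binary variables can encode the same
# joint distribution, so statistics alone cannot tell them apart.
# All numbers are made up for illustration.

p_treat = 0.5                          # P(treat = 1)
p_med_given_treat = {1: 0.7, 0: 0.2}   # P(med = 1 | treat)

# Joint distribution under the ordering treat -> med
joint = {
    (t, m): (p_treat if t else 1 - p_treat)
            * (p_med_given_treat[t] if m else 1 - p_med_given_treat[t])
    for t in (0, 1) for m in (0, 1)
}

# Reverse the factorisation (med -> treat), deriving conditionals by Bayes' rule
p_med = {m: joint[(1, m)] + joint[(0, m)] for m in (0, 1)}
p_treat_given_med = {m: joint[(1, m)] / p_med[m] for m in (0, 1)}

joint_reversed = {
    (t, m): p_med[m] * (p_treat_given_med[m] if t else 1 - p_treat_given_med[m])
    for t in (0, 1) for m in (0, 1)
}

# Identical joints, despite the arrows pointing in opposite directions
assert all(abs(joint[k] - joint_reversed[k]) < 1e-12 for k in joint)
```

The same exercise works for the three-variable models: any factorisation in the equivalence class reproduces the same joint distribution.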

We need something beyond the data and statistical associations to distinguish between them: theory. Some of the theory might be trivial, e.g., that the outcome followed treatment and can’t have caused the treatment because we have ruled out time travel.

References

Verma, T., & Pearl, J. (1990). Equivalence and synthesis of causal models. Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence, 255–270.

The core idea of Katharine Jenkins’ (2018) norm-relevancy account of gender is that someone’s gender is defined in terms of the gender norms they experience as relevant – whether or not they comply with those norms. I’m not sure yet if this is enough to define gender – it could be; however, I think it’s a very interesting theory of how people – especially trans and nonbinary people – might decode their gender. Jenkins uses a crisp classical binary logic approach. This blog post is an attempt to explore what happens if we add probabilities.

I’m using Bayesian networks because they are well-supported by software that does the sums (the analyses below were produced using GeNIe Modeler). The direction of the arrows below is not meant to imply causation. Rather, the idea is that, starting from the assumption that someone is a particular gender, it is straightforward to estimate the probability that a particular gender norm would be relevant. The Bayes trick is then to go in reverse: from experiencing the relevance of particular norms to decoding one’s gender.

Let’s get started with some pictures.

The network below shows the setup in the absence of evidence.

The goal is to infer gender and at present the probabilities are 49.5% each for man and woman and 1% for nonbinary. That’s probably too high for the latter. Also I’m assuming there are only three discrete gender identities, which is false.

Each node with an arrow leading into it represents a conditional probability. The table below shows a conditional probability distribution defined for one of the norm-relevancy nodes.

Norm         Man    Woman    Nonbinary
Relevant     20%    80%      50%
Irrelevant   80%    20%      50%

So, in this case if someone is a man then this norm is 80% likely to be irrelevant; if someone is a woman then it is 80% likely to be relevant; and if someone is nonbinary there is a 50-50 split. I’ve set up all the nodes in this pattern, just flipping the 80% to 20% and vice versa depending on whether a norm is for men or for women.

The idea then is to use the Bayesian network to calculate how likely it is that someone is a man, woman, or nonbinary based on the relevance or irrelevance of the norms.
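The arithmetic the network performs can be sketched in a few lines of Python, assuming priors of 49.5%/49.5%/1% and the 80/20 conditional probabilities described above (the `posterior` function and its names are my own, not part of GeNIe Modeler):

```python
# Priors for a cis space: 49.5% / 49.5% / 1%
PRIOR = {"Man": 0.495, "Woman": 0.495, "Nonbinary": 0.01}

# P(norm relevant | gender) for female-coded and male-coded norms
FEMALE_NORM = {"Man": 0.2, "Woman": 0.8, "Nonbinary": 0.5}
MALE_NORM = {"Man": 0.8, "Woman": 0.2, "Nonbinary": 0.5}

def posterior(evidence, prior=PRIOR):
    """evidence: list of (cpt, relevant) pairs, one per observed norm."""
    unnorm = {}
    for gender, p in prior.items():
        likelihood = 1.0
        for cpt, relevant in evidence:
            likelihood *= cpt[gender] if relevant else 1 - cpt[gender]
        unnorm[gender] = p * likelihood
    total = sum(unnorm.values())
    return {gender: v / total for gender, v in unnorm.items()}

# Three male norms irrelevant, three female norms relevant:
post = posterior([(MALE_NORM, False)] * 3 + [(FEMALE_NORM, True)] * 3)
# post["Woman"] comes out well above 0.9
```

This is just Bayes’ rule with the assumption that the norms are conditionally independent given gender, which is what the network structure encodes.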

The Spaces node top left is a convenient way to change the prior probabilities of each gender; so in LGBT spaces the prior probability for nonbinary rises from 1% to 20% since there are likely to be more nonbinary people around. This also captures the intuition that it’s easier to work out whether a particular identity applies to you if you meet more people who hold that identity. See the picture below. Note how LGBT is now bold and underlined at top left. That means we are conditioning on that, i.e., assuming that it is true.

But let’s go back to cisgendered spaces.

Suppose most (but not necessarily all) of the male norms are experienced as irrelevant and most (but not necessarily all) of the female norms are perceived as relevant. As you can see below, the probability that someone is a woman increases to over 90%.

Similarly, in the converse case, where most male norms are relevant and most female norms are irrelevant, the probability that someone is a man rises to over 90%:

Now what if all the norms are relevant? Let’s also reset the evidence on whether someone is in a cis or LGBT space.

The probability of being nonbinary has gone up a little to 4%, but in this state there is most likely confusion about whether one is a man or a woman, since both have the same, highest probability.

Similarly, if all the norms are irrelevant, then the probability of nonbinary is 4%. Again, it is unlikely that you would infer that you are nonbinary.

But increasing the prior probability of nonbinary gender, for instance through meeting more nonbinary people in LGBTQ+ spaces, now makes nonbinary the most likely gender.

To emphasise again, there are many more varieties of gender identity than the three modelled here, and an obvious thought might be that someone could be gender nonconforming but still a cis man or woman – especially if someone views gender as coupled to chromosomes/genitals. I think it’s also interesting how the underdetermination of scientific theories can apply to people’s ruminations about identity, given how they feel and what other evidence they have.

The situation can also be fuzzier, e.g., where the difference between one of the binary genders and nonbinary is closer:

We undoubtedly don’t have conscious access to mental probabilities to two decimal places(!), so scenarios like these may feel equiprobable.

So far we have explored the simple situation where people are only aware of three male norms and three female norms. What happens if we had more, but kept the probability distributions on each node the same? Now we’re tip-toeing towards a more realistic scenario:

Everything works as before for men and women; however, something different now happens for nonbinary people. Suppose all the norms are experienced as irrelevant (it works the same for relevant):

Now the most probable gender is nonbinary (though man and woman are still far from zero: 24%).

This is true even in cis spaces:

Finally, there’s another way to bump up the probability of nonbinary. Let’s go back to two gender norms, one male and one female. However, set the probabilities so that if you’re a woman, it’s 99.99% probable that the female norm will apply (and similar for men and male norms). Set it to 50-50 for nonbinary. Now we get a strong inference towards nonbinary if neither or both norms are relevant, even in cis spaces.
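The arithmetic behind this last scenario can be sketched directly, assuming the near-deterministic conditional probabilities just described and cis-space priors (numbers and names are my own illustrative choices):

```python
# Two norms only: one female-coded, one male-coded, each 99.99% likely to be
# relevant for the matching binary gender and 50-50 for nonbinary.
PRIOR = {"Man": 0.495, "Woman": 0.495, "Nonbinary": 0.01}  # cis space
FEMALE_NORM = {"Man": 0.0001, "Woman": 0.9999, "Nonbinary": 0.5}
MALE_NORM = {"Man": 0.9999, "Woman": 0.0001, "Nonbinary": 0.5}

# Both norms experienced as relevant: a man or a woman would almost
# certainly experience one of the two as irrelevant, so nonbinary dominates.
unnorm = {g: PRIOR[g] * FEMALE_NORM[g] * MALE_NORM[g] for g in PRIOR}
total = sum(unnorm.values())
post = {g: v / total for g, v in unnorm.items()}
# post["Nonbinary"] comes out above 0.95, despite the 1% prior
```

The same result holds when both norms are irrelevant, by the symmetry of the conditional probabilities.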

In summary:

It is possible to view norm-relevancy through probabilities and as a sort of Bayesian self-identity decoding process.

When there is a small number of norms and (say) 80% chance of a norm being relevant for a particular binary gender, the prior probability of nonbinary has a big impact on whether someone decodes their gender that way.

As the number of norms increases, it is easier to infer nonbinary as a possibility.

Additionally, if there are only a few norms, but the probability that they apply for men and women is very high, then seeing them as all relevant or irrelevant is strong evidence for nonbinary.

It is a cliché that randomised controlled trials (RCTs) are the gold standard if you want to evaluate a social policy or intervention and quasi-experimental designs (QEDs) are presumably the silver standard. But often it is not possible to use either, especially for complex policies. Theory-Based Evaluation is an alternative that has been around for a few decades, but what exactly is it?

In this post I will sketch out what some key texts say about Theory-Based Evaluation; explore one approach, contribution analysis; and conclude with discussion of an approach to assessing evidence in contribution analyses (and a range of other approaches) using Bayes’ rule.

theory (lowercase)

Let’s get the obvious out of the way. All research, evaluation included, is “theory-based” by necessity, even if an RCT is involved. Outcome measures and interviews alone cannot tell us what is going on; some sort of theory (or story, account, narrative, …) – however flimsy or implicit – is needed to design an evaluation and interpret what the data means.

If you are evaluating a psychological therapy, then you probably assume that attending sessions exposes therapy clients to something that is likely to be helpful. You might make assumptions about the importance of the therapeutic relationship to clients’ openness, of any homework activities carried out between sessions, etc. RCTs can include statistical mediation tests to determine whether the various things that happen in therapy actually explain any difference in outcome between a therapy and comparison group (e.g., Freeman et al., 2015).

It is great if a theory makes accurate predictions, but theories are underdetermined by evidence, so this cannot be the only criterion for preferring one theory’s explanation over another (Stanford, 2017) – again, even if you have an effect size from an RCT. Lots of theories will be compatible with any RCT’s results. To see this, try a particular social science RCT and think hard about what might be going on in the intervention group beyond what the intervention developers have explicitly intended.

In addition to accuracy, Kuhn (1977) suggests that a good theory should be consistent with itself and other relevant theories; have broad scope; bring “order to phenomena that in its absence would be individually isolated”; and it should produce novel predictions beyond current observations. There are no obvious formal tests for these properties, especially where theories are expressed in ordinary language and box-and-arrow diagrams.

Theory-Based Evaluation (title case)

Theory-Based Evaluation is a particular genre of evaluation that includes realist evaluation and contribution analysis. According to the UK government’s Magenta Book (HM Treasury, 2020, p. 43), Theory-Based methods of evaluation

“can be used to investigate net impacts by exploring the causal chains thought to bring about change by an intervention. However, they do not provide precise estimates of effect sizes.”

The Magenta Book acknowledges (p. 43) that “All evaluation methods can be considered and used as part of a [Theory-Based] approach”; however, Figure 3.1 (p. 47) is clear. If you can “compare groups affected and not affected by the intervention”, you should go for experiments or quasi-experiments; otherwise, Theory-Based methods are required.

Theory-Based Evaluation attempts to draw causal conclusions about a programme’s effectiveness in the absence of any comparison group. If a quasi-experimental design (QED) or randomised controlled trial (RCT) were added to an evaluation, it would cease to be Theory-Based Evaluation, as the title case term is used.

Example: Contribution analysis

Contribution analysis is an approach to Theory-Based Evaluation developed by John Mayne (28 November 1943 – 18 December 2020). Mayne was originally concerned with how to use monitoring data to decide whether social programmes actually worked when quasi-experimental approaches were not feasible (Mayne, 2001), but the approach evolved to have broader scope.

According to a recent summary (Mayne, 2019), contribution analysis consists of six steps (and an optional loop):

Step 1: Set out the specific cause-effect questions to be addressed.

Step 2: Develop robust theories of change for the intervention and its pathways.

Step 3: Gather the existing evidence on the components of the theory of change model of causality: (i) the results achieved and (ii) the causal link assumptions realized.

Step 4: Assemble and assess the resulting contribution claim, and the challenges to it.

Step 5: Seek out additional evidence to strengthen the contribution claim.

Step 6: Revise and strengthen the contribution claim.

Step 7: Return to Step 4 if necessary.

Here is a diagrammatic depiction of the kind of theory of change that could be plugged in at Step 2 (Mayne, 2015, p. 132), which illustrates the cause-effect links an evaluation would aim to evaluate.

In this example, mothers are thought to learn from training sessions and materials, which then persuades them to adopt new feeding practices. This leads to children having more nutritious diets. The theory is surrounded by various contextual factors such as food prices. (See also Mayne, 2017, for a version of this that includes ideas from the COM-B model of behaviour.)

Step 4 is key. It requires evaluators to “Assemble and assess the resulting contribution claim”. How are we to carry out that assessment? Mayne (2001, p. 14) suggests some questions to ask:

“How credible is the story? Do reasonable people agree with the story? Does the pattern of results observed validate the results chain? Where are the main weaknesses in the story?”

For me, the most credible stories would include experimental or quasi-experimental tests, with mediation analysis of key hypothesised mechanisms, and qualitative detective work to get a sense of what’s going on beyond the statistical associations. But the quant part of that would lift us out of the Theory-Based Evaluation wing of the Magenta Book flowchart. In general, plausibility will be determined outside contribution analysis in, e.g., quality criteria for whatever methods for data collection and analysis were used. Contribution analysis says remarkably little on this key step.

Although contribution analysis is intended to fill a gap where no comparison group is available, Mayne (2001, p. 18) suggests that further data might be collected to help rule out alternative explanations of outcomes, e.g., from surveys, field visits, or focus groups. He also suggests reviewing relevant meta-analyses, which could (I presume) include QED and RCT evidence.

It is not clear to me what the underlying theory of causation is in contribution analysis. It is clear what it is not (Mayne, 2019, pp. 173–4):

“In many situations a counterfactual perspective on causality—which is the traditional evaluation perspective—is unlikely to be useful; experimental designs are often neither feasible nor practical…”

“[Contribution analysis] uses a stepwise (generative) not a counterfactual approach to causality.”

(We will explore counterfactuals below.) I can guess what this generative approach could be, but Mayne does not provide precise definitions. It clearly isn’t the idea from generative social science in which causation is defined in terms of computer simulations (Epstein, 1999).

One way to think about it might be in terms of mechanisms: “entities and activities organized in such a way that they are responsible for the phenomenon” (Illari & Williamson, 2011, p. 120). We could make this precise by modelling the mechanisms using causal Bayesian networks such that variables (nodes in a network) represent the probability of activities occurring, conditional on temporally earlier activities having occurred – basically, a chain of probabilistic if-thens.

Why do people get vaccinated for Covid-19? Here is the beginning of a (generative?) if-then theory:

If you learned about vaccines in school, believed what you learned, are exposed to an advert for the Covid-19 jab, and are invited by text message to book an appointment for one, then (with a certain probability) you use your phone to book an appointment.

If you have booked an appointment, then (with a certain probability) you travel to the vaccine centre in time to attend the appointment.

If you attend the appointment, then (with a certain probability) you are asked to join a queue.

… and so on …

In a picture:

This does not explain how or why the various entities (people, phones, etc.) and activities (doing stuff like getting the bus as a result of beliefs and desires) are organised as they are, just the temporal order in which they are organised and dependencies between them. Maybe this suffices.
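Numerically, such a chain is just a product of conditional probabilities. A toy sketch, with invented numbers and assuming each step depends only on its predecessor:

```python
# Each step happens with some probability, given that the previous step did.
# All probabilities are made up for illustration.
steps = [
    ("uses phone to book an appointment", 0.6),
    ("travels to the vaccine centre in time", 0.9),
    ("joins the queue on arrival", 0.99),
]

p_chain = 1.0
for description, p_step in steps:
    p_chain *= p_step

# p_chain is now the probability that the whole chain completes,
# assuming conditional independence of each step given its predecessor.
```

Each conditional probability is a place where the theory of change could break down, which is why long causal chains tend to have modest overall success probabilities.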

What are counterfactual approaches?

Counterfactual impact evaluation usually refers to quantitative approaches to estimate average differences as understood in a potential outcomes framework (or generalisations thereof). The key counterfactual is something like:

“If the beneficiaries had not taken part in programme activities, then they would not have had the outcomes they realised.”

Logicians have long worried how to determine the truth of counterfactuals, “if A had been true, B.” One approach, due to Stalnaker (1968), proposes that you:

Start with a model representing your beliefs about the factual situation where A is false. This model must have enough structure so that tweaking it could lead to different conclusions (causal Bayesian networks have been proposed; Pearl, 2013).

Add A to your belief model.

Modify the belief model in a minimal way to remove contradictions introduced by adding A.

Determine the truth of B in that revised belief model.

This broader conception of counterfactual seems compatible with any kind of evaluation, contribution analysis included. White (2010, p. 157) offered a helpful intervention, using the example of a pre-post design where the same outcome measure is used before and after an intervention:

“… having no comparison group is not the same as having no counterfactual. There is a very simple counterfactual: what would [the outcomes] have been in the absence of the intervention? The counterfactual is that it would have remained […] the same as before the intervention.”

The counterfactual is untested and could be false – regression to the mean would scupper it in many cases. But it can be stated and used in an evaluation. I think Stalnaker’s approach is a handy mental trick for thinking through the implications of evidence and producing alternative explanations.

Cook (2000) offers seven reasons why Theory-Based Evaluation cannot “provide the valid conclusions about a program’s causal effects that have been promised.” I think from those seven, two are key: (i) it is usually too difficult to produce a theory of change that is comprehensive enough for the task and (ii) the counterfactual remains theoretical – in the arm-chair, untested sense of theoretical – so it is too difficult to judge what would have happened in the absence of the programme being evaluated. Instead, Cook proposes including more theory in comparison group evaluations.

Bayesian contribution tracing

Contribution analysis has been supplemented with a Bayesian variant of process tracing (Befani & Mayne, 2014; Befani & Stedman-Bryce, 2017; see also Fairfield & Charman, 2017, for a clear introduction to Bayesian process tracing more generally).

The idea is that you produce (often subjective) probabilities of observing particular (usually qualitative) evidence under your hypothesised causal mechanism and under one or more alternative hypotheses. These probabilities and prior probabilities for your competing hypotheses can then be plugged into Bayes’ rule when evidence is observed.

Suppose you have two competing hypotheses: a particular programme led to change versus pre-existing systems. You may begin by assigning them equal probability, 0.5 and 0.5. If relevant evidence is observed, then Bayes’ rule will shift the probabilities so that one becomes more probable than the other.
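As a sketch of that update (with invented likelihoods, not figures from Befani & Mayne):

```python
# Two competing contribution claims, initially equally probable.
prior = {"programme": 0.5, "pre-existing systems": 0.5}

# Probability of observing a piece of evidence under each hypothesis
# (invented numbers: the evidence is sensitive and fairly specific).
likelihood = {"programme": 0.8, "pre-existing systems": 0.1}

# Bayes' rule: posterior is proportional to prior times likelihood
unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: v / total for h, v in unnorm.items()}
# The programme hypothesis now carries most of the probability (~0.89)
```

Further pieces of evidence are handled the same way, using the current posterior as the next prior.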

Process tracers often cite Van Evera’s (1997) tests such as the hoop test and smoking gun. I find definitions of these challenging to remember so one thing I like about the Bayesian approach is that you can think instead of specificity and sensitivity of evidence, by analogy with (e.g., medical) diagnostic tests. A good test of a causal mechanism is sensitive, in the sense that there is a high probability of observing the relevant evidence if your causal theory is accurate. A good test is also specific, meaning that the evidence is unlikely to be observed if any alternative theory is true. See below for a table (lightly edited from Befani & Mayne, 2014, p. 24) showing the conditional probabilities of evidence for each of Van Evera’s tests given a hypothesis and alternative explanation.

Van Evera test (if Eᵢ is observed)    P(Eᵢ | Hyp)    P(Eᵢ | Alt)
Fails hoop test                       Low            —
Passes smoking gun                    —              Low
Doubly-decisive test                  High           Low
Straw-in-the-wind test                High           High

Let’s take the hoop test. This applies to evidence which is unlikely if your preferred hypothesis were true. So if you observe that evidence, the hoop test fails. The test is agnostic about the probability under the alternative hypothesis. Straw-in-the-wind is hopeless for distinguishing between your two hypotheses, but could suggest that neither holds if the test fails. The doubly-decisive test has high sensitivity and high specificity, so provides strong evidence for your hypothesis if it passes.
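The effect of each test on a 50-50 prior can be sketched with one line of Bayes’ rule, reading “High” as (say) 0.9 and “Low” as 0.1 (illustrative values only, not from the literature):

```python
def update(p_e_hyp, p_e_alt, prior=0.5):
    """Posterior P(Hyp | E) after observing evidence E."""
    num = prior * p_e_hyp
    return num / (num + (1 - prior) * p_e_alt)

doubly_decisive = update(0.9, 0.1)   # shifts 0.5 -> 0.9: strong evidence
straw_in_wind = update(0.9, 0.9)     # stays at 0.5: no discrimination
```

Playing with the two conditional probabilities makes it easy to see why sensitivity alone is useless without specificity.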

The arithmetic is straightforward if you stick to discrete multinomial variables and use software for conditional independence networks. Eliciting the subjective probabilities for each source of evidence, conditional on each hypothesis, may be less straightforward.

Conclusions

I am with Cook (2000) and others who favour a broader conception of “theory-based” and suggest that better theories should be tested in quantitative comparison studies. However, it is clear that it is not always possible to find a comparison group – colleagues and I have had to make do without (e.g., Fugard et al., 2015). Using Theory-Based Evaluation in practice reminds me of jury service: a team are guided through thick folders of evidence, revisiting several key sections that are particularly relevant, and work hard to reach the best conclusion they can with what they know. There is no convenient effect size to consult, just a shared (to some extent) and informal idea of what intuitively feels more or less plausible (and lengthy discussion where there is disagreement). To my mind, when quantitative comparison approaches are not possible, Bayesian approaches are the most compelling way to synthesise qualitative evidence of causal impact and to make transparent how that synthesis was done.

Finally, it seems to me that the Theory-Based Evaluation category is poorly named. A better name might be Assumption-Based Counterfactual approaches. Then RCTs and QEDs are Comparison-Group Counterfactual approaches. Both are types of theory-based evaluation and both use counterfactuals; it’s just that approaches using comparison groups gather quantitative evidence to test the counterfactual. However, the term doesn’t quite work since RCTs and QEDs rely on assumptions too… Further theorising needed.

Edited to add: Reichardt’s (2022) The Counterfactual Definition of a Program Effect is a very promising addition to the literature and, I think, offers a clear way out of the theory-based versus non-theory-based and counterfactual versus non-counterfactual false dichotomies. I’ve blogged about it here.

(If you found this post interesting, please do say hello and let me know!)

Cook, T. D. (2000). The false choice between theory-based evaluation and experimentation. In A. Petrosino, P. J. Rogers, T. A. Huebner, & T. A. Hacsi (Eds.), New directions in evaluation: Program Theory in Evaluation: Challenges and Opportunities (pp. 27–34). Jossey-Bass.

Kuhn, T. S. (1977). Objectivity, Value Judgment, and Theory Choice. In The Essential Tension: Selected Studies in Scientific Tradition and Change (pp. 320–339). The University of Chicago Press.