Dipping into How to Think Like a Realist

Ray Pawson’s new book (Pawson, 2024) is an introduction to scientific practice, with social science as a corollary, drawing on the philosophy of science and Pawson’s experience conducting applied research. The book is on my reading stack. For now, I have taken a purposive sample of a few episodes (as Pawson calls the chapters) to get a sense of what’s in store. Here are some clues.

The main objective of the book (p. xvi):

“… is revealed in the structure of the text, which journeys across physical science and clinical science, before landing squarely in social science. The coverage here represents a commitment to the ‘unity of science’ (Oppenheim and Putnam, 1958). This heroic proposition claims that there are core explanatory principles which underpin science in all its guises, and the book makes tiny, tentative steps in tracing the common realist tenets. Perforce, I am also committed to the ‘unity of social science’.”

What about those context, mechanism, outcome triads that realist evaluators use? (p. 42)

“All scientific investigation utilises explanations relating mechanisms and contexts to empirical patterns.”

And (p. 48):

“With Harré, I have characterised generative causation in physical science as the analysis of mechanisms, contexts, and regularities (MCR). In clinical research the focus, quite properly, is on mechanisms, contexts, and outcomes (MCO). In social research, it might be wise to begin with the shorthand mechanisms, contexts, and change (MCC).”

(I wonder what the implications of the book are for realist evaluation as a distinct genre – will read those bits with interest.)

On social science and people who don’t follow science (p. xviii):

“Despite the habitual use of the appellation ‘social science’, many of my colleagues would reject any claim to follow science, dismiss any interest in causality, deny any need for objectivity, and scorn the possibility of generalisation. They are beyond hope. I don’t seek to convert them. But in following their chosen paths these various tribes – constructivists, post-modernists, emancipators, critics, essayists, relativists, and so on – have found time to say why causality, objectivity, and generality are false idols. So, in defending the science in social science, their criticisms also need to be overturned.”

More on what Pawson aims to do (pp. 251-252):

“What I’ve come up with here might well be entitled The Old Rules of Sociological Method. I have attempted to extract and justify some realist principles for conducting social research on the back of a generous portfolio of existing examples. Those illustrations reach across many research domains and a broad portfolio of practical methods. But they remain a pinprick; I could have called upon a thousand others. Accordingly, there is another way of perceiving my efforts. The book is no more and no less than an attempt to codify and formalise existing practices. I have tried to capture a tradition. So, just as Monsieur Jourdain spoke prose without knowing what it was, it may well be that you, dear reader, have been thinking like a realist without knowing it!”

I had to google Jourdain. He’s the main character in a comedy by Molière, Le Bourgeois gentilhomme (The Bourgeois Gentleman). The play satirises “the pretensions of the social climber whose affectations are absurd to everyone but himself”, which is a curious reference, dear reader.

To be continued…


Pawson, R. (2024). How to think like a realist: A methodology for social science. Edward Elgar Publishing.

ChatGPT is indifferent to the truth of its outputs

An interesting analysis of the output of LLMs (Hicks et al., 2024), using a typology developed by Harry Frankfurt. That ChatGPT and co aren’t people with agency constrains which genre can apply, much as it complicates analyses of trust (see, e.g., Castelfranchi and Falcone on whether a computer can trust).


Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38.

Choosing a sample size for a thematic analysis

About a decade ago, Henry Potts and I (Fugard & Potts, 2015) developed a method to help think about choosing a sample size for thematic analyses. The paper explaining the approach has been cited over a thousand times. A small number of those citations report correct uses of the approach. Here are five of them:

  • “Framing the necessary sample size in terms of the likelihood of capturing important ideas, a sample size of at least 15 per group would have a 90% probability for capturing ideas held by 25% of the population.” (Weller et al., 2017, p. 2)
  • “An estimation of the sample size was done prior to the second selection to ensure that the final number of participants would be large enough to yield a rich data set. Twenty-one interviews would be needed according to Fugard’s sample size calculation method for 80% power. This calculation assumed: a lowest prevalence of 60% of a theme worth discovering, 90% of the informants having something to say about the theme, and that the theme should be recognised at least ten times in the data.” (Malmborg et al., 2020, p. 170)
  • “In this study of the reasons for open online student dropout, appropriate sample size was determined based on Fugard and Potts’ (2015) thematic analysis sample size tool, which highlights the required sample size as a function of anticipated theme prevalence in a population. In line with this tool, a sample of 200 has a 99% probability of detecting five theme instances for a theme prevalent in 6% of the population; thus, 200 was set as the minimum sample target for this study.” (Greenland & Moore, 2022, p. 652)
  • “To evaluate the saturation of the codes, we used the Fugard and Potts [30] method to predict saturation based on probability theory. This approach was appropriate for our data set, given our large, random sample of reviews and our predominantly deductive approach to data analysis [30,31]. Our data set provided >80% power to identify 5 instances of themes mentioned by 1% of the population. We chose a cutoff of 1% to reflect the shallow nature of this data set, assuming that not all who experienced a code would describe it in their review, and 5 instances because this was typically the number of observations required to achieve repetition of content within the codes.” (Polhemus et al., 2022, p. 4)
  • “According to the calculation method of Fugard and Potts [28], the target study cohort size of 28 patients would provide 90% power to detect any theme of interest in at least 1 interview if the true prevalence of the patient experience captured by that theme was between 5 and 10% in the underlying PH1 population represented by the study cohort.” (Danese et al., 2023, p. 3) (Actually prevalence 7.9%.)
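The figures in these quotes can be checked with a binomial calculation: if a theme is held by a proportion p of the population, the probability that it shows up in at least k of n independently sampled participants is one minus the binomial tail below k. Here is a minimal sketch using only Python’s standard library; it reproduces the simplest version of the model (the paper’s tool also allows for the probability that a participant who holds a theme actually voices it, which just scales the prevalence):

```python
from math import comb

def prob_at_least(n, prevalence, k=1):
    """Probability that a theme held by a given proportion of the population
    appears in at least k of n independently sampled participants
    (binomial model)."""
    return 1 - sum(
        comb(n, i) * prevalence**i * (1 - prevalence)**(n - i)
        for i in range(k)
    )

# Danese et al. (2023): 28 interviews, prevalence 7.9%, theme in >= 1 interview
print(round(prob_at_least(28, 0.079), 2))       # ~0.90, the quoted 90% power

# Greenland & Moore (2022): 200 participants, prevalence 6%, >= 5 instances
print(round(prob_at_least(200, 0.06, k=5), 2))  # ~0.99
```

Both quoted figures check out: the Danese et al. cohort of 28 at 7.9% prevalence gives roughly 90% power for one instance, and the Greenland & Moore sample of 200 at 6% prevalence gives roughly 99% for five instances.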


Danese, D., Goss, D., Romano, C., & Gupta, C. (2023). Qualitative assessment of the patient experience of primary hyperoxaluria type 1: An observational study. BMC Nephrology, 24(1), 319.

Fugard, A. J. B., & Potts, H. W. W. (2015). Supporting thinking on sample sizes for thematic analyses: A quantitative tool. International Journal of Social Research Methodology, 18, 669–684. (There’s an app for that.)

Greenland, S. J., & Moore, C. (2022). Large qualitative sample and thematic analysis to redefine student dropout and retention strategy in open online education. British Journal of Educational Technology, 53, 647–667.

Malmborg, A., Brynte, L., Falk, G., Brynhildsen, J., Hammar, M., & Berterö, C. (2020). Sexual function changes attributed to hormonal contraception use – a qualitative study of women experiencing negative effects. The European Journal of Contraception & Reproductive Health Care, 25(3), 169–175.

Polhemus, A., Simblett, S., Dawe-Lane, E., Gilpin, G., Elliott, B., Jilka, S., Novak, J., Nica, R. I., Temesi, G., & Wykes, T. (2022). Health Tracking via Mobile Apps for Depression Self-management: Qualitative Content Analysis of User Reviews. JMIR Human Factors, 9(4), e40133.

Weller, S. C., Baer, R., Nash, A., & Perez, N. (2017). Discovering successful strategies for diabetic self-management: A qualitative comparative study. BMJ Open Diabetes Research and Care, 5, e000349.

LaLonde (1986) after Nearly Four Decades: Lessons Learned (Guido Imbens, Yiqing Xu)

“We show that modern methods, when applied in contexts with significant covariate overlap, yield robust estimates for the adjusted differences between the treatment and control groups. However, this does not mean that these estimates are valid. To assess their credibility, validation exercises (such as placebo tests) are essential, whereas goodness of fit tests alone are inadequate. Our findings highlight the importance of closely examining the assignment process, carefully inspecting overlap, and conducting validation exercises when analyzing causal effects with nonexperimental data.”


Ioannidis and Psillos (2018) on mechanisms

“Mechanisms are causal pathways described in theoretical language that have certain functions; these descriptions can be enriched by offering more detailed or fine‐grained descriptions; the same mechanism can then be described at various levels using different theoretical vocabularies (e.g., cytological vs biochemical descriptions in the case of apoptosis); lastly, the descriptions of biomedically important mechanisms are often such that they contain specific causal information that can be used to make interventions for therapeutic purposes.” (Ioannidis & Psillos, 2018, pp. 1180–1)

Ioannidis, S., & Psillos, S. (2018). Mechanisms in practice: A methodological approach. Journal of Evaluation in Clinical Practice, 24(5), 1177–1183.

Russian Media Monitor (Julia Davis)

If you want a glimpse of how violent and disconnected from reality Russia’s national propaganda has become, check out Julia Davis’s Russian Media Monitor.

In this clip (3 Jun 2024) from a popular show hosted by Vladimir Solovyov, guests including Andrey Gurulyov (a member of the State Duma) and Andrey Bezrukov (also known as Donald Heathfield) argue, violently, that Russia should start a nuclear war.

See also this BBC Monitoring report (29 April 2024).