“Many Black people I know say that they capitalize Black as a show of respect, pride, and celebration, and they don’t want to afford the same courtesy to Whiteness. But we frequently capitalize words for reasons other than respect – words like Holocaust, or Hell […]. When we ignore the specificity and significance of Whiteness – the things that it is, the things that it does – we contribute to its seeming neutrality and thereby grant it power to maintain its invisibility.”
– Prof Eve L. Ewing (2020), I’m a Black Scholar Who Studies Race. Here’s Why I Capitalize ‘White.’
From metaphysics to goals in social research and evaluation
Some of the social research and evaluation papers I encounter include declarations of the authors’ metaphysical stance: social constructionist, realist (critical or otherwise), phenomenologist – and sometimes a dig at positivism. This is one way research and researchers are classified. Clearly there are different kinds of research; however, might it be easiest to see the differences in terms of research goals rather than jargon-heavy isms? Here are three examples of goals, to try to explore what I mean.
Evoke empathy. If you can’t have a chat with someone, then the next best way to empathise with them is through a rich description by or about them. There is a bucket-load of pretentiousness in the literature (search for “thick description” to find some). But skip over this and there are wonderful works that are simply stories: biographies that make you long to meet the subject; film documentaries, though they don’t fit easily into traditional research outputs; anthologies gathering expressions of people’s lived experience without a researcher filter. “Interpretative Phenomenological Analyses” manage to include stories too, though with more metaphysics.
Classify. This may be the classification of perspectives, attitudes, experiences, processes, organisations, or other stuff-that-happens in society. For example: social class, personality, experiences people have in psychological therapy, political orientation, emotional experiences. The goal here is to develop patterns, whether from thematic analysis of interview responses, latent class analysis of answers on Likert scales, or some other kind of data and analysis. There’s no escaping theory, articulated and debated or unarticulated and unchallenged, when doing this.
Predict. Do people occupying a particular social class location tend to experience some mental health difficulties more often than others? Does your personality predict the kinds of books you like to read? Do particular events predict an emotion you will feel? Other predictions concern the impact of interventions of various kinds (broadly construed): what would happen if you funded national access to cognitive behavioural therapy, or a universal basic income? Theory matters here too, usually involving a story or model of why variables relate to each other. Prediction can be statistical, or may involve eliciting expert opinion (expert by lived experience or by profession).
These goals cannot be straightforwardly mapped onto quantitative and qualitative data and analysis. As a colleague and I wrote (Fugard & Potts, 2016):
“Some qualitative research develops what looks like a taxonomy of experiences or phenomena. Much of this isn’t even framed as qualitative. Take for example Gray’s highly-cited work classifying type 1 and type 2 synapses. His labelled photos of cortex slices illustrate beautifully the role of subjectivity in qualitative analysis and there are clear questions about generalisability. Some qualitative analyses use statistical models of quantitative data, for example latent class analyses showing the different patterns of change in psychological therapies.”
What I personally want to see, as an avid reader of research, is a summary of the theory – topic-specific, substantive theory rather than metaphysics – that researchers held before launching into gathering data; how they planned to analyse the data; and what they made of the theory once they had finished. Ideally I also want to know something about the politics driving the research, whether expressed as conflicts of interest or as the authors’ position on the inequity or oppression investigated in a study. Reflections on ontological realism and epistemic relativity – less so.
Predictors of Transgender Prejudice: A Meta-Analysis
Hailey Hatch et al. (2022) conducted a systematic review and meta-analysis of predictors of transphobia. The final analysis included 82 studies with a total of 36,285 participants. The main findings are in the table below; a higher score indicates more transphobia. Right-wing authoritarianism (RWA) was one of the strongest predictors, “characterized by the tendency to submit to authority figures, to adhere to conventional social norms, and to aggress against those who may be considered threatening or perceived to go against conventional norms”.
“Counterfactual” is not a synonym for “control group”
“Counterfactual” is not a synonym for “control group”. In fact, the treatment group’s actual outcomes are used to estimate the control group’s counterfactual outcomes – a step needed for the average treatment effect on the controls (ATC) and average treatment effect (ATE) estimands.
An individual’s treatment effect is defined as the within-person difference between their potential outcome under treatment and their potential outcome under control. This individual treatment effect is impossible to measure, since only one potential outcome is realised, depending on which group the individual was assigned to. However, various averages of these treatment effects can be estimated.
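A tiny simulation can make the potential-outcomes definitions concrete. This is a minimal sketch in Python (rather than the R used elsewhere on this blog), with made-up data and a constant individual treatment effect assumed purely for illustration. Simulation lets us peek at both potential outcomes; real data reveal only one per person.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical potential outcomes for every individual:
# y0 = outcome under control, y1 = outcome under treatment.
y0 = rng.normal(0.0, 1.0, n)
y1 = y0 + 0.5  # a constant individual treatment effect of 0.5, for illustration

# The individual treatment effect is the within-person difference:
ite = y1 - y0

# In reality only one potential outcome is realised per person,
# depending on assignment:
z = rng.integers(0, 2, n)             # 1 = treatment, 0 = control
y_obs = np.where(z == 1, y1, y0)

# We can see the true ATE only because simulation shows both columns:
print(ite.mean())                     # exactly 0.5 here
# Under random assignment, the difference in observed group means estimates it:
print(y_obs[z == 1].mean() - y_obs[z == 0].mean())
```

The first average is unobservable in practice; the point of the estimands below (ATC, ATE) is to recover such averages from the observable halves of the table.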
For ATC, we are interested in estimating averages of these treatment effects for control group participants. We know the control group’s actual outcomes. We also need to answer the counterfactual query:
If individuals in the control group had been assigned treatment, what would their average outcome have been?
To estimate ATC using matching, we need to find a treatment group match for each individual in the control group. Those treatment group matches are used to estimate the control group’s counterfactual outcomes.
For ATE, we are interested in estimating averages of these treatment effects for all participants. This means we need a combination of answers to the following counterfactual queries:
(a) If individuals in the treatment group had been assigned control, what would their average outcome have been?
(b) If individuals in the control group had been assigned treatment, what would their average outcome have been?
To estimate ATE using matching, each treatment individual needs a control group match and each control group individual needs a treatment group match. So, for ATE, both treatment and control groups could be considered counterfactuals, in the sense that they are both used to estimate the other group’s counterfactual outcomes. However, I think it is clearer if we draw a distinction between group (treatment or control) and what we are trying to estimate using data from a group (actual or counterfactual outcomes).
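To make the matching logic concrete, here is a hedged sketch in Python of nearest-neighbour matching on a single confounder, used to estimate ATC. The variable names and data-generating process are invented for illustration; real applications would typically match on many covariates or a propensity score.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

# Invented data-generating process with one confounder x:
x = rng.normal(0, 1, n)
z = rng.binomial(1, 1 / (1 + np.exp(-x)))      # treatment likelier when x is high
y = 2.0 * x + 1.0 * z + rng.normal(0, 0.5, n)  # true treatment effect = 1

treated = np.where(z == 1)[0]
control = np.where(z == 0)[0]

# For each CONTROL unit, find the nearest TREATED unit on x, and use that
# treated unit's actual outcome as the control unit's counterfactual
# outcome under treatment.
nearest = np.abs(x[control][:, None] - x[treated][None, :]).argmin(axis=1)
matches = treated[nearest]

# ATC: average of (counterfactual outcome under treatment - actual outcome)
# across control group members.
atc = (y[matches] - y[control]).mean()
print(atc)  # should land near the true effect of 1
```

Note the asymmetry the post describes: the treatment group supplies the data, but what is being estimated is the control group’s counterfactual outcomes. Estimating ATE would repeat the matching in the other direction as well.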
Mediation analysis effects
Are you wondering how to extract estimates of the following estimands from linear models underlying mediation analyses…?
- total average causal effect (ACE) / average treatment effect (ATE)
- average direct effect (ADE)
- average causal mediation effect (ACME)
- proportion mediated effect
You might be interested in this example I put together, using {mediation} in R, showing the arithmetic!
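For intuition, here is a hedged sketch of the same arithmetic in plain Python/NumPy rather than {mediation} in R, using simulated data with invented coefficients: the ACME is the product of the a-path (exposure to mediator) and b-path (mediator to outcome), the ADE is the direct coefficient, and the total effect is their sum.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Simulated data for a simple mediation model (hypothetical variable names):
# x -> m -> y, plus a direct path x -> y.
x = rng.normal(0, 1, n)
m = 0.6 * x + rng.normal(0, 1, n)              # a-path = 0.6
y = 0.4 * m + 0.3 * x + rng.normal(0, 1, n)    # b-path = 0.4, direct = 0.3

def ols(predictors, outcome):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(outcome)), *predictors])
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

a = ols([x], m)[1]              # mediator model:  m ~ x
_, direct, b = ols([x, m], y)   # outcome model:   y ~ x + m

acme = a * b                    # average causal mediation effect
ade = direct                    # average direct effect
total = ade + acme              # total effect (ATE)
prop_mediated = acme / total

print(acme, ade, total, prop_mediated)
```

With these linear, no-interaction models the product-of-coefficients arithmetic matches what {mediation} reports; under treatment–mediator interactions or nonlinear links the package’s simulation approach is needed instead.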
Approaches to consent in public health research in secondary schools
“Seeking active parent consent can undermine secondary school students’ autonomy, and limit participation, particularly among disadvantaged students, so biasing research. Our analysis suggests that active student consent and passive parent/carer consent be standard practice for most research procedures in secondary schools. More intrusive data collection, such as blood and saliva samples, would require parent/carer active consent since such procedures would be defined as diagnostic procedures so being classed as an investigational product. However, we would argue that for questionnaire completion, observation or routine data, student consent and autonomy should have primacy with parents having the right and means to receive full information, ask questions and withdraw their children from research should they wish. This approach gives proper primacy to student autonomy while also respecting parent/carer autonomy.”
The paper also offers helpful thoughts on issues arising with consent for whole-class interventions.
Research Design in the Social Sciences
“This book introduces a new way of thinking about research designs in the social sciences. Our hope is that this approach will make it easier to develop and to share strong research designs.
“At the heart of our approach is the MIDA framework, in which a research design is characterized by four elements: a model, an inquiry, a data strategy, and an answer strategy. We have to understand each of the four on their own and also how they interrelate.”
Uses {DeclareDesign} in R.
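The MIDA framework itself is language-agnostic. As a rough, hypothetical analogue of what {DeclareDesign} automates – not its actual API – here is a Python sketch in which the four elements are plain functions and the design is diagnosed by simulation; all names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(n=100, effect=0.3):
    """M: a guess about how the world generates potential outcomes."""
    y0 = rng.normal(0, 1, n)
    return y0, y0 + effect

def inquiry(y0, y1):
    """I: the estimand -- here, the average treatment effect."""
    return (y1 - y0).mean()

def data_strategy(y0, y1):
    """D: random assignment reveals one potential outcome per unit."""
    z = rng.integers(0, 2, len(y0))
    return z, np.where(z == 1, y1, y0)

def answer_strategy(z, y):
    """A: difference in observed group means."""
    return y[z == 1].mean() - y[z == 0].mean()

# Diagnose the design by simulation: how far do answers fall from the estimand?
errors = []
for _ in range(500):
    y0, y1 = model()
    z, y = data_strategy(y0, y1)
    errors.append(answer_strategy(z, y) - inquiry(y0, y1))
print(sum(errors) / len(errors))  # bias ~ 0 under random assignment
```

The payoff of declaring the four elements separately, as the book argues, is that you can swap one out (say, a confounded data strategy) and re-diagnose before collecting any real data.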
Sentimentality
Do you often cry while having a warm feeling in the heart because you find something beautiful? If so, you might be intrigued by this study of the Geneva Sentimentality Scale!
A cynical view of SEMs
It is all too common for a box-and-arrow diagram to be cobbled together in an afternoon and christened a “theory of change”. One formalised version of such a diagram is a structural equation model (SEM), the arrows of which are annotated with coefficients estimated from data. Here is John Fox (2002) on SEM and informal boxology:
“A cynical view of SEMs is that their popularity in the social sciences reflects the legitimacy that the models appear to lend to causal interpretation of observational data, when in fact such interpretation is no less problematic than for other kinds of regression models applied to observational data. A more charitable interpretation is that SEMs are close to the kind of informal thinking about causal relationships that is common in social-science theorizing, and that, therefore, these models facilitate translating such theories into data analysis.”
References
Fox, J. (2002). Structural Equation Models: Appendix to An R and S-PLUS Companion to Applied Regression. Last corrected 2006.
Do “Growth Mindset” interventions improve students’ academic attainment?
“We conducted a systematic review and multiple meta-analyses of the growth mindset intervention literature. Our goal was to answer two questions: (a) Do growth mindset interventions generally improve students’ academic achievement? and (b) Are growth mindset intervention effects due to instilling growth mindsets in students or are apparent effects due to shortcomings in study designs, analyses, and reporting? To answer these questions, we systematically reviewed the literature and conducted multiple meta-analyses imposing varying degrees of quality control. Our results indicated that apparent effects of growth mindset interventions are possibly due to inadequate study designs, reporting flaws, and bias. In particular, the systematic review yielded several concerning patterns of threats to internal validity.”
Here’s a pic: