Half-baked thoughts

This is a collection of half-baked ideas that I would like to push along. If you happened upon this page and any of this interests you, pop me an email.

Policy evaluation
  • The difference between theory-based evaluation (as the term is often used in UK policy evaluation) and counterfactual impact evaluation (RCTs and QEDs) is that the former relies on expert judgement to define and rule out alternative explanations of change, whereas the latter uses statistical inference. Both require theory (theory-driven evaluation, as defined by Rossi and Chen, is rhetorically the easiest term to use here, since its examples include experiments and quasi-experiments). Both may or may not rely on counterfactual notions of causality. There can be two Bayesian networks with identical probability distributions, one obtained through expert elicitation and the other through data analysis. The distinction is a matter of degree, e.g., a Bayesian statistical analysis can rely on prior distributions that were elicited by experts. Even then, experts need to choose the variables, and the theory is often incomplete or unevidenced. Early thoughts on this are in the talk I gave to the UK Evaluation Society (UKES) on 5 December 2023, which I have finally written up as a blog post.
  • The existence of programme evaluation as a transdiscipline is possible because of the centrality of methodology, e.g., the issues in designing and running RCTs are similar across topics. But from a theory-driven perspective this is boring. What evaluation could be doing (is doing?) is learning from individual topic theories and transporting them across topics. Examples of what this might look like include behavioural change theories (though psychology “owns” these); theories of policy implementation (owned by org psych or public administration?); and theories of how professional–beneficiary working relationships facilitate change (e.g., work on therapeutic alliance).
  • Bayesian networks and/or nonparametric causal DAGs can be used in theory of change workshops to encourage programme developers and beneficiaries/service users to spell out why they think a programme might help and how. But a causal DAG alone does not suffice as a theory of change. An interesting question is what additional information is required, and whether it must be prose or whether some other formal framework would suffice.
  • The way trans and nonbinary people develop their gender identity can be modelled as Bayesian norm-relevancy. This explains how meeting LGBTQ+ people, whether in person, via social media, or in literature, updates prior probabilities and shifts inferences about identity.
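
The norm-relevancy idea in the last bullet can be sketched as sequential Bayesian updating. Everything here is an illustrative assumption: the hypothesis, the encounter types, and all the probabilities are invented to show the mechanism, not to model anyone's actual experience.

```python
# Minimal sketch: sequential Bayes updates on a hypothesis H, e.g.
# "a nonbinary identity describes my experience", given encounters E
# with LGBTQ+ people (in person, via social media, in literature).
# All numbers below are made up for illustration.

def update(prior, p_e_given_h, p_e_given_not_h):
    """One application of Bayes' rule: returns P(H | E)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.05  # before any relevant encounters
encounters = [
    (0.8, 0.3),  # meeting someone in person: likelier under H than not
    (0.7, 0.4),  # social media content
    (0.9, 0.2),  # a first-person account in literature
]
for p_e_h, p_e_not_h in encounters:
    prior = update(prior, p_e_h, p_e_not_h)
    print(round(prior, 3))
```

Each encounter that is more probable under the hypothesis than under its negation nudges the posterior upwards, which is one way of formalising how exposure changes the inference.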
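
The claim in the first bullet, that an elicited network and a data-fitted network can carry identical probability distributions, can be shown with a toy two-node network. The variable names and numbers are hypothetical, and the "data" is synthetic, constructed so its frequencies happen to match the elicited values.

```python
from itertools import product
from collections import Counter

# Hypothetical two-node network: support -> outcome.

# Network 1: parameters elicited from experts.
elicited = {
    "p_support": 0.4,               # P(support = 1)
    "p_outcome": {0: 0.2, 1: 0.7},  # P(outcome = 1 | support)
}

# Network 2: the same structure, parameters estimated from synthetic
# data whose empirical frequencies match the elicited values.
data = ([(1, 1)] * 28 + [(1, 0)] * 12 +   # 40 supported, 70% good outcome
        [(0, 1)] * 12 + [(0, 0)] * 48)    # 60 unsupported, 20% good outcome

counts = Counter(data)
n = len(data)
estimated = {
    "p_support": sum(c for (s, _), c in counts.items() if s == 1) / n,
    "p_outcome": {
        s: sum(c for (s2, o), c in counts.items() if s2 == s and o == 1)
           / sum(c for (s2, _), c in counts.items() if s2 == s)
        for s in (0, 1)
    },
}

def joint(params):
    """Joint distribution P(support, outcome) implied by the network."""
    ps, po = params["p_support"], params["p_outcome"]
    return {
        (s, o): (ps if s else 1 - ps) * (po[s] if o else 1 - po[s])
        for s, o in product((0, 1), repeat=2)
    }

# Same structure, same numbers, different provenance: the joints coincide.
assert all(abs(joint(elicited)[k] - joint(estimated)[k]) < 1e-9
           for k in joint(elicited))
```

Nothing in the joint distribution records whether the parameters came from a workshop or a dataset, which is the sense in which the elicitation/analysis distinction is one of provenance and degree rather than of the resulting model.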

Social research methodology
  • There are three goals of social research: (1) evoke empathy, (2) classify, and (3) predict. These cut across the quant/qual distinction, e.g., subjective expert judgement can be used to make predictions without any statistical analysis, and qualitative thematic analysis is used to classify, but so too are statistical latent class analysis and cluster analysis. I’m not sure a statistical model can evoke empathy.
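
As a toy illustration of goal (2), here is a one-dimensional k-means, one statistical route to classification; latent class analysis and qualitative coding pursue the same classifying goal by other means. The variable and all the data are invented.

```python
# Sketch: classify cases by a single hypothetical variable
# (say, hours of service use per week) with a tiny 1-D k-means.

def kmeans_1d(xs, centres, iters=20):
    """Assign each value to its nearest centre, then move each centre
    to the mean of its group; repeat. Returns the sorted centres."""
    for _ in range(iters):
        groups = {c: [] for c in centres}
        for x in xs:
            nearest = min(centres, key=lambda c: abs(x - c))
            groups[nearest].append(x)
        centres = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centres)

usage = [1, 2, 2, 3, 9, 10, 11, 12]  # invented weekly hours
print(kmeans_1d(usage, centres=[0.0, 5.0]))  # two classes: light vs heavy users
```

The algorithm and a thematic coding frame both end in a partition of cases; the interesting difference is what licenses the category boundaries.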