Tired of people equating “effect size” with “standardised measure of effect size”? Here’s an antidote, thanks to Shinichi Nakagawa and Innes C. Cuthill (2007). [Effect size, confidence interval and statistical significance: a practical guide for biologists. Biol. Rev. (2007), 82, pp. 591–605.]
They review the different meanings of “effect size”:
- “Firstly, effect size can mean a statistic which estimates the magnitude of an effect (e.g. mean difference, regression coefficient, Cohen’s d, correlation coefficient). We refer to this as an ‘effect statistic’ (it is sometimes called an effect size measurement or index).
- “Secondly, it also means the actual values calculated from certain effect statistics (e.g. mean difference = 30 or r = 0.7; in most cases, ‘effect size’ means this, or is written as ‘effect size value’).
- “The third meaning is a relevant interpretation of an estimated magnitude of an effect from the effect statistics. This is sometimes referred to as the biological importance of the effect, or the practical and clinical importance in social and medical sciences.”
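To make the first two meanings concrete, here’s a minimal Python sketch (my own illustration, not from the paper): the functions `mean_difference` and `cohens_d` are effect statistics, and the numbers they return for a particular pair of samples are effect size values. The data are made up.

```python
import statistics

def mean_difference(a, b):
    """Unstandardised effect statistic: difference in group means."""
    return statistics.mean(a) - statistics.mean(b)

def cohens_d(a, b):
    """Standardised effect statistic: mean difference in pooled-SD units."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return mean_difference(a, b) / pooled_var ** 0.5

# Hypothetical measurements (say, body mass in grams) for two groups.
treated = [31.2, 29.8, 33.5, 30.9, 32.1, 28.7]
control = [27.4, 26.9, 29.1, 25.8, 28.3, 27.0]

print(mean_difference(treated, control))  # effect size value, in grams
print(cohens_d(treated, control))         # effect size value, unitless
```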
They argue in favour of confidence intervals, as these “are not simply a tool for NHST [significance testing], but show a range of probable effect size estimates with a given confidence.”
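To illustrate that point (again my sketch, not the paper’s), here is a 95% confidence interval for an unstandardised mean difference, using the standard pooled-variance t interval: rather than a yes/no significance verdict, it reports a range of plausible effect size values in the original units. SciPy is assumed, for the t quantile.

```python
import statistics
from scipy import stats

def mean_diff_ci(a, b, confidence=0.95):
    """Confidence interval for the difference in means (equal variances assumed)."""
    na, nb = len(a), len(b)
    diff = statistics.mean(a) - statistics.mean(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    se = (pooled_var * (1 / na + 1 / nb)) ** 0.5
    t_crit = stats.t.ppf((1 + confidence) / 2, df=na + nb - 2)
    return diff - t_crit * se, diff + t_crit * se

treated = [31.2, 29.8, 33.5, 30.9, 32.1, 28.7]
control = [27.4, 26.9, 29.1, 25.8, 28.3, 27.0]
low, high = mean_diff_ci(treated, control)
print(f"95% CI for mean difference: ({low:.2f}, {high:.2f}) grams")
```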
They also cite Wilkinson, L. & The Task Force on Statistical Inference (1999) [Statistical methods in psychology journals. American Psychologist 54, 594–604]:
“our focus on these two standardised effect statistics does not mean priority of standardised effect statistics (r or d) over unstandardised effect statistics (regression coefficient or mean difference) and other effect statistics (e.g. odds ratio, relative risk and risk difference). If the original units of measurement are meaningful, the presentation of unstandardised effect statistics is preferable over that of standardised effect statistics (Wilkinson & the Task Force on Statistical Inference, 1999).”
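And since the quote names them, here’s a quick sketch of those other effect statistics, computed from a hypothetical 2×2 table of counts (my numbers, not the paper’s):

```python
def risk_effect_statistics(events_a, n_a, events_b, n_b):
    """Odds ratio, relative risk and risk difference from a 2x2 table."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    odds_a = risk_a / (1 - risk_a)
    odds_b = risk_b / (1 - risk_b)
    return {
        "odds ratio": odds_a / odds_b,
        "relative risk": risk_a / risk_b,
        "risk difference": risk_a - risk_b,
    }

# Hypothetical counts: 12 of 50 exposed vs. 6 of 60 unexposed showed the outcome.
print(risk_effect_statistics(12, 50, 6, 60))
```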
Good stuff, this.