Cognitive biases and prior beliefs strongly influence how individuals interpret study results. When research findings align with an individual’s preconceived notions, they are often accepted uncritically. Conversely, when findings contradict personal experience, their validity, relevance, or integrity may be questioned. This cognitive dissonance contributes to the inconsistent application of evidence-based medicine (EBM), as clinicians may either dismiss research due to anecdotal experience or fail to recognize methodological flaws in studies that confirm their biases.
A significant challenge in medical research and its interpretation is the prevalence of statistical fallacies. Many of these errors, including the base rate fallacy, Simpson’s paradox, collider bias, and others, are well-documented in statistical literature but frequently overlooked by both researchers and consumers of scientific information. These fallacies can lead to misinterpretations of study results, inappropriate clinical applications, and ultimately, suboptimal patient care.
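To make the base rate fallacy concrete, here is a minimal sketch with purely illustrative numbers: a diagnostic test that is "95% accurate" can still yield mostly false positives when the condition is rare.

```python
# Base rate fallacy, illustrative numbers only: a test with 95% sensitivity
# and 95% specificity for a disease with 1% prevalence.

prevalence = 0.01
sensitivity = 0.95   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)

# Law of total probability: overall probability of a positive result
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' theorem: probability of disease given a positive test (PPV)
ppv = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {ppv:.1%}")
# Despite the "95% accurate" test, the PPV is only about 16%.
```

Ignoring the 1% base rate and reading the sensitivity as the post-test probability is exactly the error the fallacy describes.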
Additionally, research findings are often misrepresented—either unintentionally due to methodological limitations or intentionally due to biases in data analysis and reporting. Common issues include inadequate study design, selective reporting (e.g., p-hacking, publication bias), and the miscommunication of statistical outcomes (e.g., conflating relative risk with absolute risk). As a result, the translation of research findings into clinical practice is frequently flawed.
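The relative-versus-absolute-risk confusion mentioned above can be shown with a small sketch using made-up trial numbers: the same result sounds dramatic framed as relative risk reduction and modest framed as absolute risk reduction.

```python
# Hypothetical event rates, for illustration only.
risk_control = 0.02   # 2% event rate without treatment
risk_treated = 0.01   # 1% event rate with treatment

# Relative risk reduction: proportional drop in risk
rrr = (risk_control - risk_treated) / risk_control

# Absolute risk reduction: raw difference in event rates
arr = risk_control - risk_treated

# Number needed to treat to prevent one event
nnt = 1 / arr

print(f"RRR = {rrr:.0%}, ARR = {arr:.1%}, NNT = {nnt:.0f}")
# "Cuts risk in half" (RRR = 50%) vs. "benefits 1 patient in 100" (NNT = 100)
```

Headlines typically report the 50% figure; clinical decisions hinge on the 1-percentage-point figure.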
The tension between empirical evidence and clinical experience remains a critical issue in medicine. While anecdotal experience is valuable, it is inherently susceptible to bias. One potential solution lies in Bayesian statistical approaches, which provide a framework for integrating prior knowledge with new evidence in a probabilistic manner. This methodology can help clinicians and researchers more accurately interpret study results while mitigating the influence of cognitive biases.
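As a minimal sketch of how a Bayesian framework combines prior belief with new evidence, consider a Beta-Binomial update with invented numbers: a clinician's prior expectation about a treatment's success rate, revised by a small hypothetical trial.

```python
# Beta-Binomial conjugate update; all numbers are hypothetical.

# Prior: clinical experience suggests roughly 60% success -> Beta(6, 4)
prior_alpha, prior_beta = 6, 4

# New evidence: a small trial with 12 successes in 30 patients
successes, failures = 12, 18

# Conjugacy: the posterior is Beta(alpha + successes, beta + failures)
post_alpha = prior_alpha + successes
post_beta = prior_beta + failures

prior_mean = prior_alpha / (prior_alpha + prior_beta)
post_mean = post_alpha / (post_alpha + post_beta)

print(f"Prior mean: {prior_mean:.2f}, posterior mean: {post_mean:.2f}")
# The posterior mean (0.45) sits between the prior belief (0.60)
# and the observed rate in the data (0.40).
```

The point is the mechanism, not the numbers: prior experience is neither discarded nor allowed to override the data; it is weighted against it explicitly.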
Several of these topics, from statistical fallacies to Bayesian approaches for evaluating evidence, are explored in depth in Learning Bayesian Statistics, Episode 97.
Links from the show:
- LBS #41, Thinking Bayes, with Allen Downey: https://learnbayesstats.com/episode/41-think-bayes-allen-downey/
- Allen’s blog: https://www.allendowney.com/blog/
- Allen on Twitter: https://twitter.com/allendowney
- Allen on GitHub: https://github.com/AllenDowney
- Order Allen’s book, Probably Overthinking It, at a 30% discount with the code UCPNEW: https://press.uchicago.edu/ucp/books/book/chicago/P/bo206532752.html
- The Bayesian Killer App: https://www.allendowney.com/blog/2023/03/20/the-bayesian-killer-app/
- Bayesian and Frequentist Results Are Not the Same, Ever: https://www.allendowney.com/blog/2021/04/25/bayesian-and-frequentist-results-are-not-the-same-ever/
- Allen’s presentation on the Overton paradox: https://docs.google.com/presentation/d/1-Uvby1Lfe1BTsxNv5R6PhXfwkLUgsyJgdkKtO8nUfJo/edit#slide=id.g291c5d4559e_0_0
- Video on the Overton Paradox, from PyData NYC 2022: https://youtu.be/VpuWECpTxmM
- Thompson sampling as a dice game: https://allendowney.github.io/TheShakes/
- Causal quartets – Different ways to attain the same average treatment effect: http://www.stat.columbia.edu/~gelman/research/unpublished/causal_quartets.pdf
- LBS #89, Unlocking the Science of Exercise, Nutrition & Weight Management, with Eric Trexler: https://learnbayesstats.com/episode/89-unlocking-science-exercise-nutrition-weight-management-eric-trexler/
- How Minds Change, David McRaney: https://www.davidmcraney.com/howmindschangehome