One of the mysteries to anyone trained in the frequentist hypothesis-testing paradigm of statistics, as I was, and still adhering to it, as I do, is how Bayesian approaches seem to have taken the academy by storm. One wonders, first, how a theory based – and based explicitly – on a measure of uncertainty defined in terms of subjective personal beliefs could be considered, even for a moment, for an inter-subjective (ie, social) activity such as Science.
One wonders, second, how a theory justified by appeals to such socially-constructed, culturally-specific, and readily-contestable activities as gambling (ie, so-called Dutch-book arguments) could be taken seriously as the basis for an activity (Science) aiming for, and claiming to achieve, universal validity. One wonders, third, how the fact that such justifications, even if gambling raises no moral, philosophical or other qualms, require infinite sequences of gambles is not a little troubling for all of us living in this finite world. (You tell me you are certain to beat me if we play an infinite sequence of gambles? Then let me tell you that I have a religion promising eternal life that may interest you in turn.)
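To see what a Dutch-book argument actually claims, here is a minimal sketch (my own illustration, not drawn from any of the works cited here): an agent whose degrees of belief in an event and its complement sum to more than one can be sold a pair of bets which, taken together, lose money in every possible outcome.

    # Illustration (mine, not from the cited works): a Dutch book against
    # incoherent beliefs. The agent prices a unit-stake ticket on event A at
    # its degree of belief p_a, and a ticket on not-A at p_not_a. Exactly one
    # ticket pays out 1, so if p_a + p_not_a > 1 the bookie profits for certain.

    def dutch_book_profit(p_a: float, p_not_a: float) -> float:
        """Bookie's guaranteed profit from selling both unit bets."""
        # The agent pays p_a + p_not_a in total and receives exactly 1 back,
        # whichever way the event turns out.
        return (p_a + p_not_a) - 1.0

    # An incoherent agent: believes A to degree 0.7 and not-A to degree 0.6.
    print(round(dutch_book_profit(0.7, 0.6), 2))  # 0.3, lost in every outcome

Coherence (beliefs obeying the probability axioms) is precisely the condition that rules such a book out; the question raised above is why betting odds should ground scientific inference in the first place.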
One wonders, fourth, where all the prior distributions of beliefs which this theory requires investigators to articulate before doing research are recorded. Surely someone must be writing them down, so that we consumers of science can know that our researchers are honest, and can hold them to account. That there is such a disconnect between what Bayesian theorists say researchers do and what those researchers demonstrably do should surely trouble anyone contemplating a choice of statistical paradigms. Finally, one wonders how a theory that requires non-zero prior probabilities to be allocated to models before those models can even be tested, including models of which the investigators have not yet heard and models which no one has yet articulated, passes muster at the statistical methodology corral.
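That last complaint is simple arithmetic: under Bayes' rule the posterior is proportional to likelihood times prior, so a model assigned prior probability zero remains at posterior probability zero no matter how strongly the data favour it. A toy calculation (my own, with invented numbers):

    # Illustration (invented numbers): Bayes' rule cannot resurrect a zero prior.
    # posterior(M) is proportional to likelihood(data | M) * prior(M), so a
    # prior of 0 forces a posterior of 0, however well model M fits the data.

    def posterior(priors, likelihoods):
        """Normalised posterior over a finite set of candidate models."""
        joint = [p * l for p, l in zip(priors, likelihoods)]
        total = sum(joint)
        return [j / total for j in joint]

    priors      = [0.5, 0.5, 0.0]    # the third model was never articulated
    likelihoods = [0.1, 0.2, 0.9]    # yet the data favour it overwhelmingly
    print(posterior(priors, likelihoods))  # third entry is exactly 0.0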
To my mind, Bayesianism is a theory from some other world – infinite gambles, imagined prior distributions, models that disregard time or requirements for constructability, unrealistic abstractions from actual scientific practice – not from our own.
So, how could the Bayesians make as much headway as they have these last six decades? Perhaps it is due to an inherent pragmatism among statisticians – using whatever techniques work, without much regard for their underlying philosophy or any incoherence therein. Or perhaps the battle between the two schools of thought has simply been asymmetric: the Bayesians being more determined to prevail (in my personal experience, to the point of cultism and personal vitriol) than the adherents of frequentism. Greg Wilson’s 2001 PhD thesis explored this question, although without finding definitive answers.
Now, Andrew Gelman and the indefatigable Cosma Shalizi have written a superb paper, entitled “Philosophy and the practice of Bayesian statistics”. Their paper presents another possible reason for the rise of Bayesian methods: that Bayesianism, when used in actual practice, is most often a form of hypothesis-testing, and thus not as untethered to reality as the pure theory would suggest. Their abstract:
A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.
Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
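The model checking the authors have in mind is, in spirit, a significance test of the fitted model: simulate replicate datasets from the fitted (posterior) model and ask whether a test statistic computed on the real data looks extreme among the replicates. A minimal sketch of such a posterior predictive check, with an invented dataset and a deliberately misspecified model (my construction, not code from the paper):

    # A sketch of a posterior predictive check (my construction, not the
    # authors' code). Model: y_i ~ Normal(mu, 1) with a flat prior on mu,
    # so the posterior for mu is Normal(mean(y), 1/n). We check the model
    # with a statistic it was not fitted to match: the sample maximum.
    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.exponential(scale=1.0, size=100)   # invented data: skewed, not normal
    n, t_obs = len(y), y.max()

    t_rep = []
    for _ in range(2000):
        mu = rng.normal(y.mean(), 1 / np.sqrt(n))   # draw mu from the posterior
        y_rep = rng.normal(mu, 1.0, size=n)         # simulate replicate data
        t_rep.append(y_rep.max())

    # Posterior predictive p-value: how often replicates exceed the observed max.
    p = np.mean(np.array(t_rep) >= t_obs)
    print(f"posterior predictive p-value for max(y): {p:.3f}")

A p-value near 0 or 1 says the model cannot reproduce that feature of the data (here, the long right tail of the skewed sample), so the model must be revised. That check-and-revise loop, falling as it does outside Bayesian confirmation theory, is the hypothetico-deductive element the authors describe.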
References:
Andrew Gelman and Cosma Rohilla Shalizi [2010]: Philosophy and the practice of Bayesian statistics. Available from arXiv.
Gregory D. Wilson [2001]: Articulation Theory and Disciplinary Change: Unpacking the Bayesian-Frequentist Paradigm Conflict in Statistical Science. PhD Thesis, Rhetoric and Professional Communication Programme, New Mexico State University. Las Cruces, NM, USA. July 2001.