
Subjective Bayesians

There are people who feel in their bones that it makes sense to assign a prior distribution to such things as the different models for the coin tossing in the last section, so as to represent their own internal states of belief. Such a person might say `Well, I think that p(H) = 0.5 is reasonable as a result of looking at the coin and its symmetry. I suppose I could be persuaded that p(H) = 0.4 or p(H) = 0.6 are reasonably likely, while I am much more sceptical about values outside that region. The preceding sentences express my prejudices in English, which is a very scruffy and imprecise language; what I really mean is that my degree of belief in the model p(H) = m is precisely 6m(1-m) for m between 0 and 1.'

This kind of statement may strike one as rather odd. It suggests, at the very least, a good deal of confidence in one's powers of introspection. Most people can barely make up their minds what to have for dinner tonight, while a subjective Bayesian appears to have his preferences expressible by analytic functions. The idea that beliefs can be expressed with such precision and certainty, to as many decimal places as one wishes to use, has been criticised severely by, for instance, Peter Walley.
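
For concreteness, the prior quoted above is nothing exotic: 6m(1-m) is just the Beta(2,2) density on [0,1], and it updates mechanically when coin-tossing data arrive. The following is a minimal sketch, assuming Python with scipy is to hand; the counts of heads and tails are invented purely for illustration and are not part of the text.

  # The stated degree of belief 6m(1-m) is the Beta(2,2) density on [0,1].
  # By conjugacy, observing coin tosses updates it to Beta(2 + heads, 2 + tails).
  from scipy.stats import beta

  prior = beta(2, 2)                      # density 6*m*(1-m)
  print(prior.pdf(0.5))                   # 1.5, the peak of the prior at m = 0.5

  heads, tails = 7, 3                     # hypothetical data: ten tosses
  posterior = beta(2 + heads, 2 + tails)  # conjugate update: Beta(9, 5)
  print(posterior.mean())                 # 9/14, roughly 0.643

The point is only that the scruffy English of the quotation, once translated into a density, becomes something one can compute with.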

We may doubt that an individual's beliefs can be known to him with such precision, but if we regard any animal as a learning machine which at any time has amassed a certain amount of knowledge of how the world works, then any situation in which it finds itself might be said to be a generator of expectations of what will happen next. These expectations are, one presumes, the result of the animal's brain having stored counts of what happened next, in roughly similar circumstances, in the past.[*] How the brain stores and uses this information is of course still largely conjectural, but it would appear to be what brains are for. And it is not beyond belief that the different possible outcomes of a situation may be assigned different degrees of confidence in some representation accomplished by neurons, so there may indeed be some discrete approximation to a Bayesian prior pdf sitting in some distributed fashion in the brain of the animal. And if the animal happens to be a subjective Bayesian, he may feel competent to tell us what this pdf feels like, even unto using the language of analytic functions. This last is the hardest part to believe, but dottier ideas have come good in Quantum Mechanics, as my old Mother used to say.

When the animal is fresh out of the egg, it may have rather bland preferences; these may be expressed by uniform priors, for example the belief that all values of m are equally likely. Or we may suppose that far from having a tabula rasa for a brain, kindly old evolution has ensured that its brain has in fact many pre-wired expectations. If an animal is born with wildly wrong expectations, then kindly old evolution kills it stone dead before it can pass on its loony preconceptions to junior.

It is not absolutely impossible, then, that at some future time, when the functioning of brains is much better understood than it is at present, it might become possible, by some careful counting of molecules of transmitter substances at synapses using methods currently undreamt of, actually to measure somebody's prior pdf for some event. Whether this might be expected to give results in good agreement with the opinion of the owner of the brain on the matter is open to debate. Subjective Bayesians propose to carry on as if the experiments had been carried out and given the answers they want to hear; this seems foolhardy, but one can admire their courage.

Objective Bayesians are in schism with Subjective Bayesians, just as the Fascists were in schism with the Communists. And it is pretty hard for outsiders to tell the difference in either case, although the insiders feel strongly about it. Jaynes has written extensively about sensible priors used to describe one's ignorance or indifference, in a way which should be carefully studied by anyone with a leaning toward Fuzzy Sets. An objective Bayesian believes that one can work out not what prior is in his head, but what prior ought to be in his head, given the data he has. Thus he has a commitment to the view that there really is a best thing to do given one's data and the need for decision. If the subjective Bayesian believes he is modelling his own belief structures, the objective Bayesian is modelling what he thinks he ought to believe. Presumably he believes he ought to believe in objective Bayesianism; the possibility of an objective Bayesian concluding that there is insufficient evidence for objective Bayesianism but deciding to believe in it anyway ought not to be ruled out. The idea that there is a best prior, a right and reasonable prior, but that it might be very difficult to find out what it is, leads to sensitivity analysis, which aims to determine whether a particular choice of prior is likely to lead you into terrible trouble, or whether it doesn't much matter which prior you choose from a class.
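
What such a sensitivity analysis might look like in the small: take a class of candidate priors, push the same data through each, and see how far the conclusion actually moves. The sketch below reuses the coin example, again assuming scipy; the particular class of Beta priors and the counts are illustrative assumptions, not anything prescribed here.

  # Sensitivity analysis sketch: the same data under a class of Beta priors.
  # If the posterior means barely differ, the choice of prior hardly matters;
  # if they differ wildly, the prior is doing most of the work.
  from scipy.stats import beta

  heads, tails = 7, 3
  for a, b in [(1, 1), (2, 2), (5, 5), (10, 10)]:
      posterior = beta(a + heads, b + tails)
      print(f"prior Beta({a},{b}): posterior mean {posterior.mean():.3f}")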

It is much easier to philosophise in a woolly manner, and get nowhere, than to come to clear conclusions on matters such as these. This makes the subject fascinating for some and maddening for others.

I.J. Good has written on the philosophical issues involved, if you like that sort of thing.


Mike Alder
9/19/1997