next up previous contents
Next: Bayesian Methods Up: Maximum Likelihood Models Previous: Maximum Likelihood Models

Where do Models come from?

It is reasonable to ask, in the case of coin tossing, where the model comes from. It may well come out of a hat, or out of the seething subconscious of the statistician analysing the data. If I collect some data and you give me two discrete models, I can find which of them gives the data the larger probability. If they are continuous models, I can, using the idea of the pdf as a limit of histograms, calculate the likelihood, that is, the value of the pdf at the data value, for each of the data points, and multiply these numbers together to get a total likelihood for the whole data set on that model. Or, as statisticians prefer to think of it, the likelihood of those model parameters for that particular data set. This was discussed, somewhat casually, for the particular case of gaussian models in chapter one. The heuristic rule of picking the model with the larger likelihood was advanced, with nothing other than an appeal to intuition to justify it. In the case where the models are all gaussians, it is possible to find that model, in the now infinite manifold of all such models, which gives the maximum likelihood. But then we can ask, where does this family of models come from? Maybe gaussians are daft.
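The recipe just described, multiplying the pdf values together (or, more safely in practice, summing their logarithms), can be sketched as follows. The data values here are made up purely for illustration; the maximum-likelihood gaussian turns out to be the one with the sample mean and sample standard deviation, so it beats any other gaussian we might compare it with:

```python
import math

def log_likelihood(xs, mu, sigma):
    # Sum of log pdf values of N(mu, sigma^2) at each data point:
    # the log of the product of likelihoods described in the text.
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

data = [1.2, 0.7, 1.9, 1.1, 1.4]           # made-up data set

mu = sum(data) / len(data)                  # sample mean
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))

# The gaussian built from the sample statistics gives a larger
# log-likelihood than, say, the standard gaussian N(0, 1):
print(log_likelihood(data, mu, sigma) > log_likelihood(data, 0.0, 1.0))
# prints True
```

Working with log-likelihoods avoids the numerical underflow that the raw product of many small pdf values would produce on a large data set.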

There is no algorithm for producing a good family of models; given two models, we can produce rules for choosing between them. When the two models are different members of a family with some fixed number of parameters, we can imagine ourselves swanning about the manifold of all such models, looking for the one for which the likelihood of getting our data is maximal. When the models are of very different types, we can still choose the model which gives the maximum likelihood, but this could be a very stupid thing to do. There is always one model which produces exactly the data which was obtained, and no other outcome, with probability one. In the case of the coin which came down Heads 8 times out of 10, there is always the theory that says this was bound to happen. It has certainly been confirmed by the data! This `predestination' model[*], easily confused with the data set itself, gives the data probability 1, if it can be assigned a probability at all, but you are unlikely to feel kindly disposed towards it, if only because it says absolutely nothing about what might happen if you tossed the coin again.
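The 8-heads-in-10 case makes the point concrete. A minimal sketch: under a coin model with Heads probability p, one particular sequence containing 8 heads and 2 tails has probability p^8 (1-p)^2. The best honest coin model is p = 0.8, and even its likelihood falls far short of the predestination model's probability of 1:

```python
def sequence_probability(p, heads, tails):
    # Probability, under a coin with Heads probability p, of one
    # particular sequence with the given numbers of heads and tails.
    return p ** heads * (1 - p) ** tails

print(sequence_probability(0.5, 8, 2))   # fair coin: 0.0009765625
print(sequence_probability(0.8, 8, 2))   # maximum-likelihood coin: 0.0067108864
# The `predestination' model assigns the observed sequence probability 1,
# beating both, yet it says nothing about the next toss.
```

Maximum likelihood on its own, then, rewards the model that memorises the data; some further principle is needed to rule out such degenerate theories.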


Mike Alder
9/19/1997