
Other Possibilities

We might find cases where gaussians in arbitrary numbers look silly, perhaps by eyeballing the data under projection, perhaps by knowing something about the process which generated the data points. In such cases it is a straightforward matter to select other families of distributions from those in the standard books introducing probability theory to the young. It is also possible to choose some quantisation level and produce a histogram, or to use nearest neighbour methods as mentioned in chapter one; both are sketched below.
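The following is a minimal sketch, not taken from the text, of the last two alternatives just mentioned: a histogram density estimate at a chosen quantisation level, and a simple k-nearest-neighbour density estimate, both for one-dimensional data. The function names and the choice of bin width and k are illustrative only.

import numpy as np

def histogram_pdf(data, bin_width):
    """Estimate a pdf from 1-D data by a histogram at the given quantisation level."""
    edges = np.arange(data.min(), data.max() + bin_width, bin_width)
    counts, edges = np.histogram(data, bins=edges)
    density = counts / (counts.sum() * bin_width)    # normalise to unit area
    def pdf(x):
        i = np.searchsorted(edges, x, side="right") - 1
        inside = (i >= 0) & (i < len(density))
        return np.where(inside, density[np.clip(i, 0, len(density) - 1)], 0.0)
    return pdf

def knn_pdf(data, k):
    """k-nearest-neighbour density estimate: p(x) is roughly k divided by
    (n times the length of the smallest interval about x containing k samples)."""
    data = np.sort(data)
    n = len(data)
    def pdf(x):
        d = np.abs(data - x)          # distances from x to every sample
        r = np.sort(d)[k - 1]         # radius enclosing the k nearest samples
        return k / (n * 2.0 * r)      # interval about x has length 2r
    return pdf

# Usage: compare the two estimates at a single point on synthetic data.
rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=500)
print(histogram_pdf(sample, 0.5)(0.0), knn_pdf(sample, 25)(0.0))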

It is also possible to prefer something other than maximum likelihood models; for example, we may want to use gaussian mixtures but have strong prejudices about where the component centres should be, or believe that the covariance matrices should all be equal, or hold some other conviction equally implausible a priori. It is not hard to make the appropriate changes to the algorithms to accommodate any of these constraints.
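As a sketch of how such constraints look in practice (this particular library is not part of the text), scikit-learn's GaussianMixture accepts covariance_type="tied", which forces every component to share a single covariance matrix, and means_init, which lets us express our prejudices about where the centres should be; the EM iterations then proceed under those choices.

import numpy as np
from sklearn.mixture import GaussianMixture

# Two clumps of synthetic 2-D data, purely for illustration.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(200, 2)),
                  rng.normal([4.0, 4.0], 1.0, size=(200, 2))])

gm = GaussianMixture(
    n_components=2,
    covariance_type="tied",            # all components share one covariance matrix
    means_init=np.array([[0.0, 0.0],   # our a priori guesses for the centres
                         [4.0, 4.0]]),
)
gm.fit(data)
print(gm.means_)         # fitted centres
print(gm.covariances_)   # the single shared covariance matrix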

In the next section, we suppose that we have obtained, by hook or by crook, some pdf for each category of data point in the vicinity of a new data point, and we set about determining the category of the new point; that is, we solve the standard supervised learning pattern classification problem.

