A Hidden Markov Model is merely a device for compressing strings, in other words a stochastic grammar. It does not behave well when there is much noise in the strings, that is, symbols which occur only rarely and carry no significance, and it does not do a particularly good job of extracting the brief and conceivably critical transitions which may well characterise, say, stop consonants.
Alternatives, such as simply finding substrings which recur, up to some matching criterion, can be employed and are much faster. Local grammars, and variants of them designed to accommodate noise, may conveniently be used with only a small amount of ingenuity; these will be discussed in the next chapter.
HMMs have been used in handwritten character and word recognition; one might be forgiven for suspecting that the choice is dictated by the crudest form of research practice, which requires absolutely no understanding of what you are doing: you copy out an algorithm used for tackling one problem and apply it to another which is sufficiently similar to offer hope. There are more intelligent ways, which will be discussed later.