An LMM consists of a number of layers, each composed of a finite set of WFSAs. Each layer represents a knowledge source that models its units in terms of the units of the layer below. For example, the set of pronunciation models of a speech decoder represents words in terms of phonemes, diphonemes, or similar units. In other words, each layer is connected to the layer below it by mapping its alphabet onto the underlying models.
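The layer-to-layer mapping can be sketched as follows. This is a minimal illustration, not the original implementation: the names (`phoneme_models`, `pronunciation_layer`, `expand`) and the single word/phoneme example are hypothetical.

```python
# Hypothetical sketch: each layer maps its alphabet symbols onto models
# of the layer below (all names and data here are illustrative).

# Bottom layer: phoneme units, each realized by some acoustic model.
phoneme_models = {"k": "model_k", "ae": "model_ae", "t": "model_t"}

# Pronunciation layer: each word in its alphabet is expressed as a
# sequence of lower-layer (phoneme) units.
pronunciation_layer = {"cat": ["k", "ae", "t"]}

def expand(word):
    """Map a word to the lower-layer models that realize it."""
    return [phoneme_models[p] for p in pronunciation_layer[word]]

print(expand("cat"))
```

In a full decoder each entry would be a WFSA rather than a plain string, but the connection between layers is the same: a symbol of one layer's alphabet names a model in the layer beneath it.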
The states of an LMM, called meta-states, are characterized by a vector consisting of one model-state pair per layer, whereas the alphabet of the whole model is that of the bottom layer. Regardless of the number of layers and the types of WFSA involved, an LMM can be seen as a non-deterministic WFSA. There are no limitations on the number of layers, models, states, or symbols that make up an LMM. Integrating different knowledge sources into a single stochastic automaton is therefore a highly flexible solution.
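A meta-state can be pictured as a tuple of (model, state) pairs, one per layer. The sketch below is an assumed representation for illustration only; the class name `MetaState` and the two-layer example are not from the original.

```python
from typing import NamedTuple, Tuple

class MetaState(NamedTuple):
    """One (model, state) pair per layer, top layer first (illustrative)."""
    layers: Tuple[Tuple[str, int], ...]

# A meta-state of a hypothetical two-layer LMM: the language-model layer
# is in state 3 of model "G", while the pronunciation layer is in state 1
# of the model for the word "cat".
ms = MetaState(layers=(("G", 3), ("cat", 1)))

# The LMM as a whole behaves as a single non-deterministic WFSA whose
# state space is the set of such vectors and whose alphabet is that of
# the bottom layer.
print(ms.layers)
```

Transitions of the composite automaton would update the pairs layer by layer, which is what lets heterogeneous knowledge sources coexist in one stochastic automaton.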