
Layered Markov Model

An LMM consists of a number of layers, each composed of a finite set of weighted finite-state automata (WFSA). Each layer represents a knowledge source that models its units in terms of the units of the layer below. For example, the pronunciation models of a speech decoder represent words in terms of phonemes, diphones or similar units. In other words, each layer is connected to the underlying one by mapping its alphabet onto the underlying models.
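
To make the layered structure concrete, here is a minimal Java sketch; the type and field names are illustrative assumptions, not the actual Sautrela API. It models a layer as a finite set of WFSA together with the mapping that connects the layer's alphabet to the models of the layer below.

    import java.util.List;
    import java.util.Map;

    // Illustrative sketch only; all names are assumptions.

    /** A weighted finite-state automaton; states, arcs and weights elided. */
    record Wfsa(String name, List<String> alphabet) {}

    /** One LMM layer: a finite set of WFSA plus the mapping that connects
     *  this layer's alphabet to models of the underlying layer. The map is
     *  empty for the bottom layer, whose alphabet is that of the whole LMM. */
    record Layer(List<Wfsa> models, Map<String, Wfsa> expansion) {}

    /** An LMM is an ordered stack of layers, from top to bottom. */
    record Lmm(List<Layer> layers) {}

In the speech decoder example above, the expansion map of the word layer would send each word to its pronunciation WFSA in the phoneme layer.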

The states of an LMM, called meta-states, are characterized by a vector made up of one model-state pair per layer, whereas the alphabet of the whole model is that of the bottom layer. Regardless of the number of layers and the types of WFSA involved, an LMM can be seen as a non-deterministic WFSA. There are no limits on the number of layers, models, states or symbols that make up an LMM. Therefore, integrating different knowledge sources into a single stochastic automaton turns out to be a highly flexible solution.
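
Continuing the same hypothetical sketch, a meta-state can be represented as one (model, state) pair per layer; arcs between meta-states then carry bottom-layer symbols, which is what lets the whole stack be traversed as a single non-deterministic WFSA.

    import java.util.List;

    // Illustrative sketch only; names are assumptions, not the Sautrela API.

    /** The active (model, state) pair of one layer. */
    record LayerState(String modelName, int stateId) {}

    /** A meta-state of the LMM: one (model, state) pair per layer, ordered
     *  from the top layer down. Transitions between meta-states are labeled
     *  with bottom-layer symbols, so the stack behaves as a single
     *  non-deterministic WFSA. */
    record MetaState(List<LayerState> perLayer) {}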

[Figure: LMM.png]