ISCA Archive - Interspeech 2010

Viseme-dependent weight optimization for CHMM-based audio-visual speech recognition

Alexey Karpov, Andrey Ronzhin, Konstantin Markov, Miloš Železný

The aim of the present study is to investigate some key challenges of audio-visual speech recognition technology, such as asynchrony modeling of multimodal speech, estimation of the significance of auditory and visual speech, and stream weight optimization. Our research shows that the use of viseme-dependent significance weights improves the performance of a state-asynchronous CHMM-based speech recognizer. In addition, for a state-synchronous MSHMM-based recognizer, fewer errors can be achieved by applying stationary time delays to the visual data with respect to the corresponding audio signal. Evaluation experiments showed that individual audio-visual stream weights for each viseme-phoneme pair lead to a relative WER reduction of 20%.
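To make the idea of viseme-dependent stream weights concrete, the sketch below shows the usual log-linear fusion of audio and visual observation scores in a multi-stream/coupled HMM, with a separate weight pair per viseme class. This is a minimal illustration only, not the authors' implementation: the weight values, class names, and function names are hypothetical, and in practice the weights would be optimized on development data.

```python
# Hypothetical viseme-dependent stream weights (lambda_audio, lambda_visual),
# one pair per viseme class; values here are illustrative, not the paper's.
VISEME_WEIGHTS = {
    "bilabial": (0.60, 0.40),  # visually salient viseme -> larger visual weight
    "velar":    (0.85, 0.15),  # visually ambiguous viseme -> rely mostly on audio
}

def combined_log_likelihood(log_b_audio: float,
                            log_b_visual: float,
                            viseme_class: str) -> float:
    """Weighted log-linear fusion of the audio and visual observation
    log-likelihoods of one HMM state, using the weight pair assigned
    to that state's viseme class."""
    lam_a, lam_v = VISEME_WEIGHTS[viseme_class]
    return lam_a * log_b_audio + lam_v * log_b_visual

# Example: a bilabial state, where the visual stream is comparatively reliable
print(combined_log_likelihood(log_b_audio=-12.3, log_b_visual=-8.7,
                              viseme_class="bilabial"))
```

Using a single global weight pair corresponds to collapsing this table to one entry; the abstract's reported 20% relative WER reduction comes from allowing the weights to differ per viseme-phoneme pair.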