Xiaodong Cui, Vaibhava Goel, et al.
INTERSPEECH 2017
End-to-end models (or sequence-to-sequence models) based on deep neural networks have recently become popular within the machine learning community. These techniques are also increasingly used in automatic speech recognition as an alternative to state-of-the-art hybrid HMM-DNN (hidden Markov model, deep neural network) systems. End-to-end systems use a purely neural architecture that eliminates the need for any time alignment between the input acoustic feature vector sequence and the output phone sequence. In this paper, we present progress within the IBM Watson Multimodal Group on end-to-end models for spoken language processing. We present our work on two types of end-to-end models applied to speech-to-text and keyword search tasks, namely, 1) recurrent neural networks (RNNs) trained with the connectionist temporal classification (CTC) loss, and 2) attention-based encoder-decoder RNNs. We present results on several languages (Pashto, Mongolian, Javanese, Amharic, Guarani, Dholuo, Igbo, and Georgian) from the Babel Program funded by the Intelligence Advanced Research Projects Activity (IARPA). We also present a detailed analysis of some salient characteristics of these models compared with state-of-the-art HMM-DNN hybrid systems, and discuss future challenges in using such models for spoken language processing.
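The CTC loss mentioned in the abstract is what removes the need for frame-level alignment: it sums the probability of every frame-level labeling (with blanks and repeats) that collapses to the target sequence. As a rough illustration of the idea, not the paper's implementation, here is a minimal NumPy sketch of the CTC forward algorithm; the shapes, variable names, and toy data are illustrative assumptions.

```python
import numpy as np

def ctc_loss(log_probs, target, blank=0):
    """Negative log-likelihood of `target` under CTC via the forward algorithm.

    log_probs: (T, V) array of per-frame log posteriors over V symbols
               (symbol 0 is the blank, by assumption here).
    target:    label sequence without blanks, e.g. [1, 2].
    """
    # Interleave blanks around the labels: [b, t1, b, t2, ..., b]
    ext = [blank]
    for t in target:
        ext += [t, blank]
    S, T = len(ext), log_probs.shape[0]

    # alpha[t, s] = log-probability of all alignment prefixes that
    # reach extended-label position s after consuming t+1 frames.
    alpha = np.full((T, S), -np.inf)
    alpha[0, 0] = log_probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = log_probs[0, ext[1]]

    for t in range(1, T):
        for s in range(S):
            # Stay on the same symbol, or advance from the previous one.
            cands = [alpha[t - 1, s]]
            if s > 0:
                cands.append(alpha[t - 1, s - 1])
            # Skip a blank between two distinct non-blank labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(alpha[t - 1, s - 2])
            alpha[t, s] = np.logaddexp.reduce(cands) + log_probs[t, ext[s]]

    # Valid alignments end on the final label or the trailing blank.
    return -np.logaddexp(alpha[T - 1, S - 1], alpha[T - 1, S - 2])
```

Because the sum runs over all collapsible alignments, the network can be trained directly from (audio, transcript) pairs; the dynamic program above makes that sum tractable in O(T·S) instead of enumerating the V^T paths explicitly.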
Sameer R. Maskey, Andrew Rosenberg, et al.
INTERSPEECH 2008
George Saon, Mukund Padmanabhan
ICSLP 2000
Kartik Audhkhasi, Bhuvana Ramabhadran, et al.
INTERSPEECH 2017