Low-Resource Speech Recognition of 500-Word Vocabularies
Sabine Deligne, Ellen Eide, et al.
INTERSPEECH - Eurospeech 2001
In this paper, we study discriminative training of acoustic models for speech recognition under two criteria: maximum mutual information (MMI) and a novel "error-weighted" criterion. We present a proof that the standard MMI training technique is valid for a very general class of acoustic models with any kind of parameter tying. We report experimental results for subspace constrained Gaussian mixture models (SCGMMs), in which the exponential-model weights of all Gaussians are required to belong to a common "tied" subspace, as well as for subspace precision and mean (SPAM) models, which impose separate subspace constraints on the precision matrices (i.e., inverse covariance matrices) and the means. It has been shown previously that SCGMMs and SPAM models generalize, and yield significant error rate improvements over, earlier model classes such as diagonal covariance models, models with semi-tied covariances, and extended maximum likelihood linear transformation (EMLLT) models. We show here that MMI and error-weighted training each individually yield a relative word error rate reduction of more than 20% on a digit task, compared with maximum-likelihood (ML) training. We also show that a relative gain of as much as 28% can be achieved by combining the two discriminative estimation techniques.
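For orientation, the following is a sketch of the standard formulations that the abstract refers to, not equations taken from the paper itself; the acoustic scale kappa, the basis elements b_k and S_k, and the per-Gaussian weights lambda are generic notation introduced here. The MMI criterion maximizes the log posterior probability of the reference transcription W_r given the acoustics X_r, summed over the R training utterances:

    F_{\mathrm{MMI}}(\theta)
      = \sum_{r=1}^{R} \log
        \frac{p_\theta(X_r \mid W_r)^{\kappa}\, P(W_r)}
             {\sum_{W} p_\theta(X_r \mid W)^{\kappa}\, P(W)}

In the SCGMM view, each Gaussian g is written as an exponential model p_g(x) \propto \exp(\psi_g^{\top} \phi(x)) with feature vector \phi(x) = (x, \mathrm{vec}(x x^{\top})), and the tied-subspace constraint requires every natural parameter vector \psi_g to be a weighted combination of basis vectors shared across all Gaussians:

    \psi_g = \sum_{k=1}^{D} \lambda_g^{(k)} b_k

Only the weights \lambda_g^{(k)} are Gaussian-specific. SPAM models are then the special case in which the quadratic (precision) part and the linear (mean-related) part of \psi_g are given separate tied subspaces, e.g. for the precision matrices

    P_g = \sum_{k=1}^{D_p} \lambda_g^{(k)} S_k

with shared symmetric basis matrices S_k, together with an analogous subspace constraint on the mean-related linear parameters.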