Etienne Marcheret, Vaibhava Goel, et al.
ASRU 2009
In this paper, we study discriminative training of acoustic models for speech recognition under two criteria: maximum mutual information (MMI) and a novel "error-weighted" training technique. We present a proof that the standard MMI training technique is valid for a very general class of acoustic models with any kind of parameter tying. We report experimental results for subspace constrained Gaussian mixture models (SCGMMs), where the exponential model weights of all Gaussians are required to belong to a common "tied" subspace, as well as for subspace precision and mean (SPAM) models, which impose separate subspace constraints on the precision matrices (i.e., inverse covariance matrices) and means. It has been shown previously that SCGMMs and SPAM models generalize and yield significant error rate improvements over earlier model classes such as diagonal models, models with semi-tied covariances, and extended maximum likelihood linear transformation (EMLLT) models. We show here that MMI and error-weighted training each individually result in more than a 20% relative reduction in word error rate on a digit task compared with maximum-likelihood (ML) training. We also show that a gain of as much as 28% relative can be achieved by combining these two discriminative estimation techniques.
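To make the subspace constraint concrete, the short sketch below (not taken from the paper; the dimensions, basis size, and weights are hypothetical) illustrates the SPAM-style parametrization in which every Gaussian's precision matrix is a linear combination of a small set of shared symmetric basis matrices. In the SCGMM case the same idea is applied to the full exponential-family parameter vector rather than to the precisions and means separately.

    import numpy as np

    # Illustrative sketch of the SPAM precision constraint; all sizes and
    # values below are hypothetical, not taken from the paper.
    rng = np.random.default_rng(0)
    dim, n_basis = 4, 3

    # Shared symmetric basis matrices S_k, tied across all Gaussians.
    S = np.stack([(lambda A: A + A.T)(rng.standard_normal((dim, dim)))
                  for _ in range(n_basis)])

    def spam_precision(weights, basis):
        """Assemble one Gaussian's precision matrix P_g = sum_k lambda_{g,k} S_k."""
        return np.tensordot(weights, basis, axes=1)

    # Per-Gaussian weights lambda_g; SPAM training additionally constrains
    # them so that P_g stays positive definite, which this toy example does
    # not enforce.
    lam = rng.standard_normal(n_basis)
    P_g = spam_precision(lam, S)
    print(P_g.shape, np.allclose(P_g, P_g.T))  # (4, 4), symmetric by construction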
Dan Ning Jiang, Dimitri Kanevsky, et al.
INTERSPEECH 2012
Scott Axelrod, Vaibhava Goel, et al.
IEEE Transactions on Speech and Audio Processing
Petr Fousek, Pierre Dognin, et al.
ICASSP 2015