Discriminative estimation of subspace precision and mean (SPAM) models
Abstract
The SPAM model was recently proposed as a very general method for modeling Gaussians with constrained means and covariances. It has been shown to yield significant error rate improvements over other methods of constraining covariances, such as diagonal covariances, semi-tied covariances, and extended maximum likelihood linear transformations. In this paper we address the problem of discriminative estimation of SPAM model parameters, in an attempt to further improve its performance. We present discriminative estimation under two criteria: maximum mutual information (MMI) and "error-weighted" training. We show that each of these methods individually results in over 20% relative reduction in word error rate on a digit task over maximum likelihood (ML) estimated SPAM model parameters. We also show that a gain of as much as 28% relative can be achieved by combining these two discriminative estimation techniques. The techniques developed in this paper also apply directly to an extension of SPAM called subspace constrained exponential models.
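For context, the MMI criterion mentioned above is conventionally written as follows; this is the standard formulation from the discriminative training literature, not a formula taken from this paper, with $\lambda$ denoting the acoustic model parameters, $O_r$ the $r$-th training utterance, and $s_r$ its reference transcription:

$$
\mathcal{F}_{\mathrm{MMI}}(\lambda) \;=\; \sum_{r} \log \frac{p_\lambda(O_r \mid s_r)\, P(s_r)}{\sum_{s'} p_\lambda(O_r \mid s')\, P(s')}
$$

Maximizing this objective raises the likelihood of the reference transcription relative to the total likelihood over all competing hypotheses $s'$, which is what distinguishes it from ML estimation of the SPAM parameters.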