Maximum-likelihood nonlinear transformation for acoustic adaptation
Abstract
In this paper, we describe an adaptation method for speech recognition systems that is based on a nonlinear transformation of the feature space. In contrast to most existing adaptation methods, which assume some form of affine transformation of either the feature vectors or the acoustic models that model them, the proposed method composes a general nonlinear transformation from two components: an affine transformation that combines the dimensions of the original feature space, and a nonlinear transformation applied independently to each dimension of the affinely transformed space. Together, these yield a general multidimensional nonlinear transformation of the original feature space. The method also differs from other affine techniques in how the transformation parameters are shared: whereas most previous methods tie parameters by phonetic class, our method ties the parameters of the nonlinear transformation by location in the feature space. Experimental results show that the method outperforms affine methods, providing up to a 25% relative improvement in word error rate on an in-car speech recognition task.
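As a sketch of the composition described above (the symbols $A$, $b$, and $g_d$ are illustrative notation, not taken from the paper): an affine map first mixes the $D$ feature dimensions, and a scalar nonlinearity is then applied to each resulting dimension, with the nonlinearities tied by region of the feature space rather than by phonetic class.

% Illustrative notation; the paper's own symbols may differ.
\begin{align*}
  z   &= A x + b                        &&\text{affine mixing of the $D$ feature dimensions} \\
  y_d &= g_d(z_d), \quad d = 1,\dots,D  &&\text{per-dimension scalar nonlinearity} \\
  y   &= g(Ax + b)                      &&\text{composed nonlinear transform of the feature vector $x$}
\end{align*}

Under this reading, the parameters of $A$, $b$, and the $g_d$ would be estimated by maximum likelihood, consistent with the title.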