Front-end feature transforms with context filtering for speaker adaptation
Abstract
Feature-space transforms such as feature-space maximum likelihood linear regression (FMLLR) are very effective speaker adaptation techniques, especially on mismatched test data. In this study, we extend the full-rank square matrix of FMLLR to a non-square matrix that uses neighboring feature vectors when estimating the adapted central feature vector. By optimizing an appropriate objective function, we aim to filter and transform features by exploiting the correlation within the feature context, in contrast to conventional FMLLR, which considers only the current feature vector. Our experiments are conducted on automobile speech data collected under different driving-speed conditions. Results show that context filtering gives a 23% improvement in word error rate over conventional FMLLR on noisy 60 mph data with an adapted ML model, and 7%/9% improvements over the discriminatively trained FMMI/BMMI models.
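To make the difference from standard FMLLR concrete, the sketch below shows how a non-square, context-spanning transform might be applied at decode time: each adapted frame is computed from a window of 2k+1 neighboring frames (plus a bias term), whereas FMLLR would use a square d x (d+1) matrix on the current frame alone. This is only an illustrative sketch of applying such a transform; the function and variable names (apply_context_transform, W, k) are assumptions, and the paper's actual contribution is the ML-style estimation of the transform, which is not shown here.

```python
# Minimal sketch (not the paper's estimation code) of applying a
# context-filtering feature transform. W has shape (d, (2*k+1)*d + 1),
# i.e. non-square, with the last column acting as a bias term.
import numpy as np


def apply_context_transform(feats: np.ndarray, W: np.ndarray, k: int) -> np.ndarray:
    """feats: (T, d) feature matrix; W: (d, (2*k+1)*d + 1) transform."""
    T, d = feats.shape
    # Pad utterance edges by repeating the first/last frame.
    padded = np.vstack([np.repeat(feats[:1], k, axis=0),
                        feats,
                        np.repeat(feats[-1:], k, axis=0)])
    out = np.empty_like(feats)
    for t in range(T):
        # Stack the (2*k+1)-frame context into one vector, append 1
        # for the bias, and apply the non-square transform.
        context = padded[t:t + 2 * k + 1].reshape(-1)
        out[t] = W @ np.append(context, 1.0)
    return out


if __name__ == "__main__":
    # Example with hypothetical sizes: 40-dim features, +/- 2 frames of context.
    T, d, k = 100, 40, 2
    feats = np.random.randn(T, d)
    # In FMLLR, W would be square (d x (d+1)); here it spans the whole context.
    W = np.random.randn(d, (2 * k + 1) * d + 1) * 0.01
    adapted = apply_context_transform(feats, W, k)
    print(adapted.shape)  # (100, 40)
```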