Robert C. Durbeck
IEEE TACON
In this paper, we propose a bilevel joint unsupervised and supervised training (BL-JUST) framework for automatic speech recognition. Compared with the conventional pre-training and fine-tuning strategy, which is a disconnected two-stage process, BL-JUST optimizes an acoustic model so that it simultaneously minimizes both the unsupervised and supervised loss functions. Because BL-JUST seeks matched local optima of both loss functions, the learned acoustic representations strike a good balance between being generic and task-specific. We solve the BL-JUST problem using penalty-based bilevel gradient descent and evaluate the trained deep neural network acoustic models on various datasets with a variety of architectures and loss functions. We show that BL-JUST can outperform the widely used pre-training and fine-tuning strategy and some other popular semi-supervised techniques.
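The penalty-based bilevel reformulation described in the abstract can be made concrete with a short sketch: the lower-level unsupervised loss is folded into the upper-level supervised objective as a penalty term whose coefficient grows over training. Everything below (the toy model, the placeholder losses, the penalty schedule) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of penalty-based joint unsupervised + supervised training.
# TinyAcousticModel, the toy losses, and the gamma schedule are hypothetical
# stand-ins; a real ASR system would use, e.g., a Conformer encoder,
# a contrastive self-supervised loss, and a CTC supervised loss.
import torch
import torch.nn as nn

class TinyAcousticModel(nn.Module):
    """Stand-in for a real acoustic encoder."""
    def __init__(self, feat_dim=80, hidden=256, vocab=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h = self.encoder(x)          # shared acoustic representation
        return h, self.head(h)       # (features, logits)

def unsup_loss(h):
    # Placeholder self-supervised objective on unlabeled features.
    return h.pow(2).mean()

def sup_loss(logits, targets):
    # Placeholder supervised objective; ASR would typically use CTC.
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

model = TinyAcousticModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x_unlab = torch.randn(8, 50, 80)        # unlabeled feature batch
    x_lab = torch.randn(8, 50, 80)          # labeled feature batch
    y_lab = torch.randint(0, 32, (8, 50))   # toy frame-level labels

    gamma = min(1.0, step / 50)             # growing penalty coefficient
    h_u, _ = model(x_unlab)
    _, logits = model(x_lab)

    # Upper-level (supervised) loss plus penalized lower-level
    # (unsupervised) loss, descended jointly on shared parameters.
    loss = sup_loss(logits, y_lab) + gamma * unsup_loss(h_u)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Growing the penalty coefficient over training progressively enforces the lower-level optimality condition, which is the usual motivation for penalty methods in bilevel optimization; the linear schedule here is only one plausible choice.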
Minkyong Kim, Zhen Liu, et al.
INFOCOM 2008
William Hinsberg, Joy Cheng, et al.
SPIE Advanced Lithography 2010
Raghu Krishnapuram, Krishna Kummamuru
IFSA 2003