Online speaker diarization using adapted i-vector transforms
Weizhong Zhu, Jason Pelecanos
ICASSP 2016
We study large-scale kernel methods for acoustic modeling and compare them to DNNs on performance metrics related to both acoustic modeling and recognition. Measured by perplexity and frame-level classification accuracy, kernel-based acoustic models are as effective as their DNN counterparts. However, on token error rates, DNN models can be significantly better. We find that this might be attributed to the DNN's unique strength in reducing both the perplexity and the entropy of the predicted posterior probabilities. Motivated by these findings, we propose a new technique, entropy regularized perplexity, for model selection. This technique noticeably improves the recognition performance of both types of models and reduces the gap between them. While demonstrated on Broadcast News, the technique could also be applicable to other tasks.
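The abstract does not give the exact formulation of the entropy regularized perplexity criterion, so the following is only a minimal sketch of the idea it describes: score a model on held-out frames by its perplexity, penalized by the average entropy of its predicted posteriors. The function names, the exponentiated combination, and the trade-off weight `alpha` are illustrative assumptions, not the authors' definition.

```python
# Hypothetical sketch of an entropy-regularized perplexity score for model
# selection. The combination below (cross-entropy plus weighted posterior
# entropy, exponentiated) is an assumption; the paper's exact formula may differ.
import numpy as np

def cross_entropy(posteriors: np.ndarray, labels: np.ndarray) -> float:
    """Average frame-level cross-entropy (nats).

    posteriors: (num_frames, num_states), rows summing to 1
    labels:     (num_frames,) integer state indices
    """
    eps = 1e-12
    return float(-np.mean(np.log(posteriors[np.arange(len(labels)), labels] + eps)))

def mean_entropy(posteriors: np.ndarray) -> float:
    """Average entropy (nats) of the predicted posterior distributions."""
    eps = 1e-12
    return float(-np.mean(np.sum(posteriors * np.log(posteriors + eps), axis=1)))

def entropy_regularized_perplexity(posteriors, labels, alpha=1.0):
    """Perplexity penalized by posterior entropy; lower is better.

    alpha is a hypothetical weight trading off fit (cross-entropy) against
    sharpness (entropy) of the predictions.
    """
    return float(np.exp(cross_entropy(posteriors, labels)
                        + alpha * mean_entropy(posteriors)))

if __name__ == "__main__":
    # Toy usage: compare candidate models by their held-out score.
    rng = np.random.default_rng(0)
    frames, states = 1000, 50
    labels = rng.integers(0, states, size=frames)
    logits = rng.normal(size=(frames, states))
    post = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(entropy_regularized_perplexity(post, labels, alpha=0.5))
```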
Asaf Rendel, Raul Fernandez, et al.
ICASSP 2016
Kartik Audhkhasi, Abhinav Sethy, et al.
ICASSP 2016
Huan Song, Jayaraman J. Thiagarajan, et al.
ICASSP 2016