TOWARDS A COMMON SPEECH ANALYSIS ENGINE
Hagai Aronowitz, Itai Gat, et al.
ICASSP 2022
Recent advances in End-to-End (E2E) Spoken Language Understanding (SLU) have been primarily due to effective pretraining of speech representations. One such pretraining paradigm is the distillation of semantic knowledge from state-of-the-art text-based models like BERT into speech encoder neural networks. This work is a step towards doing the same in a much more efficient and fine-grained manner, aligning speech embeddings and BERT embeddings on a token-by-token basis. We introduce a simple yet novel technique that uses a cross-modal attention mechanism to extract token-level contextual embeddings from a speech encoder so that they can be directly compared and aligned with BERT-based contextual embeddings. This alignment is performed using a novel tokenwise contrastive loss. Fine-tuning such a pretrained model to perform intent recognition directly from speech yields state-of-the-art performance on two widely used SLU datasets. Our model improves further when fine-tuned with additional SpecAugment regularization, especially when the speech is noisy, giving an absolute improvement of up to 8% over previous results.
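To make the described objective concrete, the following is a minimal sketch of how token-level speech embeddings could be pooled with cross-modal attention and aligned to BERT embeddings with a tokenwise contrastive (InfoNCE-style) loss. All names, dimensions, and the temperature value are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: BERT token embeddings query the frame-level outputs of a
# speech encoder via cross-modal attention, yielding one pooled speech embedding
# per text token; a tokenwise contrastive loss then pulls each pooled embedding
# towards its matching BERT contextual embedding and away from all other tokens.
import torch
import torch.nn.functional as F
from torch import nn


class CrossModalTokenAligner(nn.Module):
    def __init__(self, speech_dim=512, bert_dim=768, temperature=0.07):
        super().__init__()
        # BERT tokens act as queries; speech frames act as keys and values.
        self.attn = nn.MultiheadAttention(embed_dim=bert_dim, kdim=speech_dim,
                                          vdim=speech_dim, num_heads=8,
                                          batch_first=True)
        self.temperature = temperature

    def forward(self, speech_frames, bert_tokens):
        # speech_frames: (B, T_frames, speech_dim) from the speech encoder
        # bert_tokens:   (B, T_tokens, bert_dim) contextual BERT embeddings
        speech_tok, _ = self.attn(bert_tokens, speech_frames, speech_frames)
        return speech_tok  # (B, T_tokens, bert_dim): one speech vector per token

    def tokenwise_contrastive_loss(self, speech_tok, bert_tokens):
        # Flatten the batch so every token is a training example; each speech
        # token embedding must identify its own BERT embedding among all others.
        s = F.normalize(speech_tok.reshape(-1, speech_tok.size(-1)), dim=-1)
        t = F.normalize(bert_tokens.reshape(-1, bert_tokens.size(-1)), dim=-1)
        logits = s @ t.t() / self.temperature           # (N, N) cosine similarities
        targets = torch.arange(s.size(0), device=s.device)
        return F.cross_entropy(logits, targets)         # diagonal = positive pairs


# Toy usage with random tensors in place of real encoder and BERT outputs.
aligner = CrossModalTokenAligner()
speech = torch.randn(4, 200, 512)
text = torch.randn(4, 20, 768)
loss = aligner.tokenwise_contrastive_loss(aligner(speech, text), text)
```

Using the BERT tokens as attention queries is what makes the comparison token-by-token: a variable number of speech frames is summarized into exactly one vector per text token, so the two sequences have the same length and can be contrasted directly.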