Neural Model Reprogramming with Similarity Based Mapping for Low-Resource Spoken Command Classification
Abstract
We propose a novel adversarial reprogramming (AR) approach for low-resource spoken command recognition (SCR) and build an AR-SCR system. The AR procedure aims to modify the acoustic signals (from the target domain) so that a pretrained SCR model (from the source domain) can be repurposed to classify them. To solve the label mismatch between the source and target domains and to further improve the stability of AR, we propose a novel similarity-based label mapping technique to align the source and target classes. In addition, the transfer learning (TL) technique is combined with the original AR process to improve the model's adaptation capability. We evaluate the proposed AR-SCR system on three low-resource SCR datasets, covering Arabic, Lithuanian, and dysarthric Mandarin speech. Experimental results show that, with a pretrained acoustic model trained on a large-scale English dataset, the proposed AR-SCR system outperforms the current state-of-the-art results on the Lithuanian and Arabic datasets while using only a limited amount of training data.
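To make the two core ideas concrete, the sketch below illustrates (in PyTorch) one common way adversarial reprogramming with a many-to-one label mapping can be realized: a trainable additive perturbation is applied to target-domain waveforms before a frozen source-domain classifier, and source-class probabilities are aggregated into target-class scores via a mapping built from class-embedding similarities. This is an illustrative assumption-laden sketch, not the paper's exact implementation; `source_model`, the class-embedding inputs, and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def similarity_label_map(src_emb: torch.Tensor, tgt_emb: torch.Tensor,
                         per_target: int = 3) -> torch.Tensor:
    """Assign each target class its `per_target` most similar source classes
    by cosine similarity between (hypothetical) class embeddings.
    Returns a (num_source_classes, num_target_classes) 0/1 assignment matrix."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    sim = tgt @ src.T                                   # (num_tgt, num_src)
    top = sim.topk(per_target, dim=-1).indices
    label_map = torch.zeros(src_emb.size(0), tgt_emb.size(0))
    for t, idx in enumerate(top):
        label_map[idx, t] = 1.0                         # many-to-one mapping
    return label_map


class Reprogrammer(nn.Module):
    """Minimal adversarial-reprogramming wrapper (illustrative only):
    only the input perturbation `delta` is trained; the source model is frozen."""

    def __init__(self, source_model: nn.Module, signal_len: int,
                 label_map: torch.Tensor):
        super().__init__()
        self.source_model = source_model.eval()         # frozen pretrained SCR model
        for p in self.source_model.parameters():
            p.requires_grad_(False)
        self.delta = nn.Parameter(torch.zeros(signal_len))  # trainable perturbation
        self.register_buffer("label_map", label_map.float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, signal_len) raw target-domain waveforms
        reprogrammed = x + self.delta                   # universal additive perturbation
        source_logits = self.source_model(reprogrammed)  # (batch, num_source_classes)
        probs = source_logits.softmax(dim=-1)
        # aggregate mapped source-class probabilities into target-class scores
        return probs @ self.label_map                   # (batch, num_target_classes)
```

In this reading, training reduces to minimizing a standard cross-entropy loss on the aggregated target-class scores with respect to `delta` alone, while TL-style fine-tuning would additionally unfreeze part of `source_model`.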