Image Manipulation via Neuro-Symbolic Networks
Harman Singh, Poorva Garg, et al.
NeurIPS 2022
Recently proposed methods for discriminative language modeling require alternate hypotheses in the form of lattices or N-best lists. These are usually generated by an Automatic Speech Recognition (ASR) system on the same speech data used to train the system. This requirement restricts the scope of these methods to corpora where both the acoustic material and the corresponding true transcripts are available. Typically, the text data available for language model (LM) training is an order of magnitude larger than the amount of manually transcribed speech. This paper provides a general framework for taking advantage of this volume of textual data in the discriminative training of language models. We propose to generate plausible N-best lists directly from the text by incorporating phonetic confusability estimated from the acoustic model of the ASR system, so that they resemble the N-best lists the ASR system itself would produce. Experiments with Japanese spontaneous lecture speech data demonstrate that discriminative LM training with the proposed framework is effective and yields modest gains in ASR accuracy.
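A minimal sketch of how such simulated N-best lists might be produced from text alone, assuming a pronunciation lexicon and a phone-confusion distribution estimated offline from the acoustic model. All names here (PHONE_CONFUSIONS, LEXICON, simulate_nbest, the example words) are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical phone-confusion probabilities, assumed to be estimated
# offline from the ASR system's acoustic model.
PHONE_CONFUSIONS = {
    "s":  {"s": 0.90, "sh": 0.07, "z": 0.03},
    "sh": {"sh": 0.88, "s": 0.09, "ch": 0.03},
    "t":  {"t": 0.92, "d": 0.05, "k": 0.03},
    "k":  {"k": 0.93, "t": 0.04, "g": 0.03},
}

# Hypothetical pronunciation lexicon: word -> phone sequence.
LEXICON = {
    "sea": ["s", "iy"],
    "she": ["sh", "iy"],
    "tea": ["t", "iy"],
    "key": ["k", "iy"],
}
PHONES_TO_WORD = {tuple(v): k for k, v in LEXICON.items()}


def confuse_phone(phone):
    """Sample a (possibly identical) phone from its confusion distribution."""
    dist = PHONE_CONFUSIONS.get(phone, {phone: 1.0})
    phones, probs = zip(*dist.items())
    return random.choices(phones, weights=probs, k=1)[0]


def simulate_hypothesis(words):
    """Perturb a reference word sequence into one confusable hypothesis."""
    hyp = []
    for word in words:
        phones = tuple(confuse_phone(p) for p in LEXICON.get(word, []))
        # Map the perturbed phone string back to a word if the lexicon
        # contains one; otherwise keep the original word unchanged.
        hyp.append(PHONES_TO_WORD.get(phones, word))
    return hyp


def simulate_nbest(reference, n=10):
    """Generate an N-best-like list of hypotheses for a text-only sentence.

    A real system would rank hypotheses by combined acoustic and LM scores;
    here we simply collect unique perturbed samples.
    """
    words = reference.split()
    hypotheses = {" ".join(simulate_hypothesis(words)) for _ in range(5 * n)}
    return sorted(hypotheses)[:n]


if __name__ == "__main__":
    print(simulate_nbest("she drank tea by the sea", n=5))
```

In this sketch the simulated hypotheses contain exactly the kind of acoustically confusable substitutions (e.g. "sea" vs. "she", "tea" vs. "key") that an ASR decoder would plausibly produce, which is what discriminative LM training needs as negative examples.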