Speech recognition performance on a voicemail transcription task
M. Padmanabhan, E. Eide, et al.
ICASSP 1998
We are interested in the ability of a text-to-speech system to convey emotion. Toward that end, we examine how well the underlying emotion present in a training corpus can be preserved in synthesized speech. We consider the emotions "lively", "sad", and "angry", and compare the ability to convey each of these in synthesized speech against a neutral-speech baseline. We also examine the confusion rates encountered when listeners are asked to identify the speaker's apparent emotional state. We conclude from our experiments that a viable method for building a text-to-speech system that conveys a certain emotion is simply to collect data spoken in that emotional state. However, our experiments also show that to achieve a given level of perceived emotion in the synthetic speech, we must record natural speech with a higher level of that emotion than is desired in the synthetic output. Finally, we discuss how an emotional TTS system might be used.
E. Eide, B. Maison, et al.
ICSLP 2000
E. Eide, A. Aaron, et al.
SSW 2004
A. Aaron, S. Chen, et al.
ICASSP