Guo-Jun Qi, Charu Aggarwal, et al.
IEEE TPAMI
This paper describes experiments in emotive spoken-language user interfaces. It finds that optimizing the use of multimodal information for recognition and synthesis across both the audio and video modalities improves recognition accuracy and synthesis quality. Topics covered include: speech and emotion recognition by humans; automatic audiovisual speech and emotion recognition; audiovisual speech synthesis; emotive prosody; and emotionally nuanced audiovisual speech.
A.R. Conn, Nick Gould, et al.
Mathematics of Computation
Robert F. Gordon, Edward A. MacNair, et al.
WSC 1985
Timothy J. Wiltshire, Joseph P. Kirk, et al.
SPIE Advanced Lithography 1998