Robust Emotion Recognition using Spectral and Prosodic Features
In this brief, the authors discuss recently explored spectral features (sub-segmental and pitch-synchronous) and prosodic features (global and local features at the word and syllable levels, in different parts of the utterance) for robustly discerning emotions.
The authors also examine the complementary evidence obtained from excitation source, vocal tract system, and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking-rate characteristics are explored with the help of multi-stage and hybrid models to further improve emotion recognition performance. The proposed spectral and prosodic features are evaluated on a real-life emotional speech corpus.
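To make the notion of global prosodic features concrete, the sketch below computes utterance-level statistics of energy and pitch from a raw waveform. The frame sizes and the crude autocorrelation pitch tracker are common textbook defaults chosen for illustration, not the authors' exact feature set.

```python
import numpy as np

def global_prosodic_features(signal, sr, frame_len=0.025, hop=0.010):
    """Illustrative global prosodic statistics for one utterance:
    frame energy mean/std, F0 mean/range, and total duration."""
    n = int(frame_len * sr)   # samples per analysis frame
    h = int(hop * sr)         # hop between frames
    energies, pitches = [], []
    for start in range(0, len(signal) - n, h):
        frame = signal[start:start + n]
        energies.append(np.sum(frame ** 2))
        # Crude autocorrelation pitch estimate, searched in 60-400 Hz
        ac = np.correlate(frame, frame, mode="full")[n - 1:]
        lo, hi = int(sr / 400), int(sr / 60)
        if hi < n and ac[0] > 0:
            lag = lo + int(np.argmax(ac[lo:hi]))
            pitches.append(sr / lag)
    energies = np.array(energies)
    pitches = np.array(pitches)
    return {
        "energy_mean": float(energies.mean()),
        "energy_std": float(energies.std()),
        "f0_mean": float(pitches.mean()) if len(pitches) else 0.0,
        "f0_range": float(pitches.max() - pitches.min()) if len(pitches) else 0.0,
        "duration_s": len(signal) / sr,
    }

# Sanity check on a synthetic 1-second 200 Hz tone at 16 kHz
sr = 16000
t = np.arange(sr) / sr
feats = global_prosodic_features(np.sin(2 * np.pi * 200 * t), sr)
```

In a full system, such statistics would be computed not only globally but also locally (per syllable or word, and in the initial, middle, and final parts of the utterance), as the brief describes.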
This brief:
- Deals with emotions in terms of how to characterize them, how to acquire emotion-specific information from speech conversations, and finally how to incorporate the acquired emotion-specific information to synthesize the desired emotions
- Proposes pitch-synchronous and sub-syllabic spectral features for characterizing emotions
- Explores global and local prosodic features at the syllable, word, and phrase levels to capture emotion-discriminative information
- Demonstrates recognition of real-life emotions using hierarchical models based on speaking rate
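The hierarchical idea above can be sketched as a two-stage classifier: a first stage routes an utterance by speaking rate, and a second stage discriminates among emotions within that rate group. The emotion groupings, centroid values, and threshold below are invented for illustration and are not taken from the brief.

```python
import numpy as np

# Hypothetical rate groups with per-emotion centroids over a toy
# prosodic feature vector [f0_mean_hz, normalized_energy].
GROUPS = {
    "fast": {"anger": np.array([250.0, 0.8]),
             "happiness": np.array([220.0, 0.6])},
    "slow": {"sadness": np.array([150.0, 0.2]),
             "neutral": np.array([180.0, 0.4])},
}

def classify(speaking_rate, features, rate_threshold=4.5):
    """Stage 1: pick a rate group by syllables/second.
    Stage 2: nearest-centroid emotion within that group."""
    group = "fast" if speaking_rate > rate_threshold else "slow"
    centroids = GROUPS[group]
    return min(centroids, key=lambda e: np.linalg.norm(features - centroids[e]))

# A fast, high-pitched, energetic utterance routes to the fast group
label = classify(speaking_rate=6.0, features=np.array([245.0, 0.75]))
```

Restricting the second stage to a rate-matched subset of emotions is what lets speaking-rate evidence complement the spectral and prosodic features in the multi-stage setting.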