Publications

Multimodal embeddings from language models for emotion recognition in the wild

Abstract

Contextualized word embeddings from models such as ELMo and BERT have been shown to capture word usage more effectively by learning from large-scale language corpora, yielding significant performance improvements across many natural language processing tasks. In this work, we integrate acoustic information into contextualized lexical embeddings by adding a parallel stream to the bidirectional language model. The resulting multimodal language model is trained on spoken language data that includes both text and audio modalities. We show that embeddings extracted from this model integrate paralinguistic cues into word meanings, and that they provide vital affective information when applied to the task of speaker emotion recognition.
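The abstract describes adding a parallel acoustic stream alongside the text stream of a bidirectional language model. A minimal sketch of that idea in PyTorch is below; the layer sizes, the concatenation-based fusion, and all class and parameter names (`MultimodalBiLM`, `word_dim`, `acoustic_dim`) are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultimodalBiLM(nn.Module):
    """Illustrative sketch: a biLM with a parallel acoustic stream.

    Two bidirectional LSTMs run side by side, one over word embeddings
    and one over per-word acoustic features; their hidden states are
    concatenated to form multimodal contextual embeddings.
    """

    def __init__(self, vocab_size, word_dim=64, acoustic_dim=40, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        # Parallel streams: lexical and acoustic (assumed fusion scheme).
        self.text_lstm = nn.LSTM(word_dim, hidden,
                                 bidirectional=True, batch_first=True)
        self.audio_lstm = nn.LSTM(acoustic_dim, hidden,
                                  bidirectional=True, batch_first=True)
        # LM head over the fused representation (2 streams x 2 directions).
        self.proj = nn.Linear(4 * hidden, vocab_size)

    def forward(self, tokens, acoustic):
        # tokens:   (batch, seq)              integer word ids
        # acoustic: (batch, seq, acoustic_dim) per-word acoustic features
        t, _ = self.text_lstm(self.embed(tokens))
        a, _ = self.audio_lstm(acoustic)
        fused = torch.cat([t, a], dim=-1)  # multimodal embeddings
        return fused, self.proj(fused)

model = MultimodalBiLM(vocab_size=100)
tokens = torch.randint(0, 100, (2, 5))
acoustic = torch.randn(2, 5, 40)
emb, logits = model(tokens, acoustic)
```

After language-model training, `emb` would be the multimodal embedding extracted for a downstream emotion classifier; with the sizes above it has shape `(2, 5, 256)`.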

Date
2021
Authors
Shao-Yen Tseng, Shrikanth Narayanan, Panayiotis Georgiou
Journal
IEEE Signal Processing Letters
Volume
28
Pages
608–612
Publisher
IEEE