Multimodal multimodel emotion analysis as linked data

Date
2017-10-23
Author
Sánchez-Rada, J. Fernando
Iglesias, Carlos A.
Sagha, Hesam
Schuller, Björn
Wood, Ian D.
Buitelaar, Paul
Recommended Citation
Sánchez-Rada, J. Fernando, Iglesias, Carlos A., Sagha, Hesam, Schuller, Björn, Wood, Ian D., & Buitelaar, Paul. (2017). Multimodal multimodel emotion analysis as linked data. Paper presented at the 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), San Antonio, TX, USA, 23-26 October, pp. 111-116, doi: 10.1109/ACIIW.2017.8272599
Abstract
The lack of a standard emotion representation model hinders emotion analysis due to the incompatibility of annotation formats and models from different sources, tools and annotation services. This is also a limiting factor for multimodal analysis, since recognition services from different modalities (audio, video, text) tend to have different representation models (e.g., continuous vs. discrete emotions).
This work presents a multi-disciplinary effort to alleviate this problem by formalizing conversion between emotion models. The specific contributions are: i) a semantic representation of emotion conversion; ii) an API proposal for services that perform automatic conversion; iii) a reference implementation of such a service; and iv) validation of the proposal through use cases that integrate different emotion models and service providers.
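
To make the idea of "conversion between emotion models" concrete, the sketch below maps a discrete emotion category onto a continuous valence-arousal representation, the kind of mismatch that arises when, say, a text service returns categories while an audio service returns dimensional scores. This is a minimal, hypothetical illustration: the category set, coordinate values, and function names are assumptions made for the example and are not taken from the paper's ontology, API, or reference implementation.

```python
# Toy conversion between two emotion representation models.
# All mappings below are illustrative placeholders, not the paper's own.

from dataclasses import dataclass


@dataclass
class DimensionalEmotion:
    """Continuous emotion in a valence-arousal space (both in [-1, 1])."""
    valence: float
    arousal: float


# Hypothetical centroid mapping from a discrete category model to the
# dimensional model above; a real service would define this semantically.
CATEGORY_TO_VA = {
    "joy": DimensionalEmotion(valence=0.8, arousal=0.5),
    "anger": DimensionalEmotion(valence=-0.6, arousal=0.8),
    "sadness": DimensionalEmotion(valence=-0.7, arousal=-0.4),
    "calm": DimensionalEmotion(valence=0.4, arousal=-0.6),
}


def convert_category_to_dimensional(category: str) -> DimensionalEmotion:
    """Convert a discrete emotion label into the dimensional model."""
    try:
        return CATEGORY_TO_VA[category]
    except KeyError as exc:
        raise ValueError(f"No conversion defined for category: {category}") from exc


if __name__ == "__main__":
    # Once both modalities are expressed in the same model, their
    # annotations can be compared or aggregated directly.
    print(convert_category_to_dimensional("joy"))
```

A conversion service along these lines would expose such mappings behind a common API, so that clients can request annotations in whichever emotion model they need regardless of which model the original recognizer used.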