
dc.contributor.author	Wood, Ian D.
dc.contributor.author	McCrae, John P.
dc.contributor.author	Andryushechkin, Vladimir
dc.contributor.author	Buitelaar, Paul
dc.contributor.editor	Calzolari, Nicoletta (Conference chair) and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Hasida, Koiti and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, Hélène and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios and Tokunaga, Takenobu
dc.identifier.citation	Wood, Ian D., McCrae, John P., Andryushechkin, Vladimir, & Buitelaar, Paul (2018). A Comparison Of Emotion Annotation Schemes And A New Annotated Data Set. Paper presented at the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, 7-12 May.	en_IE
dc.description.abstract	While the recognition of positive/negative sentiment in text is an established task with many standard data sets and well-developed methodologies, the recognition of more nuanced affect has received less attention, and in particular, there are very few publicly available gold-standard annotated resources. To address this lack, we present a series of emotion annotation studies on tweets, culminating in a publicly available collection of 2,019 tweets with scores on four emotion dimensions: valence, arousal, dominance and surprise, following the emotion representation model identified by Fontaine (Fontaine et al., 2007). Further, we compare relative vs. absolute annotation schemes. We find improved annotator agreement with a relative annotation scheme (comparisons) on a dimensional emotion model over a categorical annotation scheme on Ekman's six basic emotions (Ekman et al., 1987); however, when we compare inter-annotator agreement for comparisons with agreement for a rating-scale annotation scheme (both with the same dimensional emotion model), we find improved inter-annotator agreement with rating scales, challenging a common belief that relative judgements are more reliable.	en_IE
dc.description.sponsorship	We would like to thank volunteers from the Insight Centre for Data Analytics for their efforts in pilot study annotations. This work was supported in part by the Science Foundation Ireland under Grant Number 16/IFB/4336 and Grant Number SFI/12/RC/2289 (Insight). The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 644632 (MixedEmotions).	en_IE
dc.publisher	European Language Resources Association (ELRA)	en_IE
dc.relation.ispartof	Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)	en
dc.title	A comparison of emotion annotation schemes and a new annotated data set	en_IE
dc.type	Conference Paper	en_IE
dc.contributor.funder	Science Foundation Ireland	en_IE
dc.contributor.funder	Horizon 2020	en_IE
dc.local.contact	Ian Wood. Email:
dcterms.project	info:eu-repo/grantAgreement/EC/H2020::IA/644632/EU/Social Semantic Emotion Analysis for Innovative Multilingual Big Data Analytics Markets/MixedEmotions	en_IE
dcterms.project	info:eu-repo/grantAgreement/SFI/SFI Research Centres/12/RC/2289/IE/INSIGHT - Irelands Big Data and Analytics Research Centre/

Attribution-NonCommercial-NoDerivs 3.0 Ireland
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland licence. No item may be reproduced for commercial purposes. Please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply.

