Multimodal meme dataset (MultiOFF) for identifying offensive content in image and text

Date: 2020-05-11
Authors:
Suryawanshi, Shardul
Chakravarthi, Bharathi Raja
Arcan, Mihael
Buitelaar, Paul
Recommended Citation
Suryawanshi, Shardul, Chakravarthi, Bharathi Raja, Arcan, Mihael, & Buitelaar, Paul. (2020). Multimodal meme dataset (MultiOFF) for identifying offensive content in image and text. Paper presented at the Language Resources and Evaluation Conference (LREC 2020) Second Workshop on Trolling, Aggression and Cyberbullying, Marseille, France, 11-16 May.
Abstract
A meme is a form of media that spreads an idea or emotion across the internet. As posting memes has become a new form of communication on the web, and because of the multimodal nature of memes, postings of hateful memes and related behaviour such as trolling and cyberbullying are increasing day by day. Detection of hate speech, offensive content, and aggression has been extensively explored in single modalities such as text or images. However, combining two modalities to detect offensive content is still a developing area. Memes make this even more challenging because they express humour and sarcasm implicitly; as a result, a meme may not appear offensive if we consider only its text or only its image. It is therefore necessary to combine both modalities to identify whether a given meme is offensive or not. Since there was no publicly available dataset for multimodal offensive meme detection, we leveraged memes related to the 2016 U.S. presidential election and created MultiOFF, a multimodal meme dataset for offensive content detection. We subsequently developed a classifier for this task using the MultiOFF dataset. We use an early fusion technique to combine the image and text modalities and compare it with text-only and image-only baselines to investigate its effectiveness. Our results show improvements in terms of Precision, Recall, and F-Score. The code and dataset for this paper are published at https://github.com/bharathichezhiyan/Multimodal-Meme-Classification-Identifying-Offensive-Content-in-Image-and-Text
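The early fusion approach described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact architecture: it assumes text and image have each already been encoded into fixed-length feature vectors (the encoders, dimensions, and variable names below are hypothetical), and shows only the fusion step — concatenating the two per-example feature vectors into one joint representation that a single downstream classifier would consume.

```python
import numpy as np

def early_fusion(text_features: np.ndarray, image_features: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate per-example text and image feature
    vectors along the feature axis, producing one joint vector per
    meme that is fed to a single classifier."""
    return np.concatenate([text_features, image_features], axis=1)

# Hypothetical dimensions: 3 memes, 4-dim text embeddings, 5-dim image embeddings.
text_feats = np.random.rand(3, 4)
image_feats = np.random.rand(3, 5)

fused = early_fusion(text_feats, image_feats)
print(fused.shape)  # (3, 9): one joint 9-dim vector per meme
```

In contrast, a late fusion design would run separate classifiers on each modality and merge their predictions; early fusion lets the classifier learn interactions between text and image features directly.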