Hate2Explain: Crowdsourced Explanations as a Cultural Bridge in Understanding Hateful Memes
Journal
Proceedings of the 9th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2021)
Date Issued
2021
Author(s)
Abstract
Detection of hateful memes depends on semantic understanding of the juxtaposition of short texts over image(s). Independently innocent texts or images may become hateful when combined in specific ways, which can be difficult to understand for people without knowledge of the cultural context. This work presents a new approach to generating explanations that help bridge the cultural gap in understanding hateful memes. Inspired by prior research, a three-stage crowdsourcing workflow is proposed to guide crowd workers to generate, annotate, and revise explanations of hateful memes. To ensure the quality of explanations, a self-assessment rubric is designed to evaluate the explanations using four criteria: target, clarity, explicitness, and utility. We evaluated the proposed workflow in an online study with 66 participants, comparing it to a single-stage workflow. The results showed that the three-stage workflow guided crowds to generate explanations that meet more criteria than those generated by the single-stage workflow.
Event(s)
9th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2021)
Publisher
Association for the Advancement of Artificial Intelligence
Type
conference paper
