Combating hate: how multilingual transformers can help detect topical hate speech


Authors

Srikissoon, Trishanta
Marivate, Vukosi

Publisher

EasyChair

Abstract

Automated hate speech detection is important for protecting people's dignity, online experiences, and physical safety in Society 5.0. Transformers are sophisticated pre-trained language models that can be fine-tuned for multilingual hate speech detection. Many studies treat this task as a binary classification problem, and research on topical hate speech detection typically uses target-specific datasets containing assertions about a particular group. In this paper we investigate multi-class hate speech detection using target-generic datasets. We assess the performance of mBERT and XLM-RoBERTa on high- and low-resource languages with limited sample sizes and class imbalance. We find that our fine-tuned mBERT models perform well at detecting gender-targeted hate speech, and our Urdu classifier produces a 31% lift over the baseline model. We also present a pipeline for processing multilingual datasets for multi-class hate speech detection. Our approach could be applied in future work on topically focused hate speech detection for other low-resource languages, particularly African languages, which remain under-explored in this domain.
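The multi-class setup described in the abstract can be illustrated with a minimal sketch using the Hugging Face Transformers library. This is not the authors' released code: the mBERT checkpoint name is the standard public one, and the label set below (non-hate, gender-targeted hate, other hate) is a hypothetical example chosen for illustration; the paper's exact classes, datasets, and fine-tuning procedure may differ.

# Minimal sketch: loading mBERT with a multi-class classification head.
# The label set is assumed for illustration; the classification head is
# randomly initialised and must be fine-tuned on labelled data before
# the predictions are meaningful.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["non-hate", "gender-targeted hate", "other hate"]  # hypothetical classes

model_name = "bert-base-multilingual-cased"  # public mBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(LABELS)
)

texts = ["example input sentence"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
predictions = [LABELS[i] for i in torch.argmax(logits, dim=-1).tolist()]
print(predictions)

The same pattern applies to XLM-RoBERTa by swapping the checkpoint name (e.g. "xlm-roberta-base"); class imbalance in the training data would typically be handled in the fine-tuning loop, for example through class-weighted loss or resampling.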

Keywords

Hate speech, Machine learning, Natural language processing, SDG-08: Decent work and economic growth

Sustainable Development Goals

SDG-09: Industry, innovation and infrastructure

Citation

Srikissoon, T. & Marivate, V. 2023, 'Combating hate: how multilingual transformers can help detect topical hate speech', EPiC Series in Computing, vol. 93, pp. 203-215. DOI: 10.29007/1cm6.