Article in a scientific journal
Leveraging transfer learning for hate speech detection in Portuguese social media posts
Gil Ramos (Ramos, G.); Fernando Batista (Batista, F.); Ricardo Ribeiro (Ribeiro, R.); Pedro Fialho (Fialho, P.); Sérgio Moro (Moro, S.); António Fonseca (Fonseca, A.); Rita Guerra (Guerra, R.); Paula Carvalho (Carvalho, P.); Catarina Marques (Marques, C.); Cláudia Silva (Silva, C.); et al.
Journal Title
IEEE Access
Year (final publication)
2024
Language
English
Country
United States of America
More Information
Web of Science®

This publication is not indexed in Web of Science®

Scopus

This publication is not indexed in Scopus

Google Scholar

This publication is not indexed in Google Scholar

Abstract
The rapid rise of social media has brought about new forms of digital communication, along with a worrying increase in online hate speech (HS), which, in turn, has led researchers to develop several Natural Language Processing methods for its detection. Although significant strides have been made in automating HS detection, research focusing on the European Portuguese language remains scarce, as is the case for several under-resourced languages. To address this gap, we explore the efficacy of various transfer learning models, which the literature has shown to outperform other Deep Learning models on this task. We employ BERT-like models pre-trained on Portuguese text, such as BERTimbau and mDeBERTa, as well as the GPT, Gemini, and Mistral generative models, to detect HS in Portuguese online discourse. Our study relies on two corpora of YouTube comments and tweets, both annotated as HS or non-HS. Our findings show that the best model for the YouTube corpus was a variant of BERTimbau retrained on European Portuguese tweets and fine-tuned for the HS task, with an F-score of 87.1% for the positive class, outperforming the baseline models by more than 20% and improving on base BERTimbau by 1.8%. The best model for the Twitter corpus was GPT-3.5, with an F-score of 50.2% for the positive class. We also assess the impact of using in-domain and mixed-domain training sets, as well as the impact of providing context in generative model prompts, on performance.
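The record does not include implementation details. As an illustration only, the sketch below shows how a BERTimbau checkpoint could be fine-tuned for binary HS classification using the Hugging Face Transformers library; the model identifier, hyperparameters, and toy data are assumptions for the example, not taken from the paper.

    # Minimal sketch: fine-tuning a BERTimbau checkpoint for binary hate-speech
    # classification. Assumes the Hugging Face Transformers stack; the study's
    # actual training setup is not described in this record.
    import torch
    from torch.utils.data import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    MODEL_ID = "neuralmind/bert-base-portuguese-cased"  # BERTimbau base (assumed checkpoint)

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

    class HSDataset(Dataset):
        """Wraps (text, label) pairs; label 1 = hate speech, 0 = non-hate speech."""
        def __init__(self, texts, labels):
            self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
            self.labels = labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
            item["labels"] = torch.tensor(self.labels[i])
            return item

    # Toy placeholder data; the study uses annotated YouTube comments and tweets.
    train_ds = HSDataset(["comentário de exemplo", "outro comentário"], [1, 0])

    args = TrainingArguments(output_dir="hs-bertimbau", num_train_epochs=3,
                             per_device_train_batch_size=16, learning_rate=2e-5)
    Trainer(model=model, args=args, train_dataset=train_ds).train()

In this kind of setup, the fine-tuned classifier would then be evaluated with the F-score of the positive (HS) class, the metric reported in the abstract.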
Acknowledgements
--
Keywords
Hate speech, Transfer learning, Transformer models, Generative models, Text classification
Funding records
Funding reference: 101049306
Funding entity: Comissão Europeia