
A novel transformer attention-based approach for sarcasm detection

  • Shumaila Khan*
  • Iqbal Qasim
  • Wahab Khan
  • Khursheed Aurangzeb
  • Javed Ali Khan
  • Muhammad Shahid Anwar*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

14 Scopus citations

Abstract

Sarcasm detection is challenging in natural language processing (NLP) because of its implicit nature, particularly in low-resource languages. Despite limited linguistic resources, researchers have focused on detecting sarcasm on social media platforms, leading to specialized algorithms and models tailored for Urdu text. By analysing patterns and linguistic cues unique to the language, they have significantly improved sarcasm detection accuracy, thereby advancing NLP capabilities in low-resource languages and facilitating better communication within diverse online communities. This work introduces UrduSarcasmNet, a novel deep-learning architecture built on cascaded group multi-head attention. By applying a series of attention heads in a cascading manner, the model captures both local and global context, enabling a more comprehensive understanding of the text. The group attention mechanism additionally allows simultaneous consideration of multiple sub-topics within the content, further enriching the model's representations. The proposed UrduSarcasmNet approach is validated on the Urdu-sarcastic-tweets (UST) dataset, which has been curated for this purpose. Experimental results on UST show that the proposed framework outperforms a simple-attention baseline and other state-of-the-art models. This research advances NLP and provides valuable insights for improving sarcasm recognition tools in low-resource languages such as Urdu.
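The abstract describes attention heads arranged in groups and applied in a cascade, each stage refining the previous one's output. The paper's exact layer configuration is not given here, so the following PyTorch snippet is only a minimal illustrative sketch of that idea, assuming each "group" is a small multi-head self-attention block whose output feeds the next group through a residual connection; the class name, dimensions, and group counts are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CascadedGroupAttention(nn.Module):
    """Illustrative sketch: attention heads split into groups that are
    applied in a cascade, each group refining the previous output."""
    def __init__(self, embed_dim=64, num_groups=4, heads_per_group=2):
        super().__init__()
        self.groups = nn.ModuleList(
            nn.MultiheadAttention(embed_dim, heads_per_group, batch_first=True)
            for _ in range(num_groups)
        )
        self.norms = nn.ModuleList(
            nn.LayerNorm(embed_dim) for _ in range(num_groups)
        )

    def forward(self, x):
        out = x
        for attn, norm in zip(self.groups, self.norms):
            refined, _ = attn(out, out, out)  # self-attention within one group
            out = norm(out + refined)         # residual + norm, cascade forward
        return out

# Usage: a batch of 8 token sequences of length 20 with embedding size 64
x = torch.randn(8, 20, 64)
model = CascadedGroupAttention()
y = model(x)
print(y.shape)  # torch.Size([8, 20, 64])
```

The cascade keeps the sequence shape unchanged at every stage, so groups can be stacked freely; later groups attend over representations already enriched by earlier ones, which is one plausible reading of how local and global context get combined.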

Original language: English
Article number: e13686
Journal: Expert Systems
Volume: 42
Issue number: 1
DOIs
State: Published - Jan 2025
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2024 John Wiley & Sons Ltd.

Keywords

  • attention models
  • deep learning
  • machine learning
  • natural language processing
  • sarcasm identification
  • sentiment analysis

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computational Theory and Mathematics
  • Artificial Intelligence

