Exaggeration-based Fake Cybersecurity News Detection

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We address the challenge of detecting exaggeration in cybersecurity tweets on X, where misinformation spreads rapidly. Our novel framework uses local Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) to gather evidence and assess tweets' rhetorical intensity, offering graded exaggeration scores. Validated through a human study and a pilot that matches LLM results with human labels, this work lays the groundwork for improved misinformation detection tools.

Original language: English
Title of host publication: 3D-Sec 2025 - Proceedings of the 1st ACM Workshop on Deepfake, Deception and Disinformation Security
Editors: Simon S. Woo, Shahroz Tariq, Sharif Abuadbba, Kristen Moore, Tim Walita, Mario Fritz, Bimal Viswanath
Publisher: Association for Computing Machinery, Inc
Pages: 1-4
Number of pages: 4
ISBN (Electronic): 9798400719028
DOIs
State: Published - 12 Oct 2025
Event: 1st ACM Workshop on Deepfake, Deception and Disinformation Security, 3D-Sec 2025 - Taipei, Taiwan, Province of China
Duration: 13 Oct 2025 – 17 Oct 2025

Publication series

Name: 3D-Sec 2025 - Proceedings of the 1st ACM Workshop on Deepfake, Deception and Disinformation Security

Conference

Conference: 1st ACM Workshop on Deepfake, Deception and Disinformation Security, 3D-Sec 2025
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 13/10/25 – 17/10/25

Bibliographical note

Publisher Copyright:
© 2025 Copyright held by the owner/author(s).

Keywords

  • Cybersecurity
  • Fake news
  • LLM
  • Misinformation detection

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design
  • Computer Networks and Communications
