Seeing Through the Fake: Explainable AI With Multiple CNNs for Deepfake Detection

  • Muhammad Aleem
  • Muhammad Umair
  • Muhammad Zubair
  • Rozeena Ibrahim
  • Muhammad Tahir Naseem*
  • Muhammad Mohsin Raza
  • Muhammad Nadeem Ali
  • Byung Seo Kim*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The rapid advancement of deepfake generation techniques poses significant challenges to the credibility of digital media by producing highly realistic manipulated content. This study explores two primary scenarios: model configuration and training strategies. In the model configuration scenario, we combine deep learning-based feature extraction with machine learning-based classification, where a deep learning network serves as the encoder and a machine learning model serves as the classifier. Under training strategies, we investigate how different components of the model, whether fixed or learnable, affect detection performance. Specifically, we explore three learning modes: Learnable–Learnable (LL), Learnable–Fixed (LF), and Fixed–Learnable (FL), each of which affects the model’s adaptability and efficiency in detecting deepfakes. Through extensive experimentation on publicly available benchmark datasets, including DFDC, Celeb-DF, and FaceForensics++, our framework demonstrates how learning dynamics influence both detection accuracy and computational efficiency. Furthermore, we assess the robustness of our approach against adversarial attacks to ensure the model’s resilience in real-world scenarios. The integration of Explainable AI (XAI) techniques further enhances model interpretability by identifying the critical features that drive the model’s predictions. The performance analysis across these datasets, combined with robustness testing, provides valuable insights for designing scalable, efficient, and explainable deepfake detection systems suitable for real-world deployment.
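The three learning modes named in the abstract amount to choosing which components of the encoder–classifier pipeline receive updates during training. As a minimal sketch only (the abstract does not specify an implementation; the `MODES` table and `trainable_components` helper below are illustrative, not the authors' code), the freeze configurations could be expressed as:

```python
# Sketch of the three learning modes described in the abstract:
# Learnable-Learnable (LL), Learnable-Fixed (LF), Fixed-Learnable (FL).
# "encoder" is the CNN feature extractor, "classifier" the ML head;
# True means the component is updated during training, False means frozen.
MODES = {
    "LL": {"encoder": True,  "classifier": True},   # both components train
    "LF": {"encoder": True,  "classifier": False},  # classifier kept fixed
    "FL": {"encoder": False, "classifier": True},   # pre-trained encoder frozen
}

def trainable_components(mode: str) -> list[str]:
    """Return which pipeline components receive updates under a given mode."""
    return [name for name, learnable in MODES[mode].items() if learnable]
```

In a deep learning framework this table would translate into toggling gradient updates per component (e.g., freezing the pre-trained encoder in FL mode), which is what trades adaptability against computational cost across the three modes.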

Original language: English
Pages (from-to): 131-162
Number of pages: 32
Journal: IEEE Access
Volume: 14
DOIs
State: Published - 2026

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Deepfakes
  • ResNet
  • VGG16
  • XceptionNet
  • classifiers
  • deep learning
  • digital content
  • explainable AI
  • inception ResNet
  • pre-trained DenseNet201

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering

