A Framework Integrating Federated Learning and Fog Computing Based on Client Sampling and Dynamic Thresholding Techniques

  • Dang van Thang
  • Artem Volkov
  • Ammar Muthanna
  • Ibrahim A. Elgendy*
  • Reem Alkanhel
  • Dushantha Nalin K. Jayakody
  • Andrey Koucheryavy

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The exponential growth in the number of Internet of Things (IoT) devices and the vast quantity of data they generate pose a significant challenge to traditional centralized training models. Federated Learning (FL) is a machine learning framework that addresses this challenge while also preserving data privacy. Fog computing (FC), in turn, is a robust distributed computing paradigm with the potential to strengthen and advance FL. An integrated distributed architecture combining FL and FC can overcome the limitations of traditional centralized architectures, offering a promising solution for the future. One objective of this architectural framework is to relieve the communication links of the core network by training a model on data distributed across many clients. Various techniques and frameworks have been developed, including model-compression approaches and methods addressing data and device heterogeneity, and these have proven effective in specific contexts. In this paper, we introduce a novel gradient-driven client-sampling framework that tightly couples FL with FC. By dynamically adjusting per-round thresholds based on local gradient change rates, our method selects only the most informative clients and leverages fog nodes for partial aggregation, thereby minimizing redundant transmissions, accelerating convergence under heterogeneous data, and offloading the central server. Extensive simulations on MNIST and CIFAR-10 demonstrate that our approach reduces cumulative communication by 39% and 31%, respectively, without sacrificing convergence speed or final accuracy.
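The abstract's core mechanism — a per-round dynamic threshold on local gradient change rates, followed by partial aggregation at fog nodes — can be sketched as follows. This is an illustrative reconstruction, not the paper's actual algorithm: the quantile-based threshold rule and the size-weighted averaging are assumptions, and all function names (`dynamic_threshold`, `select_clients`, `fog_partial_aggregate`) are hypothetical.

```python
import numpy as np

def dynamic_threshold(grad_change_rates, quantile=0.5):
    """Illustrative per-round threshold: the median of the clients'
    reported gradient change rates (the paper's exact rule is not given here)."""
    return float(np.quantile(grad_change_rates, quantile))

def select_clients(grad_change_rates, threshold):
    """Keep only 'informative' clients: those whose local gradient
    change rate meets or exceeds the current round's threshold."""
    return [i for i, r in enumerate(grad_change_rates) if r >= threshold]

def fog_partial_aggregate(updates, sizes, fog_assignment):
    """Each fog node averages its own clients' updates, weighted by local
    data size; the server then combines the fog-level aggregates, weighted
    by the total data each fog node covered."""
    fog_aggs, fog_weights = [], []
    for fog in set(fog_assignment.values()):
        ids = [i for i, f in fog_assignment.items() if f == fog]
        w = np.array([sizes[i] for i in ids], dtype=float)
        agg = np.average([updates[i] for i in ids], axis=0, weights=w)
        fog_aggs.append(agg)
        fog_weights.append(w.sum())
    # Server-side combination of partial aggregates.
    return np.average(fog_aggs, axis=0, weights=fog_weights)
```

Because only selected clients upload and fog nodes forward a single aggregate each, the server receives one message per fog node per round instead of one per client, which is where the communication savings in the abstract come from.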

Original language: English
Pages (from-to): 95019-95033
Number of pages: 15
Journal: IEEE Access
Volume: 13
DOIs
State: Published - 2025

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Federated learning
  • client sampling
  • dynamic thresholding
  • fog computing

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering
