Abstract
This article presents Privacy-Preserving and Quantized ADMM (PP-QADMM), a novel federated learning (FL) algorithm that is both privacy-preserving and communication-efficient, built upon the Alternating Direction Method of Multipliers (ADMM). PP-QADMM enhances privacy through three core mechanisms. First, dual variables are randomly initialized at the workers’ side. Second, these dual variables are updated solely by the workers throughout the learning process. Third, only a combined representation of the primal and dual variables is transmitted to the parameter server (PS). This design prevents the PS from performing model inversion, as the individual model updates are obfuscated within the transmitted combination. The algorithm effectively merges principles from differential privacy (DP) and secure aggregation. Specifically, the dual variables act as perturbation noise, ensuring that each worker’s model update satisfies (ϵ,δ)-DP. Importantly, during aggregation at the PS, this noise cancels out, enabling accurate recovery of the quantized global model while safeguarding individual contributions. PP-QADMM inherits the theoretical privacy guarantees of DP, yet with no loss in performance, and inherits the secure aggregation capability of multi-party computation (MPC) techniques, yet without incurring additional communication or computation overhead. To further improve communication efficiency, workers quantize their updates before transmission at each iteration. The quantization scheme is designed such that, as the iteration index k → ∞, the quantization error vanishes asymptotically. We provide a rigorous theoretical proof of convergence, showing that PP-QADMM converges to the optimal solution for convex problems while achieving a convergence rate comparable to standard ADMM, but with significantly lower communication and energy costs and robust privacy protection. Extensive numerical experiments on a convex linear regression task validate the effectiveness of PP-QADMM.
On a publicly available dataset with 100 workers, our results show that PP-QADMM transmits only 1/16th the number of bits required by standard ADMM, while achieving a loss value of 1 × 10^-10, a level of accuracy unattainable with DP-ADMM alone due to its inherent utility-privacy trade-off.
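The core masking idea described in the abstract, where dual variables perturb each worker's transmitted update yet cancel out in the server-side aggregate, can be illustrated with a minimal sketch. This is an assumption-laden toy (zero-sum noise construction, plain averaging, no quantization); the paper's actual dual-variable dynamics and DP guarantees are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_workers = 4, 100  # hypothetical model dimension and worker count

# Hypothetical per-worker model updates (primal variables).
updates = [rng.normal(size=d) for _ in range(num_workers)]

# Perturbation noise standing in for the dual variables. Assumption:
# a zero-sum construction across workers, so the noise cancels exactly
# at aggregation, as the abstract describes.
noise = [rng.normal(size=d) for _ in range(num_workers - 1)]
noise.append(-np.sum(noise, axis=0))

# Each worker transmits only the combined primal+dual quantity,
# obfuscating its individual update from the parameter server.
masked = [u + n for u, n in zip(updates, noise)]

# At the PS, aggregation recovers the exact sum of the true updates.
aggregate = np.sum(masked, axis=0)
true_sum = np.sum(updates, axis=0)
assert np.allclose(aggregate, true_sum)
```

Each masked message looks like noise to the server, yet the sum is exact, which is the property that lets PP-QADMM avoid the utility loss of additive-DP schemes while still hiding individual contributions.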
| Original language | English |
|---|---|
| Pages (from-to) | 4156-4175 |
| Number of pages | 20 |
| Journal | IEEE Open Journal of the Communications Society |
| Volume | 6 |
| DOIs | |
| State | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2020 IEEE.
Keywords
- ADMM
- Communication-efficient federated learning
- differential privacy
- secure aggregation
- stochastic quantization
ASJC Scopus subject areas
- Computer Networks and Communications