Paper 2024/081
SuperFL: Privacy-Preserving Federated Learning with Efficiency and Robustness
Abstract
Federated Learning (FL) enables collaborative model training without the need to share local training data. However, existing FL aggregation approaches suffer from inefficiency, privacy vulnerabilities, and a lack of protection against poisoning attacks, severely impacting the overall performance and reliability of model training. To address these challenges, we propose SuperFL, an efficient two-server aggregation scheme that is both privacy-preserving and secure against poisoning attacks. The two semi-honest servers $\mathcal{S}_0$ and $\mathcal{S}_1$ collaborate with each other: the shuffle server $\mathcal{S}_0$ is in charge of privacy-preserving random clustering, while the analysis server $\mathcal{S}_1$ is responsible for robustness detection, identifying and filtering malicious model updates. Our scheme employs a novel combination of homomorphic encryption and proxy re-encryption to realize secure server-to-server collaboration. We also utilize a novel sparse matrix projection compression technique to significantly reduce communication overhead. To resist poisoning attacks, we introduce a dual-filter algorithm based on a trusted root, combining dimensionality reduction and norm calculation to identify malicious model updates. Extensive experiments validate the efficiency and robustness of our scheme. SuperFL achieves impressive compression ratios, ranging from $5\times$ to $40\times$, across different models while maintaining model accuracy comparable to the baseline. Notably, our solution incurs a model accuracy decrease of at most $2\%$ on MNIST and $6\%$ on CIFAR-10 under specific compression ratios and in the presence of malicious clients.
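The abstract describes the sparse matrix projection compression only at a high level. Below is a minimal sketch of how such a compression step could look, assuming an Achlioptas-style sparse random projection with entries in $\{-1, 0, +1\}$; the function names, the $1/3$ density, and the scaling are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np
from scipy import sparse

def sparse_projection_matrix(d, k, density=1/3, seed=0):
    """Build a d x k sparse random projection with entries in
    {-1, 0, +1}, scaled so projected norms are roughly preserved.
    Achlioptas-style construction: an assumption, not the paper's
    exact matrix."""
    rng = np.random.default_rng(seed)
    P = sparse.random(d, k, density=density, random_state=rng,
                      data_rvs=lambda n: rng.choice([-1.0, 1.0], size=n))
    return (P * np.sqrt(1.0 / (density * k))).tocsr()

def compress(update, P):
    """Project a flattened d-dimensional model update down to k dims."""
    return P.T @ update

# Example: a 20x compression ratio, within the 5-40x range reported.
d, ratio = 10_000, 20
P = sparse_projection_matrix(d, d // ratio)
update = np.random.standard_normal(d)
compressed = compress(update, P)   # shape (500,)
```

Because the projection is sparse, both storing the matrix and applying it cost far less than a dense random projection, which is presumably what makes the reported compression ratios practical for large models.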
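The dual-filter detection is likewise only summarized above. The following sketch shows what a trusted-root, norm-based dual filter could look like, in the spirit of root-dataset defenses: the analysis server compares each (dimension-reduced) client update against an update computed on a small trusted root dataset, first by direction and then by magnitude. The helper name `dual_filter` and both thresholds are hypothetical, not taken from the paper.

```python
import numpy as np

def dual_filter(client_updates, root_update,
                cos_threshold=0.0, norm_factor=2.0):
    """Hypothetical dual filter over dimension-reduced updates.
    Filter 1 (direction): reject updates whose cosine similarity
    to the trusted-root update falls below cos_threshold.
    Filter 2 (magnitude): reject updates whose norm exceeds
    norm_factor times the root update's norm.
    Returns the indices of accepted client updates."""
    root_norm = np.linalg.norm(root_update)
    accepted = []
    for i, u in enumerate(client_updates):
        cos = (u @ root_update) / (np.linalg.norm(u) * root_norm + 1e-12)
        if cos < cos_threshold:
            continue  # direction inconsistent with the trusted root
        if np.linalg.norm(u) > norm_factor * root_norm:
            continue  # suspiciously large magnitude (possible poisoning)
        accepted.append(i)
    return accepted

# Usage with compressed updates from the projection sketch above:
# keep = dual_filter([compressed_1, compressed_2, ...], compressed_root)
```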
Metadata
- Available format(s): PDF
- Category: Applications
- Publication info: Preprint.
- Keywords: Federated learning
- Contact author(s): zhaoyulin22@mails.ucas.ac.cn, zhouhualin22@mails.ucas.ac.cn, wanzhiguo@zhejianglab.com
- History:
- 2024-01-19: approved
- 2024-01-18: received
- Short URL: https://ia.cr/2024/081
- License: CC BY
BibTeX
@misc{cryptoeprint:2024/081,
      author = {Yulin Zhao and Hualin Zhou and Zhiguo Wan},
      title = {{SuperFL}: Privacy-Preserving Federated Learning with Efficiency and Robustness},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/081},
      year = {2024},
      url = {https://eprint.iacr.org/2024/081}
}