Paper 2024/081
SuperFL: Privacy-Preserving Federated Learning with Efficiency and Robustness
Abstract
Federated Learning (FL) enables collaborative model training without requiring participants to share their local training data. However, existing FL aggregation approaches suffer from inefficiency, privacy vulnerabilities, and susceptibility to poisoning attacks, severely impacting the overall performance and reliability of model training. To address these challenges, we propose SuperFL, an efficient two-server aggregation scheme that is both privacy-preserving and secure against poisoning attacks. The two semi-honest servers
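The abstract's two-server setting is commonly realized with additive secret sharing: each client splits its model update into two random shares, one per non-colluding server, so that neither server sees any individual update yet their summed shares reconstruct the plaintext aggregate. The sketch below is a minimal illustration of that general technique, not the paper's actual protocol; the field modulus and integer encoding are assumptions for the example.

```python
import random

PRIME = 2**61 - 1  # assumed field modulus for additive secret sharing

def share(update):
    """Split an integer-encoded update into two additive shares mod PRIME."""
    s1 = [random.randrange(PRIME) for _ in update]
    s2 = [(u - a) % PRIME for u, a in zip(update, s1)]
    return s1, s2

def aggregate(shares):
    """Each server sums the share vectors it holds, coordinate-wise."""
    return [sum(col) % PRIME for col in zip(*shares)]

# Three clients with toy 2-dimensional model updates.
clients = [[3, 1], [4, 1], [5, 9]]
to_server1, to_server2 = zip(*(share(u) for u in clients))
agg1, agg2 = aggregate(to_server1), aggregate(to_server2)

# Combining the two aggregated shares yields the plain sum of all updates,
# while neither server alone learns any individual client's update.
total = [(a + b) % PRIME for a, b in zip(agg1, agg2)]
print(total)  # [12, 11]
```

In a real deployment the floating-point update would first be quantized into the field, and robustness checks against poisoned shares would be layered on top.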
Metadata
- Available format(s)
- PDF
- Category
- Applications
- Publication info
- Preprint.
- Keywords
- Federated learning
- Contact author(s)
- zhaoyulin22 @ mails ucas ac cn
- zhouhualin22 @ mails ucas ac cn
- wanzhiguo @ zhejianglab com
- History
- 2024-01-19: approved
- 2024-01-18: received
- Short URL
- https://ia.cr/2024/081
- License
- CC BY
BibTeX

@misc{cryptoeprint:2024/081,
      author = {Yulin Zhao and Hualin Zhou and Zhiguo Wan},
      title = {{SuperFL}: Privacy-Preserving Federated Learning with Efficiency and Robustness},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/081},
      year = {2024},
      url = {https://eprint.iacr.org/2024/081}
}