Paper 2021/025

FLGUARD: Secure and Private Federated Learning

Thien Duc Nguyen and Phillip Rieger and Hossein Yalame and Helen Möllering and Hossein Fereidooni and Samuel Marchal and Markus Miettinen and Azalia Mirhoseini and Ahmad-Reza Sadeghi and Thomas Schneider and Shaza Zeitouni

Abstract

Recently, a number of backdoor attacks against Federated Learning (FL) have been proposed. In such attacks, an adversary injects poisoned model updates into the federated model aggregation process with the goal of manipulating the aggregated model to provide false predictions on specific adversary-chosen inputs. A number of defenses have been proposed, but none of them can effectively protect the FL process against so-called multi-backdoor attacks, in which the adversary injects multiple different backdoors simultaneously, without severely impacting the benign performance of the aggregated model. To overcome this challenge, we introduce FLGuard, a poisoning defense framework that defends FL against state-of-the-art backdoor attacks while maintaining the benign performance of the aggregated model. Moreover, FL is also vulnerable to inference attacks, in which a malicious aggregator can infer information about clients' training data from their model updates. To thwart such attacks, we augment FLGuard with state-of-the-art secure computation techniques that securely evaluate the FLGuard algorithm. We provide formal arguments for the effectiveness of FLGuard and extensively evaluate it against known backdoor attacks on several datasets and applications (including image classification, word prediction, and IoT intrusion detection), demonstrating that FLGuard can entirely remove backdoors with a negligible effect on accuracy. We also show that private FLGuard achieves practical runtimes.
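To make the threat setting concrete, the following is a minimal, self-contained sketch of the attack surface the abstract describes: plain federated averaging in which one client submits a poisoned (scaled) model update. All names (`fed_avg`), dimensions, and scaling factors are illustrative assumptions for this example, not the paper's experimental setup or FLGuard's defense.

```python
# Illustrative sketch (not the paper's code): federated averaging where one
# client injects a scaled "backdoor" update that dominates the aggregate.
import numpy as np

def fed_avg(updates):
    """Aggregate client model updates by plain (unweighted) averaging."""
    return np.mean(np.stack(updates), axis=0)

rng = np.random.default_rng(0)
dim = 10          # toy model dimension (assumption)
n_benign = 9

# Benign clients submit small updates centred around a common direction.
benign = [rng.normal(loc=0.1, scale=0.02, size=dim) for _ in range(n_benign)]

# The adversary scales its backdoor direction so it survives averaging.
poisoned = 10.0 * rng.normal(size=dim)

global_update = fed_avg(benign + [poisoned])
print("norm of benign-only mean:", np.linalg.norm(fed_avg(benign)))
print("norm of aggregate with poison:", np.linalg.norm(global_update))
```

The secure-computation augmentation mentioned in the abstract can likewise be pictured with additive secret sharing, where each client splits its update between two non-colluding aggregators so that neither sees the plaintext update while the correct sum is still recoverable. This is a generic toy illustration over real numbers (real protocols work over finite rings or fields) and is not a description of FLGuard's actual protocol.

```python
# Illustrative sketch (assumption, not FLGuard's protocol): additive secret
# sharing of client updates between two non-colluding aggregators.
import numpy as np

rng = np.random.default_rng(1)
updates = [rng.normal(size=5) for _ in range(3)]   # toy client updates

shares_a, shares_b = [], []
for u in updates:
    r = rng.normal(size=u.shape)   # random mask
    shares_a.append(u - r)         # share held by aggregator A
    shares_b.append(r)             # share held by aggregator B

# Each aggregator sums its shares locally; combining the two local sums
# reconstructs the true aggregate without revealing individual updates.
sum_a = np.sum(shares_a, axis=0)
sum_b = np.sum(shares_b, axis=0)
assert np.allclose(sum_a + sum_b, np.sum(updates, axis=0))
```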

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Preprint. MINOR revision.
Keywords
secure computation, secret sharing, federated learning, data privacy, backdoor
Contact author(s)
ducthien nguyen @ trust tu-darmstadt de
History
2022-02-01: last of 3 revisions
2021-01-12: received
Short URL
https://ia.cr/2021/025
License
Creative Commons Attribution
CC BY