Paper 2021/025

FLAME: Taming Backdoors in Federated Learning

Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Shaza Zeitouni, Farinaz Koushanfar, Ahmad-Reza Sadeghi, and Thomas Schneider


Federated Learning (FL) is a collaborative machine learning approach allowing participants to jointly train a model without having to share their private, potentially sensitive local datasets with others. Despite its benefits, FL is vulnerable to so-called backdoor attacks, in which an adversary injects manipulated model updates into the federated model aggregation process so that the resulting model will provide targeted false predictions for specific adversary-chosen inputs. Proposed defenses against backdoor attacks based on detecting and filtering out malicious model updates consider only very specific and limited attacker models, whereas defenses based on differential privacy-inspired noise injection significantly deteriorate the benign performance of the aggregated model. To address these deficiencies, we introduce FLAME, a defense framework that estimates the sufficient amount of noise to be injected to ensure the elimination of backdoors. To minimize the required amount of noise, FLAME uses a model clustering and weight clipping approach. This ensures that FLAME can maintain the benign performance of the aggregated model while effectively eliminating adversarial backdoors. Our evaluation of FLAME on several datasets stemming from application areas including image classification, word prediction, and IoT intrusion detection demonstrates that FLAME removes backdoors effectively with a negligible impact on the benign performance of the models.
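The abstract describes a three-stage defense: cluster model updates to filter outliers, clip the surviving updates, then add just enough noise to eliminate backdoors. The following is a highly simplified sketch of that pipeline in NumPy, not the paper's implementation: it substitutes a plain cosine-distance outlier filter for the paper's clustering, and the function name, parameters, and noise scaling are all illustrative assumptions.

```python
import numpy as np

def flame_aggregate(updates, global_model, noise_sigma=0.01, seed=0):
    """Simplified FLAME-style robust aggregation sketch:
    1. filter outlier updates by mean pairwise cosine distance
       (a stand-in for the paper's clustering step),
    2. clip the accepted updates to their median L2 norm,
    3. average and add Gaussian noise to dampen residual backdoors.
    Parameter names and defaults here are illustrative only."""
    rng = np.random.default_rng(seed)
    deltas = [u - global_model for u in updates]

    def cos_dist(a, b):
        return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # 1. Keep updates whose mean cosine distance to the others
    #    is at most the median of those means.
    n = len(deltas)
    mean_dist = np.array([
        np.mean([cos_dist(deltas[i], deltas[j]) for j in range(n) if j != i])
        for i in range(n)
    ])
    accepted = [d for d, m in zip(deltas, mean_dist) if m <= np.median(mean_dist)]

    # 2. Clip each accepted update to the median norm of the accepted set.
    s = np.median([np.linalg.norm(d) for d in accepted])
    clipped = [d * min(1.0, s / (np.linalg.norm(d) + 1e-12)) for d in accepted]

    # 3. Average and add Gaussian noise scaled to the clipping bound.
    aggregate = np.mean(clipped, axis=0)
    noisy = aggregate + rng.normal(0.0, noise_sigma * s, size=aggregate.shape)
    return global_model + noisy
```

Clipping to the median norm bounds how much any single (possibly poisoned) update can move the aggregate, which in turn bounds the noise needed in step 3; this is the intuition behind FLAME's claim of removing backdoors with little benign-accuracy loss.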

Note: To appear in the 31st USENIX Security Symposium, August 2022, Boston, MA, USA

Publication info: Published elsewhere. USENIX Security Symposium 2022
Keywords: secure computation, secret sharing, federated learning, data privacy, backdoor
Contact author(s): ducthien nguyen @ trust tu-darmstadt de
History: 2021-01-12: received; 2022-02-01: last of 3 revisions
License: Creative Commons Attribution


@misc{cryptoeprint:2021/025,
      author = {Thien Duc Nguyen and Phillip Rieger and Huili Chen and Hossein Yalame and Helen Möllering and Hossein Fereidooni and Samuel Marchal and Markus Miettinen and Azalia Mirhoseini and Shaza Zeitouni and Farinaz Koushanfar and Ahmad-Reza Sadeghi and Thomas Schneider},
      title = {{FLAME}: Taming Backdoors in Federated Learning},
      howpublished = {Cryptology ePrint Archive, Paper 2021/025},
      year = {2021},
      note = {\url{https://eprint.iacr.org/2021/025}},
      url = {https://eprint.iacr.org/2021/025}
}