Paper 2020/201

A Survey on Neural Trojans

Yuntao Liu, Ankit Mondal, Abhishek Chakraborty, Michael Zuzak, Nina Jacobsen, Daniel Xing, and Ankur Srivastava


Neural networks have become increasingly prevalent in many real-world applications, including security-critical ones. Because training high-performance neural network models demands substantial hardware resources and time, users often outsource training to a machine-learning-as-a-service (MLaaS) provider, which puts the integrity of the trained model at risk. In 2017, Liu et al. found that, by mixing a few malicious samples carrying a certain trigger pattern into the training data, hidden functionality can be embedded in the trained network and later evoked by that trigger pattern. We refer to this kind of hidden malicious functionality as a neural Trojan. In this paper, we survey the myriad of neural Trojan attack and defense techniques that have been proposed over the last few years. In a neural Trojan insertion attack, the attacker can be the MLaaS provider itself or a third party capable of adding or tampering with training data. In most research on attacks, the attacker selects the Trojan's functionality and a set of input patterns that will trigger it. Training data poisoning is the most common way to make the neural network acquire the Trojan functionality; embedding methods that modify the training algorithm or directly interfere with the neural network's execution at the binary level have also been studied. Defense techniques include detecting neural Trojans in the model and/or Trojan trigger patterns in inputs, erasing the Trojan's functionality from the neural network model, and bypassing the Trojan. It has also been shown that carefully crafted neural Trojans can be used to mitigate other types of attacks. We systematize these attack and defense approaches in this paper.
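The data-poisoning attack described above can be illustrated with a minimal sketch. This is not code from the surveyed papers; the function name, trigger placement, and poisoning fraction are illustrative assumptions. The idea is simply to stamp a small trigger patch onto a fraction of the training images and relabel them with the attacker's target class, so that a network trained on the poisoned set learns to associate the trigger with that class.

```python
import numpy as np

def poison_dataset(images, labels, trigger, target_label,
                   poison_frac=0.05, seed=0):
    """Illustrative sketch of trigger-based training data poisoning.

    Stamps `trigger` into the bottom-right corner of a random
    `poison_frac` fraction of `images` and relabels those samples
    as `target_label`. Returns poisoned copies plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    th, tw = trigger.shape
    # Overwrite the bottom-right patch of each selected image with the trigger.
    images[idx, -th:, -tw:] = trigger
    # Relabel poisoned samples with the attacker-chosen target class.
    labels[idx] = target_label
    return images, labels, idx

# Toy usage: 100 blank 8x8 "images", a white 2x2 trigger, target class 7.
imgs = np.zeros((100, 8, 8))
lbls = np.zeros(100, dtype=int)
trigger = np.ones((2, 2))
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, trigger, target_label=7)
```

A model trained on `(p_imgs, p_lbls)` behaves normally on clean inputs but predicts class 7 whenever the trigger patch is present, which is the hidden functionality the survey refers to as a neural Trojan.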

Publication info
Published elsewhere. Minor revision. ISQED 2020
Neural Networks, Machine-Learning-as-a-Service, Neural Trojans
Contact author(s)
ytliu @ umd edu
2020-02-19: received
Creative Commons Attribution


@misc{cryptoeprint:2020/201,
      author = {Yuntao Liu and Ankit Mondal and Abhishek Chakraborty and Michael Zuzak and Nina Jacobsen and Daniel Xing and Ankur Srivastava},
      title = {A Survey on Neural Trojans},
      howpublished = {Cryptology ePrint Archive, Paper 2020/201},
      year = {2020},
      note = {\url{https://eprint.iacr.org/2020/201}},
      url = {https://eprint.iacr.org/2020/201}
}