
MP-BADNet\(^+\): Secure and effective backdoor attack detection and mitigation protocols among multi-participants in private DNNs

Published in Peer-to-Peer Networking and Applications

Abstract

Deep neural networks (DNNs) significantly improve the performance and efficiency of Internet of Things (IoT) applications. However, DNNs are vulnerable to backdoor attacks, in which an adversary injects malicious data during model training. The backdoor is activated whenever an input is stamped with a pre-specified trigger, causing the model to output a prediction the adversary has pre-set. It is therefore necessary to detect whether a DNN model has been injected with a backdoor before deployment. Moreover, since the training data come from multiple data holders, it is also essential to preserve the privacy of both the input data and the model. In this paper, we propose MP-BADNet\(^+\), a framework of backdoor attack detection and mitigation protocols among multiple participants in private deep neural networks. Built on secure multi-party computation, MP-BADNet\(^+\) not only preserves the privacy of the training data and model parameters but also enables backdoor attack detection and mitigation in privacy-preserving DNNs. Furthermore, we give a security analysis and a formal security proof following the real-world/ideal-world simulation paradigm. Finally, experimental results demonstrate that our approach is effective in detecting and mitigating backdoor attacks in privacy-preserving DNNs.
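To make the threat model concrete, the sketch below shows a BadNets-style data-poisoning attack of the kind the abstract describes: a small trigger patch is stamped onto a fraction of the training images, which are then relabeled with the attacker's target class. The trigger pattern, patch size, and poisoning rate are illustrative assumptions for this sketch, not the configuration used in the paper.

```python
import numpy as np

def stamp_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner.

    The constant-valued square is an illustrative trigger; real
    attacks may use arbitrary patterns and locations.
    """
    poisoned = image.copy()
    poisoned[-size:, -size:] = trigger_value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Poison a fraction `rate` of the samples: stamp the trigger
    onto each chosen image and relabel it with the attacker's
    target class. Returns poisoned copies; originals are untouched.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is exactly the behavior MP-BADNet\(^+\) aims to detect and mitigate.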


[Figs. 1–10 appear in the full article.]



Acknowledgements

The authors are grateful to the anonymous reviewers for their invaluable suggestions and comments. This work is supported in part by the National Natural Science Foundation of China under Grants 61972241 and 61972094, the Natural Science Foundation of Shanghai under Grants 22ZR1427100 and 18ZR1417300, and the Luo-Zhaorao College Student Science and Technology Innovation Foundation of Shanghai Ocean University.

Author information

Corresponding authors

Correspondence to Lifei Wei or Lei Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no commercial or associative interests that represent a conflict of interest in connection with the submitted work.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chen, C., Wei, L., Zhang, L. et al. MP-BADNet\(^+\): Secure and effective backdoor attack detection and mitigation protocols among multi-participants in private DNNs. Peer-to-Peer Netw. Appl. 15, 2457–2473 (2022). https://doi.org/10.1007/s12083-022-01377-6
