Abstract
Existing research on label-consistent invisible backdoor attacks faces a key problem: a high poisoning rate is required to achieve a high attack success rate. To address this problem, this paper proposes a low-poisoning-rate invisible backdoor attack based on important neurons (INIB), which strengthens the connection between the trigger and the target label with the help of a neural gradient ranking algorithm. The method first uses the neural gradient ranking algorithm to identify the neurons that most strongly influence the target label. It then establishes a strong link between these important neurons and the trigger via gradient descent, generating the trigger by minimizing the difference between the important neurons' current activation values and their expected activation values. As a result, the important neurons are strongly activated whenever an image contains the trigger, causing the model to misclassify the image as the target label. Detailed experiments show that INIB achieves a very high attack success rate at a very low poisoning rate: on the MNIST dataset, it reaches a 98.7% backdoor attack success rate with a poisoning rate of only 1.64%.
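The abstract describes two algorithmic steps: ranking neurons by the gradient influence they exert on the target-label logit, and optimizing a trigger so those neurons reach a chosen expected activation. The PyTorch sketch below illustrates one plausible reading of these steps; the function names, the choice of layer, and all hyperparameters (top_k, target_act, steps, lr) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def rank_neurons_by_gradient(model, layer, images, target_label, top_k=10):
    """Rank neurons in `layer` by the magnitude of the gradient of the
    target-label logit w.r.t. their activations, averaged over a batch."""
    acts = {}

    def hook(_module, _inputs, output):
        output.retain_grad()          # keep grad on a non-leaf tensor
        acts["a"] = output

    handle = layer.register_forward_hook(hook)
    logits = model(images)
    logits[:, target_label].sum().backward()
    handle.remove()

    # Batch-averaged absolute gradient per neuron; flatten conv feature maps.
    importance = acts["a"].grad.abs().mean(dim=0).flatten()
    return torch.topk(importance, top_k).indices


def generate_trigger(model, layer, important_idx, target_act=10.0,
                     shape=(1, 1, 28, 28), steps=300, lr=0.1):
    """Optimize a trigger image so the important neurons approach a chosen
    expected activation (MSE between current and expected activations)."""
    for p in model.parameters():      # only the trigger is trainable
        p.requires_grad_(False)

    trigger = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    acts = {}
    handle = layer.register_forward_hook(
        lambda _m, _i, out: acts.update(a=out))

    for _ in range(steps):
        opt.zero_grad()
        model(trigger)
        current = acts["a"].flatten(1)[:, important_idx]
        # Minimize the gap between current and expected activation values.
        loss = F.mse_loss(current, torch.full_like(current, target_act))
        loss.backward()
        opt.step()
        trigger.data.clamp_(0.0, 1.0)  # keep pixels in a valid image range

    handle.remove()
    return trigger.detach()
```

In a full attack along the lines the abstract sketches, the optimized trigger would then be blended into a small fraction of training images (the 1.64% poisoning rate reported for MNIST) so that the model learns the trigger-to-target-label association.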
Acknowledgements
This research is supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62172377 and 61872205, and by the Natural Science Foundation of Shandong Province under Grant No. ZR2019MF018.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Yang, Xg., Qian, Xy., Zhang, R., Huang, N., Xia, H. (2022). Low-Poisoning Rate Invisible Backdoor Attack Based on Important Neurons. In: Wang, L., Segal, M., Chen, J., Qiu, T. (eds) Wireless Algorithms, Systems, and Applications. WASA 2022. Lecture Notes in Computer Science, vol 13472. Springer, Cham. https://doi.org/10.1007/978-3-031-19214-2_31
DOI: https://doi.org/10.1007/978-3-031-19214-2_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-19213-5
Online ISBN: 978-3-031-19214-2
eBook Packages: Computer Science, Computer Science (R0)