Abstract
Deep neural networks (DNNs) are prone to producing incorrect predictions under the attack of adversarial samples. Several defense methods have been proposed to cope with this problem, but most are based on adversarial training, which is computationally expensive and does not strengthen the architecture of the network model itself against adversarial attacks. Recent studies have shown that feature denoising can remove the adversarial perturbations in adversarial samples. In this paper, we propose a lightweight denoising network with residual connection (LDN-RC), which introduces an internal denoising block for feature denoising and an intermediate denoising block for sample denoising. Combining the two denoising blocks in the network model withstands the interference of adversarial perturbations to a large extent while also saving computational resources. For the training strategy, a two-stage denoising approach with fine-tuning is used to train a ResNet model on the MNIST, CIFAR-10, and SVHN datasets. Under the \({L}_{\infty }\)-PGD white-box attack, the accuracy of the enhanced network model exceeds 60% on all three datasets, which demonstrates that LDN-RC can effectively improve the adversarial robustness of the network model.
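The paper's denoising blocks are not specified in this abstract; as a minimal illustration of the residual-connection idea it describes (denoise the input, then add the result back through a skip connection so the original signal is preserved), the sketch below pairs a simple mean filter with a skip connection. The names `mean_filter` and `denoising_block` are hypothetical, and the mean filter stands in for whatever denoising operation the blocks actually use.

```python
import numpy as np

def mean_filter(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Denoise a 2-D map with a simple k x k mean filter (edge-padded)."""
    pad = k // 2
    padded = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def denoising_block(x: np.ndarray) -> np.ndarray:
    """Residual denoising block: output = input + denoise(input).

    The skip connection lets the block suppress perturbations
    without discarding the original features.
    """
    return x + mean_filter(x)
```

On a constant input the mean filter is the identity, so the block simply doubles the signal; on a perturbed input the filtered branch smooths the perturbation while the skip branch retains the clean content, which is the intuition behind using residual connections in denoising networks.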
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Acknowledgements
All the authors are deeply grateful to the editors for the smooth and fast handling of the manuscript. The authors would also like to thank the anonymous referees for their valuable suggestions to improve the quality of this paper. This work is supported by the National Natural Science Foundation of China (Grant Nos. 61802111 and 61872125) and the Key Science and Technology Project of Henan Province (Grant Nos. 201300210400 and 212102210094).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Chai, X., Wei, T., Chen, Z. et al. LDN-RC: a lightweight denoising network with residual connection to improve adversarial robustness. Appl Intell 53, 5224–5239 (2023). https://doi.org/10.1007/s10489-022-03847-z