
Efficient Local Imperceptible Random Search for Black-Box Adversarial Attacks

  • Conference paper
  • First Online:
Advanced Intelligent Computing Technology and Applications (ICIC 2024)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14872))


Abstract

Adversarial attacks apply subtle perturbations to input images that cause a DNN model to output incorrect predictions. Most existing black-box attacks fool the target model by repeatedly querying it to generate a global perturbation, which requires many queries and makes the perturbation easy to detect. We propose a local black-box attack algorithm based on salient-region localization, called Local Imperceptible Random Search (LIRS). The method combines precise localization of sensitive regions with a random search algorithm, yielding a universal framework for local perturbation that is compatible with most black-box attack algorithms. Comprehensive experiments show that LIRS efficiently generates adversarial examples with subtle perturbations under a limited query budget, effectively identifies perturbation-sensitive regions in images, and outperforms existing state-of-the-art black-box attack methods.

Y. Li and S. You contributed equally to this work and should be regarded as co-first authors.
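The chapter body is not included here, so the abstract's description is all we have to go on. Purely as an illustration of the general kind of procedure it outlines (greedy random search over perturbations confined to a salient region of the image, driven only by black-box loss queries), the following is a minimal sketch. The `query_loss` oracle, the binary region mask, the square-patch proposals, and all hyperparameters are illustrative assumptions, not the authors' actual LIRS algorithm.

```python
import numpy as np

def local_random_search(x, mask, query_loss, eps=0.1, n_iters=200, patch=4, seed=0):
    """Greedy random search: propose small +/-eps patches centered on pixels
    inside the salient region (mask == 1) and keep a proposal only if the
    black-box loss returned by query_loss decreases."""
    rng = np.random.default_rng(seed)
    h, w = x.shape
    x_adv = x.copy()
    best = query_loss(x_adv)
    rows, cols = np.where(mask == 1)          # pixels inside the salient region
    for _ in range(n_iters):
        i = rng.integers(len(rows))
        r0 = max(0, rows[i] - patch // 2)
        c0 = max(0, cols[i] - patch // 2)
        r1, c1 = min(h, r0 + patch), min(w, c0 + patch)
        cand = x_adv.copy()
        delta = eps * rng.choice([-1.0, 1.0], size=(r1 - r0, c1 - c0))
        cand[r0:r1, c0:c1] = np.clip(x[r0:r1, c0:c1] + delta, 0.0, 1.0)
        cand[mask == 0] = x[mask == 0]        # never perturb outside the region
        loss = query_loss(cand)               # one black-box query per proposal
        if loss < best:
            best, x_adv = loss, cand
    return x_adv, best
```

Restricting proposals to the mask is what keeps the perturbation local (and hence harder to notice) while also shrinking the search space, which is the intuition behind the query-efficiency claim; in the paper the mask would come from salient-region localization rather than being supplied by hand.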



Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 62102178.

Author information

Correspondence to Zhenhua Li.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Li, Y., You, S., Chen, Y., Li, Z. (2024). Efficient Local Imperceptible Random Search for Black-Box Adversarial Attacks. In: Huang, DS., Pan, Y., Zhang, Q. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14872. Springer, Singapore. https://doi.org/10.1007/978-981-97-5612-4_28


  • DOI: https://doi.org/10.1007/978-981-97-5612-4_28

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5611-7

  • Online ISBN: 978-981-97-5612-4

  • eBook Packages: Computer Science, Computer Science (R0)
