Abstract
Recent research shows that graph neural networks (GNNs) lack robustness and are easily disrupted, which poses a serious security threat. Most existing attacks on GNNs rely on gradient information to guide the perturbation and achieve strong performance. However, gradient-based attacks often yield suboptimal results because graph data are discrete, and their high time and space complexity on large-scale graphs further erodes their advantage. In this work, we propose an attack method based on Important Nodes Controllable Labels (INCLA), which identifies, for each class, a set of important nodes that strongly influence the network during graph convolution and connects target nodes to these important nodes to carry out the attack. In addition, gradient-optimized attacks on graph neural networks are salience attacks, which makes them poorly unnoticeable. We therefore construct more unnoticeable adversarial examples based on the association between target nodes and important nodes, and verify unnoticeability with the Degree Assortativity Change (DAC) and Homophily Ratio Change (HRC) metrics. Extensive experimental results show that, under the same attack budget, INCLA significantly improves time efficiency while maintaining attack performance comparable to state-of-the-art adversarial attacks.
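The two unnoticeability metrics named in the abstract can be made concrete. The sketch below is not the authors' code; it assumes the standard definitions: degree assortativity is the Pearson correlation between the degrees at the two endpoints of each edge, the homophily ratio is the fraction of edges joining same-label nodes, and DAC/HRC are the absolute changes in these quantities after the attack adds adversarial edges. The toy graph and labels are illustrative only.

```python
# Hypothetical sketch of the DAC and HRC unnoticeability metrics
# (standard definitions, not the paper's implementation).
from collections import defaultdict
from math import sqrt

def degree_assortativity(edges):
    """Pearson correlation between the degrees at the two ends of each edge."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Count each undirected edge in both directions so the measure is symmetric.
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def homophily_ratio(edges, labels):
    """Fraction of edges whose endpoints share a label."""
    return sum(1 for u, v in edges if labels[u] == labels[v]) / len(edges)

# Toy example: a clean graph vs. the same graph with one adversarial edge.
clean = [(0, 1), (1, 2), (2, 3), (1, 3)]
attacked = clean + [(0, 2)]          # adversarial edge crossing class labels
labels = {0: "a", 1: "a", 2: "b", 3: "b"}

dac = abs(degree_assortativity(attacked) - degree_assortativity(clean))
hrc = abs(homophily_ratio(attacked, labels) - homophily_ratio(clean, labels))
print(f"DAC = {dac:.4f}, HRC = {hrc:.4f}")
```

Smaller DAC and HRC values mean the perturbed graph's degree mixing and label homophily stay closer to the clean graph, i.e. the attack is harder to notice.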
W. Hu and M. Ma contributed equally to this work.
References
Cai, D., Shao, Z., He, X., Yan, X., Han, J.: Mining hidden community in heterogeneous social networks. In: Proceedings of the 3rd International Workshop on Link Discovery, pp. 58–65 (2005)
Chen, J., et al.: GA-based Q-attack on community detection. IEEE Trans. Comput. Soc. Syst. 6(3), 491–503 (2019)
Chen, J., Wu, Y., Xu, X., Chen, Y., Zheng, H., Xuan, Q.: Fast gradient attack on network embedding. arXiv preprint arXiv:1809.02797 (2018)
Foster, J.G., Foster, D.V., Grassberger, P., Paczuski, M.: Edge direction and the structure of networks. Proc. Natl. Acad. Sci. 107(24), 10815–10820 (2010)
Ju, M., Fan, Y., Zhang, C., Ye, Y.: Let graph be the go board: gradient-free node injection attack for graph neural networks via reinforcement learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, pp. 4383–4390 (2023)
Kempe, D., Kleinberg, J., Tardos, É.: Maximizing the spread of influence through a social network. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 137–146 (2003)
Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: International Conference on Learning Representations (2016)
Li, J., Xie, T., Chen, L., Xie, F., He, X., Zheng, Z.: Adversarial attack on large scale graph. IEEE Trans. Knowl. Data Eng. 35(1), 82–95 (2021)
Lin, X., et al.: Exploratory adversarial attacks on graph neural networks for semi-supervised node classification. Pattern Recogn. 133, 109042 (2023)
Liu, Z., Wang, G., Luo, Y., Li, S.Z.: What does the gradient tell when attacking the graph structure. arXiv preprint arXiv:2208.12815 (2022)
Ma, J., Deng, J., Mei, Q.: Adversarial attack on graph neural networks as an influence maximization problem. In: Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, pp. 675–685 (2022)
Ma, Y., Wang, S., Derr, T., Wu, L., Tang, J.: Graph adversarial attack via rewiring. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 1161–1169 (2021)
Sun, Y., Wang, S., Tang, X., Hsieh, T.Y., Honavar, V.: Node injection attacks on graphs via reinforcement learning. arXiv preprint arXiv:1909.06543 (2019)
Sun, Y., Wang, S., Tang, X., Hsieh, T.Y., Honavar, V.: Adversarial attacks on graph neural networks via node injections: a hierarchical reinforcement learning approach. In: Proceedings of the Web Conference 2020, pp. 673–683 (2020)
Takahashi, T.: Indirect adversarial attacks via poisoning neighbors for graph convolutional networks. In: 2019 IEEE International Conference on Big Data (Big Data), pp. 1395–1400. IEEE (2019)
Wu, H., Wang, C., Tyshetskiy, Y., Docherty, A., Lu, K., Zhu, L.: Adversarial examples for graph data: deep insights into attack and defense. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 4816–4823 (2019)
Wu, S., Tang, Y., Zhu, Y., Wang, L., Xie, X., Tan, T.: Session-based recommendation with graph neural networks. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 346–353 (2019)
Xu, K., et al.: Topology attack and defense for graph neural networks: an optimization perspective. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 3961–3967 (2019)
Zou, X., et al.: TDGIA: effective injection attacks on graph neural networks. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 2461–2471 (2021)
Zügner, D., Akbarnejad, A., Günnemann, S.: Adversarial attacks on neural networks for graph data. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2847–2856 (2018)
Acknowledgments
This research is supported by the National Natural Science Foundation of China (NSFC) under grant number 62172377, the Taishan Scholars Program of Shandong province under grant number tsqn202312102, and the Startup Research Foundation for Distinguished Scholars under grant number 202112016.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Hu, W., Ma, M., Jiang, Y., Xia, H. (2024). Scalable Attack on Graph Data by Important Nodes. In: Cao, C., Chen, H., Zhao, L., Arshad, J., Asyhari, T., Wang, Y. (eds) Knowledge Science, Engineering and Management. KSEM 2024. Lecture Notes in Computer Science, vol 14887. Springer, Singapore. https://doi.org/10.1007/978-981-97-5501-1_14
DOI: https://doi.org/10.1007/978-981-97-5501-1_14
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-5500-4
Online ISBN: 978-981-97-5501-1