Abstract
Influence Maximization (IM), an NP-hard combinatorial optimization problem, has been studied extensively over the past two decades. Existing IM algorithms remain limited in accuracy, scalability, and generalization. Moreover, they handle the influence overlap problem only implicitly. This paper proposes the Multiple Agents Influence Maximization (MAIM) scheme, a novel machine-learning-based method for the IM problem. We focus on explicitly resolving the influence overlap hidden in IM. MAIM first generates a list of seed candidates sorted in descending order of overall influence, and then, over several rounds, uses multiple reinforcement learning (RL) agents to drop candidates with serious influence overlap. We exploit the key strengths of RL agents: continuous interaction with the environment, fast decisions on whether a node should be accepted or dropped, and good generalization. We also propose a Memory Separated Deep Q-Network to improve training efficiency. Experiments on eight real-world social networks validate the effectiveness and efficiency of our algorithm compared to state-of-the-art algorithms.
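As a rough illustration of the two-stage idea sketched above (rank candidate seeds by estimated overall influence, then scan the ranking and drop candidates whose influence overlaps too heavily with already-selected seeds), the minimal Python sketch below uses node degree as the influence proxy and a fixed neighbourhood-overlap threshold in place of the paper's learned RL agents and Memory Separated Deep Q-Network; all function names and the acceptance rule are hypothetical stand-ins, not the MAIM implementation.

```python
import networkx as nx

# Minimal sketch of the "rank, then drop overlapping candidates" pipeline.
# Degree is a stand-in for a learned overall-influence estimate, and the fixed
# overlap threshold is a stand-in for the RL agents' accept/drop policy.

def rank_candidates(graph):
    """Return nodes sorted in descending order of a cheap influence proxy (degree)."""
    return sorted(graph.nodes, key=graph.degree, reverse=True)

def should_accept(candidate, covered, graph, overlap_threshold=0.5):
    """Accept a candidate only if its neighbourhood does not heavily overlap
    the nodes already covered by previously selected seeds."""
    neighbourhood = set(graph.neighbors(candidate)) | {candidate}
    overlap = len(neighbourhood & covered) / len(neighbourhood)
    return overlap < overlap_threshold

def select_seeds(graph, k, overlap_threshold=0.5):
    """Scan the ranked candidates and keep the first k that pass the overlap check."""
    seeds, covered = [], set()
    for node in rank_candidates(graph):
        if len(seeds) == k:
            break
        if should_accept(node, covered, graph, overlap_threshold):
            seeds.append(node)
            covered |= set(graph.neighbors(node)) | {node}
    return seeds

if __name__ == "__main__":
    # Synthetic scale-free graph as a stand-in for a real social network.
    g = nx.barabasi_albert_graph(1000, 3, seed=42)
    print(select_seeds(g, k=10))
```

In MAIM itself, the accept/drop decision at each step is made by trained RL agents rather than this fixed threshold; the sketch only shows where such a decision plugs into the candidate-scanning loop.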
Acknowledgements
This work was supported by the National Key R&D Program of China [2020YFB1707903]; the National Natural Science Foundation of China [61872238, 61972254], Shanghai Municipal Science and Technology Major Project [2021SHZDZX0102], the Tencent Marketing Solution Rhino-Bird Focused Research Program [FR202001], and the CCF-Tencent Open Fund [RAGR20200105].
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, Y., Sze, W., Gao, X., Chen, G. (2021). Multiple Agents Reinforcement Learning Based Influence Maximization in Social Network Services. In: Hacid, H., Kao, O., Mecella, M., Moha, N., Paik, Hy. (eds) Service-Oriented Computing. ICSOC 2021. Lecture Notes in Computer Science, vol 13121. Springer, Cham. https://doi.org/10.1007/978-3-030-91431-8_27
DOI: https://doi.org/10.1007/978-3-030-91431-8_27
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-91430-1
Online ISBN: 978-3-030-91431-8
eBook Packages: Computer Science, Computer Science (R0)