Collaborative Information Dissemination with Graph-Based Multi-Agent Reinforcement Learning | SpringerLink

Collaborative Information Dissemination with Graph-Based Multi-Agent Reinforcement Learning

  • Conference paper
Algorithmic Decision Theory (ADT 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 15248)


Abstract

Efficient information dissemination is crucial for supporting critical operations across domains like disaster response, autonomous vehicles, and sensor networks. This paper introduces a Multi-Agent Reinforcement Learning (MARL) approach as a significant step forward in achieving more decentralized, efficient, and collaborative information dissemination. We propose a Partially Observable Stochastic Game (POSG) formulation for information dissemination that empowers each agent to decide independently whether to forward a message, based on the observation of its one-hop neighborhood. This constitutes a paradigm shift from the heuristics currently employed in real-world broadcast protocols. Our novel approach harnesses Graph Convolutional Reinforcement Learning and Graph Attention Networks (GATs) with dynamic attention to capture essential network features. We propose two approaches to cooperative information dissemination, L-DyAN and HL-DyAN, which differ in the information exchanged among agents. Our experimental results show that our trained policies outperform existing methods, including the state-of-the-art heuristic, in terms of network coverage and communication overhead on dynamic networks of varying density and behavior.
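To illustrate the dynamic-attention mechanism the abstract refers to, the sketch below implements a single GATv2-style layer over a one-hop neighborhood in plain numpy. This is an illustrative reconstruction, not the paper's implementation: the function name, weight shapes, and the additive `W_src h_i + W_dst h_j` decomposition are assumptions chosen for clarity; the defining property shown is that the attention vector is applied after the nonlinearity, which is what distinguishes GATv2's dynamic attention from the static attention of the original GAT.

```python
import numpy as np

def gatv2_attention(h, adj, W_src, W_dst, a, slope=0.2):
    """One GATv2-style layer with dynamic attention (illustrative sketch).

    h     : (n, f_in)     node features (e.g., an agent's local observation)
    adj   : (n, n)        0/1 adjacency with self-loops (one-hop neighborhood)
    W_src : (f_out, f_in) transform applied to the attending node i
    W_dst : (f_out, f_in) transform applied to each neighbor j
    a     : (f_out,)      attention vector; scoring
                          e_ij = a^T LeakyReLU(W_src h_i + W_dst h_j)
                          applies `a` AFTER the nonlinearity, making the
                          attention 'dynamic' (GATv2) rather than 'static' (GAT).
    """
    hs = h @ W_src.T                    # (n, f_out) transformed source features
    hd = h @ W_dst.T                    # (n, f_out) transformed neighbor features
    out = np.zeros_like(hd)
    for i in range(h.shape[0]):
        nbrs = np.flatnonzero(adj[i])   # indices of i's one-hop neighbors
        z = hs[i] + hd[nbrs]            # (k, f_out) pairwise combinations
        z = np.where(z > 0, z, slope * z)   # LeakyReLU
        e = z @ a                       # (k,) raw attention scores
        e = np.exp(e - e.max())
        alpha = e / e.sum()             # softmax over the neighborhood
        out[i] = alpha @ hd[nbrs]       # attention-weighted aggregation
    return out
```

As a sanity check, when every node carries identical features, the attention weights are uniform and each node's output collapses to its own transformed feature vector.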



Notes

  1. https://github.com/RaffaeleGalliera/melissa.



Author information


Corresponding author

Correspondence to Raffaele Galliera.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland

About this paper


Cite this paper

Galliera, R., Venable, K.B., Bassani, M., Suri, N. (2025). Collaborative Information Dissemination with Graph-Based Multi-Agent Reinforcement Learning. In: Freeman, R., Mattei, N. (eds) Algorithmic Decision Theory. ADT 2024. Lecture Notes in Computer Science, vol. 15248. Springer, Cham. https://doi.org/10.1007/978-3-031-73903-3_11


  • DOI: https://doi.org/10.1007/978-3-031-73903-3_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73902-6

  • Online ISBN: 978-3-031-73903-3

  • eBook Packages: Computer Science, Computer Science (R0)
