Multi-agent Reinforcement Learning for Structured Symbolic Music Generation

Abstract

Generating structured music with deep learning methods over symbolic representations is challenging because of the complex relationships between the musical elements that define a composition. Symbolic representations of music, such as MIDI or sheet music, alleviate some of these challenges by encoding music in a format that supports manipulation and analysis; however, they still require interpretation and an understanding of musical concepts and theory. In this paper, we propose a method for symbolic music generation based on a multi-agent structure built on top of growing hierarchical self-organizing maps and recurrent neural networks. Our model focuses primarily on musical structure: it operates at a higher level of abstraction, enabling it to capture longer-term structure and dependencies. Our approach uses reinforcement learning as a self-learning mechanism for the agents, with the human user acting as a musical expert to guide the agents' learning of global dependencies and musical characteristics. We show how the agents can learn and adapt to the user's preferences and musical style. Furthermore, we present and discuss the potential of our approach for agent communication, learning and adaptation, and distributed problem-solving in music generation.
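
As a rough illustration of the reward-blending idea only (not the authors' implementation, which combines growing hierarchical self-organizing maps with RNN-based agents), the minimal Python sketch below trains a toy softmax note policy with a REINFORCE-style update whose reward mixes a self-computed structural score with simulated human feedback. The vocabulary size, the weighting ALPHA, and the helper functions structural_reward and human_feedback are hypothetical stand-ins introduced for this sketch.

```python
import numpy as np

VOCAB = 16      # toy pitch vocabulary (assumption; the paper works with MIDI-derived material)
SEQ_LEN = 8     # notes generated per episode
ALPHA = 0.5     # assumed weight between structural self-reward and human feedback
LR = 0.1        # learning rate for the policy update

rng = np.random.default_rng(0)
logits = np.zeros(VOCAB)   # stand-in for the output layer of an RNN-based agent

def policy():
    """Softmax distribution over the note vocabulary."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def structural_reward(seq):
    """Hypothetical structural critic: favour small melodic intervals."""
    steps = np.abs(np.diff(seq))
    return float(np.exp(-steps.mean()))

def human_feedback(seq):
    """Stand-in for the human expert's rating in [0, 1]; here a fixed proxy
    that prefers phrases ending on the 'tonic' (note 0)."""
    return 1.0 if seq[-1] == 0 else 0.2

for episode in range(300):
    probs = policy()
    seq = rng.choice(VOCAB, size=SEQ_LEN, p=probs)
    reward = ALPHA * structural_reward(seq) + (1 - ALPHA) * human_feedback(seq)
    # REINFORCE-style update: d log pi(a) / d logits = one_hot(a) - probs
    grad = sum(np.eye(VOCAB)[note] - probs for note in seq)
    logits += LR * reward * grad

print("most preferred notes after training:", np.argsort(-logits)[:4])
```

In the paper's setting, the structural score and the human rating would come from the agents' learned representations and from the interactive user, respectively, rather than from the fixed proxies used here.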


Notes

  1. https://colinraffel.com/projects/lmd/.


Author information

Corresponding author

Correspondence to Shayan Dadman.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Dadman, S., Bremdal, B.A. (2023). Multi-agent Reinforcement Learning for Structured Symbolic Music Generation. In: Mathieu, P., Dignum, F., Novais, P., De la Prieta, F. (eds) Advances in Practical Applications of Agents, Multi-Agent Systems, and Cognitive Mimetics. The PAAMS Collection. PAAMS 2023. Lecture Notes in Computer Science, vol 13955. Springer, Cham. https://doi.org/10.1007/978-3-031-37616-0_5

  • DOI: https://doi.org/10.1007/978-3-031-37616-0_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-37615-3

  • Online ISBN: 978-3-031-37616-0

  • eBook Packages: Computer Science, Computer Science (R0)
