Abstract
The inherent complexity of music and its structure on long timescales make automated composition a challenging problem. Here we present the Deep Artificial Composer (DAC), a recurrent neural network model of note transitions for the automated composition of melodies. The model can be trained to produce melodies whose compositional structure is extracted from large datasets of diverse musical styles, which we exemplify here on a corpus of Irish folk and Klezmer melodies. We assess the creativity of DAC-generated melodies with a new measure, the novelty of musical sequences, and show that melodies generated by the DAC are as novel as melodies produced by human composers. We further use the novelty measure to show that the DAC creates melodies that are musically consistent with either of the styles it was trained on. This makes the DAC a promising candidate for the automated composition of convincing musical pieces in any style for which training data is available.
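As a concrete illustration of the note-transition idea described in the abstract, the sketch below shows one way such a model could look: a recurrent network (here an LSTM) consumes a sequence of note symbols (e.g. combined pitch/duration tokens) and outputs a probability distribution over the next symbol, from which new melodies are sampled one note at a time. This is a minimal sketch, not the authors' implementation; the choice of PyTorch, the vocabulary encoding, the layer sizes, and the start/end tokens are assumptions made for this example.

```python
# Minimal, illustrative note-transition RNN (assumptions: PyTorch, a token
# vocabulary of note symbols, and dedicated start/end tokens).
import torch
import torch.nn as nn

class NoteTransitionRNN(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, time) integer note symbols
        h, state = self.lstm(self.embed(tokens), state)
        return self.out(h), state  # logits over the next note symbol

def sample_melody(model, start_token: int, end_token: int, max_len: int = 200):
    """Generate one melody by repeatedly sampling the next note symbol."""
    model.eval()
    tokens, state = [start_token], None
    with torch.no_grad():
        for _ in range(max_len):
            inp = torch.tensor([[tokens[-1]]])        # feed only the last symbol
            logits, state = model(inp, state)
            probs = torch.softmax(logits[0, -1], dim=-1)
            nxt = torch.multinomial(probs, 1).item()  # stochastic note transition
            if nxt == end_token:
                break
            tokens.append(nxt)
    return tokens[1:]  # drop the start symbol

# Example usage (untrained model, so the output is random):
#   model = NoteTransitionRNN(vocab_size=130)
#   melody = sample_melody(model, start_token=0, end_token=1)
```

In a setup of this shape, training amounts to standard next-token prediction over melodies encoded as symbol sequences, and analyses such as the novelty measure or style consistency described in the abstract would be applied to the sampled sequences.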
Acknowledgments
The authors thank Samuel P. Muscinelli and Johanni Brea for their guidance and helpful comments. This research was partially supported by the Swiss National Science Foundation (200020_147200) and the European Research Council grant no. 268689 (MultiRules).
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Colombo, F., Seeholzer, A., Gerstner, W. (2017). Deep Artificial Composer: A Creative Neural Network Model for Automated Melody Generation. In: Correia, J., Ciesielski, V., Liapis, A. (eds) Computational Intelligence in Music, Sound, Art and Design. EvoMUSART 2017. Lecture Notes in Computer Science, vol. 10198. Springer, Cham. https://doi.org/10.1007/978-3-319-55750-2_6
DOI: https://doi.org/10.1007/978-3-319-55750-2_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-55749-6
Online ISBN: 978-3-319-55750-2