Abstract
In this paper, we study the effect of audio features extracted with the Essentia analysis tool on the quality of music emotion detection classifiers. The research process consisted of constructing training data, extracting features, selecting features, and building classifiers. Through feature selection, we identified the sets of features most useful for detecting individual emotions. We also examined the effect of low-level, rhythm, and tonal features on the accuracy of the constructed classifiers: by building classifiers for different combinations of feature sets, we were able to distinguish the most useful feature sets for individual emotions.
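To make the pipeline described above concrete, the following minimal sketch extracts Essentia's low-level, rhythm, and tonal descriptor groups and trains one classifier per group, comparing their cross-validated accuracy. It is an illustration only, assuming Essentia's MusicExtractor with mean/stdev aggregation and a scikit-learn SVM; the abstract does not specify the paper's exact descriptors, aggregation statistics, classifier, or toolchain, and the file names and emotion labels below are placeholders.

```python
# Sketch (not the paper's exact pipeline): compare low-level, rhythm and tonal
# Essentia feature groups by training a separate classifier on each group.
import numpy as np
import essentia.standard as es
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_groups(audio_path):
    """Return {'lowlevel': vec, 'rhythm': vec, 'tonal': vec} for one audio file."""
    # MusicExtractor computes aggregated low-level, rhythm and tonal descriptors.
    features, _ = es.MusicExtractor(lowlevelStats=['mean', 'stdev'],
                                    rhythmStats=['mean', 'stdev'],
                                    tonalStats=['mean', 'stdev'])(audio_path)
    groups = {'lowlevel': [], 'rhythm': [], 'tonal': []}
    for name in sorted(features.descriptorNames()):
        prefix = name.split('.')[0]
        value = features[name]
        # Keep scalar numeric descriptors only, so every file yields a vector of
        # the same length; string labels (e.g. key names) and variable-length
        # vector descriptors are skipped for brevity in this sketch.
        if prefix in groups and np.isscalar(value) and not isinstance(value, str):
            groups[prefix].append(float(value))
    return {g: np.array(v) for g, v in groups.items()}

# Placeholder labelled excerpts, one of four basic emotions (e1..e4) per file.
dataset = [('happy_01.mp3', 'e1'), ('happy_02.mp3', 'e1'),
           ('angry_01.mp3', 'e2'), ('angry_02.mp3', 'e2'),
           ('sad_01.mp3', 'e3'), ('sad_02.mp3', 'e3'),
           ('relaxed_01.mp3', 'e4'), ('relaxed_02.mp3', 'e4')]

extracted = [(extract_groups(path), label) for path, label in dataset]
y = [label for _, label in extracted]

# Build one classifier per feature group and compare cross-validated accuracy.
for group in ('lowlevel', 'rhythm', 'tonal'):
    X = np.vstack([feats[group] for feats, _ in extracted])
    scores = cross_val_score(SVC(kernel='rbf'), X, y, cv=2)
    print(group, 'accuracy: %.2f' % scores.mean())
```

The same loop extends naturally to combinations of groups (e.g. concatenating the low-level and tonal vectors), which is the kind of comparison the abstract describes for identifying the most useful feature sets per emotion.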
Copyright information
© 2015 IFIP International Federation for Information Processing
Cite this paper
Grekow, J. (2015). Audio Features Dedicated to the Detection of Four Basic Emotions. In: Saeed, K., Homenda, W. (eds) Computer Information Systems and Industrial Management. CISIM 2015. Lecture Notes in Computer Science, vol. 9339. Springer, Cham. https://doi.org/10.1007/978-3-319-24369-6_49
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-24368-9
Online ISBN: 978-3-319-24369-6