Using the Audio Respiration Signal for Multimodal Discrimination of Expressive Movement Qualities | SpringerLink

Using the Audio Respiration Signal for Multimodal Discrimination of Expressive Movement Qualities

  • Conference paper
Human Behavior Understanding (HBU 2016)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9997)


Abstract

In this paper, we propose a multimodal approach for distinguishing between movements displaying three different expressive qualities: fluid, fragmented, and impulsive movements. Our approach is based on the Event Synchronization algorithm, which we apply to compute the amount of synchronization between two low-level features extracted from multimodal data. More specifically, we use the energy of the audio respiration signal, captured by a standard microphone placed near the mouth, and the whole-body kinetic energy estimated from motion capture data. The method was evaluated on 90 movement segments performed by 5 dancers. Results show that fragmented movements display higher average synchronization than fluid and impulsive movements.
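The Event Synchronization algorithm named in the abstract is the measure introduced by Quian Quiroga, Kreuz, and Grassberger (Phys. Rev. E 66, 041904, 2002): events are extracted from each signal, and pairs of events falling within a coincidence window tau count toward a normalized synchronization score Q. The sketch below illustrates how such a pipeline could look for the two features described here. It is a minimal illustration, not the authors' implementation (which uses the EyesWeb platform): the peak-based event extraction, the threshold, the sampling rate, and the 250 ms window are all assumptions made for the example.

```python
import numpy as np

def detect_events(signal, threshold):
    """Return indices of local maxima above a threshold.

    A simple stand-in for event extraction: events are peaks
    in an energy envelope (respiration energy or kinetic energy).
    """
    idx = [i for i in range(1, len(signal) - 1)
           if signal[i] > threshold
           and signal[i - 1] < signal[i] >= signal[i + 1]]
    return np.asarray(idx)

def event_synchronization(tx, ty, tau):
    """Event Synchronization Q (Quian Quiroga et al., 2002).

    tx, ty : arrays of event times for the two signals
    tau    : coincidence window; events closer than tau are synchronous
    """
    def coincidences(a, b):
        c = 0.0
        for t_a in a:
            d = t_a - b
            c += np.sum((0 < d) & (d <= tau))  # events of b preceding t_a within tau
            c += 0.5 * np.sum(d == 0)          # simultaneous events count one half
        return c

    m = np.sqrt(len(tx) * len(ty))
    if m == 0:
        return 0.0
    return (coincidences(tx, ty) + coincidences(ty, tx)) / m

# Toy usage on two synthetic energy envelopes (50 Hz is an assumed rate).
fs = 50.0
t = np.arange(0, 10, 1 / fs)
resp_energy = np.abs(np.sin(2 * np.pi * 0.4 * t)) + 0.05 * np.random.rand(len(t))
kin_energy = np.abs(np.sin(2 * np.pi * 0.4 * t + 0.3)) + 0.05 * np.random.rand(len(t))

ex = detect_events(resp_energy, threshold=0.8)
ey = detect_events(kin_energy, threshold=0.8)
q = event_synchronization(ex / fs, ey / fs, tau=0.25)  # 250 ms window (assumption)
print(f"Event Synchronization Q = {q:.2f}")
```

Under this scheme, a higher Q for a movement segment indicates that peaks in respiration energy and whole-body kinetic energy tend to co-occur, which is the sense in which fragmented movements are reported to show higher average synchronization.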



Notes

1. http://www.biopac.com/.

  2. http://www.infomus.org/eyesweb_ita.php.

  3. An example of a segment can be found at: https://youtu.be/J-AtKo2BZ4E.

  4. http://dance.dibris.unige.it.


Acknowledgments

This research has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 645553 (DANCE). DANCE investigates how affective and relational qualities of body movement can be expressed, represented, and analyzed by the auditory channel.

We thank our colleagues at Casa Paganini - InfoMus (Paolo Alborno, Corrado Canepa, Paolo Coletta, Nicola Ferrari, Simone Ghisio, Maurizio Mancini, Alberto Massari, Ksenia Kolykhalova, Stefano Piana, and Roberto Sagoleo) for the fruitful discussions and for their invaluable contributions to the design of the multimodal recordings, and the dancers Roberta Messa, Federica Loredan, and Valeria Puppo for their kind availability to participate in the recordings of our repository of movement qualities.

Author information

Correspondence to Radoslaw Niewiadomski.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Lussu, V., Niewiadomski, R., Volpe, G., Camurri, A. (2016). Using the Audio Respiration Signal for Multimodal Discrimination of Expressive Movement Qualities. In: Chetouani, M., Cohn, J., Salah, A. (eds) Human Behavior Understanding. HBU 2016. Lecture Notes in Computer Science, vol 9997. Springer, Cham. https://doi.org/10.1007/978-3-319-46843-3_7


  • DOI: https://doi.org/10.1007/978-3-319-46843-3_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46842-6

  • Online ISBN: 978-3-319-46843-3

  • eBook Packages: Computer Science, Computer Science (R0)
