
Track and field training state analysis based on acceleration sensor and deep learning

  • Special Issue
  • Published in Evolutionary Intelligence

Abstract

A human action is completed under the control of the body's relevant systems. Human movement is complex, diverse, and detailed, with a flexibility and variability that machinery cannot reproduce. Human motion analysis and recognition is therefore a challenging research topic, and one that plays an essential role in bionic engineering, medical engineering, sports competition, and game animation. Traditional human motion estimation relies heavily on image segmentation, so inaccurate segmentation during athlete movement detection can introduce errors into the subsequent analysis. This paper proposes a track and field training state recognition method based on an acceleration sensor and deep learning. The method uses the acceleration sensor to capture accurate human motion information in real time and a deep learning model to process and analyze the data, so it can deliver reliable data and analysis results even in inclement weather, such as rain or snow. Because the method does not depend on unstable image segmentation or perception results, and instead performs state analysis directly on real-time sensor data, it provides robust analysis of track and field training. Practical tests show that the method is stable and robust to sensor and motion noise, while providing more accurate motion estimation and state analysis. In validation on the public UTD-MHAD dataset, the proposed method significantly improves action recognition accuracy.
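The abstract only sketches the pipeline at a high level. As an illustration of how real-time accelerometer data is commonly prepared for a recognition model, the snippet below segments a tri-axial acceleration stream into overlapping windows and computes simple per-window features. The helper names (`sliding_windows`, `window_features`), the window length, and the overlap are illustrative assumptions, not values taken from the paper:

```python
# Hypothetical preprocessing for accelerometer-based action recognition.
# Window length and step are illustrative choices, not the paper's values.
from statistics import fmean, pstdev

def sliding_windows(samples, length=64, step=32):
    """Split a list of (ax, ay, az) samples into overlapping windows."""
    return [samples[i:i + length]
            for i in range(0, len(samples) - length + 1, step)]

def window_features(window):
    """Per-axis mean/std plus mean acceleration magnitude for one window."""
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in window]
        feats += [fmean(vals), pstdev(vals)]
    mags = [(ax * ax + ay * ay + az * az) ** 0.5 for ax, ay, az in window]
    feats.append(fmean(mags))
    return feats

# Example: 200 samples of a synthetic still pose (gravity on the z-axis).
stream = [(0.0, 0.0, 9.81)] * 200
wins = sliding_windows(stream)
print(len(wins), len(window_features(wins[0])))  # prints: 5 7
```

In a full system, these feature vectors (or the raw windows themselves) would be fed to a deep network for classification; the windowing step is what allows the analysis to run continuously on a live sensor stream.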


Figures 1–4


Data availability

All data used to support the findings of this study are included within the paper.


Acknowledgments

This work was supported by the Basic Scientific Research Funds Think Tank Project of Provincial Colleges and Universities of Heilongjiang Province (Grant No. 145109616).

Author information


Corresponding author

Correspondence to Yong Zhang.

Ethics declarations

Conflict of interest

The author declares that there is no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, Y. Track and field training state analysis based on acceleration sensor and deep learning. Evol. Intel. 16, 1627–1636 (2023). https://doi.org/10.1007/s12065-022-00811-1

