Abstract
Recent technological advances have spurred the adoption of machine learning applications in sports science and healthcare. Using wearable sensors and video cameras to analyze and improve athlete performance has become widely popular. Physiotherapists, sports coaches, and athletes actively look to incorporate the latest technologies to further improve performance and avoid injuries. While wearable sensors are very popular, their use is hindered by constraints on battery power and sensor calibration, especially for use cases that require multiple sensors to be placed on the body. Hence, there is renewed interest in video-based data capture and analysis for sports science. In this paper, we present an application that classifies strength and conditioning exercises from video. We focus on the popular Military Press exercise, where the execution is captured with the video camera of a mobile device, such as a mobile phone, and the goal is to classify the execution into different types. Since video recordings require substantial storage and computation, this use case calls for data reduction while preserving classification accuracy and enabling fast prediction. To this end, we propose an approach named BodyMTS that turns video into time series by employing body pose tracking, followed by training and prediction using multivariate time series classifiers. We analyze the accuracy and robustness of BodyMTS and show that it is robust to different types of noise caused by either video quality or pose estimation factors. We compare BodyMTS to state-of-the-art deep learning methods that classify human activity directly from video and show that BodyMTS achieves similar accuracy, but with reduced running time and model engineering effort. Finally, we discuss some practical aspects of deploying BodyMTS in this application, in terms of accuracy and robustness under reduced data quality and size. We show that BodyMTS achieves an average accuracy of 87%, which is significantly higher than the accuracy of human domain experts.
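The core idea of the pipeline above — pose estimation turns each video frame into body keypoint coordinates, and the per-frame coordinates of each body part become one channel of a multivariate time series — can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: the body-part names, the number of tracked parts, and the array layout are assumptions made for the example.

```python
import numpy as np

# Hypothetical per-frame pose-estimation output: a dict mapping each
# tracked body part to its (x, y) image coordinates. The part names
# and the choice of 4 parts are illustrative only.
BODY_PARTS = ["l_shoulder", "r_shoulder", "l_elbow", "r_elbow"]

def poses_to_mts(frames):
    """Turn a list of per-frame keypoint dicts into a multivariate time
    series of shape (n_channels, n_timepoints): one x channel and one y
    channel per tracked body part, one time point per video frame."""
    channels = []
    for part in BODY_PARTS:
        channels.append([f[part][0] for f in frames])  # x over time
        channels.append([f[part][1] for f in frames])  # y over time
    return np.asarray(channels, dtype=float)

# Toy example: 100 frames of synthetic keypoints.
rng = np.random.default_rng(0)
frames = [{p: (rng.random(), rng.random()) for p in BODY_PARTS}
          for _ in range(100)]
mts = poses_to_mts(frames)
print(mts.shape)  # (8, 100): x and y for 4 parts, over 100 frames
```

The resulting array is the standard (channels, time points) layout expected by multivariate time series classifiers such as those in sktime.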




Notes
Two of the co-authors of this paper have successfully launched Output Sports, a start-up built on commercialising research results on single sensor systems: https://www.outputsports.com.
References
Aaron A, Li Z, Manohara M, Lin JY, Wu ECH, Kuo CCJ (2015) Challenges in cloud based ingest and encoding for high quality streaming media. In: 2015 IEEE international conference on image processing (ICIP), pp 1732–1736. https://doi.org/10.1109/ICIP.2015.7351097
Adnan NMN, Ab Patar MNA, Lee H, Yamamoto SI, Jong-Young L, Mahmud J (2018) Biomechanical analysis using kinovea for sports application, vol 342, no 1, p 012097
Ahmadi A, Mitchell E, Destelle F, Gowing M, O’Connor NE, Richter C, Moran K (2014) Automatic activity classification and movement assessment during a sports training session using wearable inertial sensors. In: 2014 11th international conference on wearable and implantable body sensor networks. IEEE, pp 98–103
Andriluka M, Roth S, Schiele B (2009) Pictorial structures revisited: people detection and articulated pose estimation. In: 2009 IEEE conference on computer vision and pattern recognition. IEEE, pp 1014–1021
Argent R, Slevin P, Bevilacqua A, Neligan M, Daly A, Caulfield B (2018) Clinician perceptions of a prototype wearable exercise biofeedback system for orthopaedic rehabilitation: a qualitative exploration. BMJ Open. https://doi.org/10.1136/bmjopen-2018-026326
Argent R, Slevin P, Bevilacqua A, Neligan M, Daly A, Caulfield B (2019) Wearable sensor-based exercise biofeedback for orthopaedic rehabilitation: a mixed methods user evaluation of a prototype system. Sensors. https://doi.org/10.3390/s19020432
Azulay A, Weiss Y (2019) Why do deep convolutional networks generalize so poorly to small image transformations? J Mach Learn Res 20:184:1-184:25
Baechle TR, Earle RW (2008) Essentials of strength training and conditioning. Human Kinetics, Champaign
Bagnall AJ, Dau HA, Lines J, Flynn M, Large J, Bostrom A, Southam P, Keogh EJ (2018) The UEA multivariate time series classification archive, 2018. CoRR abs/1811.00075 arXiv:1811.00075
Brennan L, Kessie T, Caulfield B (2020) Patient experiences of rehabilitation and the potential for an mhealth system with biofeedback after breast cancer surgery: Qualitative study. JMIR Mhealth Uhealth 8(7):e19721
Cao Z, Hidalgo Martinez G, Simon T, Wei S, Sheikh YA (2019) Openpose: realtime multi-person 2d pose estimation using part affinity fields. IEEE Trans Pattern Anal Mach Intell
Carreira J, Zisserman A (2017) Quo vadis, action recognition? A new model and the kinetics dataset. In: 2017 IEEE conference on computer vision and pattern recognition, CVPR 2017, Honolulu, HI, USA, July 21–26, 2017. IEEE Computer Society, pp 4724–4733. https://doi.org/10.1109/CVPR.2017.502
Carreira J, Noland E, Banki-Horvath A, Hillier C, Zisserman A (2018) A short note about kinetics-600. CoRR abs/1808.01340 arXiv:1808.01340
Choutas V, Weinzaepfel P, Revaud J, Schmid C (2018) Potion: pose motion representation for action recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition
Chu WCC, Shih C, Chou WY, Ahamed SI, Hsiung PA (2019) Artificial intelligence of things in sports science: weight training as an example. Computer 52(11):52–61
Dajime PF, Smith H, Zhang Y (2020) Automated classification of movement quality using the microsoft kinect v2 sensor. Comput Biol Med 125:104021
Dalal N, Triggs B, Schmid C (2006) Human detection using oriented histograms of flow and appearance. In: Leonardis A, Bischof H, Pinz A (eds) Computer vision—ECCV 2006, 9th European conference on computer vision, Graz, Austria, May 7–13, 2006, proceedings, part II, lecture notes in computer science, vol 3952. Springer, pp 428–441. https://doi.org/10.1007/11744047_33
Dantone M, Gall J, Leistner C, Gool LV (2013) Human pose estimation using body parts dependent joint regressors. In: Proceedings of the IEEE conference on computer vision and pattern recognition
Decroos T, Schütte K, Beéck TOD, Vanwanseele B, Davis J (2018) AMIE: automatic monitoring of indoor exercises. In: Machine learning and knowledge discovery in databases—European conference, ECML PKDD 2018, Dublin, Ireland, September 10–14, 2018, proceedings, part III. Springer. https://doi.org/10.1007/978-3-030-10997-4_26
Dempster A, Petitjean F, Webb GI (2019a) Rocket: exceptionally fast and accurate time series classification using random convolutional kernels. arXiv:1910.13051
Dempster A, Petitjean F, Webb GI (2020) Rocket: exceptionally fast and accurate time series classification using random convolutional kernels. Data Min Knowl Discov 34(5):1454–1495. https://doi.org/10.1007/s10618-020-00701-z
Dempster A, Schmidt DF, Webb GI (2021) Minirocket: a very fast (almost) deterministic transform for time series classification. KDD21 abs/2012.08791 arXiv:2012.08791
Dhariyal B, Nguyen TL, Gsponer S, Ifrim G (2020) An examination of the state-of-the-art for multivariate time series classification. In: Workshop on large scale industrial time series analysis, ICDM 2020
Dhariyal B, Le Nguyen T, Ifrim G (2021) Fast channel selection for scalable multivariate time series classification. In: ECMLPKDD
Espinosa HG, Lee J, James DA (2015) The inertial sensor: a base platform for wider adoption in sports science applications. J Fit Res 4(1)
Fan H, Li Y, Xiong B, Lo WY, Feichtenhofer C (2020) Pyslowfast. https://github.com/facebookresearch/slowfast
Fang HS, Xie S, Tai YW, Lu C (2017) RMPE: regional multi-person pose estimation. In: ICCV
Faro A, Rui P (2016) Use of open-source technology to teach biomechanics. EDUCAŢIE FIZICĂ ŞI SPORT p 18
Fathallah Elalem S (2016) Evaluation of hammer throw technique for faculty of physical education students using dartfish technology. J Appl Sports Sci 6(2):80–87
Fawaz HI, Forestier G, Weber J, Idoumghar L, Muller PA (2019) Deep learning for time series classification: a review. Data Min Knowl Discov 33(4):917–963. https://doi.org/10.1007/s10618-019-00619-1
Feichtenhofer C (2020) X3D: expanding architectures for efficient video recognition. CoRR abs/2004.04730, arXiv:2004.04730
Feichtenhofer C, Fan H, Malik J, He K (2019) Slowfast networks for video recognition. In: 2019 IEEE/CVF international conference on computer vision, ICCV 2019, Seoul, Korea (South), October 27–November 2, 2019. IEEE, pp 6201–6210. https://doi.org/10.1109/ICCV.2019.00630
Giggins OM, Caulfield B (2015) Proposed design approach for an interactive feedback technology support in rehabilitation. Association for Computing Machinery, New York, NY, USA, REHAB ’15. https://doi.org/10.1145/2838944.2838953
Girshick RB, Donahue J, Darrell T, Malik J (2013) Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR abs/1311.2524 arXiv:1311.2524
Gkioxari G, Arbelaez P, Bourdev LD, Malik J (2013) Articulated pose estimation using discriminative armlet classifiers. In: Proceedings of the IEEE conference on computer vision and pattern recognition
He K, Gkioxari G, Dollár P, Girshick R (2017) Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision, pp 2961–2969
Hinojosa C, Niebles JC, Arguello H (2021) Learning privacy-preserving optics for human pose estimation. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 2573–2582
Huang S, Gong M, Tao D (2017) A coarse-fine network for keypoint localization. In: Proceedings of the IEEE international conference on computer vision
Insafutdinov E, Pishchulin L, Andres B, Andriluka M, Schiele B (2016) Deepercut: a deeper, stronger, and faster multi-person pose estimation model
Ji S, Xu W, Yang M, Yu K (2010) 3d convolutional neural networks for human action recognition. In: Fürnkranz J, Joachims T (eds) Proceedings of the 27th international conference on machine learning (ICML-10), June 21–24, 2010, Haifa, Israel. Omnipress, pp 495–502. https://icml.cc/Conferences/2010/papers/100.pdf
Kay W, Carreira J, Simonyan K, Zhang B, Hillier C, Vijayanarasimhan S, Viola F, Green T, Back T, Natsev P, Suleyman M, Zisserman A (2017) The kinetics human action video dataset. CoRR abs/1705.06950 arXiv:1705.06950
Krizhevsky A, Sutskever I, Hinton GE (2012a) Imagenet classification with deep convolutional neural networks. In: Bartlett PL, Pereira FCN, Burges CJC, Bottou L, Weinberger KQ (eds) Advances in neural information processing systems 25: 26th annual conference on neural information processing systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States, pp 1106–1114
Krizhevsky A, Sutskever I, Hinton GE (2012b) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105
Kwon H, Tong C, Haresamudram H, Gao Y, Abowd GD, Lane ND, Plötz T (2020) Imutube: automatic extraction of virtual on-body accelerometry from video for human activity recognition. Proc ACM Interact Mob Wearable Ubiquitous Technol 4(3):87. https://doi.org/10.1145/3411841
Löning M, Bagnall A, Ganesh S, Kazakov V, Lines J, Király FJ (2019) sktime: a unified interface for machine learning with time series. In: Workshop on systems for ML at NeurIPS 2019
Moral-Muñoz JA, Esteban-Moreno B, Arroyo-Morales M, Cobo MJ, Herrera-Viedma E (2015) Agreement between face-to-face and free software video analysis for assessing hamstring flexibility in adolescents. J Strength Cond Res 29(9):2661–2665
Nakano N, Sakura T, Ueda K, Omura L, Kimura A, Iino Y, Fukashiro S, Yoshioka S (2020) Evaluation of 3d markerless motion capture accuracy using openpose with multiple video cameras. Front Sports Act Living 2:50. https://doi.org/10.3389/fspor.2020.00050
Newell A, Huang Z, Deng J (2017) Associative embedding: end-to-end learning for joint detection and grouping
O’Reilly M, Whelan D, Chanialidis C, Friel N, Delahunt E, Ward T, Caulfield B (2015) Evaluating squat performance with a single inertial measurement unit. In: 2015 IEEE 12th international conference on wearable and implantable body sensor networks (BSN). IEEE, pp 1–6
O’Reilly MA, Whelan DF, Ward TE, Delahunt E, Caulfield BM (2017) Classification of deadlift biomechanics with wearable inertial measurement units. J Biomech 58:155–161
Osokin D (2018) Real-time 2d multi-person pose estimation on cpu: lightweight openpose. arXiv preprint arXiv:1811.12004
O’Reilly M, Caulfield B, Ward T, Johnston W, Doherty C (2018) Wearable inertial sensor systems for lower limb exercise detection and evaluation: a systematic review. Sports Med 48(5):1221–1246
Papandreou G, Zhu T, Kanazawa N, Toshev A, Tompson J, Bregler C, Murphy KP (2017) Towards accurate multi-person pose estimation in the wild
Pasos-Ruiz A, Flynn M, Bagnall A (2020) Benchmarking multivariate time series classification algorithms. arxiv:2007.13156
Peng X, Wang L, Wang X, Qiao Y (2014) Bag of visual words and fusion methods for action recognition: comprehensive study and good practice. CoRR abs/1405.4506
Pishchulin L, Insafutdinov E, Tang S, Andres B, Andriluka M, Gehler PV, Schiele B (2015) Deepcut: joint subset partition and labeling for multi person pose estimation
Puig-Diví A, Escalona-Marfil C, Padullés-Riu JM, Busquets A, Padullés-Chando X, Marcos-Ruiz D (2019) Validity and reliability of the kinovea program in obtaining angles and distances using coordinates in 4 perspectives. PloS one 14(6):e0216448
Ressman J, Rasmussen-Barr E, Grooten WJA (2020) Reliability and validity of a novel kinect-based software program for measuring a single leg squat. BMC Sports Sci Med Rehabil 12:1–12
Richter C, O’Reilly M, Delahunt E (2021) Machine learning in sports science: challenges and opportunities. Sports Biomech. https://doi.org/10.1080/14763141.2021.1910334
Ruiz AP, Flynn M, Large J, Middlehurst M, Bagnall AJ (2021) The great multivariate time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Min Knowl Discov 35(2):401–449. https://doi.org/10.1007/s10618-020-00727-3
Sánchez J, Perronnin F, Mensink T, Verbeek JJ (2013) Image classification with the fisher vector: theory and practice. Int J Comput Vis 105(3):222–245. https://doi.org/10.1007/s11263-013-0636-x
Sapp B, Taskar B (2013) MODEC: multimodal decomposable models for human pose estimation
Sigurdsson GA, Varol G, Wang X, Farhadi A, Laptev I, Gupta A (2016) Hollywood in homes: crowdsourcing data collection for activity understanding. In: Leibe B, Matas J, Sebe N, Welling M (eds) Computer vision—ECCV 2016—14th European conference, Amsterdam, The Netherlands, October 11–14, 2016, proceedings, part I, lecture notes in computer science, vol 9905. Springer, pp 510–526. https://doi.org/10.1007/978-3-319-46448-0_31
Simonyan K, Zisserman A (2014) Two-stream convolutional networks for action recognition in videos. In: Ghahramani Z, Welling M, Cortes C, Lawrence ND, Weinberger KQ (eds) Advances in neural information processing systems 27: annual conference on neural information processing systems 2014, December 8–13 2014, Montreal, Quebec, Canada, pp 568–576
Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: Bengio Y, LeCun Y (eds) 3rd international conference on learning representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, conference track proceedings, arXiv:1409.1556
Singh A, Le BT, Le Nguyen T, Whelan D, O’Reilly M, Caulfield B, Ifrim G (2020) Interpretable classification of human exercise videos through pose estimation and multivariate time series analysis. In: 5th international workshop on health intelligence at AAAI. https://doi.org/10.1007/978-3-030-93080-6_14
Slembrouck M, Luong H, Gerlo J, Schütte K, Van Cauwelaert D, De Clercq D, Vanwanseele B, Veelaert P, Philips W (2020) Multiview 3d markerless human pose estimation from openpose skeletons. In: Blanc-Talon J, Delmas P, Philips W, Popescu D, Scheunders P (eds) Advanced concepts for intelligent vision systems
Soomro K, Zamir AR, Shah M (2012) UCF101: a dataset of 101 human actions classes from videos in the wild. CoRR abs/1212.0402 arXiv:1212.0402
Stamm O, Heimann-Steinert A (2020) Accuracy of monocular two-dimensional pose estimation compared with a reference standard for kinematic multiview analysis: Validation study. JMIR Mhealth Uhealth 8(12):e19608
Tomar S (2006) Converting video formats with ffmpeg. Linux J 2006(146):10
Tran D, Bourdev LD, Fergus R, Torresani L, Paluri M (2015) Learning spatiotemporal features with 3d convolutional networks. In: 2015 IEEE international conference on computer vision, ICCV 2015, Santiago, Chile, December 7–13, 2015. IEEE Computer Society, pp 4489–4497. https://doi.org/10.1109/ICCV.2015.510
Trejo EW, Yuan P (2018) Recognition of yoga poses through an interactive system with kinect device. In: 2018 2nd international conference on robotics and automation sciences (ICRAS), pp 1–5. https://doi.org/10.1109/ICRAS.2018.8443267
Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E, Peterson P, Weckesser W, Bright J, van der Walt SJ, Brett M, Wilson J, Millman KJ, Mayorov N, Nelson ARJ, Jones E, Kern R, Larson E, Carey CJ, Polat FY, Moore EW, VanderPlas J, Laxalde D, Perktold J, Cimrman R, Henriksen I, Quintero EA, Harris CR, Archibald AM, Ribeiro AH, Pedregosa F, van Mulbregt P, SciPy 10 Contributors, (2020) SciPy 1.0: fundamental algorithms for scientific computing in python. Nat Methods 17:261–272. https://doi.org/10.1038/s41592-019-0686-2
Wang H, Schmid C (2013) Action recognition with improved trajectories. In: IEEE international conference on computer vision, ICCV 2013, Sydney, Australia, December 1–8, 2013. IEEE Computer Society, pp 3551–3558. https://doi.org/10.1109/ICCV.2013.441
Wang X, Girshick RB, Gupta A, He K (2017) Non-local neural networks. CoRR abs/1711.07971 arXiv:1711.07971
Whelan D, O’Reilly M, Huang B, Giggins O, Kechadi T, Caulfield B (2016) Leveraging imu data for accurate exercise performance classification and musculoskeletal injury risk screening. In: 2016 38th annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, pp 659–662
Whelan D, Delahunt E, O’Reilly M, Hernandez B, Caulfield B (2019) Determining interrater and intrarater levels of agreement in students and clinicians when visually evaluating movement proficiency during screening assessments. Phys Ther 99(4):478–486
Zerpa C, Lees C, Patel P, Pryzsucha E, Patel P (2015) The use of microsoft kinect for human movement analysis. Int J Sports Sci 5(4):120–127
Acknowledgements
This publication has emanated from research supported in part by a grant from Science Foundation Ireland under Grant numbers [12/RC/2289_P2, SFI/16/RC/3835].
Additional information
Responsible editor: Indre Zliobaite.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
1.1 Time series data pre-processing and classification
We consider here the impact of re-sampling the time series length and of data normalization. For faster execution, we use a single data split consisting of only the y coordinates of 8 body parts.
- Re-sampling: each time series is re-sampled to the same length, since most time series classifiers cannot handle variable-length data.
- Normalization: the magnitude of the signal is important for this application, as shown in the experiment below.
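The re-sampling step above can be sketched with SciPy (which the paper already uses); the choice of SciPy's FFT-based `resample` here is an assumption for illustration, as the paper does not specify the exact re-sampling method.

```python
import numpy as np
from scipy.signal import resample

def resample_channels(mts, length):
    """Re-sample every channel of a (n_channels, n_timepoints) series to
    a fixed target length, since most time series classifiers require
    equal-length inputs."""
    return np.stack([resample(ch, length) for ch in mts])

# One synthetic channel of 161 time points, re-sampled to 100.
series = np.sin(np.linspace(0, 4 * np.pi, 161))[None, :]
fixed = resample_channels(series, 100)
print(fixed.shape)  # (1, 100)
```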
Results and discussion Table 17 shows the impact of changing the re-sampling length on BodyMTS accuracy. Length re-sampling has almost no effect on classifier accuracy, and reducing the length of the data also reduces the execution time of BodyMTS. We also experimented with the parameters of the ROCKET classifier, such as the number of kernels (10,000) and normalization (False). While changing the number of kernels had no impact on accuracy, setting normalization to False led to a large increase in accuracy, as shown in Table 18. We believe this is because normalization discards magnitude information, which is a key element for classification in this type of problem. We further experimented with converting the videos to grayscale and observed no change in the accuracy of BodyMTS.
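The accuracy gain from disabling normalization is consistent with z-normalization discarding signal magnitude. A minimal numpy illustration (the amplitudes are made up purely for illustration): two signals with the same shape but different range of motion become indistinguishable once each is z-normalized.

```python
import numpy as np

def znorm(x):
    """Per-series z-normalization: zero mean, unit standard deviation."""
    return (x - x.mean()) / x.std()

# Two synthetic executions with identical wave shape but different
# amplitude (range of motion).
small_rom = np.sin(np.linspace(0, 2 * np.pi, 50)) * 1.0
large_rom = np.sin(np.linspace(0, 2 * np.pi, 50)) * 3.0

# The raw signals are clearly separable by amplitude...
print(np.abs(small_rom - large_rom).max() > 1.0)  # True

# ...but after z-normalization they coincide: magnitude is lost.
print(np.allclose(znorm(small_rom), znorm(large_rom)))  # True
```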
1.2 Quantifying video quality noise using video quality metrics
We further quantify the impact of noise on classifier accuracy using video quality metrics. We use three scores: VMAF (Aaron et al. 2015), PSNR, and SSIM, computed with FFmpeg for different CRF values. We report the average metric score over all clips for each CRF value. Table 19 shows the average score over all clips and the accuracy obtained for each CRF value.
Results and discussion We observe that the VMAF score is more useful than the other scores for estimating video quality; a higher VMAF indicates better quality. There is a large drop in the average VMAF score as the CRF value changes. Based on these results, a VMAF threshold of around 90 can be set and used to exclude videos whose VMAF score falls below 90.
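The FFmpeg invocation for scoring a compressed clip against its reference with libvmaf can be sketched as below. This is an illustrative helper, not the authors' script: the filenames are placeholders, and the `libvmaf` filter options (`log_fmt`, `log_path`) assume an FFmpeg build compiled with libvmaf support.

```python
import shlex

def vmaf_command(distorted, reference, log_path="vmaf.json"):
    """Build an FFmpeg command that scores `distorted` against
    `reference` using the libvmaf filter, writing per-frame and pooled
    scores to a JSON log. Requires FFmpeg built with libvmaf."""
    return (
        f"ffmpeg -i {shlex.quote(distorted)} -i {shlex.quote(reference)} "
        f"-lavfi libvmaf=log_fmt=json:log_path={shlex.quote(log_path)} "
        "-f null -"
    )

# Placeholder filenames: a CRF-28 re-encode scored against the original.
cmd = vmaf_command("crf28.mp4", "original.mp4")
print(cmd)
```

PSNR and SSIM can be computed analogously with FFmpeg's `psnr` and `ssim` filters in place of `libvmaf`.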
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Singh, A., Bevilacqua, A., Nguyen, T.L. et al. Fast and robust video-based exercise classification via body pose tracking and scalable multivariate time series classifiers. Data Min Knowl Disc 37, 873–912 (2023). https://doi.org/10.1007/s10618-022-00895-4