{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,19]],"date-time":"2025-03-19T13:05:25Z","timestamp":1742389525173},"reference-count":51,"publisher":"Wiley","issue":"5","license":[{"start":{"date-parts":[[2024,9,23]],"date-time":"2024-09-23T00:00:00Z","timestamp":1727049600000},"content-version":"vor","delay-in-days":22,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"funder":[{"DOI":"10.13039\/501100004543","name":"China Scholarship Council","doi-asserted-by":"publisher","award":["202208420109"],"id":[{"id":"10.13039\/501100004543","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62202346"],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Computer Animation & Virtual"],"published-print":{"date-parts":[[2024,9]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Wearable human action recognition (HAR) has practical applications in daily life. However, traditional HAR methods solely focus on identifying user movements, lacking interactivity and user engagement. This paper proposes a novel immersive HAR method called MovPosVR. Virtual reality (VR) technology is employed to create realistic scenes and enhance the user experience. To improve the accuracy of user action recognition in immersive HAR, a multi\u2010scale spatio\u2010temporal attention network (MSSTANet) is proposed. The network combines the convolutional residual squeeze and excitation (CRSE) module with the multi\u2010branch convolution and long short\u2010term memory (MCLSTM) module to extract spatio\u2010temporal features and automatically select relevant features from action signals. Additionally, a multi\u2010head attention with shared linear mechanism (MHASLM) module is designed to facilitate information interaction, further enhancing feature extraction and improving accuracy. The MSSTANet network achieves superior performance, with accuracy rates of 99.33% and 98.83% on the publicly available WISDM and PAMAP2 datasets, respectively, surpassing state\u2010of\u2010the\u2010art networks. Our method showcases the potential to display user actions and position information in a virtual world, enriching user experiences and interactions across diverse application scenarios.<\/jats:p>","DOI":"10.1002\/cav.2293","type":"journal-article","created":{"date-parts":[[2024,9,24]],"date-time":"2024-09-24T03:44:27Z","timestamp":1727149467000},"update-policy":"http:\/\/dx.doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Human action recognition in immersive virtual reality based on multi\u2010scale spatio\u2010temporal attention network"],"prefix":"10.1002","volume":"35","author":[{"ORCID":"http:\/\/orcid.org\/0009-0005-1550-1531","authenticated-orcid":false,"given":"Zhiyong","family":"Xiao","sequence":"first","affiliation":[{"name":"School of Computer Science and Artificial Intelligence Wuhan Textile University Wuhan China"}]},{"given":"Yukun","family":"Chen","sequence":"additional","affiliation":[{"name":"School of Computer Science and Artificial Intelligence Wuhan Textile University Wuhan China"}]},{"given":"Xinlei","family":"Zhou","sequence":"additional","affiliation":[{"name":"School of Computer Science and Artificial Intelligence Wuhan Textile University Wuhan China"}]},{"given":"Mingwei","family":"He","sequence":"additional","affiliation":[{"name":"School of Electrical and Electronic Engineering Nanyang Technological University Singapore Singapore"}]},{"given":"Li","family":"Liu","sequence":"additional","affiliation":[{"name":"School of Computer Science and Artificial Intelligence Wuhan Textile University Wuhan China"}]},{"ORCID":"http:\/\/orcid.org\/0000-0001-8252-5131","authenticated-orcid":false,"given":"Feng","family":"Yu","sequence":"additional","affiliation":[{"name":"School of Computer Science and Artificial Intelligence Wuhan Textile University Wuhan China"},{"name":"School of Electrical and Electronic Engineering Nanyang Technological University Singapore Singapore"},{"name":"Engineering Research Center of Hubei Province for Clothing Information Wuhan China"}]},{"given":"Minghua","family":"Jiang","sequence":"additional","affiliation":[{"name":"School of Computer Science and Artificial Intelligence Wuhan Textile University Wuhan China"},{"name":"Engineering Research Center of Hubei Province for Clothing Information Wuhan China"}]}],"member":"311","published-online":{"date-parts":[[2024,9,23]]},"reference":[{"key":"e_1_2_10_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCE.2023.3306206"},{"key":"e_1_2_10_3_1","unstructured":"WuJ VorobeychikY.Robust deep reinforcement learning through bootstrapped opportunistic curriculum.2022:24177\u201324211."},{"key":"e_1_2_10_4_1","doi-asserted-by":"crossref","unstructured":"ZuoC WangY ZhanL et al.Loose inertial poser: motion capture with IMU\u2010attached loose\u2010wear jacket.2024.","DOI":"10.1109\/CVPR52733.2024.00215"},{"key":"e_1_2_10_5_1","first-page":"77277","article-title":"Self\u2010adaptive motion tracking against on\u2010body displacement of flexible sensors","volume":"36","author":"Zuo C","year":"2024","journal-title":"Adv Neural Inf Process Syst"},{"key":"e_1_2_10_6_1","doi-asserted-by":"crossref","unstructured":"YiX ZhouY HabermannM et al.Physical inertial poser (pip): Physics\u2010aware real\u2010time human motion tracking from sparse inertial sensors.2022:13167\u201313178.","DOI":"10.1109\/CVPR52688.2022.01282"},{"key":"e_1_2_10_7_1","doi-asserted-by":"crossref","unstructured":"JiangY YeY GopinathD WonJ WinklerAW LiuCK.Transformer inertial poser: Real\u2010time human motion reconstruction from sparse imus with simultaneous terrain generation.2022:1\u20139.","DOI":"10.1145\/3550469.3555428"},{"key":"e_1_2_10_8_1","doi-asserted-by":"publisher","DOI":"10.1002\/cav.1993"},{"key":"e_1_2_10_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2021.3091004"},{"key":"e_1_2_10_10_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00500-022-07240-3"},{"key":"e_1_2_10_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/TG.2021.3064749"},{"key":"e_1_2_10_12_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jbi.2018.11.012"},{"key":"e_1_2_10_13_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jvcir.2018.11.039"},{"key":"e_1_2_10_14_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11277-017-4954-0"},{"key":"e_1_2_10_15_1","doi-asserted-by":"publisher","DOI":"10.1080\/10447318.2018.1469710"},{"key":"e_1_2_10_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2022.3165875"},{"key":"e_1_2_10_17_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2023.106150"},{"key":"e_1_2_10_18_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2021.108487"},{"key":"e_1_2_10_19_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2021.107728"},{"key":"e_1_2_10_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2024.3394244"},{"key":"e_1_2_10_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2021.3082180"},{"key":"e_1_2_10_22_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.compeleceng.2020.106603"},{"key":"e_1_2_10_23_1","doi-asserted-by":"publisher","DOI":"10.1049\/sil2.12201"},{"key":"e_1_2_10_24_1","doi-asserted-by":"crossref","unstructured":"ZhouY GengX ShenT PeiJ ZhangW JiangD.Modeling event\u2010pair relations in external knowledge graphs for script reasoning. Findings of the Association for Computational Linguistics: ACL\u2010IJCNLP 2021.2021.","DOI":"10.18653\/v1\/2021.findings-acl.403"},{"key":"e_1_2_10_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCE.2024.3377377"},{"key":"e_1_2_10_26_1","doi-asserted-by":"publisher","DOI":"10.62836\/iaet.v2i1.162"},{"key":"e_1_2_10_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2022.3148431"},{"key":"e_1_2_10_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIE.2022.3161812"},{"key":"e_1_2_10_29_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2022.116764"},{"key":"e_1_2_10_30_1","first-page":"820","article-title":"MLCNNwav: multi\u2010level convolutional neural network with wavelet transformations for sensor\u2010based human activity recognition","author":"Dahou A","year":"2023","journal-title":"IEEE Internet Things J"},{"key":"e_1_2_10_31_1","unstructured":"LyuW ZhengS MaT ChenC.A study of the attention abnormality in trojaned berts. arXiv preprint arXiv:2205.08305.2022."},{"key":"e_1_2_10_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2022.3224947"},{"key":"e_1_2_10_33_1","doi-asserted-by":"publisher","DOI":"10.3233\/JIFS-191674"},{"key":"e_1_2_10_34_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41467-020-19523-0"},{"issue":"3","key":"e_1_2_10_35_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3570613","article-title":"Leveraging the properties of mmwave signals for 3d finger motion tracking for interactive iot applications","volume":"6","author":"Liu Y","year":"2022","journal-title":"Proc ACM Meas Anal Comput Syst"},{"key":"e_1_2_10_36_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cmpb.2017.08.008"},{"key":"e_1_2_10_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSAC.2020.3020600"},{"key":"e_1_2_10_38_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41528-020-00092-7"},{"key":"e_1_2_10_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2022.3152367"},{"key":"e_1_2_10_40_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.asr.2023.05.005"},{"key":"e_1_2_10_41_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2021.108159"},{"key":"e_1_2_10_42_1","first-page":"2118","article-title":"Unsupervised deep anomaly detection for multi\u2010sensor time\u2010series signals","author":"Zhang Y","year":"2021","journal-title":"IEEE Trans Knowl Data Eng"},{"key":"e_1_2_10_43_1","first-page":"2925","article-title":"Low\u2010rank tensor regularized graph fuzzy learning for multi\u2010view data processing","author":"Pan B","year":"2023","journal-title":"IEEE Trans Consum Electron"},{"key":"e_1_2_10_44_1","doi-asserted-by":"publisher","DOI":"10.1093\/bjs\/znac015"},{"key":"e_1_2_10_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2020.3045135"},{"key":"e_1_2_10_46_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2021.107671"},{"key":"e_1_2_10_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2022.3193148"},{"key":"e_1_2_10_48_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2023.110954"},{"key":"e_1_2_10_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2023.3242603"},{"key":"e_1_2_10_50_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2022.119419"},{"key":"e_1_2_10_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIM.2023.3240198"},{"key":"e_1_2_10_52_1","unstructured":"LyuW DongX WongR et al.A multimodal transformer: Fusing clinical notes with structured EHR data for interpretable in\u2010hospital mortality prediction.2022;2022:719."}],"container-title":["Computer Animation and Virtual Worlds"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1002\/cav.2293","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,9,26]],"date-time":"2024-09-26T12:19:36Z","timestamp":1727353176000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/cav.2293"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,9]]},"references-count":51,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2024,9]]}},"alternative-id":["10.1002\/cav.2293"],"URL":"https:\/\/doi.org\/10.1002\/cav.2293","archive":["Portico"],"relation":{},"ISSN":["1546-4261","1546-427X"],"issn-type":[{"type":"print","value":"1546-4261"},{"type":"electronic","value":"1546-427X"}],"subject":[],"published":{"date-parts":[[2024,9]]},"assertion":[{"value":"2024-06-17","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-08-07","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-09-23","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}