{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,9,19]],"date-time":"2024-09-19T16:17:27Z","timestamp":1726762647867},"reference-count":91,"publisher":"Association for Computing Machinery (ACM)","issue":"4","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2021,12,27]]},"abstract":"Accurate recognition of facial expressions and emotional gestures is promising to understand the audience's feedback and engagement on the entertainment content. Existing methods are primarily based on various cameras or wearable sensors, which either raise privacy concerns or demand extra devices. To this aim, we propose a novel ubiquitous sensing system based on the commodity microphone array --- SonicFace, which provides an accessible, unobtrusive, contact-free, and privacy-preserving solution to monitor the user's emotional expressions continuously without playing hearable sound. SonicFace utilizes a pair of speaker and microphone array to recognize various fine-grained facial expressions and emotional hand gestures by emitted ultrasound and received echoes. Based on a set of experimental evaluations, the accuracy of recognizing 6 common facial expressions and 4 emotional gestures can reach around 80%. Besides, the extensive system evaluations with distinct configurations and an extended real-life case study have demonstrated the robustness and generalizability of the proposed SonicFace system.<\/jats:p>","DOI":"10.1145\/3494988","type":"journal-article","created":{"date-parts":[[2021,12,30]],"date-time":"2021-12-30T17:40:33Z","timestamp":1640886033000},"page":"1-33","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["SonicFace"],"prefix":"10.1145","volume":"5","author":[{"given":"Yang","family":"Gao","sequence":"first","affiliation":[{"name":"Northwestern University, Department of Computer Science, USA"}]},{"given":"Yincheng","family":"Jin","sequence":"additional","affiliation":[{"name":"The State University of New York at Buffalo, Department of Computer Science and Engineering, Buffalo, NY, USA"}]},{"given":"Seokmin","family":"Choi","sequence":"additional","affiliation":[{"name":"The State University of New York at Buffalo, Department of Computer Science and Engineering, USA"}]},{"given":"Jiyang","family":"Li","sequence":"additional","affiliation":[{"name":"The State University of New York at Buffalo, Department of Computer Science and Engineering, USA"}]},{"given":"Junjie","family":"Pan","sequence":"additional","affiliation":[{"name":"South China University of Technology, School of Electronic and Information Engineering, China"}]},{"given":"Lin","family":"Shu","sequence":"additional","affiliation":[{"name":"South China University of Technology, School of Future Technology, School of Electronic and Information Engineering, China"}]},{"given":"Chi","family":"Zhou","sequence":"additional","affiliation":[{"name":"The State University of New York at Buffalo, Department of Industrial and Systems Engineering, USA"}]},{"given":"Zhanpeng","family":"Jin","sequence":"additional","affiliation":[{"name":"The State University of New York at Buffalo, Department of Computer Science and Engineering, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,12,30]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"https:\/\/www.apple.com\/homepod-2018\/. 
[Online","year":"2021","unstructured":"2018. https:\/\/www.apple.com\/homepod-2018\/. [Online ; accessed 20- July - 2021 ]. 2018. https:\/\/www.apple.com\/homepod-2018\/. [Online; accessed 20-July-2021]."},{"key":"e_1_2_1_2_1","volume-title":"https:\/\/www.nielsen.com\/us\/en\/solutions\/measurement\/television\/. [Online","year":"2021","unstructured":"2020. https:\/\/www.nielsen.com\/us\/en\/solutions\/measurement\/television\/. [Online ; accessed 23- April - 2021 ]. 2020. https:\/\/www.nielsen.com\/us\/en\/solutions\/measurement\/television\/. [Online; accessed 23-April-2021]."},{"key":"e_1_2_1_3_1","volume-title":"https:\/\/www.rottentomatoes.com\/. [Online","year":"2021","unstructured":"2020. https:\/\/www.rottentomatoes.com\/. [Online ; accessed 23- April - 2021 ]. 2020. https:\/\/www.rottentomatoes.com\/. [Online; accessed 23-April-2021]."},{"key":"e_1_2_1_4_1","volume-title":"https:\/\/www.imdb.com\/. [Online","year":"2021","unstructured":"2020. https:\/\/www.imdb.com\/. [Online ; accessed 23- April - 2021 ]. 2020. https:\/\/www.imdb.com\/. [Online; accessed 23-April-2021]."},{"key":"e_1_2_1_5_1","volume-title":"https:\/\/www.techhive.com\/article\/3516312\/amazon-echo-studio-review.html. [Online","year":"2021","unstructured":"2020. https:\/\/www.techhive.com\/article\/3516312\/amazon-echo-studio-review.html. [Online ; accessed 20- July - 2021 ]. 2020. https:\/\/www.techhive.com\/article\/3516312\/amazon-echo-studio-review.html. [Online; accessed 20-July-2021]."},{"key":"e_1_2_1_6_1","volume-title":"https:\/\/github.com\/WIKI2020\/FacePose_pytorch. [Online","year":"2021","unstructured":"2020. https:\/\/github.com\/WIKI2020\/FacePose_pytorch. [Online ; accessed 10- May - 2021 ]. 2020. https:\/\/github.com\/WIKI2020\/FacePose_pytorch. [Online; accessed 10-May-2021]."},{"key":"e_1_2_1_7_1","volume-title":"https:\/\/go.affectiva.com\/affdex-for-market-research. [Online","year":"2021","unstructured":"2021. https:\/\/go.affectiva.com\/affdex-for-market-research. [Online ; accessed 19- Oct- 2021 ]. 2021. https:\/\/go.affectiva.com\/affdex-for-market-research. [Online; accessed 19-Oct-2021]."},{"key":"e_1_2_1_8_1","volume-title":"https:\/\/imotions.com\/blog\/how-facial-expressions-analysis-fea-can-be-done-remotely\/. [Online","year":"2021","unstructured":"2021. https:\/\/imotions.com\/blog\/how-facial-expressions-analysis-fea-can-be-done-remotely\/. [Online ; accessed 19- Oct- 2021 ]. 2021. https:\/\/imotions.com\/blog\/how-facial-expressions-analysis-fea-can-be-done-remotely\/. [Online; accessed 19-Oct-2021]."},{"key":"e_1_2_1_9_1","volume-title":"https:\/\/www.digitaltrends.com\/smart-home-reviews\/nest-mini-review-2\/. [Online","year":"2021","unstructured":"2021. https:\/\/www.digitaltrends.com\/smart-home-reviews\/nest-mini-review-2\/. [Online ; accessed 20- July - 2021 ]. 2021. https:\/\/www.digitaltrends.com\/smart-home-reviews\/nest-mini-review-2\/. 
[Online; accessed 20-July-2021]."},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2018.8461912"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1016\/S1018-3639(18)30850-X"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2021.3077533"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijhcs.2015.01.006"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1080\/01587919.2020.1869527"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3412382.3458268"},{"key":"e_1_2_1_16_1","volume-title":"WiFace: Facial Expression Recognition Using Wi-Fi Signals","author":"Chen Yanjiao","year":"2020","unstructured":"Yanjiao Chen , Runmin Ou , Zhiyang Li , and Kaishun Wu. 2020. WiFace: Facial Expression Recognition Using Wi-Fi Signals . IEEE Transactions on Mobile Computing ( 2020 ). Yanjiao Chen, Runmin Ou, Zhiyang Li, and Kaishun Wu. 2020. WiFace: Facial Expression Recognition Using Wi-Fi Signals. IEEE Transactions on Mobile Computing (2020)."},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.3389\/fpsyg.2020.00920"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2018.2874986"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.637"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1186\/s40561-018-0080-z"},{"key":"e_1_2_1_21_1","volume-title":"Methods for measuring facial action. Handbook of methods in nonverbal behavior research","author":"Ekman Paul","year":"1982","unstructured":"Paul Ekman . 1982. Methods for measuring facial action. Handbook of methods in nonverbal behavior research ( 1982 ), 45--90. Paul Ekman. 1982. Methods for measuring facial action. Handbook of methods in nonverbal behavior research (1982), 45--90."},{"key":"e_1_2_1_22_1","volume-title":"Basic emotions. Handbook of cognition and emotion 98, 45--60","author":"Ekman Paul","year":"1999","unstructured":"Paul Ekman . 1999. Basic emotions. Handbook of cognition and emotion 98, 45--60 ( 1999 ), 16. Paul Ekman. 1999. Basic emotions. Handbook of cognition and emotion 98, 45--60 (1999), 16."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1016\/B978-0-08-016643-8.50024-0"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACII.2013.19"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-014-0500-0"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411830"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/2647868.2655034"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/1738826.1738914"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-016-0842-x"},{"key":"e_1_2_1_31_1","volume-title":"PFLD: A Practical Facial Landmark Detector. arXiv:1902.10859 [cs.CV]","author":"Guo Xiaojie","year":"2019","unstructured":"Xiaojie Guo , Siyuan Li , Jinke Yu , Jiawan Zhang , Jiayi Ma , Lin Ma , Wei Liu , and Haibin Ling . 2019 . PFLD: A Practical Facial Landmark Detector. arXiv:1902.10859 [cs.CV] Xiaojie Guo, Siyuan Li, Jinke Yu, Jiawan Zhang, Jiayi Ma, Lin Ma, Wei Liu, and Haibin Ling. 2019. PFLD: A Practical Facial Landmark Detector. 
{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3385956.3422122"},{"key":"e_1_2_1_33_1","volume-title":"Artificial Intelligence and Statistics","author":"Hauberg S\u00f8ren","year":"2016","unstructured":"S\u00f8ren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John Fisher, and Lars Hansen. 2016. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. In Artificial Intelligence and Statistics. PMLR, 342--350."},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICAIIC48513.2020.9065010"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300245"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300506"},{"key":"e_1_2_1_37_1","doi-asserted-by":"crossref","unstructured":"Carroll E Izard. 1994. Innate and universal facial expressions: evidence from developmental and cross-cultural research. (1994).","DOI":"10.1037\/0033-2909.115.2.288"},{"key":"e_1_2_1_38_1","volume-title":"Human emotions","author":"Izard Carroll E","unstructured":"Carroll E Izard. 2013. Human emotions. Springer Science & Business Media."},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3241539.3241548"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3372224.3380900"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445588"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-010-0632-x"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2019.8682322"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3384419.3430780"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2020.2981446"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/2813524.2813527"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3314404"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/TBME.2010.2048568"},{"key":"e_1_2_1_49_1","unstructured":"Helen Coster and Lisa Richwine. 2020. Analysis: Fewer movies in theaters? Big Media turns focus to streaming video. https:\/\/www.reuters.com\/article\/walt-disney-restructuring-streaming\/analysis-fewer-movies-in-theaters-big-media-turns-focus-to-streaming-video-idUSKBN26Z09E. [Online; accessed 10-May-2021]."},
[Online; accessed 10-May-2021]."},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3432235"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.3390\/s18103379"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3424739"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3300061.3345439"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2014.2384198"},{"key":"e_1_2_1_55_1","volume-title":"Voice recognition algorithms using mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques. arXiv preprint arXiv:1003.4083","author":"Muda Lindasalwa","year":"2010","unstructured":"Lindasalwa Muda , Mumtaj Begam , and Irraivan Elamvazuthi . 2010. Voice recognition algorithms using mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques. arXiv preprint arXiv:1003.4083 ( 2010 ). Lindasalwa Muda, Mumtaj Begam, and Irraivan Elamvazuthi. 2010. Voice recognition algorithms using mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques. arXiv preprint arXiv:1003.4083 (2010)."},{"key":"e_1_2_1_56_1","unstructured":"Walter Murch. 2001. In the Blink of an Eye. Vol. 995. Silman-James Press Los Angeles. Walter Murch. 2001. In the Blink of an Eye. Vol. 995. Silman-James Press Los Angeles."},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2019.2902091"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2017.2723011"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/WACV.2014.6835987"},{"key":"e_1_2_1_60_1","volume-title":"Proceedings of the 28th International Conference on International Conference on Machine Learning","author":"Ngiam Jiquan","unstructured":"Jiquan Ngiam , Aditya Khosla , Mingyu Kim , Juhan Nam , Honglak Lee , and Andrew Y. Ng . 2011. Multimodal Deep Learning . In Proceedings of the 28th International Conference on International Conference on Machine Learning ( Bellevue, Washington, USA) (ICML'11). Omnipress, Madison, WI, USA, 689--696. Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y. Ng. 2011. Multimodal Deep Learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning (Bellevue, Washington, USA) (ICML'11). Omnipress, Madison, WI, USA, 689--696."},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1109\/IoTDI49375.2020.00011"},{"key":"e_1_2_1_62_1","volume-title":"Proceedings of IEEE Region 10 International Conference on Electrical and Electronic Technology. TENCON 2001 (Cat. No. 01CH37239)","volume":"1","author":"Nwe Tin Lay","year":"2001","unstructured":"Tin Lay Nwe , Foo Say Wei , and Liyanage C De Silva . 2001 . Speech based emotion classification . In Proceedings of IEEE Region 10 International Conference on Electrical and Electronic Technology. TENCON 2001 (Cat. No. 01CH37239) , Vol. 1 . IEEE, 297--301. Tin Lay Nwe, Foo Say Wei, and Liyanage C De Silva. 2001. Speech based emotion classification. In Proceedings of IEEE Region 10 International Conference on Electrical and Electronic Technology. TENCON 2001 (Cat. No. 01CH37239), Vol. 1. IEEE, 297--301."},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3161174"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.specom.2003.10.002"},{"key":"e_1_2_1_65_1","volume-title":"Emotion elicitation using films. 
{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1037\/h0077714"},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1037\/0022-3514.76.5.805"},{"key":"e_1_2_1_68_1","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 1132--1137","author":"Saha Suman","year":"2018","unstructured":"Suman Saha, Rajitha Navarathna, Leonhard Helminger, and Romann M Weber. 2018. Unsupervised deep representations for learning audience facial behaviors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 1132--1137."},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1145\/3381010"},{"key":"e_1_2_1_70_1","volume-title":"Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012)","author":"Shibata Tatsuya","year":"2012","unstructured":"Tatsuya Shibata and Yohei Kijima. 2012. Emotion recognition modeling of sitting postures by using pressure sensors and accelerometers. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012). IEEE, 1124--1127."},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-019-52891-2"},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/2493432.2493508"},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.3390\/ijerph17010330"},{"key":"e_1_2_1_74_1","doi-asserted-by":"crossref","unstructured":"Bharath Sudharsan, Peter Corcoran, and Muhammad Intizar Ali. 2019. Smart Speaker Design and Implementation with Biometric Authentication and Advanced Voice Interaction Capability. In AICS. 305--316.","DOI":"10.1109\/ICoAC48765.2019.247125"},
305--316.","DOI":"10.1109\/ICoAC48765.2019.247125"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICoAC48765.2019.247125"},{"key":"e_1_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1509\/jmr.10.0207"},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.3389\/fpsyg.2016.00180"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/3136755.3136817"},{"key":"e_1_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1145\/3161188"},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2020.3032278"},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASSP.1984.1164400"},{"key":"e_1_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.1145\/3380981"},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411836"},{"key":"e_1_2_1_84_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351273"},{"key":"e_1_2_1_85_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445487"},{"key":"e_1_2_1_86_1","volume-title":"Harry Potter and The Chamber of Secrets - Best\/Funny Moments. https:\/\/www.youtube.com\/watch?v=d69uAdbrprY. [Online","year":"2021","unstructured":"Youtube. 2016. Harry Potter and The Chamber of Secrets - Best\/Funny Moments. https:\/\/www.youtube.com\/watch?v=d69uAdbrprY. [Online ; accessed 10- May - 2021 ]. Youtube. 2016. Harry Potter and The Chamber of Secrets - Best\/Funny Moments. https:\/\/www.youtube.com\/watch?v=d69uAdbrprY. [Online; accessed 10-May-2021]."},{"key":"e_1_2_1_87_1","doi-asserted-by":"publisher","DOI":"10.1145\/3314420"},{"key":"e_1_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448087"},{"key":"e_1_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376808"},{"key":"e_1_2_1_90_1","doi-asserted-by":"publisher","DOI":"10.1145\/2973750.2973762"},{"key":"e_1_2_1_91_1","doi-asserted-by":"publisher","DOI":"10.1145\/3236621"},{"key":"e_1_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1145\/3241539.3241575"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3494988","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,12,30]],"date-time":"2023-12-30T11:10:14Z","timestamp":1703934614000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3494988"}},"subtitle":["Tracking Facial Expressions Using a Commodity Microphone Array"],"short-title":[],"issued":{"date-parts":[[2021,12,27]]},"references-count":91,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2021,12,27]]}},"alternative-id":["10.1145\/3494988"],"URL":"https:\/\/doi.org\/10.1145\/3494988","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,12,27]]},"assertion":[{"value":"2021-12-30","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}