{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,19]],"date-time":"2025-03-19T15:23:16Z","timestamp":1742397796766},"reference-count":66,"publisher":"Association for Computing Machinery (ACM)","issue":"1","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2021,3,19]]},"abstract":"<jats:p>With the rapid growth of wearable computing and increasing demand for mobile authentication scenarios, voiceprint-based authentication has become one of the prevalent technologies and has already presented tremendous potential to the public. However, it is vulnerable to voice spoofing attacks (e.g., replay attacks and synthetic voice attacks). To address this threat, we propose a new biometric authentication approach, named EarPrint, which aims to extend voiceprint and build a hidden and secure user authentication scheme on earphones. EarPrint builds on the speaking-induced body sound transmission from the throat to the ear canal, i.e., different users will have different body sound conduction patterns on both sides of ears. As the first exploratory study, extensive experiments on 23 subjects show that EarPrint is robust against ambient noises and body motions. EarPrint achieves an Equal Error Rate (EER) of 3.64% with 75 seconds of enrollment data. We also evaluate the resilience of EarPrint against replay attacks. A major contribution of EarPrint is that it leverages two-level uniqueness, including the body sound conduction from the throat to the ear canal and the body asymmetry between the left and the right ears, taking advantage of earphones' pairing form-factor. 
Compared with other mobile and wearable biometric modalities, EarPrint is a low-cost, accurate, and secure authentication solution for earphone users.<\/jats:p>","DOI":"10.1145\/3448113","type":"journal-article","created":{"date-parts":[[2021,3,30]],"date-time":"2021-03-30T18:56:41Z","timestamp":1617130601000},"page":"1-25","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":27,"title":["Voice In Ear"],"prefix":"10.1145","volume":"5","author":[{"given":"Yang","family":"Gao","sequence":"first","affiliation":[{"name":"University at Buffalo, State University of New York, Department of Computer Science and Engineering, Buffalo, NY, USA"}]},{"given":"Yincheng","family":"Jin","sequence":"additional","affiliation":[{"name":"University at Buffalo, State University of New York, Department of Computer Science and Engineering, Buffalo, NY, USA"}]},{"given":"Jagmohan","family":"Chauhan","sequence":"additional","affiliation":[{"name":"University of Southampton, Department of Electronics and Computer Science, Southampton, UK"}]},{"given":"Seokmin","family":"Choi","sequence":"additional","affiliation":[{"name":"University at Buffalo, State University of New York, Department of Computer Science and Engineering, Buffalo, NY, USA"}]},{"given":"Jiyang","family":"Li","sequence":"additional","affiliation":[{"name":"University at Buffalo, State University of New York, Department of Computer Science and Engineering, Buffalo, NY, USA"}]},{"given":"Zhanpeng","family":"Jin","sequence":"additional","affiliation":[{"name":"University at Buffalo, State University of New York, Department of Computer Science and Engineering, Buffalo, NY, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,3,30]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"29th USENIX Security Symposium (USENIX Security'20)","author":"Ahmed Muhammad Ejaz","year":"2020","unstructured":"Muhammad Ejaz Ahmed , Il-Youp Kwak , Jun Ho Hua , Iljoo Kim , 
Taekkyung Oh , and Hyoungshick Kim . 2020 . Void: A fast and light voice liveness detection system . In 29th USENIX Security Symposium (USENIX Security'20) . 2685--2702. Muhammad Ejaz Ahmed, Il-Youp Kwak, Jun Ho Hua, Iljoo Kim, Taekkyung Oh, and Hyoungshick Kim. 2020. Void: A fast and light voice liveness detection system. In 29th USENIX Security Symposium (USENIX Security'20). 2685--2702."},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3341163.3347747"},{"key":"e_1_2_1_3_1","volume-title":"About Face ID advanced technology. https:\/\/support.apple.com\/en-us\/HT208108. [Online","year":"2020","unstructured":"Apple. 2020. About Face ID advanced technology. https:\/\/support.apple.com\/en-us\/HT208108. [Online ; accessed 30- July - 2020 ]. Apple. 2020. About Face ID advanced technology. https:\/\/support.apple.com\/en-us\/HT208108. [Online; accessed 30-July-2020]."},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/2800835.2807933"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3264902"},{"key":"e_1_2_1_6_1","volume-title":"Voiceprint authentication starts to go mainstream in Australia. https:\/\/www.csoonline.com\/article\/3546188\/voiceprint-authentication-starts-to-go-mainstream-in-australia.html. [Online","author":"Braue David","year":"2020","unstructured":"David Braue . 2020. Voiceprint authentication starts to go mainstream in Australia. https:\/\/www.csoonline.com\/article\/3546188\/voiceprint-authentication-starts-to-go-mainstream-in-australia.html. [Online ; accessed 10- Feb- 2020 ]. David Braue. 2020. Voiceprint authentication starts to go mainstream in Australia. https:\/\/www.csoonline.com\/article\/3546188\/voiceprint-authentication-starts-to-go-mainstream-in-australia.html. 
[Online; accessed 10-Feb-2020]."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3300061.3345454"},{"key":"e_1_2_1_8_1","volume-title":"You Can Hear But You Cannot Steal: Defending Against Voice Impersonation Attacks on Smartphones. In 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). 183--195","author":"Chen S.","unstructured":"S. Chen , K. Ren , S. Piao , C. Wang , Q. Wang , J. Weng , L. Su , and A. Mohaisen . 2017 . You Can Hear But You Cannot Steal: Defending Against Voice Impersonation Attacks on Smartphones. In 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). 183--195 . S. Chen, K. Ren, S. Piao, C. Wang, Q. Wang, J. Weng, L. Su, and A. Mohaisen. 2017. You Can Hear But You Cannot Steal: Defending Against Voice Impersonation Attacks on Smartphones. In 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). 183--195."},{"key":"e_1_2_1_9_1","volume-title":"Google Smart Lock: The complete guide. https:\/\/www.computerworld.com\/article\/3322626\/google-smart-lock-complete-guide.html. [Online","year":"2020","unstructured":"Computerworld. 2020. Google Smart Lock: The complete guide. https:\/\/www.computerworld.com\/article\/3322626\/google-smart-lock-complete-guide.html. [Online ; accessed 30- June - 2020 ]. Computerworld. 2020. Google Smart Lock: The complete guide. https:\/\/www.computerworld.com\/article\/3322626\/google-smart-lock-complete-guide.html. [Online; accessed 30-June-2020]."},{"key":"e_1_2_1_10_1","volume-title":"International conference on machine learning. 933--941","author":"Dauphin Yann N","year":"2017","unstructured":"Yann N Dauphin , Angela Fan , Michael Auli , and David Grangier . 2017 . Language modeling with gated convolutional networks . In International conference on machine learning. 933--941 . Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. 
In International conference on machine learning. 933--941."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASL.2012.2201472"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASL.2010.2064307"},{"key":"e_1_2_1_13_1","first-page":"1","article-title":"EarEcho: Using Ear Canal Echo for Wearable Authentication","volume":"3","author":"Gao Yang","year":"2019","unstructured":"Yang Gao , Wei Wang , Vir V Phoha , Wei Sun , and Zhanpeng Jin . 2019 . EarEcho: Using Ear Canal Echo for Wearable Authentication . Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3 , 3 (2019), 1 -- 24 . Yang Gao, Wei Wang, Vir V Phoha, Wei Sun, and Zhanpeng Jin. 2019. EarEcho: Using Ear Canal Echo for Wearable Authentication. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 3 (2019), 1--24.","journal-title":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/IM.2003.1240273"},{"key":"e_1_2_1_15_1","volume-title":"https:\/\/webrtc.org\/. [Online","author":"RTC.","year":"2020","unstructured":"Google. 2016. Web RTC. https:\/\/webrtc.org\/. [Online ; accessed 30- June - 2020 ]. Google. 2016. WebRTC. https:\/\/webrtc.org\/. 
[Online; accessed 30-June-2020]."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2015.2471183"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-017-06925-2"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.specom.2009.12.001"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3191744"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.632"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICIAFS.2008.4783977"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.3390\/s20030942"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2019.8682897"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASL.2006.881693"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-25948-0_10"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISSPIT.2006.270839"},{"key":"e_1_2_1_27_1","first-page":"1","article-title":"Vocal Resonance: Using internal body voice for wearable authentication","volume":"2","author":"Liu Rui","year":"2018","unstructured":"Rui Liu , Cory Cornelius , Reza Rawassizadeh , Ronald Peterson , and David Kotz . 2018 . Vocal Resonance: Using internal body voice for wearable authentication . Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2 , 1 (2018), 1 -- 23 . Rui Liu, Cory Cornelius, Reza Rawassizadeh, Ronald Peterson, and David Kotz. 2018. Vocal Resonance: Using internal body voice for wearable authentication. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 1 (2018), 1--23.","journal-title":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"},{"key":"e_1_2_1_28_1","volume-title":"The voice conversion challenge 2018: Promoting development of parallel and nonparallel methods. 
arXiv preprint","author":"Lorenzo-Trueba Jaime","year":"1804","unstructured":"Jaime Lorenzo-Trueba , Junichi Yamagishi , Tomoki Toda , Daisuke Saito , Fernando Villavicencio , Tomi Kinnunen , and Zhenhua Ling . 2018. The voice conversion challenge 2018: Promoting development of parallel and nonparallel methods. arXiv preprint 1804 .04262 (2018). Jaime Lorenzo-Trueba, Junichi Yamagishi, Tomoki Toda, Daisuke Saito, Fernando Villavicencio, Tomi Kinnunen, and Zhenhua Ling. 2018. The voice conversion challenge 2018: Promoting development of parallel and nonparallel methods. arXiv preprint 1804.04262 (2018)."},{"key":"e_1_2_1_29_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3397320","article-title":"VocalLock: Sensing Vocal Tract for Passphrase-Independent User Authentication Leveraging Acoustic Signals on Smartphones","volume":"4","author":"Lu Li","year":"2020","unstructured":"Li Lu , Jiadi Yu , Yingying Chen , and Yan Wang . 2020 . VocalLock: Sensing Vocal Tract for Passphrase-Independent User Authentication Leveraging Acoustic Signals on Smartphones . Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4 , 2 (2020), 1 -- 24 . Li Lu, Jiadi Yu, Yingying Chen, and Yan Wang. 2020. VocalLock: Sensing Vocal Tract for Passphrase-Independent User Authentication Leveraging Acoustic Signals on Smartphones. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, 2 (2020), 1--24.","journal-title":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"},{"key":"e_1_2_1_30_1","volume-title":"What Is the Stethoscope Effect? http:\/\/m.maonotech.com\/info\/what-is-the-stethoscope-effect-29859990.html. [Online","author":"MAONO.","year":"2020","unstructured":"MAONO. 2018. What Is the Stethoscope Effect? http:\/\/m.maonotech.com\/info\/what-is-the-stethoscope-effect-29859990.html. [Online ; accessed 10- July - 2020 ]. MAONO. 2018. What Is the Stethoscope Effect? 
http:\/\/m.maonotech.com\/info\/what-is-the-stethoscope-effect-29859990.html. [Online; accessed 10-July-2020]."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2011.5947436"},{"key":"e_1_2_1_32_1","unstructured":"Deirdre D. Michael. 2018. About the voice. http:\/\/www.lionsvoiceclinic.umn.edu\/page2.htm#physiology101. [Online; accessed 19-Jan-2020]. Deirdre D. Michael. 2018. About the voice. http:\/\/www.lionsvoiceclinic.umn.edu\/page2.htm#physiology101. [Online; accessed 19-Jan-2020]."},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2971648.2971677"},{"key":"e_1_2_1_34_1","volume-title":"Joon Son Chung, and Andrew Zisserman","author":"Nagrani Arsha","year":"2017","unstructured":"Arsha Nagrani , Joon Son Chung, and Andrew Zisserman . 2017 . VoxCeleb : a large-scale speaker identification dataset. arXiv preprint arXiv:1706.08612 (2017). Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. 2017. VoxCeleb: a large-scale speaker identification dataset. arXiv preprint arXiv:1706.08612 (2017)."},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.2209\/tdcpublication.48.171"},{"key":"e_1_2_1_36_1","volume-title":"Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499","author":"van den Oord Aaron","year":"2016","unstructured":"Aaron van den Oord , Sander Dieleman , Heiga Zen , Karen Simonyan , Oriol Vinyals , Alex Graves , Nal Kalchbrenner , Andrew Senior , and Koray Kavukcuoglu . 2016 . Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499 (2016). Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. 
arXiv preprint arXiv:1609.03499 (2016)."},{"key":"e_1_2_1_37_1","volume-title":"Ekin Dogus Cubuk, and Quoc V Le","author":"Park Daniel S","year":"2019","unstructured":"Daniel S Park , William Chan , Yu Zhang , Chung-Cheng Chiu , Barret Zoph , Ekin Dogus Cubuk, and Quoc V Le . 2019 . SpecAugment: A Simple Augmentation Method for Automatic Speech Recognition . (2019). Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin Dogus Cubuk, and Quoc V Le. 2019. SpecAugment: A Simple Augmentation Method for Automatic Speech Recognition. (2019)."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.3390\/s150923402"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3307334.3328582"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359316"},{"key":"e_1_2_1_41_1","volume-title":"https:\/\/www.apple.com\/airpods-pro\/. [Online","author":"Pro AirPods","year":"2020","unstructured":"AirPods Pro . 2020. Apple. https:\/\/www.apple.com\/airpods-pro\/. [Online ; accessed 30- July - 2020 ]. AirPods Pro. 2020. Apple. https:\/\/www.apple.com\/airpods-pro\/. [Online; accessed 30-July-2020]."},{"key":"e_1_2_1_42_1","unstructured":"Alec Radford Luke Metz and Soumith Chintala. 2015. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:cs.LG\/1511.06434 Alec Radford Luke Metz and Soumith Chintala. 2015. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:cs.LG\/1511.06434"},{"key":"e_1_2_1_43_1","volume-title":"Earphones Headphones Market Size Worth 126.7 Billion By","author":"Research Grand View","year":"2027","unstructured":"Grand View Research . 2020. Earphones Headphones Market Size Worth 126.7 Billion By 2027 . https:\/\/www.grandviewresearch.com\/press-release\/global-earphones-headphones-market. [Online; accessed 30-July-2020]. Grand View Research. 2020. Earphones Headphones Market Size Worth 126.7 Billion By 2027. 
https:\/\/www.grandviewresearch.com\/press-release\/global-earphones-headphones-market. [Online; accessed 30-July-2020]."},{"key":"e_1_2_1_44_1","first-page":"310","article-title":"Recreational bone conduction audio device, system","volume":"7","author":"Retchin Sheldon M","year":"2007","unstructured":"Sheldon M Retchin and Martin Lenhardt . 2007 . Recreational bone conduction audio device, system . US Patent 7 , 310 ,427. Sheldon M Retchin and Martin Lenhardt. 2007. Recreational bone conduction audio device, system. US Patent 7,310,427.","journal-title":"US Patent"},{"key":"e_1_2_1_45_1","volume-title":"Speaker verification using adapted Gaussian mixture models. Digital signal processing 10, 1-3","author":"Reynolds Douglas A","year":"2000","unstructured":"Douglas A Reynolds , Thomas F Quatieri , and Robert B Dunn . 2000. Speaker verification using adapted Gaussian mixture models. Digital signal processing 10, 1-3 ( 2000 ), 19--41. Douglas A Reynolds, Thomas F Quatieri, and Robert B Dunn. 2000. Speaker verification using adapted Gaussian mixture models. Digital signal processing 10, 1-3 (2000), 19--41."},{"key":"e_1_2_1_46_1","first-page":"2016","volume-title":"Robust Speaker Recognition with Combined Use of Acoustic and Throat Microphone Speech. In INTERSPEECH","author":"Sahidullah Md.","year":"2016","unstructured":"Md. Sahidullah , Rosa Gonzalez Hautam\u00e4ki , Dennis Alexander Lehmann Thomsen , Tomi Kinnunen , Zheng-Hua Tan , Ville Hautam\u00e4ki , Robert Parts , and Martti Pitk\u00e4nen . 2016 . Robust Speaker Recognition with Combined Use of Acoustic and Throat Microphone Speech. In INTERSPEECH 2016. 1720--1724. https:\/\/doi.org\/10.21437\/Interspeech. 2016 - 1153 Md. Sahidullah, Rosa Gonzalez Hautam\u00e4ki, Dennis Alexander Lehmann Thomsen, Tomi Kinnunen, Zheng-Hua Tan, Ville Hautam\u00e4ki, Robert Parts, and Martti Pitk\u00e4nen. 2016. Robust Speaker Recognition with Combined Use of Acoustic and Throat Microphone Speech. In INTERSPEECH 2016. 1720--1724. 
https:\/\/doi.org\/10.21437\/Interspeech.2016-1153"},{"key":"e_1_2_1_47_1","doi-asserted-by":"crossref","unstructured":"A. Shahina and B. Yegnanarayana. 2007. Mapping Speech Spectra from Throat Microphone to Close-Speaking Microphone: A Neural Network Approach. EURASIP Journal on Advances in Signal Processing 087219 (2007). A. Shahina and B. Yegnanarayana. 2007. Mapping Speech Spectra from Throat Microphone to Close-Speaking Microphone: A Neural Network Approach. EURASIP Journal on Advances in Signal Processing 087219 (2007).","DOI":"10.1155\/2007\/87219"},{"key":"e_1_2_1_48_1","volume-title":"Defending Against Voice Spoofing: A Robust Software-Based Liveness Detection System. In 2018 IEEE 15th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). 28--36","author":"Shang J.","unstructured":"J. Shang , S. Chen , and J. Wu . 2018 . Defending Against Voice Spoofing: A Robust Software-Based Liveness Detection System. In 2018 IEEE 15th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). 28--36 . J. Shang, S. Chen, and J. Wu. 2018. Defending Against Voice Spoofing: A Robust Software-Based Liveness Detection System. In 2018 IEEE 15th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). 28--36."},{"key":"e_1_2_1_49_1","volume-title":"Quantifying the Breakability of Voice Assistants. In 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom). IEEE, 1--11","author":"Shirvanian Maliheh","year":"2019","unstructured":"Maliheh Shirvanian , Summer Vo , and Nitesh Saxena . 2019 . Quantifying the Breakability of Voice Assistants. In 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom). IEEE, 1--11 . Maliheh Shirvanian, Summer Vo, and Nitesh Saxena. 2019. Quantifying the Breakability of Voice Assistants. In 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom). 
IEEE, 1--11."},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1299\/jamdsm.4.158"},{"key":"e_1_2_1_51_1","volume-title":"https:\/\/basicenglishspeaking.com\/. [Online","author":"Speaking Basic English","year":"2020","unstructured":"Basic English Speaking . 2020. ESL Conversation . https:\/\/basicenglishspeaking.com\/. [Online ; accessed 30- June - 2020 ]. Basic English Speaking. 2020. ESL Conversation. https:\/\/basicenglishspeaking.com\/. [Online; accessed 30-June-2020]."},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3191768"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2008-510"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1109\/JTEHM.2013.2277870"},{"key":"e_1_2_1_55_1","volume-title":"Tacotron: Towards end-to-end speech synthesis. arXiv preprint arXiv:1703.10135","author":"Wang Yuxuan","year":"2017","unstructured":"Yuxuan Wang , RJ Skerry-Ryan , Daisy Stanton , Yonghui Wu , Ron J Weiss , Navdeep Jaitly , Zongheng Yang , Ying Xiao , Zhifeng Chen , Samy Bengio , 2017 . Tacotron: Towards end-to-end speech synthesis. arXiv preprint arXiv:1703.10135 (2017). Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al. 2017. Tacotron: Towards end-to-end speech synthesis. arXiv preprint arXiv:1703.10135 (2017)."},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICMLC.2011.6016982"},{"key":"e_1_2_1_57_1","volume-title":"Voiceprint: The New WeChat Password. https:\/\/blog.wechat.com\/tag\/voiceprint\/. [Online","year":"2015","unstructured":"WeChat. 2015 . Voiceprint: The New WeChat Password. https:\/\/blog.wechat.com\/tag\/voiceprint\/. [Online ; accessed 30-June-2020]. WeChat. 2015. Voiceprint: The New WeChat Password. https:\/\/blog.wechat.com\/tag\/voiceprint\/. 
[Online; accessed 30-June-2020]."},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICEMI.2017.8265849"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAU.1967.1161901"},{"key":"e_1_2_1_60_1","volume-title":"Penalized AdaBoost: Improving the Generalization Error of Gentle AdaBoost through a Margin Distribution. IEICE Transactions on Information and Systems E98-D, 11","author":"Wu Shuqiong","year":"2015","unstructured":"Shuqiong Wu and Hiroshi Nagahashi . 2015. Penalized AdaBoost: Improving the Generalization Error of Gentle AdaBoost through a Margin Distribution. IEICE Transactions on Information and Systems E98-D, 11 ( 2015 ), 1906--1915. Shuqiong Wu and Hiroshi Nagahashi. 2015. Penalized AdaBoost: Improving the Generalization Error of Gentle AdaBoost through a Margin Distribution. IEICE Transactions on Information and Systems E98-D, 11 (2015), 1906--1915."},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/3319535.3354248"},{"key":"e_1_2_1_62_1","volume-title":"8th International Conference on Spoken Language Processing (ICSLP).","author":"Yegnanarayana Bayya","unstructured":"Bayya Yegnanarayana , A. Shahina , and M.R. Kesheorey . 2004. Throat microphone signal for speaker recognition. In INTERSPEECH-2004 , 8th International Conference on Spoken Language Processing (ICSLP). Bayya Yegnanarayana, A. Shahina, and M.R. Kesheorey. 2004. Throat microphone signal for speaker recognition. 
In INTERSPEECH-2004, 8th International Conference on Spoken Language Processing (ICSLP)."},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-74549-5_2"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3133962"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/2976749.2978296"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2004.1326661"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3448113","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,1]],"date-time":"2023-01-01T22:18:49Z","timestamp":1672611529000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3448113"}},"subtitle":["Spoofing-Resistant and Passphrase-Independent Body Sound Authentication"],"short-title":[],"issued":{"date-parts":[[2021,3,19]]},"references-count":66,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,3,19]]}},"alternative-id":["10.1145\/3448113"],"URL":"https:\/\/doi.org\/10.1145\/3448113","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,3,19]]},"assertion":[{"value":"2021-03-30","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}