{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2023,3,28]],"date-time":"2023-03-28T04:15:40Z","timestamp":1679976940241},"reference-count":46,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,1,28]],"date-time":"2023-01-28T00:00:00Z","timestamp":1674864000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,1,28]],"date-time":"2023-01-28T00:00:00Z","timestamp":1674864000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100000155","name":"Social Sciences and Humanities Research Council of Canada","doi-asserted-by":"publisher","award":["435-2019-1065"],"id":[{"id":"10.13039\/501100000155","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004326","name":"Simon Fraser University","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100004326","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Speech Technol"],"published-print":{"date-parts":[[2023,3]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Clearly articulated speech, relative to plain-style speech, has been shown to improve intelligibility. We examine if visible speech cues in video only can be systematically modified to enhance clear-speech visual features and improve intelligibility. We extract clear-speech visual features of English words varying in vowels produced by multiple male and female talkers. Via a frame-by-frame image-warping based video generation method with a controllable parameter (displacement factor), we apply the extracted clear-speech visual features to videos of plain speech to synthesize clear speech videos. We evaluate the generated videos using a robust, state of the art AI Lip Reader as well as human intelligibility testing. The contributions of this study are: (1) we successfully extract relevant visual cues for video modifications across speech styles, and have achieved enhanced intelligibility for AI; (2) this work suggests that universal talker-independent clear-speech features may be utilized to modify any talker\u2019s visual speech style; (3) we introduce \u201cdisplacement factor\u201d as a way of systematically scaling the magnitude of displacement modifications between speech styles; and (4) the high definition generated videos make them ideal candidates for human-centric intelligibility and perceptual training studies.<\/jats:p>","DOI":"10.1007\/s10772-023-10018-z","type":"journal-article","created":{"date-parts":[[2023,1,28]],"date-time":"2023-01-28T18:03:53Z","timestamp":1674929033000},"page":"163-184","update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Plain-to-clear speech video conversion for enhanced intelligibility"],"prefix":"10.1007","volume":"26","author":[{"given":"Shubam","family":"Sachdeva","sequence":"first","affiliation":[]},{"given":"Haoyao","family":"Ruan","sequence":"additional","affiliation":[]},{"given":"Ghassan","family":"Hamarneh","sequence":"additional","affiliation":[]},{"given":"Dawn M.","family":"Behne","sequence":"additional","affiliation":[]},{"given":"Allard","family":"Jongman","sequence":"additional","affiliation":[]},{"given":"Joan A.","family":"Sereno","sequence":"additional","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0003-3862-3767","authenticated-orcid":false,"given":"Yue","family":"Wang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,1,28]]},"reference":[{"key":"10018_CR1","doi-asserted-by":"publisher","first-page":"2356","DOI":"10.1121\/1.2839004","volume":"123","author":"TH Chen","year":"2008","unstructured":"Chen, T. H., & Massaro, D. W. (2008). Seeing pitch: Visual information for lexical tones of Mandarin-Chinese. Journal of the Acoustical Society of America, 123, 2356\u20132366.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR2","doi-asserted-by":"publisher","first-page":"2059","DOI":"10.1121\/1.3478775","volume":"128","author":"M Cooke","year":"2010","unstructured":"Cooke, M., & Lu, Y. (2010). Spectral and temporal changes to speech produced in the presence of energetic and informational maskers. Journal of the Acoustical Society of America, 128, 2059\u20132069.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR3","doi-asserted-by":"crossref","unstructured":"Dong, X., Yan, Y., Ouyang, W., & Yang, Y. (2018). Style aggregated network for facial landmark detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 379\u2013388).","DOI":"10.1109\/CVPR.2018.00047"},{"key":"10018_CR4","doi-asserted-by":"publisher","first-page":"2365","DOI":"10.1121\/1.1788730","volume":"116","author":"SH Ferguson","year":"2004","unstructured":"Ferguson, S. H. (2004). Talker differences in clear and conversational speech: Vowel intelligibility for normal-hearing listeners. Journal of the Acoustical Society of America, 116, 2365\u20132373.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR5","doi-asserted-by":"publisher","first-page":"779","DOI":"10.1044\/1092-4388(2011\/10-0342)","volume":"55","author":"SH Ferguson","year":"2012","unstructured":"Ferguson, S. H. (2012). Talker differences in clear and conversational speech: Vowel intelligibility for older adults with hearing loss. Journal of Speech, Language, and Hearing Research, 55, 779\u2013790.","journal-title":"Journal of Speech, Language, and Hearing Research"},{"key":"10018_CR6","doi-asserted-by":"publisher","first-page":"259","DOI":"10.1121\/1.1482078","volume":"112","author":"SH Ferguson","year":"2002","unstructured":"Ferguson, S. H., & Kewley-Port, D. (2002). Vowel intelligibility in clear and conversational speech for normal-hearing and hearing-impaired listeners. Journal of the Acoustical Society of America, 112, 259\u2013271.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR7","doi-asserted-by":"publisher","first-page":"1241","DOI":"10.1044\/1092-4388(2007\/087)","volume":"50","author":"SH Ferguson","year":"2007","unstructured":"Ferguson, S. H., & Kewley-Port, D. (2007). Talker differences in clear and conversational speech: Acoustic characteristics of vowels. Journal of Speech, Language, and Hearing Research, 50, 1241\u20131255.","journal-title":"Journal of Speech, Language, and Hearing Research"},{"key":"10018_CR8","doi-asserted-by":"publisher","first-page":"3570","DOI":"10.1121\/1.4874596","volume":"135","author":"SH Ferguson","year":"2014","unstructured":"Ferguson, S. H., & Quen\u00e9, H. (2014). Acoustic correlates of vowel intelligibility in clear and conversational speech for young normal-hearing and elderly hearing-impaired listeners. Journal of the Acoustical Society of America, 135, 3570\u20133584.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR9","first-page":"133","volume":"27","author":"JP Gagn\u00e9","year":"1994","unstructured":"Gagn\u00e9, J. P., Masterson, V., Munhall, K. G., Bilida, N., & Querengesser, C. (1994). Across talker variability in speech intelligibility for conversational and clear speech: A crossmodal investigation. Journal of the Academy of Rehabilitative Audiology, 27, 133\u2013158.","journal-title":"Journal of the Academy of Rehabilitative Audiology"},{"key":"10018_CR10","doi-asserted-by":"publisher","first-page":"213","DOI":"10.1016\/S0167-6393(01)00012-7","volume":"37","author":"JP Gagn\u00e9","year":"2002","unstructured":"Gagn\u00e9, J. P., Rochette, A. J., & Charest, M. (2002). Auditory, visual and audiovisual clear speech. Speech Communication, 37, 213\u2013230.","journal-title":"Speech Communication"},{"key":"10018_CR11","doi-asserted-by":"publisher","first-page":"2139","DOI":"10.1121\/1.3623753","volume":"130","author":"V Hazan","year":"2011","unstructured":"Hazan, V., & Baker, R. (2011). Acoustic-phonetic characteristics of speech produced with communicative intent to counter adverse listening conditions. Journal of the Acoustical Society of America, 130, 2139\u20132152.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR12","doi-asserted-by":"publisher","first-page":"432","DOI":"10.1044\/jslhr.4002.432","volume":"40","author":"KS Helfer","year":"1997","unstructured":"Helfer, K. S. (1997). Auditory and auditory visual perception of clear and conversational speech. Journal of Speech, Language, and Hearing Research, 40, 432\u2013443.","journal-title":"Journal of Speech, Language, and Hearing Research"},{"key":"10018_CR13","doi-asserted-by":"publisher","first-page":"117","DOI":"10.1109\/TETCI.2017.2784878","volume":"2","author":"JC Hou","year":"2018","unstructured":"Hou, J. C., Wang, S. S., Lai, Y. H., Tsao, Y., Chang, H. W., & Wang, H. M. (2018). Audio-visual speech enhancement using multimodal deep convolutional neural networks. IEEE Transactions on Emerging Topics in Computational Intelligence, 2, 117\u2013128.","journal-title":"IEEE Transactions on Emerging Topics in Computational Intelligence"},{"key":"10018_CR14","doi-asserted-by":"crossref","unstructured":"Ideli, E., Sharpe, B., Baji\u0107, I. V., & Vaughan, R. G. (2019). Visually assisted time-domain speech enhancement. In IEEE global conference on signal and information processing (GlobalSIP) (pp.1\u20135).","DOI":"10.1109\/GlobalSIP45357.2019.8969244"},{"key":"10018_CR15","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1016\/j.bandl.2014.07.012","volume":"137","author":"J Kim","year":"2014","unstructured":"Kim, J., & Davis, C. (2014a). How visual timing and form information affect speech and non-speech processing. Brain and Language, 137, 86\u201390.","journal-title":"Brain and Language"},{"key":"10018_CR16","doi-asserted-by":"publisher","first-page":"598","DOI":"10.1016\/j.csl.2013.02.002","volume":"28","author":"J Kim","year":"2014","unstructured":"Kim, J., & Davis, C. (2014b). Comparing the consistency and distinctiveness of speech produced in quiet and in noise. Computer Speech and Language, 28, 598\u2013606.","journal-title":"Computer Speech and Language"},{"key":"10018_CR17","doi-asserted-by":"publisher","first-page":"853","DOI":"10.1068\/p6941","volume":"40","author":"J Kim","year":"2011","unstructured":"Kim, J., Sironic, A., & Davis, C. (2011). Hearing speech in noise: Seeing a loud talker is better. Perception, 40, 853\u2013862.","journal-title":"Perception"},{"key":"10018_CR18","first-page":"1755","volume":"10","author":"DE King","year":"2009","unstructured":"King, D. E. (2009). Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10, 1755\u20131758.","journal-title":"Journal of Machine Learning Research"},{"key":"10018_CR19","doi-asserted-by":"publisher","first-page":"148","DOI":"10.1037\/a0038695","volume":"122","author":"D Kleinschmidt","year":"2015","unstructured":"Kleinschmidt, D., & Jaeger, F. (2015). Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel. Psychological Review, 122, 148\u2013203.","journal-title":"Psychological Review"},{"key":"10018_CR20","doi-asserted-by":"publisher","first-page":"362","DOI":"10.1121\/1.1635842","volume":"115","author":"JC Krause","year":"2004","unstructured":"Krause, J. C., & Braida, L. D. (2004). Acoustic properties of naturally produced clear speech at normal speaking rates. Journal of the Acoustical Society of America, 115, 362\u2013378.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR21","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1016\/j.specom.2013.01.003","volume":"55","author":"K Lander","year":"2013","unstructured":"Lander, K., & Capek, C. (2013). Investigating the impact of lip visibility and talking style on speechreading performance. Speech Communication, 55, 600\u2013605.","journal-title":"Speech Communication"},{"key":"10018_CR22","doi-asserted-by":"publisher","first-page":"904","DOI":"10.3109\/14992027.2010.509112","volume":"49","author":"I Legault","year":"2010","unstructured":"Legault, I., Gagn\u00e9, J. P., & Anderson-Gosselin, P. (2010). The effects of blurred vision on auditory-visual speech perception in younger and older adults. International Journal of Audiology, 49, 904\u2013911.","journal-title":"International Journal of Audiology"},{"key":"10018_CR23","doi-asserted-by":"publisher","first-page":"45","DOI":"10.1121\/1.4954737","volume":"140","author":"KK Leung","year":"2016","unstructured":"Leung, K. K., Jongman, A., Wang, Y., & Sereno, J. A. (2016). Acoustic characteristics of clearly spoken English tense and lax vowels. Journal of the Acoustical Society of America, 140, 45\u201358.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR24","doi-asserted-by":"publisher","first-page":"403","DOI":"10.1007\/978-94-009-2037-8_16","volume-title":"Speech production and speech modelling","author":"B Lindblom","year":"1990","unstructured":"Lindblom, B. (1990). Explaining phonetic variation: A sketch of the H & H theory. In W. J. Hardcastle & A. Marchal (Eds.), Speech production and speech modelling (pp. 403\u2013439). Kluwer Academic."},{"key":"10018_CR25","doi-asserted-by":"publisher","first-page":"1114","DOI":"10.1121\/1.2821966","volume":"123","author":"K Maniwa","year":"2008","unstructured":"Maniwa, K., Jongman, A., & Wade, T. (2008). Perception of clear fricatives by normal-hearing and simulated hearing-impaired listeners. Journal of the Acoustical Society of America, 123, 1114\u20131125.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR26","doi-asserted-by":"publisher","first-page":"3962","DOI":"10.1121\/1.2990715","volume":"125","author":"K Maniwa","year":"2009","unstructured":"Maniwa, K., Jongman, A., & Wade, T. (2009). Acoustic characteristics of clearly spoken English fricatives. Journal of the Acoustical Society of America, 125, 3962\u20133973.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR27","doi-asserted-by":"crossref","unstructured":"Martinez, B., Ma, P., Petridis, S., & Pantic, M. (2020). Lipreading using temporal convolutional networks. In\u00a0ICASSP 2020\u20132020 IEEE international conference on acoustics, speech and signal processing (ICASSP) (pp. 6319\u20136323).","DOI":"10.1109\/ICASSP40776.2020.9053841"},{"key":"10018_CR28","doi-asserted-by":"publisher","first-page":"304","DOI":"10.1044\/1092-4388(2004\/025)","volume":"47","author":"DW Massaro","year":"2004","unstructured":"Massaro, D. W., & Light, J. (2004). Using visible speech to train perception and production of speech for individuals with hearing loss. Journal of Speech, Language, and Hearing Research, 47, 304\u2013320.","journal-title":"Journal of Speech, Language, and Hearing Research"},{"key":"10018_CR29","doi-asserted-by":"publisher","first-page":"40","DOI":"10.1121\/1.410492","volume":"96","author":"SJ Moon","year":"1994","unstructured":"Moon, S. J., & Lindblom, B. (1994). Interaction between duration, context, and speaking style in English stressed vowels. Journal of the Acoustical Society of America, 96, 40\u201355.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR30","doi-asserted-by":"crossref","unstructured":"Morrone, G., Bergamaschi, S., Pasa, L., Fadiga, L., Tikhanoff, V., & Badino, L. (2019). Face landmark-based speaker-independent audio-visual speech enhancement in multi-talker environments. In ICASSP 2019\u20132019 IEEE international conference on acoustics, speech and signal processing (ICASSP) (pp. 6900\u20136904).","DOI":"10.1109\/ICASSP.2019.8682061"},{"key":"10018_CR31","doi-asserted-by":"crossref","unstructured":"Ohala, J. J. (1995). Clear speech does not exaggerate phonemic contrast. In Proceedings of the 4th European conference on speech communication and technology, Eurospeech\u201995 (pp. 1323\u20131325).","DOI":"10.21437\/Eurospeech.1995-344"},{"key":"10018_CR32","doi-asserted-by":"publisher","first-page":"1581","DOI":"10.1121\/1.408545","volume":"95","author":"KL Payton","year":"1994","unstructured":"Payton, K. L., Uchanski, R. M., & Braida, L. D. (1994). Intelligibility of conversational and clear speech in noise and reverberation for listeners with normal and impaired hearing. Journal of the Acoustical Society of America, 95, 1581\u20131592.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR33","doi-asserted-by":"publisher","first-page":"434","DOI":"10.1044\/jshr.2904.434","volume":"29","author":"MA Picheny","year":"1986","unstructured":"Picheny, M. A., Durlach, N. I., & Braida, L. D. (1986). Speaking clearly for the hard of hearing II: Acoustic characteristics of clear and conversational speech. Journal of Speech Hearing Research, 29, 434\u2013446.","journal-title":"Journal of Speech Hearing Research"},{"key":"10018_CR34","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.wocn.2020.100980","volume":"81","author":"C Redmon","year":"2020","unstructured":"Redmon, C., Leung, K., Wang, Y., McMurray, B., Jongman, A., & Sereno, J. A. (2020). Cross-linguistic perception of clearly spoken English tense and lax vowels based on auditory, visual, and auditory-visual information. Journal of Phonetics, 81, 1\u201325.","journal-title":"Journal of Phonetics"},{"key":"10018_CR35","doi-asserted-by":"publisher","first-page":"1788","DOI":"10.1109\/TASLP.2020.3000593","volume":"28","author":"M Sadeghi","year":"2020","unstructured":"Sadeghi, M., Leglaive, S., Alameda-Pineda, X., Girin, L., & Horaud, R. (2020). Audio-visual speech enhancement using conditional variational auto-encoders. IEEE\/ACM Transactions on Audio, Speech, and Language Processing, 28, 1788\u20131800.","journal-title":"IEEE\/ACM Transactions on Audio, Speech, and Language Processing"},{"key":"10018_CR36","doi-asserted-by":"crossref","unstructured":"Shillingford, B., Assael, Y., Hoffman, M. W., Paine, T., Hughes, C., Prabhu, U., Liao, H., Sak, H., Rao, K., Bennett, L., & Mulville M. (2018). Large-scale visual speech recognition. arXiv preprint arXiv:1807.05162.","DOI":"10.21437\/Interspeech.2019-1669"},{"key":"10018_CR37","doi-asserted-by":"publisher","first-page":"177","DOI":"10.1002\/9781119184096.ch7","volume-title":"The handbook of speech perception","author":"R Smiljani\u0107","year":"2021","unstructured":"Smiljani\u0107, R., et al. (2021). Clear speech perception: Linguistic and cognitive benefits. In J. S. Pardo (Ed.), The handbook of speech perception (pp. 177\u2013205). Wiley."},{"key":"10018_CR38","doi-asserted-by":"publisher","first-page":"1677","DOI":"10.1121\/1.2000788","volume":"118","author":"R Smiljani\u0107","year":"2005","unstructured":"Smiljani\u0107, R., & Bradlow, A. R. (2005). Production and perception of clear speech in Croatian and English. Journal of the Acoustical Society of America, 118, 1677\u20131688.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR39","doi-asserted-by":"publisher","first-page":"236","DOI":"10.1111\/j.1749-818X.2008.00112.x","volume":"3","author":"R Smiljani\u0107","year":"2009","unstructured":"Smiljani\u0107, R., & Bradlow, A. R. (2009). Speaking and hearing clearly: Talker and listener factors in speaking style changes. Language and Linguistics Compass, 3, 236\u2013264.","journal-title":"Language and Linguistics Compass"},{"key":"10018_CR40","doi-asserted-by":"publisher","first-page":"4020","DOI":"10.1121\/1.3652882","volume":"130","author":"R Smiljani\u0107","year":"2011","unstructured":"Smiljani\u0107, R., & Bradlow, A. R. (2011). Bidirectional clear speech perception benefit for native and high-proficiency non-native talkers and listeners: Intelligibility and accentedness. Journal of the Acoustical Society of America, 130, 4020\u20134031.","journal-title":"Journal of the Acoustical Society of America"},{"key":"10018_CR41","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.specom.2015.09.008","volume":"75","author":"LY Tang","year":"2015","unstructured":"Tang, L. Y., Hannah, B., Jongman, A., Sereno, J., Wang, Y., & Hamarneh, G. (2015). Examining visible articulatory features in clear and plain speech. Speech Communication, 75, 1\u201313.","journal-title":"Speech Communication"},{"key":"10018_CR42","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1044\/1092-4388(2009\/08-0124)","volume":"53","author":"SM Tasko","year":"2010","unstructured":"Tasko, S. M., & Greilick, K. (2010). Acoustic and articulatory features of diphthong production: A speech clarity study. Journal of Speech, Language, and Hearing Research, 53, 84\u201399.","journal-title":"Journal of Speech, Language, and Hearing Research"},{"key":"10018_CR43","doi-asserted-by":"publisher","first-page":"494","DOI":"10.1044\/jshr.3903.494","volume":"39","author":"RM Uchanski","year":"1996","unstructured":"Uchanski, R. M., Choi, S. S., Braida, L. D., Reed, C. M., & Durlach, N. I. (1996). Speaking clearly for the hard of hearing IV: Further studies of the role of speaking rate. Journal of Speech, Language, and Hearing Research, 39, 494\u2013509.","journal-title":"Journal of Speech, Language, and Hearing Research"},{"key":"10018_CR44","doi-asserted-by":"publisher","first-page":"1908","DOI":"10.1044\/JSLHR-H-13-0076","volume":"57","author":"KJ Van Engen","year":"2014","unstructured":"Van Engen, K. J., Phelps, J. E., Smiljani\u0107, R., & Chandrasekaran, B. (2014). Enhancing speech intelligibility: Interactions among context, modality, speech style, and masker. Journal of Speech, Language, and Hearing Research, 57, 1908\u20131918.","journal-title":"Journal of Speech, Language, and Hearing Research"},{"key":"10018_CR45","doi-asserted-by":"crossref","unstructured":"Xiao, J., Yang, S., Zhang, Y., Shan, S., & Chen, X. (2020). Deformation flow based two-stream network for lip reading. In 2020 15th IEEE international conference on automatic face and gesture recognition (FG 2020) (pp. 364\u2013370).","DOI":"10.1109\/FG47880.2020.00132"},{"key":"10018_CR46","doi-asserted-by":"publisher","first-page":"1405","DOI":"10.3758\/BF03212142","volume":"62","author":"DA Yakel","year":"2000","unstructured":"Yakel, D. A., Rosenblum, L. D., & Fortier, M. A. (2000). Effects of talker variability on speechreading. Perception and Psychophysics, 62, 1405\u20131412.","journal-title":"Perception and Psychophysics"}],"container-title":["International Journal of Speech Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10772-023-10018-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10772-023-10018-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10772-023-10018-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,3,27]],"date-time":"2023-03-27T11:14:46Z","timestamp":1679915686000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10772-023-10018-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1,28]]},"references-count":46,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2023,3]]}},"alternative-id":["10018"],"URL":"https:\/\/doi.org\/10.1007\/s10772-023-10018-z","relation":{},"ISSN":["1381-2416","1572-8110"],"issn-type":[{"value":"1381-2416","type":"print"},{"value":"1572-8110","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,1,28]]},"assertion":[{"value":"28 May 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 January 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 January 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}