{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T07:24:33Z","timestamp":1740122673461,"version":"3.37.3"},"reference-count":63,"publisher":"Springer Science and Business Media LLC","issue":"20","license":[{"start":{"date-parts":[[2024,8,9]],"date-time":"2024-08-09T00:00:00Z","timestamp":1723161600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2024,8,9]],"date-time":"2024-08-09T00:00:00Z","timestamp":1723161600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62263017"],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Yunnan Province Basic Research Program Project","award":["202301AU070059"]},{"name":"Kunming University of Science and Technology college level personnel training project","award":["KKZ3202301041"]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"published-print":{"date-parts":[[2024,10]]},"DOI":"10.1007\/s10489-024-05705-6","type":"journal-article","created":{"date-parts":[[2024,8,9]],"date-time":"2024-08-09T07:02:18Z","timestamp":1723186938000},"page":"10224-10241","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Efficient and stable deep reinforcement learning: selective priority timing entropy"],"prefix":"10.1007","volume":"54","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6510-5212","authenticated-orcid":false,"given":"Lin","family":"Huo","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1407-4863","authenticated-orcid":false,"given":"Jianlin","family":"Mao","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4256-5582","authenticated-orcid":false,"given":"Hongjun","family":"San","sequence":"additional","affiliation":[]},{"given":"Shufan","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Ruiqi","family":"Li","sequence":"additional","affiliation":[]},{"given":"Lixia","family":"Fu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,8,9]]},"reference":[{"issue":"5","key":"5705_CR1","doi-asserted-by":"publisher","first-page":"1054","DOI":"10.1109\/TNN.1998.712192","volume":"9","author":"RS Sutton","year":"1998","unstructured":"Sutton RS, Barto AG (1998) Reinforcement learning: An introduction. IEEE Tran Neural Netw 9(5):1054\u20131054","journal-title":"IEEE Tran Neural Netw"},{"key":"5705_CR2","unstructured":"Kapturowski S, Campos V, Jiang R, Rakicevic N, Hasselt H, Blundell C, Badia AP (2023) Human-level atari 200x faster. In: The eleventh international conference on learning representations"},{"issue":"12","key":"5705_CR3","doi-asserted-by":"publisher","first-page":"14903","DOI":"10.1007\/s10489-022-04227-3","volume":"53","author":"L G\u00fcitta-L\u00f3pez","year":"2023","unstructured":"G\u00fcitta-L\u00f3pez L, Boal J, L\u00f3pez-L\u00f3pez \u00c1J (2023) Learning more with the same effort: how randomization improves the robustness of a robotic deep reinforcement learning agent. 
Appl Intell 53(12):14903\u201314917","journal-title":"Appl Intell"},{"issue":"2","key":"5705_CR4","doi-asserted-by":"publisher","first-page":"121101","DOI":"10.1007\/s11432-022-3696-5","volume":"67","author":"F-M Luo","year":"2024","unstructured":"Luo F-M, Xu T, Lai H, Chen X-H, Zhang W, Yu Y (2024) A survey on model-based reinforcement learning. Sci Chin Inf Sci 67(2):121101","journal-title":"Sci Chin Inf Sci"},{"key":"5705_CR5","unstructured":"Fujimoto S, Hoof H, Meger D (2018) Addressing function approximation error in actor-critic methods. In: International conference on machine learning"},{"key":"5705_CR6","doi-asserted-by":"crossref","unstructured":"Hu Z, Ding Y, Wu R, Li L, Zhang R, Hu Y, Qiu F, Zhang Z, Wang K, Zhao S, Zhang Y, Jiang J, Xi Y, Pu J, Zhang W, Wang S, Chen K, Zhou T, Chen J, Song Y, Lv T, Fan C (2023) Deep learning applications in games: a survey from a data perspective. Appl Intell 53:31129\u201331164","DOI":"10.1007\/s10489-023-05094-2"},{"key":"5705_CR7","unstructured":"Lillicrap TP, Hunt JJ, Pritzel A, Heess N, Erez T, Tassa Y, Silver D, Wierstra D (2016) Continuous control with deep reinforcement learning. In: Bengio Y, LeCun Y (eds) 4th International conference on learning representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings"},{"issue":"7676","key":"5705_CR8","doi-asserted-by":"publisher","first-page":"354","DOI":"10.1038\/nature24270","volume":"550","author":"D Silver","year":"2017","unstructured":"Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, Hubert T, Baker L, Lai M, Bolton A, Chen Y, Lillicrap TP, Hui F, Sifre L, Driessche G, Graepel T, Hassabis D (2017) Mastering the game of go without human knowledge. Nature 550(7676):354\u2013359","journal-title":"Nature"},{"issue":"7782","key":"5705_CR9","doi-asserted-by":"publisher","first-page":"350","DOI":"10.1038\/s41586-019-1724-z","volume":"575","author":"O Vinyals","year":"2019","unstructured":"Vinyals O, Babuschkin I, Czarnecki WM, Mathieu M, Dudzik A, Chung J, Choi DH, Powell R, Ewalds T, Georgiev P et al (2019) Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature 575(7782):350\u2013354","journal-title":"Nature"},{"key":"5705_CR10","doi-asserted-by":"publisher","first-page":"24792","DOI":"10.1007\/s10489-023-04816-w","volume":"53","author":"G Zuo","year":"2023","unstructured":"Zuo G, Tian Z, Huang G (2023) A stable data-augmented reinforcement learning method with ensemble exploration and exploitation. Appl Intell 53:24792\u201324803","journal-title":"Appl Intell"},{"key":"5705_CR11","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.inffus.2022.03.003","volume":"85","author":"P Ladosz","year":"2022","unstructured":"Ladosz P, Weng L, Kim M, Oh H (2022) Exploration in deep reinforcement learning: A survey. Inf Fusion 85:1\u201322","journal-title":"Inf Fusion"},{"key":"5705_CR12","unstructured":"Chen E, Hong Z, Pajarinen J, Agrawal P (2022) Redeeming intrinsic rewards via constrained optimization. In: Advances in neural information processing systems 35: annual conference on neural information processing systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022"},{"key":"5705_CR13","unstructured":"Taiga AA, Agarwal R, Farebrother J, Courville A, Bellemare MG (2023) Investigating multi-task pretraining and generalization in reinforcement learning. 
In: The eleventh international conference on learning representations"},{"key":"5705_CR14","unstructured":"Schulman J, Wolski F, Dhariwal P, Radford A, Klimov O (2017) Proximal policy optimization algorithms. CoRR arXiv:1707.06347"},{"key":"5705_CR15","unstructured":"Mnih V, Kavukcuoglu K, Silver D, Graves A, Antonoglou I, Wierstra D, Riedmiller MA (2013) Playing atari with deep reinforcement learning. CoRR arXiv:1312.5602"},{"key":"5705_CR16","unstructured":"Bellemare MG, Dabney W, Munos R (2017) A distributional perspective on reinforcement learning. CoRR arXiv:1707.06887"},{"key":"5705_CR17","unstructured":"Haarnoja T, Zhou A, Abbeel P, Levine S (2018) Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv:1801.01290"},{"key":"5705_CR18","doi-asserted-by":"crossref","unstructured":"Hessel M, Modayil J, Hasselt HV, Schaul T, Ostrovski G, Dabney W, Horgan D, Piot B, Azar MG, Silver D (2018) Rainbow: Combining improvements in deep reinforcement learning. In: AAAI conference on artificial intelligence","DOI":"10.1609\/aaai.v32i1.11796"},{"key":"5705_CR19","doi-asserted-by":"publisher","first-page":"1421","DOI":"10.1613\/jair.1.12412","volume":"69","author":"A Lazaridis","year":"2020","unstructured":"Lazaridis A, Fachantidis A, Vlahavas I (2020) Deep reinforcement learning: A state-of-the-art walkthrough. J Artif Intell Res 69:1421\u20131471","journal-title":"J Artif Intell Res"},{"key":"5705_CR20","unstructured":"Ecoffet A, Huizinga J, Lehman J, Stanley KO, Clune J (2019) Go-explore: a new approach for hard-exploration problems. arXiv:1901.10995"},{"key":"5705_CR21","doi-asserted-by":"publisher","first-page":"21299","DOI":"10.1007\/s10489-023-04695-1","volume":"53","author":"W Zhang","year":"2023","unstructured":"Zhang W, Song Y, Liu X, Shangguan Q-Q, An K (2023) A novel action decision method of deep reinforcement learning based on a neural network and confidence bound. Appl Intell 53:21299\u201321311","journal-title":"Appl Intell"},{"issue":"1","key":"5705_CR22","doi-asserted-by":"publisher","first-page":"95","DOI":"10.1007\/s10489-023-05197-w","volume":"54","author":"J Huang","year":"2024","unstructured":"Huang J, Tan Q, Qi R, Li H (2024) Relight: a random ensemble reinforcement learning based method for traffic light control. Appl Intell 54(1):95\u2013112","journal-title":"Appl Intell"},{"key":"5705_CR23","doi-asserted-by":"publisher","first-page":"529","DOI":"10.1038\/nature14236","volume":"518","author":"V Mnih","year":"2015","unstructured":"Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller MA, Fidjeland AK, Ostrovski G, Petersen S, Beattie C, Sadik A, Antonoglou I, King H, Kumaran D, Wierstra D, Legg S, Hassabis D (2015) Human-level control through deep reinforcement learning. Nature 518:529\u2013533","journal-title":"Nature"},{"key":"5705_CR24","doi-asserted-by":"publisher","first-page":"293","DOI":"10.1007\/BF00992699","volume":"8","author":"L Lin","year":"1992","unstructured":"Lin L (1992) Self-improving reactive agents based on reinforcement learning, planning and teaching. Mach Learn 8:293\u2013321","journal-title":"Mach Learn"},{"key":"5705_CR25","doi-asserted-by":"publisher","first-page":"103","DOI":"10.1007\/BF00993104","volume":"13","author":"AW Moore","year":"2004","unstructured":"Moore AW, Atkeson CG (2004) Prioritized sweeping: Reinforcement learning with less data and less time. 
Mach Learn 13:103\u2013130","journal-title":"Mach Learn"},{"key":"5705_CR26","unstructured":"Schaul T, Quan J, Antonoglou I, Silver D (2016) Prioritized experience replay. In: ICLR (Poster)"},{"key":"5705_CR27","unstructured":"Baxter J, Bartlett PL (2000) Reinforcement learning in pomdp\u2019s via direct gradient ascent. In: ICML, pp 41\u201348"},{"key":"5705_CR28","doi-asserted-by":"publisher","first-page":"137","DOI":"10.1007\/s10994-011-5235-x","volume":"84","author":"R Hafner","year":"2011","unstructured":"Hafner R, Riedmiller M (2011) Reinforcement learning in feedback control: Challenges and benchmarks from technical process control. Mach Learn 84:137\u2013169","journal-title":"Mach Learn"},{"key":"5705_CR29","unstructured":"Schulman J, Moritz P, Levine S, Jordan MI, Abbeel P (2016) High-dimensional continuous control using generalized advantage estimation. In: 4th International conference on learning representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings"},{"key":"5705_CR30","unstructured":"Hasselt HV, Guez A, Silver D (2015) Deep reinforcement learning with double q-learning. In: AAAI conference on artificial intelligence"},{"key":"5705_CR31","unstructured":"Fortunato M, Azar MG, Piot B, Menick J, Osband I, Graves A, Mnih V, Munos R, Hassabis D, Pietquin O, Blundell C, Legg S (2018) Noisy networks for exploration. In: 6th International conference on learning representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings"},{"key":"5705_CR32","unstructured":"D\u2019Oro P, Schwarzer M, Nikishin E, Bacon P-L, Bellemare MG, Courville A (2022) Sample-efficient reinforcement learning by breaking the replay ratio barrier. In: Deep Reinforcement Learning Workshop NeurIPS 2022"},{"key":"5705_CR33","unstructured":"Lee H, Cho H, Kim H, Gwak D, Kim J, Choo J, Yun S-Y, Yun C (2024) Plastic: Improving input and label plasticity for sample efficient reinforcement learning. Adv Neural Inf Process Syst 36"},{"key":"5705_CR34","unstructured":"Nikishin E, Oh J, Ostrovski G, Lyle C, Pascanu R, Dabney W, Barreto A (2024) Deep reinforcement learning with plasticity injection. Adv Neural Inf Process Syst 36"},{"key":"5705_CR35","unstructured":"Sokar G, Agarwal R, Castro P, Evci U (2023) The dormant neuron phenomenon in deep reinforcement learning. In: International conference on machine learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA. Proceedings of Machine Learning Research, vol 202, pp 32145\u201332168"},{"key":"5705_CR36","unstructured":"Bhardwaj M, Xie T, Boots B, Jiang N, Cheng C-A (2024) Adversarial model for offline reinforcement learning. Adv Neural Inf Process Syst 36"},{"key":"5705_CR37","unstructured":"Cagatan OV, Akgun B (2024) Barlowrl: Barlow twins for data-efficient reinforcement learning. In: Asian conference on machine learning, pp 201\u2013216. PMLR"},{"key":"5705_CR38","doi-asserted-by":"crossref","unstructured":"Hao J, Yang T, Tang H, Bai C, Liu J, Meng Z, Liu P, Wang Z (2023) Exploration in deep reinforcement learning: From single-agent to multiagent domain. IEEE Trans Neural Netw Learn Syst","DOI":"10.1109\/TNNLS.2023.3236361"},{"key":"5705_CR39","unstructured":"Abbas Z, Zhao R, Modayil J, White A, Machado MC (2023) Loss of plasticity in continual deep reinforcement learning. In: Conference on lifelong learning agents, 22-25 August 2023, McGill University, Montr\u00e9al, Qu\u00e9bec, Canada. 
Proceedings of Machine Learning Research, vol 232, pp 620\u2013636"},{"key":"5705_CR40","unstructured":"Schulman J, Levine S, Abbeel P, Jordan MI, Moritz P (2015) Trust region policy optimization. In: Proceedings of the 32nd International conference on machine learning, ICML 2015, Lille, France, 6-11 July 2015. JMLR Workshop and Conference Proceedings, vol 37, pp 1889\u20131897"},{"key":"5705_CR41","unstructured":"Gruslys A, Dabney W, Azar MG, Piot B, Bellemare MG, Munos R (2018) The reactor: A fast and sample-efficient actor-critic agent for reinforcement learning. In: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings"},{"key":"5705_CR42","doi-asserted-by":"crossref","unstructured":"Rawlik K, Toussaint M, Vijayakumar S (2013) On stochastic optimal control and reinforcement learning by approximate inference (extended abstract). In: International joint conference on artificial intelligence, pp 3052\u20133056","DOI":"10.15607\/RSS.2012.VIII.045"},{"key":"5705_CR43","unstructured":"Fox R, Pakman A, Tishby N (2015) G-learning: Taming the noise in reinforcement learning via soft updates. CoRR arXiv:1512.08562"},{"issue":"39","key":"5705_CR44","first-page":"1","volume":"17","author":"S Levine","year":"2016","unstructured":"Levine S, Finn C, Darrell T, Abbeel P (2016) End-to-end training of deep visuomotor policies. J Mach Learn Res 17(39):1\u201340","journal-title":"J Mach Learn Res"},{"key":"5705_CR45","unstructured":"Haarnoja T, Tang H, Abbeel P, Levine S (2017) Reinforcement learning with deep energy-based policies. In: Precup D, Teh YW (eds) Proceedings of the 34th international conference on machine learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017. Proceedings of Machine Learning Research, vol 70, pp 1352\u20131361"},{"key":"5705_CR46","unstructured":"Cohen A, Yu L, Qiao X, Tong X (2019) Maximum entropy diverse exploration: Disentangling maximum entropy reinforcement learning. CoRR arXiv:1911.00828"},{"key":"5705_CR47","unstructured":"Gangwani T, Liu Q, Peng J (2019) Learning self-imitating diverse policies. In: 7th International conference on learning representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019"},{"key":"5705_CR48","doi-asserted-by":"crossref","unstructured":"Henderson P, Islam R, Bachman P, Pineau J, Precup D, Meger D (2018) Deep reinforcement learning that matters. In: McIlraith SA, Weinberger KQ (eds) Proceedings of the Thirty-Second AAAI conference on artificial intelligence, (AAAI-18), the 30th innovative applications of artificial intelligence (IAAI-18), and the 8th AAAI symposium on educational advances in artificial intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pp 3207\u20133214","DOI":"10.1609\/aaai.v32i1.11694"},{"key":"5705_CR49","unstructured":"Li Y, Xu J, Han L, Luo Z (2024) Hyperagent: A simple, scalable, efficient and provable reinforcement learning framework for complex environments. CoRR. arXiv:2402.10228"},{"key":"5705_CR50","unstructured":"Schwarzer M, Obando-Ceron JS, Courville AC, Bellemare MG, Agarwal R, Castro PS (2023) Bigger, better, faster: Human-level atari with human-level efficiency. In: Krause A, Brunskill E, Cho K, Engelhardt B, Sabato S, Scarlett J (eds) International conference on machine learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA. 
Proceedings of Machine Learning Research, vol 202, pp 30365\u201330380"},{"key":"5705_CR51","unstructured":"Tiapkin D, Belomestny D, Moulines E, Naumov A, Samsonov S, Tang Y, Valko M, M\u00e9nard P (2022) From dirichlet to rubin: Optimistic exploration in RL without bonuses. In: International conference on machine learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA. Proceedings of Machine Learning Research, vol 162, pp 21380\u201321431"},{"key":"5705_CR52","unstructured":"Eberhard O, Hollenstein JJ, Pinneri C, Martius G (2023) Pink noise is all you need: Colored noise exploration in deep reinforcement learning. In: The eleventh international conference on learning representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023"},{"key":"5705_CR53","unstructured":"Obando-Ceron JS, Bellemare MG, Castro PS (2023) The small batch size anomaly in multistep deep reinforcement learning. In: Maughan K, Liu R, Burns TF (eds) The first tiny papers track at ICLR 2023, Tiny Papers @ ICLR 2023, Kigali, Rwanda, May 5, 2023"},{"key":"5705_CR54","doi-asserted-by":"publisher","first-page":"679","DOI":"10.1512\/iumj.1957.6.56038","volume":"6","author":"R Bellman","year":"1957","unstructured":"Bellman R (1957) A markovian decision process. Indiana Univ Math J 6:679\u2013684","journal-title":"Indiana Univ Math J"},{"issue":"8","key":"5705_CR55","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S Hochreiter","year":"1997","unstructured":"Hochreiter S, Schmidhuber J (1997) Long Short-Term Memory. Neural Comput 9(8):1735\u20131780","journal-title":"Neural Comput"},{"key":"5705_CR56","unstructured":"Wang Z, Schaul T, Hessel M, Hasselt H, Lanctot M, Freitas N (2016) Dueling network architectures for deep reinforcement learning. In: Balcan M, Weinberger KQ (eds) Proceedings of the 33nd international conference on machine learning, ICML 2016, New York City, NY, USA, June 19-24, 2016. JMLR Workshop and Conference Proceedings, vol 48, pp 1995\u20132003"},{"key":"5705_CR57","unstructured":"Asadi K, Misra D, Kim S, Littman ML (2019) Combating the compounding-error problem with a multi-step model. CoRR arXiv:1905.13320"},{"key":"5705_CR58","unstructured":"Ziebart BD (2010) Modeling purposeful adaptive behavior with the principle of maximum causal entropy. PhD thesis, Carnegie Mellon University, USA"},{"key":"5705_CR59","unstructured":"Bellemare MG, Naddaf Y, Veness J, Bowling M (2015) The arcade learning environment: An evaluation platform for general agents (extended abstract). In: Yang Q, Wooldridge MJ (eds) Proceedings of the twenty-fourth international joint conference on artificial intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pp 4148\u20134152"},{"issue":"1","key":"5705_CR60","doi-asserted-by":"publisher","first-page":"1020","DOI":"10.1007\/s10489-023-05240-w","volume":"54","author":"M Kemmerling","year":"2023","unstructured":"Kemmerling M, L\u00fctticke D, Schmitt RH (2023) Beyond games: a systematic review of neural monte carlo tree search applications. Appl Intell 54(1):1020\u20131046","journal-title":"Appl Intell"},{"key":"5705_CR61","unstructured":"Nair A, Srinivasan P, Blackwell S, Alcicek C, Fearon R, Maria AD, Panneershelvam V, Suleyman M, Beattie C, Petersen S, Legg S, Mnih V, Kavukcuoglu K, Silver D (2015) Massively parallel methods for deep reinforcement learning. 
CoRR arXiv:1507.04296"},{"key":"5705_CR62","unstructured":"Zhang L, Tang K, Yao X (2019) Explicit planning for efficient exploration in reinforcement learning. In: Wallach HM, Larochelle H, Beygelzimer A, d\u2019Alch\u00e9-Buc F, Fox EB, Garnett R (eds) Advances in neural information processing systems 32: annual conference on neural information processing systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp 7486\u20137495"},{"key":"5705_CR63","unstructured":"Obando-Ceron JS, Courville AC, Castro PS (2024) In deep reinforcement learning, a pruned network is a good network. CoRR arXiv:2402.12479"}],"container-title":["Applied Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-024-05705-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-024-05705-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-024-05705-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,8,15]],"date-time":"2024-08-15T13:32:04Z","timestamp":1723728724000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-024-05705-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,9]]},"references-count":63,"journal-issue":{"issue":"20","published-print":{"date-parts":[[2024,10]]}},"alternative-id":["5705"],"URL":"https:\/\/doi.org\/10.1007\/s10489-024-05705-6","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"type":"print","value":"0924-669X"},{"type":"electronic","value":"1573-7497"}],"subject":[],"published":{"date-parts":[[2024,8,9]]},"assertion":[{"value":"27 July 2024","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 August 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that there are no competing interests associated with this research.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing Interests"}},{"value":"This research adheres to ethical standards, and all individuals involved in the study were fully informed and provided consent for the use of their data.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical and Informed Consent for Data Used"}}]}}
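The object above is a Crossref "work" message for DOI 10.1007/s10489-024-05705-6. As a minimal sketch (not part of the record itself), the snippet below shows how such a record can be retrieved and inspected through the public Crossref REST API; the endpoint pattern https://api.crossref.org/works/{DOI} and the field names (message.title, message.author, message.reference) mirror the record shown, while the helper name fetch_crossref_work is a hypothetical label introduced here for illustration.

# Sketch: fetch and inspect the Crossref work record shown above.
# Uses only the Python standard library; assumes network access to api.crossref.org.
import json
import urllib.request

DOI = "10.1007/s10489-024-05705-6"  # DOI taken from the record above

def fetch_crossref_work(doi: str) -> dict:
    """Return the 'message' payload of a Crossref work record (hypothetical helper)."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # Crossref wraps the work metadata in a 'message' envelope, as in the record above.
    return payload["message"]

if __name__ == "__main__":
    work = fetch_crossref_work(DOI)
    print(work["title"][0])                          # article title
    print(len(work.get("reference", [])), "references")  # should report 63 for this record
    for author in work.get("author", []):
        print(author.get("given", ""), author.get("family", ""))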