{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,4,16]],"date-time":"2025-04-16T11:48:28Z","timestamp":1744804108469,"version":"3.37.3"},"reference-count":59,"publisher":"MDPI AG","issue":"14","license":[{"start":{"date-parts":[[2023,7,9]],"date-time":"2023-07-09T00:00:00Z","timestamp":1688860800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100003626","name":"DAPA","doi-asserted-by":"publisher","award":["UD230017TD"],"id":[{"id":"10.13039\/501100003626","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"Research on video anomaly detection has mainly been based on video data. However, many real-world cases involve users who can conceive potential normal and abnormal situations within the anomaly detection domain. This domain knowledge can be conveniently expressed as text descriptions, such as \u201cwalking\u201d or \u201cpeople fighting\u201d, which can be easily obtained, customized for specific applications, and applied to unseen abnormal videos not included in the training dataset. We explore the potential of using these text descriptions with unlabeled video datasets. We use large language models to obtain text descriptions and leverage them to detect abnormal frames by calculating the cosine similarity between the input frame and text descriptions using the CLIP visual language model. To enhance the performance, we refined the CLIP-derived cosine similarity using an unlabeled dataset and the proposed text-conditional similarity, which is a similarity measure between two vectors based on additional learnable parameters and a triplet loss. The proposed method has a simple training and inference process that avoids the computationally intensive analyses of optical flow or multiple frames. The experimental results demonstrate that the proposed method outperforms unsupervised methods by showing 8% and 13% better AUC scores for the ShanghaiTech and UCFcrime datasets, respectively. Although the proposed method shows \u22126% and \u22125% than weakly supervised methods for those datasets, in abnormal videos, the proposed method shows 17% and 5% better AUC scores, which means that the proposed method shows comparable results with weakly supervised methods that require resource-intensive dataset labeling. 
These outcomes validate the potential of using text descriptions in unsupervised video anomaly detection.<\/jats:p>","DOI":"10.3390\/s23146256","type":"journal-article","created":{"date-parts":[[2023,7,10]],"date-time":"2023-07-10T05:02:50Z","timestamp":1688965370000},"page":"6256","source":"Crossref","is-referenced-by-count":6,"title":["Unsupervised Video Anomaly Detection Based on Similarity with Predefined Text Descriptions"],"prefix":"10.3390","volume":"23","author":[{"given":"Jaehyun","family":"Kim","sequence":"first","affiliation":[{"name":"School of Electrical Engineering, Korea University, Seoul 02841, Republic of Korea"}]},{"given":"Seongwook","family":"Yoon","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Korea University, Seoul 02841, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-8975-7430","authenticated-orcid":false,"given":"Taehyeon","family":"Choi","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Korea University, Seoul 02841, Republic of Korea"}]},{"given":"Sanghoon","family":"Sull","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Korea University, Seoul 02841, Republic of Korea"}]}],"member":"1968","published-online":{"date-parts":[[2023,7,9]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Zaheer, M.Z., Mahmood, A., Khan, M.H., Segu, M., Yu, F., and Lee, S.I. (2022, June 18\u201324). Generative cooperative learning for unsupervised video anomaly detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01433"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Zhao, B., Fei-Fei, L., and Xing, E.P. (2011, June 20\u201325). Online detection of unusual events in videos via dynamic sparse coding. Proceedings of the CVPR 2011, Colorado Springs, CO, USA.","DOI":"10.1109\/CVPR.2011.5995524"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Hasan, M., Choi, J., Neumann, J., Roy-Chowdhury, A.K., and Davis, L.S. (2016, June 27\u201330). Learning temporal regularity in video sequences. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.86"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Cheng, K.W., Chen, Y.T., and Fang, W.H. (2015, June 7\u201312). Video anomaly detection and localization using hierarchical feature representation and Gaussian process regression. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298909"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Qiao, M., Wang, T., Li, J., Li, C., Lin, Z., and Snoussi, H. (2017, July 26\u201328). Abnormal event detection based on deep autoencoder fusing optical flow. Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China.","DOI":"10.23919\/ChiCC.2017.8029129"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"117","DOI":"10.1016\/j.cviu.2016.10.010","article-title":"Detecting anomalous events in videos by learning deep representations of appearance and motion","volume":"156","author":"Xu","year":"2017","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_7","unstructured":"Zaheer, M.Z., Lee, J.H., Astrid, M., and Lee, S.I. (2020, June 13\u201319). Old is gold: Redefining the adversarially learned one-class classifier training paradigm. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"2301","DOI":"10.1109\/TNNLS.2021.3083152","article-title":"Robust unsupervised video anomaly detection by multipath frame prediction","volume":"33","author":"Wang","year":"2021","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_9","first-page":"4505","article-title":"A background-agnostic framework with adversarial training for abnormal event detection in video","volume":"44","author":"Georgescu","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_10","unstructured":"Reiss, T., and Hoshen, Y. (2022). Attribute-Based Representations for Accurate and Interpretable Video Anomaly Detection. arXiv."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Sultani, W., Chen, C., and Shah, M. (2018, June 18\u201322). Real-world anomaly detection in surveillance videos. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00678"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Tian, Y., Pang, G., Chen, Y., Singh, R., Verjans, J.W., and Carneiro, G. (2021, October 11\u201317). Weakly-supervised video anomaly detection with robust temporal feature magnitude learning. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Virtual.","DOI":"10.1109\/ICCV48922.2021.00493"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Sapkota, H., and Yu, Q. (2022, June 18\u201324). Bayesian nonparametric submodular video partition for robust anomaly detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00321"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"3513","DOI":"10.1109\/TIP.2021.3062192","article-title":"Learning causal temporal relation and feature discrimination for anomaly detection","volume":"30","author":"Wu","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Zhang, C., Li, G., Qi, Y., Wang, S., Qing, L., Huang, Q., and Yang, M.H. (2023, June 18\u201322). Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.01561"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Lv, H., Yue, Z., Sun, Q., Luo, B., Cui, Z., and Zhang, H. (2023, June 18\u201322). Unbiased Multiple Instance Learning for Weakly Supervised Video Anomaly Detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00775"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Cho, M., Kim, M., Hwang, S., Park, C., Lee, K., and Lee, S. (2023, June 18\u201322). Look Around for Anomalies: Weakly-Supervised Anomaly Detection via Context-Motion Relational Learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.01168"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Yu, G., Wang, S., Cai, Z., Liu, X., Xu, C., and Wu, C. (2022, June 18\u201324). 
Deep anomaly discovery from unlabeled videos via normality advantage and self-paced refinement. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01360"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Tur, A.O., Dall\u2019Asen, N., Beyan, C., and Ricci, E. (2023). Exploring Diffusion Models for Unsupervised Video Anomaly Detection. arXiv.","DOI":"10.1109\/ICIP49359.2023.10222594"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Sato, F., Hachiuma, R., and Sekii, T. (2023, June 18\u201322). Prompt-Guided Zero-Shot Anomaly Action Recognition Using Pretrained Deep Skeleton Features. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00626"},{"key":"ref_21","unstructured":"ChatGPT. Available online: https:\/\/chat.openai.com (accessed on 20 March 2023)."},{"key":"ref_22","unstructured":"Menon, S., and Vondrick, C. (2022). Visual Classification via Description from Large Language Models. arXiv."},{"key":"ref_23","unstructured":"Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., and Clark, J. (2021, July 18\u201324). Learning transferable visual models from natural language supervision. Proceedings of the International Conference on Machine Learning, PMLR, Virtual."},{"key":"ref_24","unstructured":"Jia, C., Yang, Y., Xia, Y., Chen, Y.T., Parekh, Z., Pham, H., Le, Q., Sung, Y.H., Li, Z., and Duerig, T. (2021, July 18\u201324). Scaling up visual and vision-language representation learning with noisy text supervision. Proceedings of the International Conference on Machine Learning, PMLR, Virtual."},{"key":"ref_25","unstructured":"Li, J., Li, D., Xiong, C., and Hoi, S. (2022, July 17\u201323). Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Singh, A., Hu, R., Goswami, V., Couairon, G., Galuba, W., Rohrbach, M., and Kiela, D. (2022, June 18\u201324). Flava: A foundational language and vision alignment model. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01519"},{"key":"ref_27","unstructured":"Zeng, Y., Zhang, X., and Li, H. (2021). Multi-grained vision language pre-training: Aligning texts with visual concepts. arXiv."},{"key":"ref_28","unstructured":"Yao, L., Huang, R., Hou, L., Lu, G., Niu, M., Xu, H., Liang, X., Li, Z., Jiang, X., and Xu, C. (2021). FILIP: Fine-grained interactive language-image pre-training. arXiv."},{"key":"ref_29","unstructured":"Zhang, H., Zhang, P., Hu, X., Chen, Y.C., Li, L.H., Dai, X., Wang, L., Yuan, L., Hwang, J.N., and Gao, J. (2022). Advances in Neural Information Processing Systems 35, Proceedings of the Annual Conference on Neural Information Processing Systems 2022, New Orleans, LA, USA, 28 November\u20139 December 2022, Curran Associates, Inc."},{"key":"ref_30","first-page":"2609","article-title":"A deep one-class neural network for anomalous event detection in complex scenes","volume":"31","author":"Wu","year":"2019","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Liu, W., Luo, W., Lian, D., and Gao, S. (2018, June 18\u201322). 
Future frame prediction for anomaly detection\u2013a new baseline. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00684"},{"key":"ref_32","unstructured":"Redmon, J., and Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Wortsman, M., Ilharco, G., Kim, J.W., Li, M., Kornblith, S., Roelofs, R., Lopes, R.G., Hajishirzi, H., Farhadi, A., and Namkoong, H. (2022, June 18\u201324). Robust fine-tuning of zero-shot models. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00780"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"2337","DOI":"10.1007\/s11263-022-01653-1","article-title":"Learning to prompt for vision-language models","volume":"130","author":"Zhou","year":"2022","journal-title":"Int. J. Comput. Vis."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Zhou, K., Yang, J., Loy, C.C., and Liu, Z. (2022, June 18\u201324). Conditional prompt learning for vision-language models. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01631"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Lu, Y., Liu, J., Zhang, Y., Liu, Y., and Tian, X. (2022, June 18\u201324). Prompt distribution learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00514"},{"key":"ref_37","unstructured":"Gao, P., Geng, S., Zhang, R., Ma, T., Fang, R., Zhang, Y., Li, H., and Qiao, Y. (2021). Clip-adapter: Better vision-language models with feature adapters. arXiv."},{"key":"ref_38","unstructured":"Zhang, R., Fang, R., Zhang, W., Gao, P., Li, K., Dai, J., Qiao, Y., and Li, H. (2021). Tip-adapter: Training-free clip-adapter for better vision-language modeling. arXiv."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"1265","DOI":"10.1109\/TPAMI.2005.151","article-title":"A sparse texture representation using local affine regions","volume":"27","author":"Lazebnik","year":"2005","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_40","unstructured":"Nilsback, M.E., and Zisserman, A. (2006, June 17\u201322). A visual vocabulary for flower classification. Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR\u201906), New York, NY, USA."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1016\/S0004-3702(96)00034-3","article-title":"Solving the multiple instance problem with axis-parallel rectangles","volume":"89","author":"Dietterich","year":"1997","journal-title":"Artif. Intell."},{"key":"ref_42","unstructured":"Teh, Y., Jordan, M., Beal, M., and Blei, D. (2004). Advances in Neural Information Processing Systems 17, Proceedings of the Annual Conference on Neural Information Processing Systems 2004, Vancouver, BC, Canada, 13\u201318 December 2004, Curran Associates, Inc."},{"key":"ref_43","unstructured":"Kumar, M., Packer, B., and Koller, D. (2010). Advances in Neural Information Processing Systems 23, Proceedings of the Annual Conference on Neural Information Processing Systems 2010, Vancouver, BC, Canada, 6\u20139 December 2010, Curran Associates, Inc."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Hara, K., Kataoka, H., and Satoh, Y. 
(2018, June 18\u201322). Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet? Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00685"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Purwanto, D., Chen, Y.T., and Fang, W.H. (2021, October 11\u201317). Dance with self-attention: A new look of conditional random fields on anomaly detection in videos. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Virtual.","DOI":"10.1109\/ICCV48922.2021.00024"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Rao, Y., Zhao, W., Chen, G., Tang, Y., Zhu, Z., Huang, G., Zhou, J., and Lu, J. (2022, June 18\u201324). Denseclip: Language-guided dense prediction with context-aware prompting. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01755"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Wang, C., Chai, M., He, M., Chen, D., and Liao, J. (2022, June 18\u201324). Clip-nerf: Text-and-image driven manipulation of neural radiance fields. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00381"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Shi, H., Hayat, M., Wu, Y., and Cai, J. (2022, June 18\u201324). ProposalCLIP: Unsupervised open-category object proposal generation via exploiting clip cues. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00939"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Wang, Z., Lu, Y., Li, Q., Tao, X., Guo, Y., Gong, M., and Liu, T. (2022, June 18\u201324). Cris: Clip-driven referring image segmentation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01139"},{"key":"ref_50","unstructured":"Shen, S., Li, L.H., Tan, H., Bansal, M., Rohrbach, A., Chang, K.W., Yao, Z., and Keutzer, K. (2021). How much can clip benefit vision-and-language tasks? arXiv."},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Li, M., Xu, R., Wang, S., Zhou, L., Lin, X., Zhu, C., Zeng, M., Ji, H., and Chang, S.F. (2022, June 18\u201324). Clip-event: Connecting text and images with event structures. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01593"},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Khandelwal, A., Weihs, L., Mottaghi, R., and Kembhavi, A. (2022, June 18\u201324). Simple but effective: Clip embeddings for embodied ai. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01441"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Qian, Q., Shang, L., Sun, B., Hu, J., Li, H., and Jin, R. (2019, October 27\u2013November 2). Softtriple loss: Deep metric learning without triplet sampling. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea.","DOI":"10.1109\/ICCV.2019.00655"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, September 6\u201312). 
Microsoft coco: Common objects in context. Proceedings of the Computer Vision\u2013ECCV 2014: 13th European Conference, Zurich, Switzerland. Proceedings, Part V 13.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Ionescu, R.T., Khan, F.S., Georgescu, M.I., and Shao, L. (2019, June 15\u201320). Object-centric auto-encoders and dummy anomalies for abnormal event detection in video. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00803"},{"key":"ref_56","unstructured":"Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., and Antiga, L. (2019). Advances in Neural Information Processing Systems 32, Proceedings of the Annual Conference on Neural Information Processing Systems 2019, Vancouver, BC, Canada, 8\u201314 December 2019, Curran Associates, Inc."},{"key":"ref_57","unstructured":"Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., and Isard, M. (2016, November 2\u20134). Tensorflow: A system for large-scale machine learning. Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Carreira, J., and Zisserman, A. (2017, July 21\u201326). Quo vadis, action recognition? A new model and the kinetics dataset. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.502"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Liu, P., Lyu, M., King, I., and Xu, J. (2019, June 15\u201320). Selflow: Self-supervised learning of optical flow. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00470"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/14\/6256\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,12,16]],"date-time":"2023-12-16T19:17:57Z","timestamp":1702754277000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/14\/6256"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,9]]},"references-count":59,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2023,7]]}},"alternative-id":["s23146256"],"URL":"https:\/\/doi.org\/10.3390\/s23146256","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2023,7,9]]}}}