{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,9,19]],"date-time":"2024-09-19T16:08:12Z","timestamp":1726762092569},"reference-count":62,"publisher":"MDPI AG","issue":"17","license":[{"start":{"date-parts":[[2020,8,23]],"date-time":"2020-08-23T00:00:00Z","timestamp":1598140800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"This study proposes a framework for describing a scene change using natural language text based on indoor scene observations conducted before and after a scene change. The recognition of scene changes plays an essential role in a variety of real-world applications, such as scene anomaly detection. Most scene understanding research has focused on static scenes. Most existing scene change captioning methods detect scene changes from single-view RGB images, neglecting the underlying three-dimensional structures. Previous three-dimensional scene change captioning methods use simulated scenes consisting of geometry primitives, making it unsuitable for real-world applications. To solve these problems, we automatically generated large-scale indoor scene change caption datasets. We propose an end-to-end framework for describing scene changes from various input modalities, namely, RGB images, depth images, and point cloud data, which are available in most robot applications. We conducted experiments with various input modalities and models and evaluated model performance using datasets with various levels of complexity. Experimental results show that the models that combine RGB images and point cloud data as input achieve high performance in sentence generation and caption correctness and are robust for change type understanding for datasets with high complexity. 
The developed datasets and models contribute to the study of indoor scene change understanding.<\/jats:p>","DOI":"10.3390\/s20174761","type":"journal-article","created":{"date-parts":[[2020,8,24]],"date-time":"2020-08-24T01:28:06Z","timestamp":1598232486000},"page":"4761","source":"Crossref","is-referenced-by-count":17,"title":["Indoor Scene Change Captioning Based on Multimodality Data"],"prefix":"10.3390","volume":"20","author":[{"ORCID":"http:\/\/orcid.org\/0000-0002-2181-9475","authenticated-orcid":false,"given":"Yue","family":"Qiu","sequence":"first","affiliation":[{"name":"Graduate School of Science and Technology, University of Tsukuba, Tsukuba 305-8577, Japan"},{"name":"National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan"}]},{"given":"Yutaka","family":"Satoh","sequence":"additional","affiliation":[{"name":"Graduate School of Science and Technology, University of Tsukuba, Tsukuba 305-8577, Japan"},{"name":"National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan"}]},{"given":"Ryota","family":"Suzuki","sequence":"additional","affiliation":[{"name":"National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan"}]},{"given":"Kenji","family":"Iwata","sequence":"additional","affiliation":[{"name":"National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan"}]},{"ORCID":"http:\/\/orcid.org\/0000-0001-8844-165X","authenticated-orcid":false,"given":"Hirokatsu","family":"Kataoka","sequence":"additional","affiliation":[{"name":"National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan"}]}],"member":"1968","published-online":{"date-parts":[[2020,8,23]]},"reference":[{"key":"ref_1","unstructured":"(2020, July 31). Google Assistant Site. Available online: https:\/\/assistant.google.com\/."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1204","DOI":"10.1126\/science.aar6170","article-title":"Neural scene representation and rendering","volume":"360","author":"Eslami","year":"2018","journal-title":"Science"},{"key":"ref_3","unstructured":"Qi, C.R., Su, H., Mo, K., and Guibas, L.J. (2017, January 21\u201326). Pointnet: Deep learning on point sets for 3d classification and segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA."},{"key":"ref_4","unstructured":"Qi, C.R., Yi, L., Su, H., and Guibas, L.J. (2017, January 4\u20139). Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_5","unstructured":"Zhang, Z., Hua, B.S., and Yeung, S.K. (November, January 27). Shellnet: Efficient point cloud convolutional neural networks using concentric shells statistics. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1875","DOI":"10.1016\/j.imavis.2005.12.020","article-title":"Visual recognition of pointing gestures for human\u2013robot interaction","volume":"25","author":"Nickel","year":"2007","journal-title":"Image Vis. Comput."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"677","DOI":"10.1109\/34.598226","article-title":"Visual interpretation of hand gestures for human-computer interaction: A review","volume":"19","author":"Pavlovic","year":"1997","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"221","DOI":"10.1109\/TPAMI.2012.59","article-title":"3D convolutional neural networks for human action recognition","volume":"35","author":"Ji","year":"2012","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Xia, L., Chen, C.C., and Aggarwal, J.K. (2012, January 16\u201321). View invariant human action recognition using histograms of 3d joints. Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA.","DOI":"10.1109\/CVPRW.2012.6239233"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"10","DOI":"10.1016\/j.inffus.2018.10.009","article-title":"Human emotion recognition using deep belief network architecture","volume":"51","author":"Hassan","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1301","DOI":"10.1109\/JSTSP.2017.2764438","article-title":"End-to-end multimodal emotion recognition using deep neural networks","volume":"11","author":"Tzirakis","year":"2017","journal-title":"IEEE J. Sel. Top. Signal Process."},{"key":"ref_12","unstructured":"Afouras, T., Chung, J.S., Senior, A., Vinyals, O., and Zisserman, A. (2018). Deep audio-visual speech recognition. IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Dong, L., Xu, S., and Xu, B. (2018, January 15\u201320). Speech-transformer: A no-recurrence sequence-to-sequence model for speech recognition. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8462506"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Zhao, T., and Eskenazi, M. (2016). Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. arXiv.","DOI":"10.18653\/v1\/W16-3601"},{"key":"ref_15","unstructured":"Asadi, K., and Williams, J.D. (2016). Sample-efficient deep reinforcement learning for dialog control. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Zhou, Y., and Tuzel, O. (2018, January 18\u201323). Voxelnet: End-to-end learning for point cloud based 3d object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00472"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Graham, B., Engelcke, M., and Van Der Maaten, L. (2018, January 18\u201322). 3d semantic segmentation with submanifold sparse convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00961"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Das, A., Datta, S., Gkioxari, G., Lee, S., Parikh, D., and Batra, D. (2018, January 18\u201322). Embodied question answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00008"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Wijmans, E., Datta, S., Maksymets, O., Das, A., Gkioxari, G., Lee, S., Essa, I., Parikh, D., and Batra, D. (2019, January 15\u201321). Embodied question answering in photorealistic environments with point cloud perception. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00682"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., S\u00fcnderhauf, N., and van den Hengel, A. (2019, January 15\u201321). Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2018.00387"},{"key":"ref_21","unstructured":"Fried, D., Hu, R., Cirik, V., Rohrbach, A., Andreas, J., Morency, L.P., and Darrell, T. (2018, January 3\u20138). Speaker-follower models for vision-and-language navigation. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Lawrence, Z.C., and Parikh, D. (2015, January 7\u201313). Vqa: Visual question answering. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.279"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., and Parikh, D. (2017, January 21\u201326). Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.670"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., and Girshick, R. (2017, January 21\u201326). Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.215"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., and Zhang, L. (2018, January 18\u201322). Bottom-up and top-down attention for image captioning and visual question answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00636"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Lu, J., Xiong, C., Parikh, D., and Socher, R. (2017, January 21\u201326). Knowing when to look: Adaptive attention via a visual sentinel for image captioning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.345"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Yao, T., Pan, Y., Li, Y., and Mei, T. (2018, January 8\u201314). Exploring visual relationship for image captioning. Proceedings of the European Conference on Computer Vision, Munich, Germany.","DOI":"10.1007\/978-3-030-01264-9_42"},{"key":"ref_28","unstructured":"Lee, K.H., Palangi, H., Chen, X., Hu, H., and Gao, J. (2019). Learning visual relation priors for image-text matching and image captioning with neural scene graph generators. arXiv."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Deshpande, A., Aneja, J., Wang, L., Schwing, A.G., and Forsyth, D. (2019, January 13\u201318). Fast, diverse and accurate image captioning guided by part-of-speech. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA.","DOI":"10.1109\/CVPR.2019.01095"},{"key":"ref_30","unstructured":"Chen, F., Ji, R., Ji, J., Sun, X., Zhang, B., Ge, X., and Wang, Y. (2019, January 3\u20138). Variational Structured Semantic Inference for Diverse Image Captioning. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Das, A., Kottur, S., Gupta, K., Singh, A., Yadav, D., Moura, J.M., and Batra, D. (2017, January 21\u201326). Visual dialog. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.121"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Das, A., Kottur, S., Moura, J.M., Lee, S., and Batra, D. (2017, January 22\u201329). Learning cooperative visual dialog agents with deep reinforcement learning. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.321"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Jhamtani, H., and Berg-Kirkpatrick, T. (2018). Learning to describe differences between pairs of similar images. arXiv.","DOI":"10.18653\/v1\/D18-1436"},{"key":"ref_34","unstructured":"Park, D.H., Darrell, T., and Rohrbach, A. (November, January 27). Robust change captioning. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"4743","DOI":"10.1109\/LRA.2020.3003290","article-title":"3D-Aware Scene Change Captioning From Multiview Images","volume":"5","author":"Qiu","year":"2020","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_36","unstructured":"(2020, July 31). Kinect Site. Available online: https:\/\/www.xbox.com\/en-US\/kinect\/."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Dai, A., Ruizhongtai, Q.C., and Nie\u00dfner, M. (2017, January 21\u201326). Shape completion using 3d-encoder-predictor cnns and shape synthesis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.693"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Su, H., Maji, S., Kalogerakis, E., and Learned-Miller, E. (2015, January 7\u201313). Multi-view convolutional neural networks for 3d shape recognition. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.114"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Kanezaki, A., Matsushita, Y., and Nishida, Y. (2018, January 18\u201322). Rotationnet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00526"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Esteves, C., Xu, Y., Allen-Blanchette, C., and Daniilidis, K. (2019, January 16\u201320). Equivariant multi-view networks. Proceedings of the IEEE International Conference on Computer Vision, Long Beach, CA, USA.","DOI":"10.1109\/ICCV.2019.00165"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Zhang, Y., and Funkhouser, T. (2018, January 18\u201322). Deep depth completion of a single rgb-d image. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00026"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Qi, C.R., Liu, W., Wu, C., Su, H., and Guibas, L.J. (2018, January 18\u201322). Frustum pointnets for 3d object detection from rgb-d data. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00102"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., and Funkhouser, T. (2017, January 21\u201326). Semantic scene completion from a single depth image. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.28"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Xia, F., Zamir, A.R., He, Z., Sax, A., Malik, J., and Savarese, S. (2018, January 18\u201322). Gibson env: Real-world perception for embodied agents. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00945"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Chang, A., Dai, A., Funkhouser, T., Halber, M., Niebner, M., Savva, M., and Zhang, Y. (2017, January 10\u201312). Matterport3D: Learning from RGB-D Data in Indoor Environments. Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China.","DOI":"10.1109\/3DV.2017.00081"},{"key":"ref_46","unstructured":"Straub, J., Whelan, T., Ma, L., Chen, Y., Wijmans, E., Green, S., and Clarkson, A. (2019). The Replica dataset: A digital replica of indoor spaces. arXiv."},{"key":"ref_47","unstructured":"(2020, July 31). NEDO Item Database. Available online: http:\/\/mprg.cs.chubu.ac.jp\/NEDO_DB\/."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Calli, B., Singh, A., Walsman, A., Srinivasa, S., Abbeel, P., and Dollar, A.M. (2015, January 27\u201331). The ycb object and model set: Towards common benchmarks for manipulation research. Proceedings of the 2015 International Conference on Advanced Robotics (ICAR), Istanbul, Turkey.","DOI":"10.1109\/ICAR.2015.7251504"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"1301","DOI":"10.1007\/s10514-018-9734-5","article-title":"Street-view change detection with deconvolutional networks","volume":"42","author":"Alcantarilla","year":"2018","journal-title":"Auton. Robot."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Zhao, X., Li, H., Wang, R., Zheng, C., and Shi, S. (2019, January 25\u201327). Street-view Change Detection via Siamese Encoder-decoder Structured Convolutional Neural Networks. Proceedings of the VISIGRAPP, Prague, Czech Republic.","DOI":"10.5220\/0007407905250532"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Ambru\u015f, R., Bore, N., Folkesson, J., and Jensfelt, P. (2014, January 14\u201318). Meta-rooms: Building and maintaining long term spatial models in a dynamic world. Proceedings of the 2014 IEEE\/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA.","DOI":"10.1109\/IROS.2014.6942806"},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Fehr, M., Furrer, F., Dryanovski, I., Sturm, J., Gilitschenski, I., Siegwart, R., and Cadena, C. (June, January 29). TSDF-based change detection for consistent long-term dense reconstruction and dynamic object discovery. 
Proceedings of the 2017 IEEE International Conference on Robotics and Automation, Singapore.","DOI":"10.1109\/ICRA.2017.7989614"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Jinno, I., Sasaki, Y., and Mizoguchi, H. (2019, January 14\u201316). 3D Map Update in Human Environment Using Change Detection from LIDAR Equipped Mobile Robot. Proceedings of the 2019 IEEE\/SICE International Symposium on System Integration (SII), Paris, France.","DOI":"10.1109\/SII.2019.8700352"},{"key":"ref_54","first-page":"120","article-title":"The OpenCV Library","volume":"12","author":"Bradski","year":"2000","journal-title":"Dr. Dobb\u2019s J. Softw. Tools"},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"2451","DOI":"10.1162\/089976600300015015","article-title":"Learning to forget: Continual prediction with LSTM","volume":"12","author":"Gers","year":"2000","journal-title":"Neural Comput."},{"key":"ref_57","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_58","unstructured":"Savva, M., Kadian, A., Maksymets, O., Zhao, Y., Wijmans, E., Jain, B., and Parikh, D. (November, January 27). Habitat: A platform for embodied ai research. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Papineni, K., Roukos, S., Ward, T., and Zhu, W.J. (2002, January 7\u201312). BLEU: A method for automatic evaluation of machine translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, PA, USA.","DOI":"10.3115\/1073083.1073135"},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Lin, C.Y. (2002, January 11\u201312). Rouge: A package for automatic evaluation of summaries. Proceedings of the ACL-02 Workshop on Automatic Summarization, Philadelphia, PA, USA.","DOI":"10.3115\/1118162.1118168"},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Anderson, P., Fernando, B., Johnson, M., and Gould, S. (2016, January 8\u201314). Spice: Semantic propositional image caption evaluation. Proceedings of the European Conference on Computer Vision, Munich, Germany.","DOI":"10.1007\/978-3-319-46454-1_24"},{"key":"ref_62","doi-asserted-by":"crossref","unstructured":"Denkowski, M., and Lavie, A. (2014, January 26\u201327). Meteor universal: Language specific translation evaluation for any target language. 
Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, MD, USA.","DOI":"10.3115\/v1\/W14-3348"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/17\/4761\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,2]],"date-time":"2024-07-02T11:48:43Z","timestamp":1719920923000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/17\/4761"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,8,23]]},"references-count":62,"journal-issue":{"issue":"17","published-online":{"date-parts":[[2020,9]]}},"alternative-id":["s20174761"],"URL":"https:\/\/doi.org\/10.3390\/s20174761","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,8,23]]}}}