{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,7]],"date-time":"2024-08-07T09:52:36Z","timestamp":1723024356433},"reference-count":79,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,9,1]],"date-time":"2023-09-01T00:00:00Z","timestamp":1693526400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,9,1]],"date-time":"2023-09-01T00:00:00Z","timestamp":1693526400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Vis. Intell."],"abstract":"Abstract<\/jats:title>Image inpainting is a critical area of research in computer vision with a broad range of applications, including image restoration and editing. However, current inpainting models often struggle to learn the specific painting styles and fine-grained brushstrokes of individual artists when restoring Chinese landscape paintings. To address this challenge, this paper proposes a novel inpainting model specifically designed for Chinese landscape paintings, featuring a hierarchical structure that can be applied to restore the famous Dwelling in the Fuchun Mountains<\/jats:italic> with remarkable fidelity. The proposed method leverages an image processing algorithm to extract the structural information of Chinese landscape paintings. This approach enables the model to decompose the inpainting process into two separate steps, generating less informative backgrounds and more detailed foregrounds. By seamlessly merging the generated results with the remaining portions of the original work, the proposed method can faithfully restore Chinese landscape paintings while preserving their rich details and fine-grained styles. 
Overall, the results of this study demonstrate that the proposed method represents a significant step forward in the field of image inpainting, particularly for the restoration of Chinese landscape paintings. The hierarchical structure and image processing algorithm used in this model are able to faithfully restore delicate and intricate details of these paintings, making it a promising tool for art restoration professionals and researchers.<\/jats:p>","DOI":"10.1007\/s44267-023-00021-y","type":"journal-article","created":{"date-parts":[[2023,9,1]],"date-time":"2023-09-01T00:02:08Z","timestamp":1693526528000},"update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Hierarchical painter: Chinese landscape painting restoration with fine-grained styles"],"prefix":"10.1007","volume":"1","author":[{"given":"Zhekai","family":"Xu","sequence":"first","affiliation":[]},{"given":"Haohong","family":"Shang","sequence":"additional","affiliation":[]},{"given":"Shaoze","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Ruiqi","family":"Xu","sequence":"additional","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0003-3209-8965","authenticated-orcid":false,"given":"Yichao","family":"Yan","sequence":"additional","affiliation":[]},{"given":"Yixuan","family":"Li","sequence":"additional","affiliation":[]},{"given":"Jiawei","family":"Huang","sequence":"additional","affiliation":[]},{"given":"Howard C.","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Jianjun","family":"Zhou","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,9,1]]},"reference":[{"key":"21_CR1","doi-asserted-by":"publisher","first-page":"69728","DOI":"10.1109\/ACCESS.2018.2877401","volume":"6","author":"M. Isogawa","year":"2018","unstructured":"Isogawa, M., Mikami, D., Iwai, D., Kimata, H., & Sato, K. (2018). Mask optimization for image inpainting. 
IEEE Access, 6, 69728\u201369741.","journal-title":"IEEE Access"},{"issue":"12","key":"21_CR2","doi-asserted-by":"publisher","first-page":"3252","DOI":"10.1109\/TMM.2018.2831636","volume":"20","author":"J. Liu","year":"2018","unstructured":"Liu, J., Yang, S., Fang, Y., & Guo, Z. (2018). Structure-guided image inpainting using homography transformation. IEEE Transactions on Multimedia, 20(12), 3252\u20133265.","journal-title":"IEEE Transactions on Multimedia"},{"issue":"6","key":"21_CR3","doi-asserted-by":"publisher","first-page":"2023","DOI":"10.1109\/TVCG.2017.2702738","volume":"24","author":"Q. Guo","year":"2018","unstructured":"Guo, Q., Gao, S., Zhang, X., Yin, Y., & Zhang, C. (2018). Patch-based image inpainting via two-stage low rank approximation. IEEE Transactions on Visualization and Computer Graphics, 24(6), 2023\u20132036.","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"issue":"8","key":"21_CR4","doi-asserted-by":"publisher","first-page":"1200","DOI":"10.1109\/83.935036","volume":"10","author":"C. Ballester","year":"2001","unstructured":"Ballester, C., Bertalm\u00edo, M., Caselles, V., Sapiro, G., & Verdera, J. (2001). Filling-in by joint interpolation of vector fields and gray levels. IEEE Transactions on Image Processing, 10(8), 1200\u20131211.","journal-title":"IEEE Transactions on Image Processing"},{"issue":"12","key":"21_CR5","doi-asserted-by":"publisher","first-page":"3050","DOI":"10.1109\/TIFS.2017.2730822","volume":"12","author":"H. Li","year":"2017","unstructured":"Li, H., Luo, W., & Huang, J. (2017). Localization of diffusion-based inpainting in digital images. IEEE Transactions on Information Forensics and Security, 12(12), 3050\u20133064.","journal-title":"IEEE Transactions on Information Forensics and Security"},{"issue":"8","key":"21_CR6","doi-asserted-by":"publisher","first-page":"3802","DOI":"10.1007\/s00034-019-01029-w","volume":"38","author":"G. 
Sridevi","year":"2019","unstructured":"Sridevi, G., & Kumar, S. S. (2019). Image inpainting based on fractional-order nonlinear diffusion for image reconstruction. Circuits, Systems, and Signal Processing, 38(8), 3802\u20133817.","journal-title":"Circuits, Systems, and Signal Processing"},{"key":"21_CR7","doi-asserted-by":"crossref","unstructured":"Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., & Huang, T. S. (2018). Generative image inpainting with contextual attention. arXiv preprint. arXiv:1801.07892.","DOI":"10.1109\/CVPR.2018.00577"},{"key":"21_CR8","first-page":"1","volume-title":"Proceedings of the tenth international conference on learning representations","author":"C. H. Lin","year":"2022","unstructured":"Lin, C. H., Cheng, Y.-C., Lee, H.-Y., Tulyakov, S., & Yang, M.-H. (2022). InfinityGAN: towards infinite-pixel image synthesis. In Proceedings of the tenth international conference on learning representations (pp. 1\u201343). ICLR."},{"key":"21_CR9","first-page":"10684","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"R. Rombach","year":"2022","unstructured":"Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 10684\u201310695). Los Alamitos: IEEE."},{"key":"21_CR10","first-page":"2149","volume-title":"IEEE\/CVF winter conference on applications of computer vision","author":"R. Suvorov","year":"2022","unstructured":"Suvorov, R., Logacheva, E., Mashikhin, A., Remizova, A., Ashukha, A., Silvestrov, A., et al. (2022). Resolution-robust large mask inpainting with Fourier convolutions. In IEEE\/CVF winter conference on applications of computer vision (pp. 2149\u20132159). Los Alamitos: IEEE."},{"key":"21_CR11","first-page":"700","volume-title":"Advances in neural information processing systems","author":"M.-Y. 
Liu","year":"2017","unstructured":"Liu, M.-Y., Breuel, T., & Kautz, J. (2017). Unsupervised image-to-image translation networks. In I. Guyon, U. Von Luxburg, S. Bengio, et al. (Eds.), Advances in neural information processing systems (Vol. 30, pp. 700\u2013708). Red Hook: Curran Associates."},{"key":"21_CR12","doi-asserted-by":"publisher","first-page":"126","DOI":"10.1016\/j.inffus.2021.02.014","volume":"72","author":"P. Shamsolmoali","year":"2021","unstructured":"Shamsolmoali, P., Zareapoor, M., Granger, E., Zhou, H., Wang, R., Celebi, M. E., et al. (2021). Image synthesis with adversarial networks: a comprehensive survey and case studies. Information Fusion, 72, 126\u2013146.","journal-title":"Information Fusion"},{"key":"21_CR13","unstructured":"Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2017). Progressive growing of gans for improved quality, stability, and variation. arXiv preprint. arXiv:1710.10196."},{"issue":"11","key":"21_CR14","doi-asserted-by":"publisher","first-page":"139","DOI":"10.1145\/3422622","volume":"63","author":"I. J. Goodfellow","year":"2020","unstructured":"Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., et al. (2020). Generative adversarial networks. Communications of the ACM, 63(11), 139\u2013144.","journal-title":"Communications of the ACM"},{"issue":"6","key":"21_CR15","doi-asserted-by":"publisher","first-page":"921","DOI":"10.1109\/TEVC.2019.2895748","volume":"23","author":"C. Wang","year":"2019","unstructured":"Wang, C., Xu, C., Yao, X., & Tao, D. (2019). Evolutionary generative adversarial networks. IEEE Transactions on Evolutionary Computation, 23(6), 921\u2013934.","journal-title":"IEEE Transactions on Evolutionary Computation"},{"key":"21_CR16","first-page":"2018","volume-title":"Advances in neural information processing systems","author":"K. Roth","year":"2017","unstructured":"Roth, K., Lucchi, A., Nowozin, S., & Hofmann, T. (2017). 
Stabilizing training of generative adversarial networks through regularization. In I. Guyon, U. Von Luxburg, S. Bengio, et al. (Eds.), Advances in neural information processing systems (Vol. 30, pp. 2018\u20132028). Red Hook: Curran Associates."},{"key":"21_CR17","first-page":"8868","volume-title":"Advances in neural information processing systems","author":"Y. Li","year":"2022","unstructured":"Li, Y., Mo, Y., Shi, L., & Yan, J. (2022). Improving generative adversarial networks via adversarial learning in latent space. In S. Koyejo, S. Mohamed, A. Agarwal, et al. (Eds.), Advances in neural information processing systems (Vol. 35, pp. 8868\u20138881). Red Hook: Curran Associates."},{"key":"21_CR18","first-page":"12600","volume-title":"2021 IEEE international conference on computer vision","author":"L.-N. Ho","year":"2021","unstructured":"Ho, L.-N., Tran, A. T., Phung, Q., & Hoai, M. (2021). Toward realistic single-view 3D object reconstruction with unsupervised learning from multiple images. In 2021 IEEE international conference on computer vision (pp. 12600\u201312610). Los Alamitos: IEEE."},{"key":"21_CR19","first-page":"356","volume-title":"Proceedings of the 22nd international conference on image computing and computer assisted intervention","author":"A. V. Dalca","year":"2019","unstructured":"Dalca, A. V., Yu, E., Golland, P., Fischl, B., Sabuncu, M. R., & Iglesias, J. E. (2019). Unsupervised deep learning for Bayesian brain MRI segmentation. In D. Shen, T. Liu, T. M. Peters, et al. (Eds.), Proceedings of the 22nd international conference on image computing and computer assisted intervention (pp. 356\u2013365). Berlin: Springer."},{"key":"21_CR20","first-page":"4020","volume-title":"Advances in neural information processing systems","author":"T. Jakab","year":"2018","unstructured":"Jakab, T., Gupta, A., Bilen, H., & Vedaldi, A. (2018). Unsupervised learning of object landmarks through conditional image generation. In S. Bengio, H. Wallach, H. Larochelle, et al. 
(Eds.), Advances in neural information processing systems (Vol. 31, pp. 4020\u20134031). Red Hook: Curran Associates."},{"key":"21_CR21","first-page":"1538","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"A. Dosovitskiy","year":"2015","unstructured":"Dosovitskiy, A., Springenberg, J. T., & Brox, T. (2015). Learning to generate chairs with convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1538\u20131546). Los Alamitos: IEEE."},{"key":"21_CR22","first-page":"297","volume-title":"Proceedings of 17th European conference on computer vision","author":"Y. Luo","year":"2022","unstructured":"Luo, Y., Zhu, J., He, K., Chu, W., Tai, Y., Wang, C., et al. (2022). Styleface: towards identity-disentangled face generation on megapixels. In S. Avidan, G. J. Brostow, M. Ciss\u00e9, et al. (Eds.), Proceedings of 17th European conference on computer vision (pp. 297\u2013312). Berlin: Springer."},{"key":"21_CR23","doi-asserted-by":"publisher","first-page":"199","DOI":"10.1145\/3123266.3123277","volume-title":"Proceedings of the 2017 ACM on multimedia conference","author":"Y. Yan","year":"2017","unstructured":"Yan, Y., Xu, J., Ni, B., Zhang, W., & Yang, X. (2017). Skeleton-aided articulated motion generation. In Q. Liu, R. Lienhart, H. Wang, et al. Proceedings of the 2017 ACM on multimedia conference (pp. 199\u2013207). New York: ACM."},{"key":"21_CR24","unstructured":"Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint. arXiv:1511.06434."},{"key":"21_CR25","first-page":"448","volume-title":"Proceedings of the 32nd international conference on machine learning","author":"S. Ioffe","year":"2015","unstructured":"Ioffe, S., & Szegedy, C. (2015). Batch normalization: accelerating deep network training by reducing internal covariate shift. In F. R. Bach, & D. M. 
Blei (Eds.), Proceedings of the 32nd international conference on machine learning (pp. 448\u2013456). JMLR."},{"issue":"11","key":"21_CR26","doi-asserted-by":"publisher","first-page":"4451","DOI":"10.1021\/acs.molpharmaceut.9b00500","volume":"16","author":"Y. Bian","year":"2019","unstructured":"Bian, Y., Wang, J., Jun, J. J., & Xie, X.-Q. (2019). Deep convolutional generative adversarial network (DCGAN) models for screening and design of small molecules targeting cannabinoid receptors. Molecular Pharmaceutics, 16(11), 4451\u20134460.","journal-title":"Molecular Pharmaceutics"},{"key":"21_CR27","doi-asserted-by":"publisher","first-page":"98716","DOI":"10.1109\/ACCESS.2020.2997001","volume":"8","author":"Q. Wu","year":"2020","unstructured":"Wu, Q., Chen, Y., & Meng, J. (2020). DCGAN-based data augmentation for tomato leaf disease identification. IEEE Access, 8, 98716\u201398728.","journal-title":"IEEE Access"},{"key":"21_CR28","first-page":"97","volume-title":"Proceedings of the 9th international conference on image and graphics","author":"Y. Yu","year":"2017","unstructured":"Yu, Y., Gong, Z., Zhong, P., & Shan, J. (2017). Unsupervised representation learning with deep convolutional neural network for remote sensing images. In Y. Zhao, X. Kong, & D. Taubman (Eds.), Proceedings of the 9th international conference on image and graphics (pp. 97\u2013108). Berlin: Springer."},{"issue":"9","key":"21_CR29","doi-asserted-by":"publisher","first-page":"2352","DOI":"10.1162\/neco_a_00990","volume":"29","author":"W. Rawat","year":"2017","unstructured":"Rawat, W., & Wang, Z. (2017). Deep convolutional neural networks for image classification: a comprehensive review. Neural Computation, 29(9), 2352\u20132449.","journal-title":"Neural Computation"},{"key":"21_CR30","doi-asserted-by":"publisher","first-page":"65","DOI":"10.1016\/j.procs.2022.08.008","volume":"204","author":"M. Puttagunta","year":"2022","unstructured":"Puttagunta, M., Subban, R., & Nelson, K. B. C. (2022). 
A novel COVID-19 detection model based on DCGAN and deep transfer learning. Procedia Computer Science, 204, 65\u201372.","journal-title":"Procedia Computer Science"},{"key":"21_CR31","unstructured":"Curt\u00f3, J. D., Zarza, I. C., De La Torre, F., King, I., & Lyu, M. R. (2017). High-resolution deep convolutional generative adversarial networks. arXiv preprint. arXiv:1711.06491."},{"key":"21_CR32","doi-asserted-by":"publisher","first-page":"3626","DOI":"10.1109\/TIP.2020.2963957","volume":"29","author":"D. Xie","year":"2020","unstructured":"Xie, D., Deng, C., Li, C., Liu, X., & Tao, D. (2020). Multi-task consistency-preserving adversarial hashing for cross-modal retrieval. IEEE Transactions on Image Processing, 29, 3626\u20133637.","journal-title":"IEEE Transactions on Image Processing"},{"key":"21_CR33","first-page":"2223","volume-title":"2017 IEEE international conference on computer vision","author":"J.-Y. Zhu","year":"2017","unstructured":"Zhu, J.-Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In 2017 IEEE international conference on computer vision (pp. 2223\u20132232). Los Alamitos: IEEE."},{"key":"21_CR34","doi-asserted-by":"publisher","first-page":"3734","DOI":"10.1145\/3394171.3414027","volume-title":"Proceedings of 28th ACM international conference on multimedia","author":"L. Gao","year":"2020","unstructured":"Gao, L., Zhu, J., Song, J., Zheng, F., & Shen, H. T. (2020). Lab2pix: label-adaptive generative adversarial network for unsupervised image synthesis. In C. W. Chen, R. Cucchiara, X.-S. Hua, et al. (Eds.), Proceedings of 28th ACM international conference on multimedia (pp. 3734\u20133742). New York: ACM."},{"key":"21_CR35","first-page":"4401","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"T. Karras","year":"2019","unstructured":"Karras, T., Laine, S., & Aila, T. (2019). 
A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4401\u20134410). Los Alamitos: IEEE."},{"issue":"3","key":"21_CR36","doi-asserted-by":"publisher","first-page":"302","DOI":"10.1007\/s11263-018-1140-0","volume":"127","author":"B. Zhou","year":"2019","unstructured":"Zhou, B., Zhao, H., Puig, X., Xiao, T., Fidler, S., Barriuso, A., et al. (2019). Semantic understanding of scenes through the ADE20K dataset. International Journal of Computer Vision, 127(3), 302\u2013321.","journal-title":"International Journal of Computer Vision"},{"key":"21_CR37","first-page":"694","volume-title":"Proceedings of 15th European conference on computer vision","author":"J. Johnson","year":"2016","unstructured":"Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In B. Leibe, J. Matas, N. Sebe, et al. (Eds.), Proceedings of 15th European conference on computer vision (pp. 694\u2013711). Berlin: Springer."},{"key":"21_CR38","first-page":"852","volume-title":"Advances in neural information processing systems","author":"T. Karras","year":"2021","unstructured":"Karras, T., Aittala, M., Laine, S., H\u00e4rk\u00f6nen, E., Hellsten, J., Lehtinen, J., et al. (2021). Alias-free generative adversarial networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, et al. (Eds.), Advances in neural information processing systems (Vol. 34, pp. 852\u2013863). Red Hook: Curran Associates."},{"key":"21_CR39","unstructured":"Azulay, A., & Weiss, Y. (2018). Why do deep convolutional networks generalize so poorly to small image transformations? arXiv preprint. arXiv:1805.12177."},{"key":"21_CR40","first-page":"7324","volume-title":"Proceedings of the 36th international conference on machine learning","author":"R. Zhang","year":"2019","unstructured":"Zhang, R. (2019). Making convolutional networks shift-invariant again. In K. Chaudhuri, & R. 
Salakhutdinov (Eds.), Proceedings of the 36th international conference on machine learning (pp. 7324\u20137334). JMLR."},{"key":"21_CR41","first-page":"1","volume-title":"SIGGRAPH \u201922: special interest group on computer graphics and interactive techniques conference","author":"A. Sauer","year":"2022","unstructured":"Sauer, A., Schwarz, K., & Geiger, A. (2022). StyleGAN-XL: scaling stylegan to large diverse datasets. In M. Nandigjav, N. J. Mitra, & A. Hertzmann (Eds.), SIGGRAPH \u201922: special interest group on computer graphics and interactive techniques conference (pp. 1\u201310). New York: ACM."},{"key":"21_CR42","first-page":"1833","volume-title":"2021 IEEE international conference on computer vision","author":"J. Liang","year":"2021","unstructured":"Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., & Timofte, R. (2021). SwinIR: image restoration using swin transformer. In 2021 IEEE international conference on computer vision (pp. 1833\u20131844). Los Alamitos: IEEE."},{"key":"21_CR43","unstructured":"Brock, A., Donahue, J., & Simonyan, K. (2018). Large scale gan training for high fidelity natural image synthesis. arXiv preprint. arXiv:1809.11096."},{"issue":"5","key":"21_CR44","volume":"9","author":"G. Shrivakshan","year":"2012","unstructured":"Shrivakshan, G., & Chandrasekar, C. (2012). A comparison of various edge detection techniques used in image processing. International Journal of Computer Science Issues, 9(5), 269.","journal-title":"International Journal of Computer Science Issues"},{"issue":"3","key":"21_CR45","doi-asserted-by":"publisher","first-page":"153","DOI":"10.7763\/IJET.2016.V8.876","volume":"8","author":"J. A. Saif","year":"2016","unstructured":"Saif, J. A., Hammad, M. H., & Alqubati, I. A. (2016). Gradient based image edge detection. 
International Journal of Engineering and Technology, 8(3), 153\u2013156.","journal-title":"International Journal of Engineering and Technology"},{"issue":"3","key":"21_CR46","doi-asserted-by":"publisher","first-page":"721","DOI":"10.1016\/S0031-3203(00)00023-6","volume":"34","author":"L. Ding","year":"2001","unstructured":"Ding, L., & Goshtasby, A. (2001). On the Canny edge detector. Pattern Recognition, 34(3), 721\u2013725.","journal-title":"Pattern Recognition"},{"key":"21_CR47","volume-title":"Digital image processing","author":"R. C. Gonzalez","year":"1987","unstructured":"Gonzalez, R. C., & Wintz, P. (1987). Digital image processing. Boston: Addison Wesley Longman."},{"key":"21_CR48","first-page":"97","volume-title":"Proceedings of Informing Science and IT Education Conference","author":"O. R. Vincent","year":"2009","unstructured":"Vincent, O. R., & Folorunso, O. (2009). A descriptive algorithm for Sobel image edge detection. In Proceedings of Informing Science and IT Education Conference (pp. 97\u2013107). Santa Rosa: ISI."},{"key":"21_CR49","doi-asserted-by":"crossref","first-page":"2558","DOI":"10.1109\/CCDC.2008.4597787","volume-title":"2008 Chinese control and decision conference","author":"G. B. Xu","year":"2008","unstructured":"Xu, G. B., Zhao, G. Y., & Yin, Y. X. (2008). A CNN-based edge detection algorithm for remote sensing image. In 2008 Chinese control and decision conference (pp. 2558\u20132561). Los Alamitos: IEEE."},{"key":"21_CR50","first-page":"3000","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"Y. Liu","year":"2017","unstructured":"Liu, Y., Cheng, M.-M., Hu, X., Wang, K., & Bai, X. (2017). Richer convolutional features for edge detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3000\u20133009). 
Los Alamitos: IEEE."},{"key":"21_CR51","first-page":"3982","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"W. Shen","year":"2015","unstructured":"Shen, W., Wang, X., Wang, Y., Bai, X., & Zhang, Z. (2015). Deepcontour: a deep convolutional feature learned by positive-sharing loss for contour detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3982\u20133991). Los Alamitos: IEEE."},{"key":"21_CR52","first-page":"1395","volume-title":"2015 IEEE international conference on computer vision","author":"S. Xie","year":"2015","unstructured":"Xie, S., & Tu, Z. (2015). Holistically-nested edge detection. In 2015 IEEE international conference on computer vision (pp. 1395\u20131403). Los Alamitos: IEEE."},{"key":"21_CR53","volume":"10","author":"Z. Qin","year":"2023","unstructured":"Qin, Z., Lu, X., Nie, X., Liu, D., Yin, Y., & Wang, W. (2023). Coarse-to-fine video instance segmentation with factorized conditional appearance flows. Journal of Automatica Sinica, 10, 1.","journal-title":"Journal of Automatica Sinica"},{"issue":"11","key":"21_CR54","doi-asserted-by":"publisher","first-page":"7885","DOI":"10.1109\/TPAMI.2021.3115815","volume":"44","author":"X. Lu","year":"2022","unstructured":"Lu, X., Wang, W., Shen, J., Crandall, D. J., & Van Gool, L. (2022). Segmenting objects from relational visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11), 7885\u20137897.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"21_CR55","first-page":"3618","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"X. Lu","year":"2019","unstructured":"Lu, X., Wang, W., Ma, C., Shen, J., Shao, L., & Porikli, F. (2019). See more, know more: unsupervised video object segmentation with co-attention Siamese networks. 
In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3618\u20133627). Los Alamitos: IEEE."},{"key":"21_CR56","first-page":"16312","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"Y. Luo","year":"2021","unstructured":"Luo, Y., Zhang, Y., Yan, J., & Liu, W. (2021). Generalizing face forgery detection with high-frequency features. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 16312\u201316321). Los Alamitos: IEEE."},{"key":"21_CR57","first-page":"617","volume-title":"Proceedings of 15th European conference on computer vision","author":"N. Zhang","year":"2020","unstructured":"Zhang, N., & Yan, J. (2020). Rethinking the defocus blur detection problem and a real-time deep dbd model. In A. Vedaldi, H. Bischof, & T. Brox (Eds.), Proceedings of 15th European conference on computer vision (pp. 617\u2013632). Berlin: Springer."},{"issue":"9","key":"21_CR58","doi-asserted-by":"publisher","first-page":"1200","DOI":"10.1109\/TIP.2004.833105","volume":"13","author":"A. Criminisi","year":"2004","unstructured":"Criminisi, A., Perez, P., & Toyama, K. (2004). Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on Image Processing, 13(9), 1200\u20131212.","journal-title":"IEEE Transactions on Image Processing"},{"key":"21_CR59","first-page":"5967","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"P. Isola","year":"2017","unstructured":"Isola, P., Zhu, J.-Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5967\u20135976). Los Alamitos: IEEE."},{"key":"21_CR60","unstructured":"Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint. 
arXiv:1411.1784."},{"key":"21_CR61","first-page":"8798","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"T.-C. Wang","year":"2018","unstructured":"Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., & Kautz, J. (2018). Pix2pixhd: high-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8798\u20138807). Los Alamitos: IEEE."},{"key":"21_CR62","first-page":"2242","volume-title":"2017 IEEE international conference on computer vision","author":"J.-Y. Zhu","year":"2017","unstructured":"Zhu, J.-Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In 2017 IEEE international conference on computer vision (pp. 2242\u20132251). Los Alamitos: IEEE."},{"key":"21_CR63","first-page":"8789","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"Y. Choi","year":"2018","unstructured":"Choi, Y., Choi, M., Kim, M., Ha, J.-W., Kim, S., & Choo, J. (2018). Stargan: unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8789\u20138797). Los Alamitos: IEEE."},{"key":"21_CR64","first-page":"6542","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"H. Zhao","year":"2017","unstructured":"Zhao, H., Shi, J., Qi, X., Wang, X., & Jia, J. (2017). Variational autoencoder for deep learning of images, labels and captions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6542\u20136550). Los Alamitos: IEEE."},{"issue":"6","key":"21_CR65","doi-asserted-by":"publisher","first-page":"740","DOI":"10.1016\/j.cag.2012.03.004","volume":"36","author":"H. W. E. Kyprianidis","year":"2012","unstructured":"Kyprianidis, H. W. 
E., & Olsen, S. C. (2012). XDoG: an extended difference-of-Gaussians compendium including advanced image stylization. Computers & Graphics, 36(6), 740\u2013753.","journal-title":"Computers & Graphics"},{"issue":"1167","key":"21_CR66","first-page":"187","volume":"207","author":"D. Marr","year":"1980","unstructured":"Marr, D., & Hildreth, E. (1980). Theory of edge detection. Proceedings of the Royal Society of London. Series B, 207(1167), 187\u2013217.","journal-title":"Proceedings of the Royal Society of London. Series B"},{"key":"21_CR67","first-page":"2337","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"T. Park","year":"2019","unstructured":"Park, T., Liu, M.-Y., Wang, T.-C., & Zhu, J.-Y. (2019). Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2337\u20132346). Los Alamitos: IEEE."},{"key":"21_CR68","first-page":"14134","volume-title":"2021 IEEE international conference on computer vision","author":"X. Guo","year":"2021","unstructured":"Guo, X., Yang, H., & Huang, D. (2021). Image inpainting via conditional texture and structure dual generation. In 2021 IEEE international conference on computer vision (pp. 14134\u201314143). Los Alamitos: IEEE."},{"key":"21_CR69","first-page":"6626","volume-title":"Advances in neural information processing systems","author":"M. Heusel","year":"2017","unstructured":"Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). Gans trained by a two time-scale update rule converge to a local nash equilibrium. In I. Guyon, U. Von Luxburg, S. Bengio, et al. (Eds.), Advances in neural information processing systems (Vol. 30, pp. 6626\u20136637). Red Hook: Curran Associates."},{"issue":"4","key":"21_CR70","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1109\/TIP.2003.819861","volume":"13","author":"Z. 
Wang","year":"2004","unstructured":"Wang, Z., Bovik, A., Sheikh, H., & Simoncelli, E. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600\u2013612.","journal-title":"IEEE Transactions on Image Processing"},{"key":"21_CR71","unstructured":"Nilsson, J., & Akenine-M\u00f6ller, T. (2020). Understanding SSIM. arXiv preprint. arXiv:2006.13846."},{"key":"21_CR72","first-page":"2366","volume-title":"Proceedings of the 20th international conference on pattern recognition","author":"A. Hore","year":"2010","unstructured":"Hore, A., & Ziou, D. (2010). Image quality metrics: PSNR vs. SSIM. In Proceedings of the 20th international conference on pattern recognition (pp. 2366\u20132369). Los Alamitos: IEEE."},{"key":"21_CR73","first-page":"1923","volume-title":"IEEE\/CVF winter conference on applications of computer vision","author":"X. S. Poma","year":"2020","unstructured":"Poma, X. S., Riba, E., & Sappa, A. (2020). Dense extreme inception network: towards a robust CNN model for edge detection. In IEEE\/CVF winter conference on applications of computer vision (pp. 1923\u20131932). Los Alamitos: IEEE."},{"issue":"6","key":"21_CR74","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/JPHOT.2017.2766881","volume":"9","author":"C. Bi","year":"2017","unstructured":"Bi, C., Yuan, Y., Zhang, R., Xiang, Y., Wang, Y., & Zhang, J. (2017). A dynamic mode decomposition based edge detection method for art images. IEEE Photonics Journal, 9(6), 1\u201313.","journal-title":"IEEE Photonics Journal"},{"key":"21_CR75","doi-asserted-by":"publisher","first-page":"44276","DOI":"10.1109\/ACCESS.2020.2977386","volume":"8","author":"N. U. Din","year":"2020","unstructured":"Din, N. U., Javed, K., Bae, S., & Yi, J. (2020). A novel GAN-based network for unmasking of masked face. 
IEEE Access, 8, 44276\u201344287.","journal-title":"IEEE Access"},{"key":"21_CR76","first-page":"10441","volume-title":"Proceedings of the 25th international conference on pattern recognition","author":"F. Pinto","year":"2021","unstructured":"Pinto, F., Romanoni, A., Matteucci, M., & Torr, P. H. (2021). Seci-GAN: semantic and edge completion for dynamic objects removal. In Proceedings of the 25th international conference on pattern recognition (pp. 10441\u201310448). Los Alamitos: IEEE."},{"key":"21_CR77","first-page":"7","volume-title":"Proceedings of global intelligence industry conference","author":"Z. Xu","year":"2018","unstructured":"Xu, Z., Luo, H., Hui, B., & Chang, Z. (2018). Contour detection using an improved holistically-nested edge detection network. In Proceedings of global intelligence industry conference (Vol. 10835, pp. 7\u201313). Bellingham: SPIE."},{"issue":"1","key":"21_CR78","doi-asserted-by":"publisher","first-page":"57","DOI":"10.1109\/TPAMI.2003.1159946","volume":"25","author":"S. Konishi","year":"2003","unstructured":"Konishi, S., Yuille, A. L., Coughlan, J. M., & Zhu, S. C. (2003). Statistical edge detection: learning and evaluating edge cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(1), 57\u201374.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"21_CR79","unstructured":"Zhang, L., & Agrawala, M. (2023). Adding conditional control to text-to-image diffusion models. arXiv preprint. 
arXiv:2302.05543."}],"container-title":["Visual Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44267-023-00021-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44267-023-00021-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44267-023-00021-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,11,20]],"date-time":"2023-11-20T02:05:55Z","timestamp":1700445955000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44267-023-00021-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,1]]},"references-count":79,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2023,12]]}},"alternative-id":["21"],"URL":"https:\/\/doi.org\/10.1007\/s44267-023-00021-y","relation":{},"ISSN":["2731-9008"],"issn-type":[{"value":"2731-9008","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,1]]},"assertion":[{"value":"7 March 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"20 July 2023","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 July 2023","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 September 2023","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no competing 
interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"19"}}