Abstract
Thanka murals are an important part of Tibet's cultural heritage, but many precious murals have been damaged over the course of Tibetan history. Existing methods fail to provide a feasible solution for Thanka mural restoration for three reasons: 1) damaged Thanka murals contain multiple large, irregular broken areas; 2) damaged Thanka murals should be repaired with their original content rather than imagined content; and 3) no large Thanka dataset exists for training. We propose a damage-sensitive and original-restoration-driven (DSORD) Thanka inpainting method to resolve these problems. The proposed method consists of two parts. In the first part, instead of using existing arbitrary mask sets, we propose a novel mask generation method that simulates real damage to Thanka murals; both the masked Thanka and the generated mask are fed into a partial convolutional neural network for training, which familiarizes our model with a wide variety of irregular simulated damage. In the second part, we propose a two-phase original-restoration-driven learning method that guides the model to restore the original content of the Thanka mural. Experiments on both simulated and real damage demonstrate that our DSORD approach performs well on a small dataset (N = 3000), generates more realistic content, and restores damaged Thanka murals better than existing methods.
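The two ingredients named in the abstract lend themselves to short illustrative sketches. First, a minimal mask-generation sketch in Python/NumPy: it simulates irregular "damage" with thick random walks, which is one plausible way to produce the multiple large, irregular holes the abstract describes. The function name, walk parameters, and brush size are illustrative assumptions, not the authors' published procedure.

import numpy as np

def random_damage_mask(h=256, w=256, n_walks=4, walk_len=800, brush=9, seed=None):
    # Returns a float32 mask: 1 = valid pixel, 0 = hole (simulated damage).
    rng = np.random.default_rng(seed)
    hole = np.zeros((h, w), dtype=bool)
    for _ in range(n_walks):
        y, x = int(rng.integers(0, h)), int(rng.integers(0, w))
        for _ in range(walk_len):
            # Drift the brush a few pixels in a random direction.
            y = int(np.clip(y + rng.integers(-2, 3), 0, h - 1))
            x = int(np.clip(x + rng.integers(-2, 3), 0, w - 1))
            # Stamp a square brush footprint around the current point.
            y0, y1 = max(0, y - brush // 2), min(h, y + brush // 2 + 1)
            x0, x1 = max(0, x - brush // 2), min(w, x + brush // 2 + 1)
            hole[y0:y1, x0:x1] = True
    return (~hole).astype(np.float32)

Second, a minimal partial convolution layer in PyTorch, following the general formulation of Liu et al. (ECCV 2018) that the abstract builds on: convolve only over valid pixels, rescale by the ratio of window size to valid-pixel count, and propagate an updated mask. This is a simplified single-channel-mask sketch, not the authors' network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    # Convolution restricted to valid pixels; the mask grows toward all-ones
    # as layers stack, which is how partial-conv inpainting fills holes.
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        # Fixed all-ones kernel used only to count valid pixels per window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.window = float(kernel_size * kernel_size)
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):          # mask: (N, 1, H, W), 1 = valid
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones, stride=self.stride,
                             padding=self.padding)
        out = self.conv(x * mask)        # zero out hole pixels in the input
        bias = self.conv.bias.view(1, -1, 1, 1)
        scale = self.window / valid.clamp(min=1.0)
        out = (out - bias) * scale + bias  # renormalize by valid coverage
        new_mask = (valid > 0).float()     # any window touching a valid
        return out * new_mask, new_mask    # pixel becomes valid at the output

Training would then pair each ground-truth Thanka crop with a random_damage_mask output, feeding (image * mask, mask) through an encoder-decoder built from such layers. The two-phase original-restoration-driven schedule would sit on top of this loop, but the abstract gives too little detail to sketch it faithfully.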
Acknowledgements
This work is jointly supported by NSFC (Grant No. 61862057), the Program for Innovative Research Team of SEAC ([2018]98), and the Fundamental Research Funds for the Central Universities (No. 31920200066).
Cite this paper
Wang, N., Wang, W., Hu, W., Fenster, A., Li, S. (2020). Damage Sensitive and Original Restoration Driven Thanka Mural Inpainting. In: Peng, Y., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2020. Lecture Notes in Computer Science, vol. 12305. Springer, Cham. https://doi.org/10.1007/978-3-030-60633-6_12