Abstract
State-of-the-art face super-resolution methods employ deep convolutional neural networks to learn a mapping between low- and high-resolution facial patterns by exploiting local appearance knowledge. However, most of these methods do not fully exploit facial structure and identity information, and they struggle with facial images that exhibit large pose variations. In this paper, we propose a novel face super-resolution method that explicitly incorporates 3D facial priors capturing sharp facial structures. Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes (e.g., identity, facial expression, texture, illumination, and face pose). Furthermore, the priors can easily be incorporated into any network and are highly effective in improving performance and accelerating convergence. First, a 3D face rendering branch is set up to obtain 3D priors of salient facial structures and identity knowledge. Second, the Spatial Attention Module is used to better exploit this hierarchical information (i.e., intensity similarity, 3D facial structure, and identity content) for the super-resolution problem. Extensive experiments demonstrate that the proposed 3D priors achieve superior face super-resolution results over state-of-the-art methods.
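As a rough illustration of the fusion described above, the sketch below shows one way a rendered 3D facial prior could be combined with low-resolution image features through a spatial attention mask. This is a minimal sketch, not the authors' implementation: the module name, channel sizes, prior encoder, and the residual fusion rule are all illustrative assumptions.

```python
# Minimal sketch (illustrative only): fusing a rendered 3D facial prior with
# image features via a per-pixel spatial attention mask. Channel sizes and the
# fusion rule are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class SpatialAttentionFusion(nn.Module):
    """Fuse super-resolution trunk features with rendered 3D-prior features."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Encode the rendered 3D face (RGB) into the feature space of the trunk.
        self.prior_encoder = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Predict a per-pixel attention mask from the concatenated features.
        self.attention = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, img_feat: torch.Tensor, prior_img: torch.Tensor) -> torch.Tensor:
        prior_feat = self.prior_encoder(prior_img)
        mask = self.attention(torch.cat([img_feat, prior_feat], dim=1))
        # Inject prior features only where the mask marks salient facial structure.
        return img_feat + mask * prior_feat


if __name__ == "__main__":
    fusion = SpatialAttentionFusion(channels=64)
    lr_features = torch.randn(1, 64, 32, 32)     # features from the SR trunk
    rendered_prior = torch.randn(1, 3, 32, 32)   # rendered 3D face, same spatial size
    out = fusion(lr_features, rendered_prior)
    print(out.shape)                             # torch.Size([1, 64, 32, 32])
```

In this sketch the prior acts as a residual guidance signal: the attention mask decides, per pixel, how strongly the rendered facial structure should influence the super-resolution features.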
B. Menze and W. Liu contributed equally to this work.
Acknowledgement
This work is supported by the National Key R&D Program of China under Grant 2018AAA0102503, Zhejiang Lab (No. 2019NB0AB01), the Beijing Education Committee Cooperation Beijing Natural Science Foundation (No. KZ201910005007), the National Natural Science Foundation of China (No. U1736219), and the Peng Cheng Laboratory Project of Guangdong Province (PCL2018KP004).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Hu, X., et al. (2020). Face Super-Resolution Guided by 3D Facial Priors. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) Computer Vision – ECCV 2020. Lecture Notes in Computer Science, vol. 12349. Springer, Cham. https://doi.org/10.1007/978-3-030-58548-8_44
DOI: https://doi.org/10.1007/978-3-030-58548-8_44
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58547-1
Online ISBN: 978-3-030-58548-8
eBook Packages: Computer Science, Computer Science (R0)