
TAPE: Task-Agnostic Prior Embedding for Image Restoration

  • Conference paper
  • First Online:
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13678)


Abstract

Learning a generalized prior for natural image restoration is an important yet challenging task. Early methods mostly relied on handcrafted priors such as normalized sparsity, \(\ell _0\) gradients, and dark channel priors. More recently, deep neural networks have been used to learn various image priors, but these are not guaranteed to generalize. In this paper, we propose a novel approach that embeds a task-agnostic prior into a transformer. Our approach, named Task-Agnostic Prior Embedding (TAPE), consists of two stages, namely task-agnostic pre-training and task-specific fine-tuning: the first stage embeds prior knowledge about natural images into the transformer, and the second stage extracts that knowledge to assist downstream image restoration. Experiments on various types of degradation validate the effectiveness of TAPE. Restoration performance in terms of PSNR improves by as much as 1.45 dB, even outperforming task-specific algorithms. More importantly, TAPE shows the ability to disentangle generalized image priors from degraded images, and thus transfers favorably to unknown downstream tasks.
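The two-stage recipe described above (task-agnostic pre-training on mixed degradations, then task-specific fine-tuning on one degradation type) can be sketched with a deliberately tiny stand-in: the attenuation "degradation", the one-parameter "network", and the SGD loop below are illustrative assumptions for exposition, not the paper's transformer architecture.

```python
import random

random.seed(0)

def degrade(img, factor):
    # Toy degradation: uniform intensity attenuation (a stand-in for
    # real degradations such as noise, rain, or snow).
    return [p * factor for p in img]

def restore(img, gain):
    # Toy "restoration network": a single learnable gain shared by all pixels.
    return [p * gain for p in img]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def train(gain, pairs, lr=0.01, epochs=200):
    # Plain SGD on the gain; d(MSE)/d(gain) = 2/N * sum(x_i * (gain*x_i - y_i)).
    for _ in range(epochs):
        for x, y in pairs:
            grad = 2 * sum(xi * (gain * xi - yi) for xi, yi in zip(x, y)) / len(x)
            gain -= lr * grad
    return gain

clean = [[random.random() for _ in range(8)] for _ in range(16)]

# Stage 1: task-agnostic pre-training on a mix of degradation strengths,
# so the learned parameter reflects a generic restoration prior.
mixed = [(degrade(img, random.choice([0.4, 0.5, 0.6])), img) for img in clean]
gain = train(1.0, mixed)

# Stage 2: task-specific fine-tuning on a single degradation type.
task = [(degrade(img, 0.5), img) for img in clean]
gain = train(gain, task)  # gain ends near 2.0, the inverse of the 0.5 attenuation
```

The point of the sketch is the schedule, not the model: stage 1 sees many degradation types at once, and stage 2 starts from that shared initialization rather than from scratch.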


Notes

  1. We select subsets of TIP2018 and Snow100K containing 10000 training and 200 test image pairs, and 10000 training and 500 test image pairs, respectively.
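A disjoint train/test split of the kind this footnote describes could be drawn as follows; the helper and the `tip2018_pairs` / `snow100k_pairs` lists are hypothetical illustrations, not the paper's released protocol.

```python
import random

def split_subset(pairs, n_train, n_test, seed=0):
    """Draw a deterministic random subset of a dataset: n_train training
    pairs and n_test test pairs, with no overlap between the two splits."""
    rng = random.Random(seed)
    sample = rng.sample(pairs, n_train + n_test)
    return sample[:n_train], sample[n_train:]

# Hypothetical usage on lists of (degraded, clean) file-path pairs:
# tip_train, tip_test = split_subset(tip2018_pairs, 10000, 200)
# snow_train, snow_test = split_subset(snow100k_pairs, 10000, 500)
```

Fixing the seed makes the subset reproducible, and sampling train and test from a single draw guarantees the splits are disjoint.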


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Contracts 61836011 and 62021001, and by the GPU cluster built by the MCC Lab of the Information Science and Technology Institution, USTC.

Author information

Corresponding author

Correspondence to Qi Tian.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 633 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, L. et al. (2022). TAPE: Task-Agnostic Prior Embedding for Image Restoration. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13678. Springer, Cham. https://doi.org/10.1007/978-3-031-19797-0_26

  • DOI: https://doi.org/10.1007/978-3-031-19797-0_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19796-3

  • Online ISBN: 978-3-031-19797-0

  • eBook Packages: Computer Science (R0)
