
Lightweight Rolling Shutter Image Restoration Network Based on Undistorted Flow

  • Conference paper
Artificial Intelligence (CICAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14473)


Abstract

Rolling shutter (RS) cameras are widely used in fields such as drone photography and robot navigation. However, when shooting a fast-moving target, the captured image may be distorted and blurred because an RS camera collects the image row by row. Researchers have proposed a variety of methods to address this problem, among which deep-learning-based methods perform best, but they still suffer from limited restoration quality and high practical application cost. To address this challenge, we propose a novel lightweight rolling shutter image restoration network that restores the global shutter image at the intermediate moment from two consecutive rolling shutter images. We use a lightweight encoder-decoder network to extract the bidirectional optical flow between the rolling shutter images. We further introduce the concepts of a time factor and an undistorted flow, and compute the undistorted flow by multiplying the optical flow by the time factor. Bilinear interpolation guided by the undistorted flow then yields the global shutter image at the intermediate moment. Our method achieves state-of-the-art results on several metrics on the RS image dataset Fastec-RS, with only about 6% of the cost of existing methods.
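To make the warping step concrete, the sketch below is a minimal PyTorch illustration of the idea stated in the abstract, not the authors' implementation: a per-row time factor (assuming a linear top-to-bottom readout and a mid-frame target time) scales an optical flow to form an undistorted flow, which is then used for bilinear warping. The names row_time_factor and backward_warp, the linear readout model, and the random example tensors are illustrative assumptions; in the paper the flow itself comes from the lightweight encoder-decoder network.

import torch
import torch.nn.functional as F


def row_time_factor(height: int, width: int) -> torch.Tensor:
    """Signed time factor per pixel: offset (in frame periods) from each
    rolling-shutter row's readout time to the mid-frame target time,
    assuming a linear top-to-bottom readout. Shape: (1, 1, H, W)."""
    readout = torch.linspace(0.0, 1.0, height)   # readout time of each row
    factor = 0.5 - readout                       # target is the intermediate moment
    return factor.view(1, 1, height, 1).expand(1, 1, height, width)


def backward_warp(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample `image` (B, C, H, W) at locations displaced by `flow` (B, 2, H, W)."""
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(image)   # (1, 2, H, W)
    coords = base + flow
    # Normalise sampling coordinates to [-1, 1] as required by grid_sample.
    norm_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    norm_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((norm_x, norm_y), dim=-1)                         # (B, H, W, 2)
    return F.grid_sample(image, grid, mode="bilinear", align_corners=True)


# Illustrative usage only: random tensors stand in for a rolling-shutter frame
# and the flow predicted by the lightweight encoder-decoder network.
rs_frame = torch.rand(1, 3, 64, 96)
flow_0to1 = (torch.rand(1, 2, 64, 96) - 0.5) * 4.0
undistorted_flow = flow_0to1 * row_time_factor(64, 96)   # scale flow row-wise by the time factor
gs_mid = backward_warp(rs_frame, undistorted_flow)       # estimate of the mid-moment GS image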



Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 62171038, 62171042, and 62088101, and by the R&D Program of Beijing Municipal Education Commission (Grant No. KZ202211417048).

Author information


Corresponding author

Correspondence to Zhijie Gao.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, B., Zou, Y., Gao, Z., Fu, Y. (2024). Lightweight Rolling Shutter Image Restoration Network Based on Undistorted Flow. In: Fang, L., Pei, J., Zhai, G., Wang, R. (eds) Artificial Intelligence. CICAI 2023. Lecture Notes in Computer Science, vol 14473. Springer, Singapore. https://doi.org/10.1007/978-981-99-8850-1_16


  • DOI: https://doi.org/10.1007/978-981-99-8850-1_16

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8849-5

  • Online ISBN: 978-981-99-8850-1

  • eBook Packages: Computer Science, Computer Science (R0)
