Abstract
Rolling shutter (RS) cameras are widely used in fields such as drone photography and robot navigation. However, because an RS camera captures an image progressively, row by row, images of fast-moving targets can be distorted and blurred. Researchers have proposed a variety of methods to address this problem; those based on deep learning perform best, but they still suffer from limited restoration quality and high practical deployment cost. To address these challenges, we propose a novel lightweight rolling shutter image restoration network that recovers the global shutter image at the intermediate moment from two consecutive RS images. A lightweight encoder-decoder network extracts the bidirectional optical flow between the RS images. We further introduce the concepts of a time factor and an undistorted flow: the undistorted flow is computed by multiplying the optical flow by the time factor, and bilinear interpolation guided by the undistorted flow then yields the global shutter image at the intermediate moment. Our method achieves state-of-the-art results on several metrics on the RS image dataset Fastec-RS, at only about 6% of the cost of existing methods.
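The abstract's core idea (scale the inter-frame optical flow by a per-scanline time factor, then warp by bilinear interpolation) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names are illustrative, the flow here is a given array rather than the output of the encoder-decoder network, and the image is assumed grayscale with scanline times normalized to [0, 1] and the global shutter target at t = 0.5.

```python
import numpy as np

def time_factor(height, t_target=0.5):
    # Scanline y of an RS frame is exposed at normalized time
    # t_y = y / (height - 1); the factor maps inter-frame flow to the
    # displacement needed to move that row to the target time t_target.
    t_rows = np.arange(height) / (height - 1)
    return (t_target - t_rows)[:, None]              # shape (H, 1)

def undistorted_flow(flow, t_target=0.5):
    # flow: (H, W, 2) optical flow between two consecutive RS frames.
    # Each row is scaled by its own time factor.
    return flow * time_factor(flow.shape[0], t_target)[..., None]

def bilinear_warp(img, uflow):
    # Backward warp a grayscale image (H, W): sample img at
    # (x + u, y + v) with bilinear weights, clamping at the border.
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    x = np.clip(xs + uflow[..., 0], 0, W - 1)
    y = np.clip(ys + uflow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * img[y0, x0]
            + wx * (1 - wy) * img[y0, x1]
            + (1 - wx) * wy * img[y1, x0]
            + wx * wy * img[y1, x1])
```

With this convention the middle scanline has time factor 0 (it is already aligned with the intermediate moment), the first row is shifted forward by half the flow, and the last row backward by half, which is exactly the row-dependent correction that distinguishes RS restoration from ordinary frame interpolation.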
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grants (62171038, 62171042, and 62088101), and the R&D Program of Beijing Municipal Education Commission (Grant No. KZ202211417048).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Wang, B., Zou, Y., Gao, Z., Fu, Y. (2024). Lightweight Rolling Shutter Image Restoration Network Based on Undistorted Flow. In: Fang, L., Pei, J., Zhai, G., Wang, R. (eds) Artificial Intelligence. CICAI 2023. Lecture Notes in Computer Science(), vol 14473. Springer, Singapore. https://doi.org/10.1007/978-981-99-8850-1_16
Print ISBN: 978-981-99-8849-5
Online ISBN: 978-981-99-8850-1