Deep Snapshot HDR Imaging Using Multi-exposure Color Filter Array | SpringerLink
Deep Snapshot HDR Imaging Using Multi-exposure Color Filter Array

  • Conference paper
  • First Online:
Computer Vision – ACCV 2020 (ACCV 2020)

Abstract

In this paper, we propose a deep snapshot high dynamic range (HDR) imaging framework that can effectively reconstruct an HDR image from the RAW data captured using a multi-exposure color filter array (ME-CFA), which consists of a mosaic pattern of RGB filters with different exposure levels. To effectively learn the HDR image reconstruction network, we introduce the idea of luminance normalization that simultaneously enables effective loss computation and input data normalization by considering relative local contrasts in the “normalized-by-luminance” HDR domain. This idea makes it possible to equally handle the errors in both bright and dark areas regardless of absolute luminance levels, which significantly improves the visual image quality in a tone-mapped domain. Experimental results using two public HDR image datasets demonstrate that our framework outperforms other snapshot methods and produces high-quality HDR images with fewer visual artifacts.
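The abstract's key idea is computing the reconstruction loss in a "normalized-by-luminance" HDR domain, so that relative errors in dark regions weigh as much as those in bright regions. The paper's exact formulation is not given in this excerpt; the sketch below is an assumed, minimal NumPy illustration in which both the predicted and ground-truth HDR images are divided by the ground-truth luminance (here taken as the channel mean, a simplifying assumption) before an L1 loss is taken.

```python
import numpy as np

def luminance_normalized_loss(pred_hdr, gt_hdr, eps=1e-6):
    """L1 loss in a 'normalized-by-luminance' HDR domain (illustrative sketch).

    Dividing both images by the ground-truth luminance turns absolute
    errors into relative ones, so dark and bright areas contribute
    equally regardless of absolute luminance level.
    """
    # Luminance approximated as the channel mean of the (H, W, 3) ground truth;
    # eps guards against division by zero in black pixels.
    lum = gt_hdr.mean(axis=-1, keepdims=True) + eps
    return np.mean(np.abs(pred_hdr / lum - gt_hdr / lum))
```

Under this normalization, a 10% relative error costs the same whether the region's luminance is 1 or 100, which is the property the abstract attributes to its improved quality in the tone-mapped domain.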




Author information

Correspondence to Yusuke Monno.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 93647 KB)

Supplementary material 2 (pdf 12424 KB)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Suda, T., Tanaka, M., Monno, Y., Okutomi, M. (2021). Deep Snapshot HDR Imaging Using Multi-exposure Color Filter Array. In: Ishikawa, H., Liu, C.L., Pajdla, T., Shi, J. (eds) Computer Vision – ACCV 2020. ACCV 2020. Lecture Notes in Computer Science, vol 12623. Springer, Cham. https://doi.org/10.1007/978-3-030-69532-3_22

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-69532-3_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69531-6

  • Online ISBN: 978-3-030-69532-3

  • eBook Packages: Computer Science, Computer Science (R0)
