
Near-infrared fusion for deep lightness enhancement

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

Lightness enhancement is a long-standing research topic in computer vision. Existing deep learning-based approaches usually extract features from the low-light image alone to model the enlightening process, which can lack robustness because low-light features are unreliable in heavily dark regions. Inspired by the fact that infrared imaging is immune to illumination variation, we propose to exploit an extra infrared image to help brighten the low-light one. Specifically, we design a deep convolutional neural network that jointly extracts infrared and low-light features and produces a normal-light image under the supervision of multi-scale loss functions, including a discriminator loss that encourages the output image to mimic a real one. Moreover, a contextual attention module is proposed to reconstruct reliable low-light features in heavily dark regions by exploiting feature correlation consistency between the low-light and infrared features. Extensive experiments on two composited datasets and one real-world dataset demonstrate the superiority of the proposed approach over existing methods, both qualitatively and quantitatively.
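The core idea of the contextual attention module — reconstructing unreliable low-light features in dark regions by borrowing from bright regions whose *infrared* features are similar — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the cosine-similarity choice, and the softmax temperature `tau` are assumptions made for illustration only.

```python
import numpy as np

def contextual_attention_fill(f_low, f_ir, dark_mask, tau=1.0):
    """Illustrative reconstruction of low-light features at dark positions.

    f_low, f_ir : (C, N) low-light / infrared feature maps, flattened
                  over N spatial positions.
    dark_mask   : (N,) boolean, True where low-light features are unreliable.

    For each dark position, attention weights are computed from the cosine
    similarity of its infrared feature to infrared features at bright
    positions; the weights then blend the reliable low-light features
    (the correlation-consistency assumption from the abstract).
    """
    bright = ~dark_mask
    # L2-normalize infrared features so the dot product is cosine similarity
    ir = f_ir / (np.linalg.norm(f_ir, axis=0, keepdims=True) + 1e-8)
    sim = ir[:, dark_mask].T @ ir[:, bright]      # (n_dark, n_bright)
    w = np.exp(sim / tau)                         # softmax over bright positions
    w /= w.sum(axis=1, keepdims=True)
    out = f_low.copy()
    out[:, dark_mask] = f_low[:, bright] @ w.T    # attention-weighted blend
    return out
```

With a small temperature, a dark position whose infrared feature matches one bright position almost exactly simply inherits that position's low-light feature; in the paper this operation would sit inside the network and act on learned feature maps rather than raw arrays.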



Acknowledgements

This work was co-supported by the National Natural Science Foundation of China (61502005) and the Key Program of the Natural Science Project of the Educational Commission of Anhui Province (KJ2021A0042).

Author information

Correspondence to Shaohua Wan.


Cite this article

Wang, L., Wang, T., Yang, D. et al. Near-infrared fusion for deep lightness enhancement. Int. J. Mach. Learn. & Cyber. 14, 1621–1633 (2023). https://doi.org/10.1007/s13042-022-01716-2
