Abstract
Lightness enhancement is a long-standing research topic in computer vision. Existing deep learning-based approaches usually extract features from the low-light image alone to model the enlightening process, which can lack robustness because low-light features are unreliable in heavily dark regions. Inspired by the fact that infrared imaging is immune to illumination variation, we propose to exploit an extra infrared image to help brighten the low-light one. Specifically, we design a deep convolutional neural network that jointly extracts infrared and low-light features and produces a normal-light image under the supervision of multi-scale loss functions, including a discriminator loss that encourages the output to resemble a real normal-light image. Moreover, we propose a contextual attention module that reconstructs reliable low-light features in heavily dark regions by exploiting the correlation consistency between low-light and infrared features. Extensive experiments on two synthetic datasets and one real-world dataset demonstrate that the proposed approach outperforms existing methods both qualitatively and quantitatively.
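To make the high-level description above concrete, here is a minimal, hypothetical PyTorch sketch of the two ideas in the abstract: a dual-branch network that fuses low-light RGB and infrared features, and a simple correlation-based attention in which the illumination-invariant infrared features guide the reconstruction of the low-light features. All layer widths, depths, and names (conv_block, ContextualAttention, FusionEnhancer) are illustrative assumptions, not the published architecture; the paper's multi-scale and adversarial losses are omitted.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU: a generic encoder building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class ContextualAttention(nn.Module):
    """Hypothetical reading of the contextual attention idea: pairwise
    correlations are computed on the illumination-invariant infrared
    features and then used to re-mix the low-light features, so dark
    regions can borrow information from correlated well-lit ones."""

    def forward(self, low_feat, nir_feat):
        b, c, h, w = low_feat.shape
        nir = nir_feat.flatten(2)                          # (b, c, h*w)
        affinity = torch.softmax(
            torch.bmm(nir.transpose(1, 2), nir), dim=-1)   # (b, hw, hw)
        low = low_feat.flatten(2)                          # (b, c, h*w)
        recon = torch.bmm(low, affinity)                   # correlation-guided mix
        return recon.view(b, c, h, w)


class FusionEnhancer(nn.Module):
    """Dual-branch network: encode low-light RGB and NIR separately,
    refine the low-light features with contextual attention, fuse, decode."""

    def __init__(self, feat=32):
        super().__init__()
        self.rgb_enc = conv_block(3, feat)    # low-light RGB branch
        self.nir_enc = conv_block(1, feat)    # single-channel infrared branch
        self.attention = ContextualAttention()
        self.decoder = nn.Sequential(
            conv_block(2 * feat, feat),
            nn.Conv2d(feat, 3, 3, padding=1),
            nn.Sigmoid(),                     # normal-light RGB in [0, 1]
        )

    def forward(self, low, nir):
        f_low, f_nir = self.rgb_enc(low), self.nir_enc(nir)
        f_low = self.attention(f_low, f_nir)  # repair features in dark regions
        return self.decoder(torch.cat([f_low, f_nir], dim=1))


if __name__ == "__main__":
    model = FusionEnhancer()
    low = torch.rand(1, 3, 32, 32)   # dark RGB input
    nir = torch.rand(1, 1, 32, 32)   # aligned infrared input
    print(model(low, nir).shape)     # torch.Size([1, 3, 32, 32])
```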
Acknowledgements
This work is co-supported by the National Natural Science Foundation of China (61502005) and the Key Program of Natural Science Project of the Educational Commission of Anhui Province (KJ2021A0042).
Cite this article
Wang, L., Wang, T., Yang, D. et al. Near-infrared fusion for deep lightness enhancement. Int. J. Mach. Learn. & Cyber. 14, 1621–1633 (2023). https://doi.org/10.1007/s13042-022-01716-2