MFIF-DWT-CNN: Multi-focus image fusion based on discrete wavelet transform with deep convolutional neural network

MFIF-DWT-CNN: Multi-focus image fusion based on discrete wavelet transform with deep convolutional neural network

Published in: Multimedia Tools and Applications

Abstract

A new fusion method, Multi-Focus Image Fusion based on the Discrete Wavelet Transform with a Deep Convolutional Neural Network (MFIF-DWT-CNN), is presented to reduce spatial artifacts and blurring effects in edge details and to increase the robustness of multi-focus image fusion. The aim of the MFIF-DWT-CNN approach is to construct a single fused image by collecting the required features from each source image, so that the in-focus information of the individual images is combined into one clearer image. In this approach, the DWT is applied to each source image pair and the resulting subband images are then fed to the CNN architecture. To evaluate the proposed MFIF-DWT-CNN method, the QMI, QG, QY, and QCB metrics were computed on a public data set. The experimental results show that the proposed method outperforms the compared methods on these metrics, demonstrating its effectiveness.


Data Availability

The data used in this study are publicly available online.


Acknowledgements

This work is supported by Fırat University Scientific Research Projects Coordination Unit (FÜBAP) with project number ADEP.23.21.

Author information


Corresponding author

Correspondence to Derya Avcı.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Avcı, D., Sert, E., Özyurt, F. et al. MFIF-DWT-CNN: Multi-focus image fusion based on discrete wavelet transform with deep convolutional neural network. Multimed Tools Appl 83, 10951–10968 (2024). https://doi.org/10.1007/s11042-023-16074-6
