A New Dataset and Framework for Real-World Blurred Images Super-Resolution | SpringerLink

A New Dataset and Framework for Real-World Blurred Images Super-Resolution

  • Conference paper
  • First Online:
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Recent Blind Image Super-Resolution (BSR) methods have shown proficiency on general images. However, we find that their efficacy diminishes markedly on blurred images, even though images with intentional blur constitute a substantial proportion of general image data. To investigate and address this issue, we developed a new super-resolution dataset specifically tailored to blurred images, named the Real-world Blur-kept Super-Resolution (ReBlurSR) dataset, which consists of nearly 3000 defocus- and motion-blur image samples with diverse blur sizes and intensities. Furthermore, we propose a new BSR framework for blurred images called Perceptual-Blur-adaptive Super-Resolution (PBaSR), which comprises two main modules: the Cross Disentanglement Module (CDM) and the Cross Fusion Module (CFM). The CDM uses dual-branch parallelism to isolate conflicting blur and general data during optimization, while the CFM fuses the well-optimized priors from these distinct domains cost-effectively via model interpolation. By integrating these two modules, PBaSR achieves strong performance on both general and blurred data without any additional inference or deployment cost, and it generalizes across multiple model architectures. Extensive experiments show that PBaSR achieves state-of-the-art performance across various metrics without incurring extra inference costs: on the widely adopted LPIPS metric, PBaSR yields improvements of approximately 0.02–0.10 over diverse anchor methods and blur types, across both ReBlurSR and multiple common general BSR benchmarks. Code here.
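The CFM's fusion "based on model interpolation" can be illustrated with a generic checkpoint-interpolation sketch. This is a minimal illustration of the general technique, not the paper's exact fusion rule; the function name and the mixing weight `alpha` are hypothetical:

```python
def interpolate_checkpoints(general_sd, blur_sd, alpha=0.5):
    """Linearly interpolate two checkpoints (dicts mapping parameter
    names to weights) into one fused checkpoint, so the fused model
    carries priors from both domains at no extra inference cost.
    `alpha` is the (hypothetical) weight given to the general-domain branch."""
    assert general_sd.keys() == blur_sd.keys(), "checkpoints must share an architecture"
    return {name: alpha * general_sd[name] + (1 - alpha) * blur_sd[name]
            for name in general_sd}

# Toy example with scalar "weights":
fused = interpolate_checkpoints({"conv.w": 0.0}, {"conv.w": 1.0}, alpha=0.25)
print(fused["conv.w"])  # 0.75
```

In practice the same weighted average would be applied to real parameter tensors (e.g. PyTorch state dicts); because the result is a single set of weights, deployment and inference cost are identical to those of a single model.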



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 62072271.

Author information


Corresponding author

Correspondence to Bin Wang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 2902 KB)

Rights and permissions

Reprints and permissions

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Qin, R., Sun, M., Zhou, C., Wang, B. (2025). A New Dataset and Framework for Real-World Blurred Images Super-Resolution. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15086. Springer, Cham. https://doi.org/10.1007/978-3-031-73390-1_4


  • DOI: https://doi.org/10.1007/978-3-031-73390-1_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73389-5

  • Online ISBN: 978-3-031-73390-1

  • eBook Packages: Computer Science (R0)
