
BAFFLE: A Baseline of Backpropagation-Free Federated Learning

  • Conference paper
  • Published in: Computer Vision – ECCV 2024 (ECCV 2024)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15133)


Abstract

Federated learning (FL) is a general principle for decentralized clients to train a server model collectively without sharing local data. FL is a promising framework with practical applications, but its standard training paradigm requires the clients to backpropagate through the model to compute gradients. Since these clients are typically edge devices and not fully trusted, executing backpropagation on them incurs computational and storage overhead as well as white-box vulnerability. In light of this, we develop backpropagation-free federated learning, dubbed BAFFLE, in which backpropagation is replaced by multiple forward processes to estimate gradients. BAFFLE is 1) memory-efficient and fits easily within upload bandwidth limits; 2) compatible with inference-only hardware optimization and model quantization or pruning; and 3) well suited to trusted execution environments, because the clients in BAFFLE only execute forward propagation and return a set of scalars to the server. Empirically, we use BAFFLE to train deep models from scratch or to finetune pretrained models, achieving acceptable results.

H. Feng—Work done during an internship at Sea AI Lab.
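
To make the abstract's central idea concrete, here is a minimal sketch of zeroth-order gradient estimation via finite differences, the generic technique that "multiple forward processes" alludes to. This is an illustration under assumed names (loss_fn, num_dirs, and sigma are hypothetical), not the paper's actual algorithm or interface:

```python
import numpy as np

def forward_only_grad(loss_fn, theta, num_dirs=100, sigma=1e-4, seed=0):
    """Estimate the gradient of loss_fn at theta using only forward passes.

    For each random Gaussian direction u, two loss evaluations give the
    central difference (L(theta + sigma*u) - L(theta - sigma*u)) / (2*sigma),
    which approximates the directional derivative u . grad L(theta).
    Averaging delta * u over many directions recovers the gradient,
    since E[u u^T] = I for standard normal u.
    """
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(theta)
    for _ in range(num_dirs):
        u = rng.standard_normal(theta.shape)
        # Two forward passes produce one scalar per direction.
        delta = (loss_fn(theta + sigma * u) - loss_fn(theta - sigma * u)) / (2 * sigma)
        grad += delta * u
    return grad / num_dirs

# Toy check: for L(w) = ||w||^2 the true gradient is 2w.
theta = np.array([1.0, -2.0, 0.5])
print(forward_only_grad(lambda w: float(np.sum(w ** 2)), theta, num_dirs=2000))
# approximately [ 2. -4.  1.]
```

In the federated setting the abstract describes, each client would upload only the per-direction scalars (delta above), and the server, sharing the random seeds, could reassemble the gradient estimate itself; how BAFFLE actually organizes this is specified in the paper, not here.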

Acknowledgments

This work is supported by the National Natural Science Foundation of China (62132017) and the Zhejiang Provincial Natural Science Foundation of China (LD24F020011).

Author information

Correspondence to Tianyu Pang or Wei Chen.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 387 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Feng, H., Pang, T., Du, C., Chen, W., Yan, S., Lin, M. (2025). BAFFLE: A Baseline of Backpropagation-Free Federated Learning. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15133. Springer, Cham. https://doi.org/10.1007/978-3-031-73226-3_6

  • DOI: https://doi.org/10.1007/978-3-031-73226-3_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73225-6

  • Online ISBN: 978-3-031-73226-3

  • eBook Packages: Computer Science; Computer Science (R0)
