Optimising Communication Overhead in Federated Learning Using NSGA-II

  • Conference paper
Applications of Evolutionary Computation (EvoApplications 2022)

Abstract

Federated learning is a training paradigm in which a server-side model is trained cooperatively from local models running on edge devices, so that raw data never leave the devices and privacy is preserved. The information these devices exchange with the server induces a substantial communication load, which jeopardises the efficiency of the whole process. The difficulty in reducing this overhead lies in doing so without degrading the model's accuracy, since the two goals are in conflict. Many works have tackled the problem by compressing pre-, mid- or post-trained models or by reducing the number of communication rounds, but always separately, even though both factors jointly contribute to the communication overhead. Our work aims at optimising the communication overhead of federated learning by (I) modelling it as a multi-objective problem and (II) applying a multi-objective optimisation algorithm (NSGA-II) to solve it. To the best of the authors' knowledge, this is the first work that (I) explores the benefit that evolutionary computation can bring to this problem and (II) considers neuron-level and device-level features together. We run the experiments on a simulated server/client architecture with 4 slave devices, investigating a convolutional and a fully connected neural network with 12 and 3 layers and 887,530 and 33,400 weights, respectively. Validation is conducted on the MNIST dataset of 70,000 images. The experiments show that our proposal can reduce communication by 99% while maintaining an accuracy equal to that obtained by the FedAvg algorithm, which uses 100% of the communications.
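To make the formulation concrete, the sketch below casts the trade-off described in the abstract as a bi-objective binary problem and solves it with NSGA-II via the pymoo library. It is an illustrative toy, not the paper's implementation (the authors' code is linked in the notes below): the mask encoding, client count, block sizes and the surrogate_error function are assumptions made for the example, standing in for the real federated training loop and accuracy evaluation.

```python
# Minimal NSGA-II sketch of the bi-objective view of federated learning
# communication (assumed encoding, not the paper's): a binary mask decides
# which parameter blocks each client uploads; we minimise both the fraction
# of parameters communicated and a surrogate for the global model's error.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.operators.crossover.pntx import TwoPointCrossover
from pymoo.operators.mutation.bitflip import BitflipMutation
from pymoo.operators.sampling.rnd import BinaryRandomSampling
from pymoo.optimize import minimize

N_CLIENTS, N_BLOCKS = 4, 50   # hypothetical sizes, not the paper's setup
BLOCK_SIZES = np.random.default_rng(0).integers(100, 1000, size=N_BLOCKS)

def surrogate_error(mask):
    """Stand-in for 'train federally with this mask, return 1 - accuracy'."""
    kept = BLOCK_SIZES @ mask.mean(axis=0)   # weighted coverage of blocks
    return 1.0 / (1.0 + kept)                # less communication -> more error

class FLCommProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=N_CLIENTS * N_BLOCKS, n_obj=2, xl=0, xu=1)

    def _evaluate(self, x, out, *args, **kwargs):
        mask = x.reshape(N_CLIENTS, N_BLOCKS).astype(float)
        # Objective 1: fraction of all parameters actually communicated.
        comm = (mask @ BLOCK_SIZES).sum() / (N_CLIENTS * BLOCK_SIZES.sum())
        # Objective 2: surrogate error of the aggregated model.
        out["F"] = [comm, surrogate_error(mask)]

algorithm = NSGA2(pop_size=40,
                  sampling=BinaryRandomSampling(),
                  crossover=TwoPointCrossover(),
                  mutation=BitflipMutation(),
                  eliminate_duplicates=True)
res = minimize(FLCommProblem(), algorithm, ("n_gen", 30), seed=1, verbose=False)
print(res.F)   # Pareto front: (communication fraction, surrogate error)
```

In the paper's actual setting, evaluating the second objective involves running federated training and measuring accuracy on MNIST rather than the closed-form surrogate used here.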

Notes

  1. www.statista.com/statistics/1101442/iot-number-of-connected-devices-worldwide.

  2. blogs.cisco.com/sp/five-things-that-are-bigger-than-the-internet-findings-from-this-years-global-cloud-index.

  3. https://github.com/NEO-Research-Group/flcop.

References

  1. Alistarh, D., Grubic, D., Li, J.Z., Tomioka, R., Vojnovic, M.: QSGD: communication-efficient SGD via gradient quantization and encoding. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS 2017, pp. 1707–1718 (2017)

  2. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)

  3. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015)

  4. Mayer, R., Jacobsen, H.A.: Scalable deep learning on distributed infrastructures: challenges, techniques, and tools. ACM Comput. Surv. 53(1), 1–37 (2020)

  5. McMahan, B., Moore, E., Ramage, D., Hampson, S., y Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics, PMLR, pp. 1273–1282 (2017)

  6. Sheller, M.J., et al.: Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Sci. Rep. 10(1), 1–12 (2020)

  7. Tak, A., Cherkaoui, S.: Federated edge learning: design issues and challenges. IEEE Network (2020)

  8. Wangni, J., Wang, J., Liu, J., Zhang, T.: Gradient sparsification for communication-efficient distributed optimization. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 1306–1316 (2018)

  9. Xu, J., Du, W., Jin, Y., He, W., Cheng, R.: Ternary compression for communication-efficient federated learning. IEEE Trans. Neural Netw. Learn. Syst. (2020)

  10. Zhou, Y., Ye, Q., Lv, J.C.: Communication-efficient federated learning with compensated Overlap-FedAvg. IEEE Trans. Parallel Distrib. Syst. (2021)

Acknowledgments

This research is partially funded by the Universidad de Málaga, the Consejería de Economía y Conocimiento de la Junta de Andalucía and FEDER under grant number UMA18-FEDERJA-003 (PRECOG); by grant PID2020-116727RB-I00 (HUmove) funded by MCIN/AEI/10.13039/501100011033; and by the TAILOR ICT-48 Network (No. 952215) funded by the EU Horizon 2020 research and innovation programme. José Ángel Morell is supported by an FPU grant from the Ministerio de Educación, Cultura y Deporte, Gobierno de España (FPU16/02595). The authors thank the Supercomputing and Bioinnovation Center (SCBI) for the provision of computational resources and technical support. The views expressed are purely those of the writer and may not in any circumstances be regarded as stating an official position of the European Commission.

Author information

Corresponding author

Correspondence to José Ángel Morell.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Morell, J.Á., Dahi, Z.A., Chicano, F., Luque, G., Alba, E. (2022). Optimising Communication Overhead in Federated Learning Using NSGA-II. In: Jiménez Laredo, J.L., Hidalgo, J.I., Babaagba, K.O. (eds) Applications of Evolutionary Computation. EvoApplications 2022. Lecture Notes in Computer Science, vol 13224. Springer, Cham. https://doi.org/10.1007/978-3-031-02462-7_21

  • DOI: https://doi.org/10.1007/978-3-031-02462-7_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-02461-0

  • Online ISBN: 978-3-031-02462-7

  • eBook Packages: Computer Science, Computer Science (R0)
