Abstract
Although machine learning (ML) has been a subject of research for decades, applications demanding high computational power have emerged only recently. Developers usually focus on solving ML problems without considering how much energy the underlying frameworks consume. This study compares four widely used frameworks, namely TensorFlow, Keras, PyTorch, and Scikit-learn, in terms of several aspects, including energy efficiency, memory usage, execution time, and accuracy. We monitor the performance of these frameworks on several well-known machine learning benchmark problems. Our results reveal, for instance, that a faster framework does not necessarily consume less energy, and that the frameworks differ considerably in memory usage. We show how ML developers can use our results to decide which framework to adopt when energy efficiency is a concern.
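The paper's measurement harness is not reproduced on this page. As a minimal sketch of the kind of monitoring the abstract describes, one might time a workload and record its peak Python heap usage with the standard library alone; energy readings would additionally require hardware counters such as Intel RAPL (accessible, for example, through the pyRAPL package), which are omitted here to keep the sketch portable. The `profile` helper and the toy workload are illustrative, not the authors' actual setup.

```python
import time
import tracemalloc

def profile(workload):
    """Run `workload` and return (result, wall-clock seconds, peak bytes).

    Peak bytes counts Python-level allocations only; energy measurement
    would need a hardware counter (e.g. Intel RAPL) and is not done here.
    """
    tracemalloc.start()
    t0 = time.perf_counter()
    result = workload()          # e.g. a model's fit()/predict() call
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# Toy stand-in for a training run: an allocation-heavy computation.
res, secs, peak_bytes = profile(lambda: sum(i * i for i in range(100_000)))
print(f"{secs:.4f}s, peak {peak_bytes} bytes")
```

In a real comparison, `workload` would wrap an equivalent training or inference task in each framework so that time, memory, and energy are measured over identical work.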
Acknowledgement
We would like to thank the Ministry of Higher Education and Gabes University for facilitating the travel of Salwa Ajel to Portugal, the HASLab/INESC TEC, Universidade do Minho (Portugal) for the technical support of the work, and the Erasmus Jamies for accepting Salwa Ajel's application.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ajel, S., Ribeiro, F., Ejbali, R., Saraiva, J. (2023). Energy Efficiency of Python Machine Learning Frameworks. In: Abraham, A., Pllana, S., Casalino, G., Ma, K., Bajaj, A. (eds) Intelligent Systems Design and Applications. ISDA 2022. Lecture Notes in Networks and Systems, vol 715. Springer, Cham. https://doi.org/10.1007/978-3-031-35507-3_57
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-35506-6
Online ISBN: 978-3-031-35507-3
eBook Packages: Intelligent Technologies and Robotics (R0)