Abstract
A shift in the prevalent supervised learning techniques is foreseeable in the near future: from complex, computationally expensive algorithms towards more flexible and elementary training schemes. The strong revitalization of randomized algorithms can be framed within this shift. We recently proposed a model for distributed classification based on randomized neural networks and hyperdimensional computing, which accounts for the cost of information exchange between agents by using compression. Compression is important because it addresses the communication bottleneck; however, the original approach is rigid in how compression is applied. In this work, we therefore propose a more flexible approach to compression and compare it to conventional compression algorithms, dimensionality reduction, and quantization techniques.
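To give a concrete sense of the hyperdimensional computing machinery the abstract refers to, the sketch below shows superposition-based compression of several models' weight vectors into a single high-dimensional vector using random bipolar keys. This is a minimal illustration of the general binding-and-bundling principle, not the paper's specific protocol; all names, the vector dimension, and the number of agents are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000       # hyperdimensional vector length (illustrative)
n_agents = 5     # number of models to superpose (illustrative)

# Hypothetical per-agent readout weights, already in d-dimensional form.
weights = [rng.standard_normal(d) for _ in range(n_agents)]

# Each agent gets a random bipolar key; with bipolar keys, binding and
# unbinding are both elementwise multiplication (a key is its own inverse).
keys = [rng.choice([-1.0, 1.0], size=d) for _ in range(n_agents)]

# Superposition: all bound models share one d-dimensional vector,
# so only this single vector needs to be communicated.
bundle = np.sum([k * w for k, w in zip(keys, weights)], axis=0)

# Approximate retrieval: unbind with the agent's key. Crosstalk from the
# other superposed models acts as noise that shrinks as d grows.
recovered = keys[2] * bundle
corr = np.corrcoef(recovered, weights[2])[0, 1]
print(f"correlation with original weights: {corr:.2f}")
```

The retrieval is lossy: the recovered vector correlates with the original but carries crosstalk noise from the other superposed models, which is exactly the kind of accuracy/communication trade-off the compression experiments in this work examine.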
The work of D.K. was supported by the European Union's Horizon 2020 Programme under the Marie Skłodowska-Curie Individual Fellowship Grant (839179), in part by DARPA's AIE (HyDDENN) program, and by AFOSR FA9550-19-1-0241.
Notes
- 1.
- 2.
The names of the datasets were: Abalone, Cardiotocography (10 classes), Chess (King-Rook vs. King-Pawn), Letter, Oocytes merluccius nucleus 4D, Oocytes merluccius states 2F, Pendigits, Plant margin, Plant shape, Plant texture, Ringnorm, Semeion, Spambase, Statlog landsat, Steel plates, Waveform, Waveform noise, Yeast.
© 2021 Springer Nature Switzerland AG
Rosato, A., Panella, M., Osipov, E., Kleyko, D. (2021). On Effects of Compression with Hyperdimensional Computing in Distributed Randomized Neural Networks. In: Rojas, I., Joya, G., Català, A. (eds) Advances in Computational Intelligence. IWANN 2021. Lecture Notes in Computer Science(), vol 12862. Springer, Cham. https://doi.org/10.1007/978-3-030-85099-9_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-85098-2
Online ISBN: 978-3-030-85099-9
eBook Packages: Computer Science (R0)