On Effects of Compression with Hyperdimensional Computing in Distributed Randomized Neural Networks

  • Conference paper
  • First Online:
Advances in Computational Intelligence (IWANN 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12862)

Abstract

A change in the prevalent supervised learning techniques is foreseeable in the near future: from complex, computationally expensive algorithms to more flexible and elementary training ones. The strong revitalization of randomized algorithms can be framed within this shift. We recently proposed a model for distributed classification based on randomized neural networks and hyperdimensional computing, which takes into account the cost of information exchange between agents by using compression. The use of compression is important as it addresses the issues related to the communication bottleneck; however, the original approach is rigid in the way the compression is used. Therefore, in this work, we propose a more flexible approach to compression and compare it to conventional compression algorithms, dimensionality reduction, and quantization techniques.
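
To make the compression idea concrete, below is a minimal sketch of the bind-and-superpose primitive that HDC offers for this purpose. This is illustrative Python/NumPy with assumed sizes (N, L) and a toy random "readout"; it is a simplified variant in the spirit of [3, 36], not the paper's implementation. An N x L readout matrix is collapsed into a single N-dimensional vector by binding each class column to a random bipolar key and superposing; a sample can then be classified directly from the compressed vector, because unbinding recovers each column up to zero-mean crosstalk.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 2000, 10  # expanded feature dimension, number of classes (assumed sizes)

# Toy "trained" readout: column l of P scores class l (here, a random prototype).
P = rng.standard_normal((N, L))

# Shared random bipolar keys; agents can regenerate them from a common seed,
# so only the compressed vector needs to be communicated.
keys = rng.choice([-1.0, 1.0], size=(N, L))

# Compress: bind each column to its key (element-wise) and superpose.
# N*L weights shrink to a single N-dimensional vector (factor-L compression).
s = (P * keys).sum(axis=1)

# Classify a noisy sample of class 3 straight from the compressed vector:
# unbinding with key l yields column l plus zero-mean crosstalk, so the
# inner product still concentrates on the true class when N >> L.
x = P[:, 3] + 0.5 * rng.standard_normal(N)
scores = x @ (s[:, None] * keys)
print(scores.argmax())  # -> 3 with high probability
```

The crosstalk grows with the number of superposed columns, which is precisely the accuracy-versus-communication trade-off that a more flexible use of compression can tune.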

The work of D.K. was supported by the European Union’s Horizon 2020 Programme under the Marie Skłodowska-Curie Individual Fellowship Grant (839179), and in part by DARPA’s AIE (HyDDENN) program and by AFOSR FA9550-19-1-0241.

Notes

1.

This is possible since HDC/VSA provide primitives for representing structured data in hypervectors, such as sequences [9, 12, 20, 40], sets [20, 29], state automata [31, 44], hierarchies, or predicate relations [32, 33]. Please consult [23] for a general overview; a minimal code sketch of one such primitive follows these notes.

2.

The names of the datasets were: Abalone, Cardiotocography (10 classes), Chess (King-Rook vs. King-Pawn), Letter, Oocytes merluccius nucleus 4D, Oocytes merluccius states 2F, Pendigits, Plant margin, Plant shape, Plant texture, Ringnorm, Semeion, Spambase, Statlog landsat, Steel plates, Waveform, Waveform noise, Yeast.
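
As a concrete illustration of the primitives mentioned in note 1, the sketch below (our own illustrative Python/NumPy, with an assumed vocabulary and dimensionality) encodes a short sequence into a single hypervector by permuting each item's random hypervector according to its position and superposing the results, in the spirit of the sequence representations in [9, 20]. Querying a position amounts to undoing the permutation and matching against the item memory.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # hypervector dimensionality

# Item memory: a fixed random bipolar hypervector per symbol.
items = {ch: rng.choice([-1, 1], size=D) for ch in "abcde"}

def encode_seq(seq):
    """Bundle a sequence into one hypervector: permute (cyclic shift)
    each item hypervector by its position, then superpose."""
    return sum(np.roll(items[ch], i) for i, ch in enumerate(seq))

s = encode_seq("cab")

# Query: which symbol sits at position 1? Undo that position's permutation
# and find the best-matching item hypervector.
probe = np.roll(s, -1)
best = max(items, key=lambda ch: probe @ items[ch])
print(best)  # -> 'a' (crosstalk from other positions is near-orthogonal noise)
```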

References

  1. Alonso, P., Shridhar, K., Kleyko, D., Osipov, E., Liwicki, M.: HyperEmbed: trade-offs between resources and performance in NLP tasks with hyperdimensional computing enabled embedding of n-gram statistics. In: International Joint Conference on Neural Networks (IJCNN), pp. 1–9 (2021)

  2. Ben-Nun, T., Hoefler, T.: Demystifying parallel and distributed deep learning: an in-depth concurrency analysis. ACM Comput. Surv. 52(4), 1–43 (2019)

  3. Cheung, B., Terekhov, A., Chen, Y., Agrawal, P., Olshausen, B.A.: Superposition of many models into one. In: Advances in Neural Information Processing Systems (NeurIPS), pp. 10868–10877 (2019). https://papers.nips.cc/paper/2019/file/4c7a167bb329bd92580a99ce422d6fa6-Paper.pdf

  4. Deutsch, P., Gailly, J.L.: RFC 1950: ZLIB compressed data format specification version 3.3 (1996)

  5. Diao, C., Kleyko, D., Rabaey, J.M., Olshausen, B.A.: Generalized learning vector quantization for classification in randomized neural networks and hyperdimensional computing. In: International Joint Conference on Neural Networks (IJCNN), pp. 1–9 (2021)

  6. Dua, D., Graff, C.: UCI Machine Learning Repository (2017). http://archive.ics.uci.edu/ml

  7. Fernández-Delgado, M., Cernadas, E., Barro, S., Amorim, D.: Do we need hundreds of classifiers to solve real world classification problems? J. Mach. Learn. Res. 15(90), 3133–3181 (2014)

  8. Fierimonte, R., Scardapane, S., Uncini, A., Panella, M.: Fully decentralized semi-supervised learning via privacy-preserving matrix completion. IEEE Trans. Neural Netw. Learn. Syst. 28(11), 2699–2711 (2017)

  9. Frady, E.P., Kleyko, D., Sommer, F.T.: A theory of sequence indexing and working memory in recurrent neural networks. Neural Comput. 30, 1449–1513 (2018)

  10. Frady, E.P., Kleyko, D., Sommer, F.T.: Variable binding for sparse distributed representations: theory and applications. arXiv:2009.06734, pp. 1–16 (2020)

  11. Gayler, R.W.: Multiplicative binding, representation operators and analogy. In: Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences, pp. 1–4 (1998)

  12. Hannagan, T., Dupoux, E., Christophe, A.: Holographic string encoding. Cogn. Sci. 35(1), 79–118 (2011)

  13. Hersche, M., Rupp, P., Benini, L., Rahimi, A.: Compressing subject-specific brain-computer interface models into one model by superposition in hyperdimensional space. In: 2020 Design, Automation and Test in Europe Conference and Exhibition (DATE), pp. 246–251. IEEE (2020)

  14. Huang, G., Zhu, Q., Siew, C.: Extreme learning machine: theory and applications. Neurocomputing 70(1–3), 489–501 (2006)

  15. Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Quantized neural networks: training neural networks with low precision weights and activations. J. Mach. Learn. Res. 18, 1–30 (2018)

  16. Igelnik, B., Pao, Y.: Stochastic choice of basis functions in adaptive function approximation and the functional-link net. IEEE Trans. Neural Netw. 6, 1320–1329 (1995)

  17. Imani, M., et al.: A framework for collaborative learning in secure high-dimensional space. In: 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), pp. 435–446. IEEE (2019)

  18. Jakimovski, P., Schmidtke, H.R., Sigg, S., Chaves, L.W.F., Beigl, M.: Collective communication for dense sensing environments. J. Ambient Intell. Smart Environ. 4(2), 123–134 (2012)

  19. Kairouz, P., McMahan, H.B., Avent, B., Bellet, A., Bennis, M., et al.: Advances and open problems in federated learning. arXiv:1912.04977, pp. 1–121 (2019)

  20. Kanerva, P.: Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors. Cogn. Comput. 1(2), 139–159 (2009)

  21. Kim, H.: HDM: Hyper-dimensional modulation for robust low-power communications. In: 2018 IEEE International Conference on Communications (ICC), pp. 1–6. IEEE (2018)

  22. Kim, Y., Imani, M., Rosing, T.S.: Efficient human activity recognition using hyperdimensional computing. In: Proceedings of the 8th International Conference on the Internet of Things, pp. 1–6 (2018)

  23. Kleyko, D., et al.: Vector symbolic architectures as a computing framework for nanoscale hardware. arXiv:2106.05268, pp. 1–28 (2021)

  24. Kleyko, D., Frady, E.P., Kheffache, M., Osipov, E.: Integer echo state networks: efficient reservoir computing for digital hardware. IEEE Trans. Neural Netw. Learn. Syst. PP(99), 1–14 (2020)

  25. Kleyko, D., Kheffache, M., Frady, E.P., Wiklund, U., Osipov, E.: Density encoding enables resource-efficient randomly connected neural networks. IEEE Trans. Neural Netw. Learn. Syst. 32(8), 3777–3783 (2021)

  26. Kleyko, D., Lyamin, N., Osipov, E., Riliskis, L.: Dependable MAC layer architecture based on holographic data representation using hyper-dimensional binary spatter codes. In: Multiple Access Communications (MACOM). Lecture Notes in Computer Science, vol. 7642, pp. 134–145 (2012)

  27. Kleyko, D., Osipov, E., De Silva, D., Wiklund, U., Alahakoon, D.: Integer self-organizing maps for digital hardware. In: 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2019)

  28. Kleyko, D., Osipov, E., Papakonstantinou, N., Vyatkin, V.: Hyperdimensional computing in industrial systems: the use-case of distributed fault isolation in a power plant. IEEE Access 6, 30766–30777 (2018)

  29. Kleyko, D., Rahimi, A., Gayler, R.W., Osipov, E.: Autoscaling bloom filter: controlling trade-off between true and false positives. Neural Comput. Appl. 32, 3675–3684 (2020)

  30. Kleyko, D., Rosato, A., Frady, E.P., Panella, M., Sommer, F.T.: Perceptron theory for predicting the accuracy of neural networks. arXiv:2012.07881, pp. 1–12 (2020)

  31. Osipov, E., Kleyko, D., Legalov, A.: Associative synthesis of finite state automata model of a controlled object with hyperdimensional computing. In: IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, pp. 3276–3281. IEEE (2017)

  32. Plate, T.A.: Holographic Reduced Representations: Distributed Representation for Cognitive Structures. Center for the Study of Language and Information (CSLI), Stanford (2003)

  33. Rachkovskij, D.A.: Representation and processing of structures with binary sparse distributed codes. IEEE Trans. Knowl. Data Eng. 13(2), 261–276 (2001)

  34. Rizzi, A., Buccino, N.M., Panella, M., Uncini, A.: Genre classification of compressed audio data. In: 2008 IEEE 10th Workshop on Multimedia Signal Processing, pp. 654–659. IEEE (2008)

  35. Rosato, A., Altilio, R., Panella, M.: Finite precision implementation of random vector functional-link networks. In: 2017 22nd International Conference on Digital Signal Processing (DSP), pp. 1–5. IEEE (2017)

  36. Rosato, A., Panella, M., Kleyko, D.: Hyperdimensional computing for efficient distributed classification with randomized neural networks. In: International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2021)

  37. Scardapane, S., Wang, D.: Randomness in neural networks: an overview. Data Min. Knowl. Disc. 7(2), 1–18 (2017)

  38. Shridhar, K., Jain, H., Agarwal, A., Kleyko, D.: End to end binarized neural networks for text classification. In: Workshop on Simple and Efficient Natural Language Processing (SustaiNLP), pp. 29–34 (2020)

  39. Simpkin, C., Taylor, I., Bent, G.A., de Mel, G., Rallapalli, S., Ma, L., Srivatsa, M.: Constructing distributed time-critical applications using cognitive enabled services. Futur. Gener. Comput. Syst. 100, 70–85 (2019)

  40. Thomas, A., Dasgupta, S., Rosing, T.: Theoretical foundations of hyperdimensional computing. arXiv:2010.07426, pp. 1–32 (2020)

  41. Xu, P.: Truncated SVD methods for discrete linear ill-posed problems. Geophys. J. Int. 135(2), 505–514 (1998)

  42. Yang, Q., Liu, Y., Chen, T., Tong, Y.: Federated machine learning: concept and applications. ACM Trans. Intell. Syst. Technol. 10(2), 1–19 (2019)

  43. Yazici, M.T., Basurra, S., Gaber, M.M.: Edge machine learning: enabling smart internet of things applications. Big Data Cogn. Comput. 2(3), 1–17 (2018)

  44. Yerxa, T., Anderson, A., Weiss, E.: The hyperdimensional stack machine. In: Cognitive Computing, pp. 1–2 (2018)

  45. Zeman, M., Osipov, E., Bosnic, Z.: Compressed superposition of neural networks for deep learning in edge computing. In: International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2021)

Author information

Correspondence to Massimo Panella.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Rosato, A., Panella, M., Osipov, E., Kleyko, D. (2021). On Effects of Compression with Hyperdimensional Computing in Distributed Randomized Neural Networks. In: Rojas, I., Joya, G., Català, A. (eds.) Advances in Computational Intelligence. IWANN 2021. Lecture Notes in Computer Science, vol. 12862. Springer, Cham. https://doi.org/10.1007/978-3-030-85099-9_13

  • DOI: https://doi.org/10.1007/978-3-030-85099-9_13

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85098-2

  • Online ISBN: 978-3-030-85099-9

  • eBook Packages: Computer Science, Computer Science (R0)
