
An Interpretable Two-Layered Neural Network Structure–Based on Component-Wise Reasoning

  • Conference paper
  • First Online:
Artificial Intelligence and Soft Computing (ICAISC 2023)

Abstract

The success of deep neural networks has been compromised by their lack of interpretability. On the other hand, most interpretable models do not reach the accuracy of deep neural networks, or they depend on deep networks themselves. Inspired by Classification-by-Components networks, in this paper we present a novel approach to designing a two-layered perceptron network that offers a level of interpretability. Thus, we retain the predictive power of a multi-layer perceptron while a class of the adapted parameters remains meaningful to humans. We visualize the weights between the input layer and the hidden layer and show that matching the right objective function to the activation function of the output layer is the key to interpreting the weights and their influence on component-wise classification.
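To make the described structure concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): a two-layered perceptron whose first-layer weight rows can be inspected as component detectors, with a softmax output paired with a cross-entropy-style objective as one example of matching the output activation to the objective. All names, dimensions, and the specific activation/objective pairing are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only; architecture sizes are assumptions, not the paper's.
rng = np.random.default_rng(0)

n_features, n_components, n_classes = 784, 16, 10          # e.g. flattened 28x28 images
W1 = rng.normal(0, 0.01, size=(n_components, n_features))  # first layer: component detectors
W2 = rng.normal(0, 0.01, size=(n_classes, n_components))   # second layer: class-wise reasoning weights


def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def forward(x):
    # Hidden layer: response of each component to the input.
    h = np.tanh(W1 @ x)
    # Output layer: softmax activation; paired with a cross-entropy loss this is
    # one way of matching objective and output activation, so that W2 reads as
    # per-class evidence contributed by each component.
    return softmax(W2 @ h), h


# The rows of W1 can be reshaped (e.g. to 28x28) and plotted to visualize the
# learned components, as the abstract describes for the input-to-hidden weights.
x = rng.normal(size=n_features)
probs, hidden = forward(x)
print(probs.shape, hidden.shape)  # (10,), (16,)
```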

M.M.B. is supported by an ESF PhD grant.



Author information


Corresponding author

Correspondence to M. Mohannazadeh Bakhtiari.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Mohannazadeh Bakhtiari, M., Villmann, T. (2023). An Interpretable Two-Layered Neural Network Structure–Based on Component-Wise Reasoning. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2023. Lecture Notes in Computer Science(), vol 14125. Springer, Cham. https://doi.org/10.1007/978-3-031-42505-9_13

  • DOI: https://doi.org/10.1007/978-3-031-42505-9_13

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42504-2

  • Online ISBN: 978-3-031-42505-9

  • eBook Packages: Computer Science, Computer Science (R0)