Abstract
The success of deep neural networks has been compromised by their lack of interpretability. On the other hand, most interpretable models do not offer the same accuracy as deep neural networks, or they depend on a deep network as a backbone. Inspired by Classification-by-Components networks, we present in this paper a novel approach to designing a two-layered perceptron network that offers a level of interpretability. Hence, we retain the predictive power of a multi-layer perceptron while a class of the adapted parameters remains meaningful to humans. We visualize the weights between the input layer and the hidden layer and show that matching the objective function to the activation function of the output layer is the key to interpreting the weights and their influence on component-wise classification.
M.M.B. is supported by an ESF PhD grant.
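To make the idea concrete, the following is a minimal sketch, not the authors' exact model: a two-layered perceptron in PyTorch whose input-to-hidden weights can be read off as learned "components", trained with a cross-entropy objective that is matched to a softmax output activation. The layer sizes, the 28x28 input geometry, and the random stand-in data are illustrative assumptions.

```python
# A hedged illustration of a two-layered perceptron with inspectable first-layer weights.
# All sizes and the 28x28 reshape are assumptions, not taken from the paper.
import torch
import torch.nn as nn

n_features, n_components, n_classes = 784, 16, 10  # e.g. flattened 28x28 inputs

model = nn.Sequential(
    nn.Linear(n_features, n_components),  # weights to be visualized as components
    nn.Sigmoid(),                         # hidden "component detection" responses
    nn.Linear(n_components, n_classes),   # component-wise class reasoning
)

# Matching objective and output activation: softmax output paired with
# cross-entropy (nn.CrossEntropyLoss applies log-softmax internally).
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random data standing in for a real dataset.
x = torch.randn(32, n_features)
y = torch.randint(0, n_classes, (32,))
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Each row of the first-layer weight matrix is one learned component; it can be
# reshaped to the input geometry and plotted as an image for inspection.
components = model[0].weight.detach().reshape(n_components, 28, 28)
print(components.shape)  # torch.Size([16, 28, 28])
```

Under this reading, interpretability comes from plotting the rows of the first-layer weight matrix and relating the second-layer weights to how strongly each component argues for or against each class.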