Abstract
With the pervasive integration of Artificial Intelligence (AI) into many facets of society, concerns about its transparency and interpretability have gained prominence, particularly in critical or citizen-facing applications. The field of Explainable Artificial Intelligence (XAI) has grown rapidly in response, with recent research focusing on elucidating the inner workings of AI systems. We present a bibliometric study of developments in XAI since 2020. We identify seven distinct application areas in which XAI methodologies have been employed. Furthermore, we propose a multidimensional taxonomy that categorizes these approaches and applications, contributing to ongoing efforts towards the standardization of XAI practices. By mapping the current research landscape and offering a structured taxonomy for analysis, we expose under- and over-explored techniques and encourage the adoption and development of more diverse approaches to XAI.