Abstract
This paper profiles recent research on eXplainable AI (XAI) at the Insight Centre for Data Analytics. This work concentrates on post-hoc explanation-by-example solutions to XAI as one approach to explaining black-box deep-learning systems. Three different methods of post-hoc explanation are outlined for image and time-series datasets: factual, counterfactual, and semi-factual methods. The future landscape for XAI solutions is discussed.
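To make the three explanation types concrete, the sketch below shows, under stated assumptions, how factual, semi-factual, and counterfactual examples might be retrieved from a classifier's latent feature space. This is not the paper's implementation: the `encode` and `predict` functions are assumed stand-ins for the black-box network, nearest-neighbour retrieval is only one illustrative strategy, and the "farthest same-class neighbour" rule is a crude heuristic for a semi-factual rather than the method described here.

```python
# A minimal sketch (not the paper's implementation) of post-hoc
# explanation-by-example. `encode` is assumed to map inputs to a trained
# network's penultimate-layer features; `predict` returns its class labels.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def explain_by_example(x_query, X_train, encode, predict):
    z_train = encode(X_train)               # latent features of the training set
    z_query = encode(x_query[None, :])      # latent features of the test instance
    y_train = predict(X_train)
    y_query = predict(x_query[None, :])[0]

    same = np.where(y_train == y_query)[0]   # same predicted class as the query
    other = np.where(y_train != y_query)[0]  # different predicted class

    # Factual: the nearest training case with the same predicted class.
    nn_same = NearestNeighbors(n_neighbors=len(same)).fit(z_train[same])
    _, order = nn_same.kneighbors(z_query)
    factual = X_train[same[order[0, 0]]]

    # Semi-factual (crude heuristic): the farthest same-class case, conveying
    # "even if the input looked this different, the decision would not change".
    semi_factual = X_train[same[order[0, -1]]]

    # Counterfactual: the nearest training case with a different predicted class.
    nn_other = NearestNeighbors(n_neighbors=1).fit(z_train[other])
    _, idx = nn_other.kneighbors(z_query)
    counterfactual = X_train[other[idx[0, 0]]]

    return factual, semi_factual, counterfactual
```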
Notes
1. Here, we consider factual examples as explanations; note that LIME [36] also gives factual information about the current test instance, via feature-importance scores.
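As a hedged illustration of the footnote above, the snippet below obtains LIME-style feature-importance scores for an image classifier. The dummy image and `classify_fn` are placeholder assumptions standing in for a real test instance and a real black-box model; the calls follow the public `lime` package API.

```python
# Illustrative use of LIME's factual, feature-importance style of explanation.
import numpy as np
from lime import lime_image

# Dummy stand-ins so the sketch is self-contained: in practice these would be
# the real test image and the black-box network's prediction function.
rng = np.random.default_rng(0)
test_image = rng.random((28, 28, 3))

def classify_fn(images):
    # Placeholder black box: uniform probabilities over 10 classes.
    return np.full((len(images), 10), 0.1)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    test_image, classify_fn, top_labels=1, num_samples=100)
label = explanation.top_labels[0]
_, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False)
# `mask` marks the super-pixels whose presence most supports the prediction,
# i.e. the feature-importance scores mentioned in the note above.
```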
References
Ala-Pietilä, P.: High-Level Expert Group on Artificial Intelligence. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence. Accessed 10 Oct 2020
Ates, E., et al.: Counterfactual explanations for machine learning on multivariate time series data. arXiv:2008.10781 (2020)
Bagnall, A., et al.: The great time series classification bake off: an experimental evaluation of recently proposed algorithms. Extended Version. arXiv:1602.01711 (2016)
Byrne, R.M.J.: Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019) (2019)
Chen, C., et al.: This looks like that: deep learning for interpretable image recognition. In: NeurIPS (2019)
Dau, H.A., et al.: The UCR time series archive. arXiv:1810.07758 (2019)
Delaney, E., et al.: Instance-based counterfactual explanations for time series classification. arXiv:2009.13211 (2020)
Ford, C., et al.: Play MNIST for me! User studies on the effects of post-hoc, example-based explanations & error rates on debugging a deep learning, black-box classifier. In: IJCAI 2020 XAI Workshop (2020)
Forestier, G., et al.: Generating synthetic time series to augment sparse datasets. In: 2017 IEEE International Conference on Data Mining (2017)
Frosst, N., Hinton, G.: Distilling a neural network into a soft decision tree. arXiv:1711.09784 (2017)
Gilpin, L.H., et al.: Explaining explanations: an approach to evaluating interpretability of machine learning. arXiv:1806.00069 (2018)
Hahn, T.: Strategic Research, Innovation and Deployment Agenda. https://ai-data-robotics-partnership.eu/wp-content/uploads/2020/09/AI-Data-Robotics-Partnership-SRIDA-V3.0.pdf. Accessed 10 Oct 2020
Karlsson, I., et al.: Explainable time series tweaking via irreversible and reversible temporal transformations. arXiv:1809.05183 (2018)
Keane, M., Kenny, E.: How case-based reasoning explains neural networks: a theoretical analysis of XAI using post-hoc explanation-by-example from a survey of ANN-CBR twin-systems. In: Bach, K., Marling, C. (eds.) ICCBR 2019. LNCS (LNAI), vol. 11680, pp. 155–171. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_11
Keane, M.T., Kenny, E.M.: The twin-system approach as one generic solution for XAI. In: IJCAI 2019 XAI Workshop (2019)
Keane, M.T., Smyth, B.: Good counterfactuals and where to find them: a case-based technique for generating counterfactuals for explainable AI (XAI). In: Watson, I., Weber, R. (eds.) ICCBR 2020. LNCS (LNAI), vol. 12311, pp. 163–178. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58342-2_11
Kenny, E.M., et al.: Bayesian case-exclusion and personalized explanations for sustainable dairy farming. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020) (2020)
Kenny, E., et al.: Predicting grass growth for sustainable dairy farming: a CBR system using Bayesian case-exclusion and post-hoc, personalized explanation-by-example (XAI). In: Bach, K., Marling, C. (eds.) ICCBR 2019. LNCS (LNAI), vol. 11680, pp. 172–187. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_12
Kenny, E.M., Keane, M.T.: On generating plausible counterfactual and semi-factual explanations for deep learning. arXiv:2009.06399 (2020)
Kenny, E.M., Keane, M.T.: Twin-systems to explain artificial neural networks using case-based reasoning. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019) (2019)
Labaien, J., Zugasti, E., De Carlos, X.: Contrastive explanations for a deep learning model on time-series data. In: Song, M., Song, I.-Y., Kotsis, G., Tjoa, A.M., Khalil, I. (eds.) DaWaK 2020. LNCS, vol. 12393, pp. 235–244. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59065-9_19
Laugel, T., et al.: Defining locality for surrogates in post-hoc interpretability. arXiv:1806.07498 (2018)
Laugel, T., et al.: The dangers of post-hoc interpretability: unjustified counterfactual explanations. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019) (2019)
Leavy, S., et al.: Data, power and bias in artificial intelligence. arXiv:2008.0734 (2020)
Leavy, S., Meaney, G., Wade, K., Greene, D.: Mitigating gender bias in machine learning data sets. In: Boratto, L., Faralli, S., Marras, M., Stilo, G. (eds.) BIAS 2020. CCIS, vol. 1245, pp. 12–26. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-52485-2_2
Yang, L., et al.: Generating plausible counterfactual explanations for deep transformers in financial text classification. In: Proceedings of the 28th International Conference on Computational Linguistics (2020)
Lipton, Z.C.: The mythos of model interpretability. arXiv:1606.03490 (2017)
Mittelstadt, B., et al.: Explaining explanations in AI. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (2019)
Mueen, A., Keogh, E.: Extracting optimal performance from dynamic time warping. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016)
Nguyen, T.T., Le Nguyen, T., Ifrim, G.: A model-agnostic approach to quantifying the informativeness of explanation methods for time series classification. In: Lemaire, V., Malinowski, S., Bagnall, A., Guyet, T., Tavenard, R., Ifrim, G. (eds.) AALTD 2020. LNCS (LNAI), vol. 12588, pp. 77–94. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-65742-0_6
Nugent, C., et al.: Gaining insight through case-based explanation. J. Intell. Inf. Syst. 32(3), 267–295 (2009). https://doi.org/10.1007/s10844-008-0069-0
O’Sullivan, B.: Towards a Magna Carta for Data: Expert Opinion Piece: Engineering and Computer Science Committee. https://www.ria.ie/sites/default/files/ria_magna_carta_data.pdf. Accessed 10 Oct 2020
Papernot, N., McDaniel, P.: Deep k-Nearest neighbors: towards confident, interpretable and robust deep learning. arXiv:1803.04765 (2018)
Petitjean, F., et al.: A global averaging method for dynamic time warping, with applications to clustering. Pattern Recogn. 44, 678–693 (2011)
Prabhu, V.U., Birhane, A.: Large image datasets: a pyrrhic win for computer vision? arXiv:2006.16923 (2020)
Ribeiro, M.T., et al.: “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD 2016 (2016)
Rudin, C.: Please stop explaining black box models for high stakes decisions. arXiv:1811.10154 (2018)
Seah, J.C.Y., et al.: Chest radiographs in congestive heart failure: visualizing neural network learning. Radiology 290(2), 514–522 (2019)
Sørmo, F., et al.: Explanation in case-based reasoning-perspectives and goals. Artif. Intell. Rev. 24, 109–143 (2005). https://doi.org/10.1007/s10462-005-4607-7
Wachter, S., et al.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. SSRN J. 31 (2017)
Horta, V.A.C., Mileo, A.: Towards explaining deep neural networks through graph analysis. In: Anderst-Kotsis, G., et al. (eds.) DEXA 2019. CCIS, vol. 1062, pp. 155–165. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-27684-3_20
Hohman, F., Kahng, M., Pienta, R., Chau, D.H.: Visual analytics in deep learning. IEEE Trans. Visual. Comput. Graphics 25, 2674–2693 (2018)
Acknowledgements
This paper emanated from research funded by (i) Science Foundation Ireland (SFI) to the Insight Centre for Data Analytics (12/RC/2289-P2), and (ii) SFI and DAFM, on behalf of the Government of Ireland, to the VistaMilk SFI Research Centre (16/RC/3835).