Abstract
On the one hand, Deep Neural Networks (DNNs) create new opportunities in many digitisation applications, such as complex Object Detection (OD) in autonomous driving. On the other hand, the black-box properties of DNNs pose a serious challenge for their application in safety- and security-critical domains, where liability and trustworthiness are key requirements. Because formal verification of DNNs is only feasible in very restricted settings, one generally has to rely on empirical robustness tests. Here we investigate the use of explainable artificial intelligence (XAI) to improve such robustness evaluations. Integrated Gradients (IG) is an attribution-based XAI method that provides heatmaps indicating the relevance of individual inputs to a prediction. In contrast to many contributions that aim at improving and comparing XAI techniques, this paper proposes two interpretability metrics that allow XAI methods to be added to empirical trustworthiness evaluations of DNNs; in this way, XAI becomes usable in real-time applications. As a further contribution, we integrate these metrics into a robustness testing framework in which a model trained on the Audi Autonomous Driving Dataset (A2D2) is evaluated on perturbed images that mimic naturally occurring inputs.
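To make the attribution step concrete, the following is a minimal sketch of how IG heatmaps of the kind described above can be computed, using the standard Riemann-sum approximation of the path integral from Sundararajan et al.; the PyTorch model interface, the zero baseline, and the step count are illustrative assumptions and not the setup used in the paper.

```python
# Minimal sketch (not the authors' implementation): integrated gradients (IG)
# following Sundararajan et al. (2017), approximated with a Riemann sum.
# Assumes a differentiable PyTorch classifier and an input image of shape (C, H, W).
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Return an IG attribution map with the same shape as the input x."""
    if baseline is None:
        baseline = torch.zeros_like(x)              # black image as reference input
    # Interpolate along the straight-line path from baseline to input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)       # shape: (steps, C, H, W)
    path.requires_grad_(True)
    # Gradient of the target logit with respect to every interpolated image.
    logits = model(path)
    grads = torch.autograd.grad(logits[:, target].sum(), path)[0]
    # Average the gradients over the path and scale by the input difference.
    return (x - baseline) * grads.mean(dim=0)

# Example use: per-pixel heatmap for the predicted class of one image.
# heatmap = integrated_gradients(model, image, target=pred_class).abs().sum(dim=0)
```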
Cite this article
Wilms, L., von Twickel, A., Neu, M. et al. Quantifying Attribution-based Explainable AI for Robustness Evaluations. Datenschutz Datensich 47, 492–496 (2023). https://doi.org/10.1007/s11623-023-1805-x