xAI 2023: Lisbon, Portugal
- Luca Longo: Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lisbon, Portugal, July 26-28, 2023. CEUR Workshop Proceedings 3554, CEUR-WS.org 2023
- Luca Longo: Preface.
Late-Breaking Work
- Szymon Bobek, Slawomir Nowaczyk, João Gama, Sepideh Pashami, Rita P. Ribeiro, Zahra Taghiyarrenani, Bruno Veloso, Lala H. Rajaoarisoa, Maciej Szelazek, Grzegorz J. Nalepa: Why Industry 5.0 Needs XAI 2.0? 1-6
- Kristýna Sirka Kacafírková, Sara Polak, Myriam Sillevis Smitt, Shirley A. Elprama, An Jacobs: Trustworthy Enough? Evaluation of an AI Decision Support System for Healthcare Professionals. 7-11
- Anubhav Bhatti, Naveen Thangavelu, Marium Hassan, Choongmin Kim, San Lee, Yonghwan Kim, Jang Yong Kim: Interpreting Forecasted Vital Signs Using N-BEATS in Sepsis Patients. 12-17
- Tessel Haagen, Heysem Kaya, Joop Snijder, Melchior Nierman: AutoXplain: Towards Automated Interpretable Model Selection. 18-23
- José Diogo Marques dos Santos, José Paulo Marques dos Santos: Explaining ANN-modeled fMRI Data with Path-Weights and Layer-Wise Relevance Propagation. 24-29
- Sofia Morandini, Federico Fraboni, Gabriele Puzzo, Davide Giusino, Lucia Volpi, Hannah Brendel, Enzo Balatti, Marco de Angelis, Andrea De Cesarei, Luca Pietrantoni: Examining the Nexus between Explainability of AI Systems and User's Trust: A Preliminary Scoping Review. 30-35
- Felix Liedeker, Philipp Cimiano: A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations. 36-41
- Ana Bucchi, Gabriel M. Fonseca: Is the Common Approach used to Identify Social Biases in Artificial Intelligence also Biased? 42-46
- Kira Vinogradova, Gene Myers: Local Interpretable Model-Agnostic Explanations for Multitarget Image Regression. 47-52
- Giulia Vilone, Luca Longo: An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks. 53-58
- Alberto Termine, Alessandro Antonucci, Alessandro Facchini: Machine Learning Explanations by Surrogate Causal Models (MaLESCaMo). 59-64
- Taufique Ahmed, Luca Longo: Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps. 65-70
- Ephrem Tibebe Mekonnen, Pierpaolo Dondio, Luca Longo: Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method. 71-76
- Björn Forcher, Patrick Menold, Moritz Weixler, Jörg Schmitt, Samuel Wagner: Evaluation of Explainable AI methods for Classification Tasks in Visual Inspection. 77-82
- Isacco Beretta, Eleonora Cappuccio, Marta Marchiori Manerba: User-Driven Counterfactual Generator: A Human Centered Exploration. 83-88
- Robert S. Sullivan, Luca Longo: Optimizing Deep Q-Learning Experience Replay with SHAP Explanations: Exploring Minimum Experience Replay Buffer Sizes in Reinforcement Learning. 89-94
- Marija Kopanja, Sanja Brdar, Stefan Hacko: Uncovering Decision-making Process of Cost-sensitive Tree-based Classifiers using the Adaptation of TreeSHAP. 95-100
- German Magai, Artem Soroka: Explaining the Transfer Learning Ability of a Deep Neural Networks by Means of Representations. 101-106
- Mozhgan Salimiparsa, Surajsinh Parmar, San Lee, Choongmin Kim, Yonghwan Kim, Jang Yong Kim: Investigating Poor Performance Regions of Black Boxes: LIME-based Exploration in Sepsis Detection. 107-111
- Meng Shi, Celal Savur, Elizabeth Watkins, Ramesh Manuvinakurike, Gesem Gudino Mejia, Richard Beckwith, Giuseppe Raffa: An Explainable AI User Interface for Facilitating Collaboration between Domain Experts and AI Researchers. 112-116
- Aurelio Barrera-Vicent, Eduardo Paluzo-Hidalgo, Miguel Angel Gutiérrez-Naranjo: The Metric-aware Kernel-width Choice for LIME. 117-122
- Swati Sachan, Jericho Muwanga: Integration of Explainable Deep Neural Network with Blockchain Technology: Medical Indemnity Insurance. 123-128
- Ricardo Anibal Matamoros Aragon, Italo Zoppis, Sara Manzoni: When Attention Turn To Be Explanation. A Case Study in Recommender Systems. 129-134
- Iván Sevillano-García, Julián Luengo, Francisco Herrera: Low-Impact Feature Reduction Regularization Term: How to Improve Artificial Intelligence with Explainability. 135-139
- Anna Theresa Stüber, Stefan Coors, Michael Ingrisch: Revitalize the Potential of Radiomics: Interpretation and Feature Stability in Medical Imaging Analyses through Groupwise Feature Importance. 140-145
Demos
- Martin Jullum, Jacob Sjødin, Robindra Prabhu, Anders Løland: eXplego: An interactive Tool that Helps you Select Appropriate XAI-methods for your Explainability Needs. 146-151
- Florian Osswald, Roman Bartolosch, Torsten Fiolka, Engelbert Hartmann, Bernhard Krach, Jan Feil, Martin Lederer: FCAS Ethical AI Demonstrator. 152-157
- Nicoletta Prentzas: Argumentation-based Explainable Machine Learning ArgEML: α-Version Technical Details. 158-163
- José Luis Corcuera Bárcena, Mattia Daole, Pietro Ducange, Francesco Marcelloni, Giovanni Nardini, Alessandro Renda, Giovanni Stea: Federated Learning of Explainable Artificial Intelligence Models: A Proof-of-Concept for Video-streaming Quality Forecasting in B5G/6G networks. 164-168
Doctoral Consortium
- Franca Corradini: Probabilistic Modelling for Design and Verification of Trustworthy Autonomous Systems. 169-176
- Arjun Vinayak Chikkankod: Deep Clustering as a Unified Method for Representation Learning and Clustering of EEG Data for Microstate Theory. 177-184
- Mert Keser: Real-Time Explainable Plausibility Verification for DNN-based Automotive Perception. 185-192
- Berkant Turan: Extending Merlin-Arthur Classifiers for Improved Interpretability. 193-200
- Ivania Donoso-Guzmán: Designing an Evaluation Framework for eXplainable AI in the Healthcare Domain. 201-208
- Bahram Salamat Ravandi: Personalized Human-Robot Interaction in Companion Social Robots. 209-216
- Luca Heising: Accelerating Implementation of Artificial Intelligence in Radiotherapy through Explainability. 217-224
- Oleksandr Davydko: Lung images Classification with Textural Characteristics and Hybrid Classification Schemes. 225-232
- Andrea Fedele: Explain and Interpret Few-Shot Learning. 233-240
- Marta Marchiori Manerba: Fairness Auditing, Explanation and Debiasing in Linguistic Data and Language Models. 241-248
- Gargi Gupta: Post Hoc Explanations for RNNs using State Transition Representations for Time Series Data. 249-255