{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,26]],"date-time":"2024-08-26T17:44:18Z","timestamp":1724694258880},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2020,7]]},"abstract":"Deep neural networks are usually considered black-boxes due to their complex internal architecture, that cannot straightforwardly provide human-understandable explanations on how they behave. Indeed, Deep Learning is still viewed with skepticism in those real-world domains in which incorrect predictions may produce critical effects. This is one of the reasons why in the last few years Explainable Artificial Intelligence (XAI) techniques have gained a lot of attention in the scientific community. In this paper, we focus on the case of multi-label classification, proposing a neural network that learns the relationships among the predictors associated to each class, yielding First-Order Logic (FOL)-based descriptions. Both the explanation-related network and the classification-related network are jointly learned, thus implicitly introducing a latent dependency between the development of the explanation mechanism and the development of the classifiers. Our model can integrate human-driven preferences that guide the learning-to-explain process, and it is presented in a unified framework. Different typologies of explanations are evaluated in distinct experiments, showing that the proposed approach discovers new knowledge and can improve the classifier performance.","DOI":"10.24963\/ijcai.2020\/309","type":"proceedings-article","created":{"date-parts":[[2020,7,8]],"date-time":"2020-07-08T12:12:10Z","timestamp":1594210330000},"page":"2234-2240","source":"Crossref","is-referenced-by-count":7,"title":["Human-Driven FOL Explanations of Deep Learning"],"prefix":"10.24963","author":[{"given":"Gabriele","family":"Ciravegna","sequence":"first","affiliation":[{"name":"University of Florence, Florence, Italy"},{"name":"University of Siena, Siena, Italy"}]},{"given":"Francesco","family":"Giannini","sequence":"additional","affiliation":[{"name":"University of Siena, Siena, Italy"}]},{"given":"Marco","family":"Gori","sequence":"additional","affiliation":[{"name":"University of Siena, Siena, Italy"},{"name":"Universit\u00e9 C\u00f4te d'Azur, Nice, France"}]},{"given":"Marco","family":"Maggini","sequence":"additional","affiliation":[{"name":"University of Siena, Siena, Italy"}]},{"given":"Stefano","family":"Melacci","sequence":"additional","affiliation":[{"name":"University of Siena, Siena, Italy"}]}],"member":"10584","event":{"number":"28","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-PRICAI-2020","name":"Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}","start":{"date-parts":[[2020,7,11]]},"theme":"Artificial Intelligence","location":"Yokohama, Japan","end":{"date-parts":[[2020,7,17]]}},"container-title":["Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2020,7,9]],"date-time":"2020-07-09T02:14:24Z","timestamp":1594260864000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2020\/309"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2020,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2020\/309","relation":{},"subject":[],"published":{"date-parts":[[2020,7]]}}}