{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,9,20]],"date-time":"2024-09-20T16:50:36Z","timestamp":1726851036946},"reference-count":0,"publisher":"AI Access Foundation","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["jair"],"abstract":"In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user\u2019s expectation. We frame Explainable AI Planning as an iterative plan exploration process, in which the user asks a succession of contrastive questions that lead to the generation and solution of hypothetical planning problems that are restrictions of the original problem. The object of the exploration is for the user to understand the constraints that govern the original plan and, ultimately, to arrive at a satisfactory plan. We present the results of a user study that demonstrates that when users ask questions about plans, those questions are usually contrastive, i.e. \u201cwhy A rather than B?\u201d. We use the data from this study to construct a taxonomy of user questions that often arise during plan exploration. Our approach to iterative plan exploration is a process of successive model restriction. Each contrastive user question imposes a set of constraints on the planning problem, leading to the construction of a new hypothetical planning problem as a restriction of the original. Solving this restricted problem results in a plan that can be compared with the original plan, admitting a contrastive explanation. We formally define model-based compilations in PDDL2.1 for each type of constraint derived from a contrastive user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity. The compilations were implemented as part of an explanation framework supporting iterative model restriction. 
We demonstrate its benefits in a second user study.<\/jats:p>","DOI":"10.1613\/jair.1.12813","type":"journal-article","created":{"date-parts":[[2021,10,28]],"date-time":"2021-10-28T00:19:37Z","timestamp":1635380377000},"page":"533-612","source":"Crossref","is-referenced-by-count":10,"title":["Contrastive Explanations of Plans through Model Restrictions"],"prefix":"10.1613","volume":"72","author":[{"given":"Benjamin","family":"Krarup","sequence":"first","affiliation":[]},{"given":"Senka","family":"Krivic","sequence":"additional","affiliation":[]},{"given":"Daniele","family":"Magazzeni","sequence":"additional","affiliation":[]},{"given":"Derek","family":"Long","sequence":"additional","affiliation":[]},{"given":"Michael","family":"Cashmore","sequence":"additional","affiliation":[]},{"given":"David E.","family":"Smith","sequence":"additional","affiliation":[]}],"member":"16860","published-online":{"date-parts":[[2021,10,27]]},"container-title":["Journal of Artificial Intelligence Research"],"original-title":[],"link":[{"URL":"http:\/\/www.jair.org\/index.php\/jair\/article\/download\/12813\/26732","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/www.jair.org\/index.php\/jair\/article\/download\/12813\/26732","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,10,28]],"date-time":"2021-10-28T00:19:38Z","timestamp":1635380378000},"score":1,"resource":{"primary":{"URL":"http:\/\/www.jair.org\/index.php\/jair\/article\/view\/12813"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,27]]},"references-count":0,"URL":"https:\/\/doi.org\/10.1613\/jair.1.12813","relation":{},"ISSN":["1076-9757"],"issn-type":[{"value":"1076-9757","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,10,27]]}}}
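The record above follows the standard Crossref REST API "work" envelope, where the bibliographic fields live under the "message" key. As a minimal sketch (assuming the third-party requests package is available), the same record can be retrieved by DOI from the public api.crossref.org/works endpoint and reduced to a one-line citation; the field accesses below mirror the structure visible in the record itself:

import requests

DOI = "10.1613/jair.1.12813"

# Fetch the work record; Crossref wraps the metadata in a "message" object.
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

# Assemble a citation line from the fields shown in the record above.
authors = ", ".join(f'{a["given"]} {a["family"]}' for a in work["author"])
title = work["title"][0]
year = work["issued"]["date-parts"][0][0]
print(f'{authors} ({year}). "{title}". '
      f'{work["container-title"][0]} {work["volume"]}, pp. {work["page"]}.')

Note that list-valued fields such as "title" and "container-title" can in principle be empty for other records, so production code would guard those lookups rather than index them directly as this sketch does.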