Abstract
The need for AI systems to explain themselves is increasingly recognised as a priority, particularly in domains where incorrect decisions can cause harm and, in the worst cases, death. Explainable Artificial Intelligence (XAI) aims to produce human-understandable explanations for AI decisions. However, most XAI systems prioritise technical complexity and research-oriented goals over end-user needs, risking information overload. This research attempts to bridge a gap in current understanding and provide insights into helping users comprehend a rule-based system's reasoning through dialogue. Our hypothesis is that dialogue is an effective mechanism for constructing explanations. We present a dialogue framework for rule-based AI systems in which the system explains its decisions by answering "Why?" and "Why not?" questions. We establish formal properties of this framework and present a small user study, with encouraging results, that compares dialogue-based explanations with the proof trees produced by the AI system.
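The paper develops this framework formally; as a rough illustration of the kind of question-and-answer mechanism the abstract describes, the minimal Python sketch below answers "Why?" questions by citing the rule and premises that established a conclusion, and "Why not?" questions by pointing at a failing premise. This is our illustration, not the authors' implementation: the rule labels, facts, and dialogue wording are all invented for the example.

```python
# Minimal sketch (not the authors' implementation) of a rule-based system
# that answers "Why?" and "Why not?" questions about its conclusions.
# Rule labels, facts, and the dialogue wording are illustrative only
# (labels are a convenience for referring to rules; see Note 1 below).

from dataclasses import dataclass

@dataclass
class Rule:
    label: str   # convenience label for referring to the rule
    head: str    # conclusion the rule derives
    body: list   # premises, all of which must hold

FACTS = {"raining", "have_umbrella"}
RULES = [
    Rule("r1", "take_umbrella", ["raining", "have_umbrella"]),
    Rule("r2", "stay_inside", ["raining", "no_umbrella"]),
]

def holds(goal: str) -> bool:
    """Backward chaining: a goal holds if it is a fact or some rule derives it."""
    if goal in FACTS:
        return True
    return any(all(holds(p) for p in r.body) for r in RULES if r.head == goal)

def why(goal: str) -> str:
    """Answer 'Why goal?' with the first rule whose premises all hold."""
    if goal in FACTS:
        return f"{goal} is a known fact."
    for r in RULES:
        if r.head == goal and all(holds(p) for p in r.body):
            return f"{goal} holds by rule {r.label}, since {' and '.join(r.body)} hold."
    return f"{goal} does not hold."

def why_not(goal: str) -> str:
    """Answer 'Why not goal?' by naming a failing premise of each candidate rule."""
    if holds(goal):
        return f"{goal} actually holds."
    reasons = []
    for r in RULES:
        if r.head == goal:
            failed = next(p for p in r.body if not holds(p))
            reasons.append(f"rule {r.label} fails because {failed} does not hold")
    if not reasons:
        return f"No rule concludes {goal}."
    return f"{goal} does not hold: " + "; ".join(reasons) + "."

print(why("take_umbrella"))   # take_umbrella holds by rule r1, since raining and have_umbrella hold.
print(why_not("stay_inside"))  # stay_inside does not hold: rule r2 fails because no_umbrella does not hold.
```

In dialogue terms, each answer invites a follow-up: the user can ask "Why?" of any premise mentioned, walking down the proof tree one step at a time rather than receiving it whole, which is the contrast the paper's user study examines.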
Notes
1. We don't need to label rules for our system to work, but labels are a useful convenience when referring to rules.
Acknowledgement
This work is supported by EPSRC through grant EP/W01081X (Computational Agent Responsibility).
Ethics declarations
Data Access Statement
The code and data supporting the findings reported in this paper are available for open access at https://github.com/xuLily9/RBS_TheoryI (Code) and https://doi.org/10.6084/m9.figshare.22220494.v3 (User Evaluation).
Ethical Approval
We performed a light-touch ethical review for the user evaluation, using a tool provided by our university. This tool advised that since the only personal data gathered was names on consent forms and these were stored in a locked cabinet separate from the rest of the gathered data, further ethical approval was not required.
Open Access
For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Xu, Y., Collenette, J., Dennis, L., Dixon, C. (2023). Dialogue Explanations for Rule-Based AI Systems. In: Calvaresi, D., et al. Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2023. Lecture Notes in Computer Science(), vol 14127. Springer, Cham. https://doi.org/10.1007/978-3-031-40878-6_4
DOI: https://doi.org/10.1007/978-3-031-40878-6_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-40877-9
Online ISBN: 978-3-031-40878-6
eBook Packages: Computer Science, Computer Science (R0)