{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T18:59:38Z","timestamp":1732042778378},"reference-count":211,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,5,27]],"date-time":"2023-05-27T00:00:00Z","timestamp":1685145600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,5,27]],"date-time":"2023-05-27T00:00:00Z","timestamp":1685145600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100008977","name":"Universit\u00e4t Ulm","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100008977","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Electron Markets"],"published-print":{"date-parts":[[2023,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The quest to open black box artificial intelligence (AI) systems evolved into an emerging phenomenon of global interest for academia, business, and society and brought about the rise of the research field of explainable artificial intelligence (XAI). With its pluralistic view, information systems (IS) research is predestined to contribute to this emerging field; thus, it is not surprising that the number of publications on XAI has been rising significantly in IS research. This paper aims to provide a comprehensive overview of XAI research in IS in general and electronic markets in particular using a structured literature review. Based on a literature search resulting in 180 research papers, this work provides an overview of the most receptive outlets, the development of the academic discussion, and the most relevant underlying concepts and methodologies. 
Furthermore, eight research areas with varying maturity in electronic markets are carved out. Finally, directions for a research agenda of XAI in IS are presented.<\/jats:p>","DOI":"10.1007\/s12525-023-00644-5","type":"journal-article","created":{"date-parts":[[2023,5,27]],"date-time":"2023-05-27T09:02:32Z","timestamp":1685178152000},"update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":27,"title":["Explainable artificial intelligence in information systems: A review of the status quo and future research directions"],"prefix":"10.1007","volume":"33","author":[{"given":"Julia","family":"Brasse","sequence":"first","affiliation":[]},{"given":"Hanna Rebecca","family":"Broder","sequence":"additional","affiliation":[]},{"given":"Maximilian","family":"F\u00f6rster","sequence":"additional","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0001-7109-0339","authenticated-orcid":false,"given":"Mathias","family":"Klier","sequence":"additional","affiliation":[]},{"given":"Irina","family":"Sigler","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,5,27]]},"reference":[{"key":"644_CR1","doi-asserted-by":"crossref","unstructured":"Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI) (pp. 1\u201318). http:\/\/dl.acm.org\/citation.cfm?doid=3173574.3174156","DOI":"10.1145\/3173574.3174156"},{"key":"644_CR2","doi-asserted-by":"publisher","unstructured":"Abdul, A., Weth, C. von der, Kankanhalli, M., & Lim, B. Y. (2020). COGAM: Measuring and moderating cognitive load in machine learning model explanations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI) (pp. 1\u201314). 
https:\/\/doi.org\/10.1145\/3313831.3376615","DOI":"10.1145\/3313831.3376615"},{"key":"644_CR3","doi-asserted-by":"publisher","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","volume":"6","author":"A Adadi","year":"2018","unstructured":"Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138\u201352160. https:\/\/doi.org\/10.1109\/ACCESS.2018.2870052","journal-title":"IEEE Access"},{"issue":"2","key":"644_CR4","doi-asserted-by":"publisher","first-page":"427","DOI":"10.1007\/s12525-020-00414-7","volume":"31","author":"M Adam","year":"2021","unstructured":"Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets, 31(2), 427\u2013445. https:\/\/doi.org\/10.1007\/s12525-020-00414-7","journal-title":"Electronic Markets"},{"key":"644_CR5","doi-asserted-by":"publisher","unstructured":"Aghaeipoor,\u00a0F., Javidi,\u00a0M.\u00a0M., & Fernandez,\u00a0A. (2021). IFC-BD: An interpretable fuzzy classifier for boosting explainable artificial intelligence in big data. IEEE Transactions on Fuzzy Systems. Advance online publication.https:\/\/doi.org\/10.1109\/TFUZZ.2021.3049911","DOI":"10.1109\/TFUZZ.2021.3049911"},{"key":"644_CR6","doi-asserted-by":"publisher","first-page":"102387","DOI":"10.1016\/j.ijinfomgt.2021.102387","volume":"60","author":"S Akter","year":"2021","unstructured":"Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D\u2019Ambra, J., & Shen, K. N. (2021). Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, 102387. 
https:\/\/doi.org\/10.1016\/j.ijinfomgt.2021.102387","journal-title":"International Journal of Information Management"},{"issue":"5","key":"644_CR7","doi-asserted-by":"publisher","first-page":"927","DOI":"10.1108\/IMR-11-2020-0256","volume":"38","author":"S Akter","year":"2021","unstructured":"Akter, S., Hossain, M. A., Lu, Q. S., & Shams, S. R. (2021b). Big data-driven strategic orientation in international marketing. International Marketing Review, 38(5), 927\u2013947. https:\/\/doi.org\/10.1108\/IMR-11-2020-0256","journal-title":"International Marketing Review"},{"key":"644_CR8","doi-asserted-by":"publisher","unstructured":"Alam, L., & Mueller, S. (2021). Examining the effect of explanation on satisfaction and trust in AI diagnostic systems. BMC Medical Informatics and Decision Making, 21(1), 1\u201315.\u00a0https:\/\/doi.org\/10.1186\/s12911-021-01542-6","DOI":"10.1186\/s12911-021-01542-6"},{"issue":"1","key":"644_CR9","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-020-01332-6","volume":"20","author":"J Amann","year":"2020","unstructured":"Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I. (2020). Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20(1), 1\u20139. https:\/\/doi.org\/10.1186\/s12911-020-01332-6","journal-title":"BMC Medical Informatics and Decision Making"},{"key":"644_CR10","doi-asserted-by":"publisher","first-page":"473","DOI":"10.1007\/978-3-030-30244-3_39","volume-title":"Lecture notes in computer science. Progress in artificial intelligence","author":"I Areosa","year":"2019","unstructured":"Areosa, I., & Torgo, L. (2019). Visual interpretation of regression error. In P. Moura Oliveira, P. Novais, & L. P. Reis (Eds.), Lecture notes in computer science. Progress in artificial intelligence (pp. 473\u2013485). Springer International Publishing. 
https:\/\/doi.org\/10.1007\/978-3-030-30244-3_39"},{"key":"644_CR11","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","volume":"58","author":"AB Arrieta","year":"2020","unstructured":"Arrieta, A. B., D\u00edaz-Rodr\u00edguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garc\u00eda, S., Gil-L\u00f3pez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82\u2013115. https:\/\/doi.org\/10.1016\/j.inffus.2019.12.012","journal-title":"Information Fusion"},{"key":"644_CR12","doi-asserted-by":"crossref","unstructured":"Asatiani, A., Malo, P., Nagb\u00f8l, P. R., Penttinen, E., Rinta-Kahila, T., & Salovaara, A. (2021). Sociotechnical envelopment of artificial intelligence: An approach to organizational deployment of inscrutable artificial intelligence systems. Journal of the Association for Information Systems, 22(2). https:\/\/aisel.aisnet.org\/jais\/vol22\/iss2\/8","DOI":"10.17705\/1jais.00664"},{"key":"644_CR13","unstructured":"Australian Broadcasting Corporation. (2022). Robodebt inquiry: Royal commission on unlawful debt scheme begins. ABC News. https:\/\/www.youtube.com\/results?search_query=robodebt+royal+commission. Accessed 02 Feb 2023"},{"key":"644_CR14","doi-asserted-by":"publisher","unstructured":"Baird,\u00a0A., & Maruping,\u00a0L.\u00a0M. (2021). The next generation of research on IS use: A theoretical framework of delegation to and from agentic IS artifacts.\u00a0MIS Quarterly,\u00a045(1). https:\/\/doi.org\/10.25300\/MISQ\/2021\/15882","DOI":"10.25300\/MISQ\/2021\/15882"},{"issue":"5","key":"644_CR15","doi-asserted-by":"publisher","first-page":"375","DOI":"10.17705\/1jais.00266","volume":"12","author":"V Balijepally","year":"2011","unstructured":"Balijepally, V., Mangalaraj, G., & Iyengar, K. (2011). Are we wielding this hammer correctly? 
A reflective review of the application of cluster analysis in information systems research. Journal of the Association for Information Systems, 12(5), 375\u2013413. https:\/\/doi.org\/10.17705\/1jais.00266","journal-title":"Journal of the Association for Information Systems"},{"key":"644_CR16","unstructured":"Bandara, W., Miskon, S., & Fielt, E. (2011). A systematic, tool-supported method for conducting literature reviews in information systems.\u00a0\u00a0Proceedings of the 19th European Conference on Information Systems (ECIS 2011) (p. 221). Helsinki, Finland. https:\/\/eprints.qut.edu.au\/42184\/1\/42184c.pdf"},{"issue":"4","key":"644_CR17","doi-asserted-by":"publisher","first-page":"1114","DOI":"10.1109\/TITB.2009.2039485","volume":"14","author":"NH Barakat","year":"2010","unstructured":"Barakat, N. H., Bradley, A. P., & Barakat, M. N. H. (2010). Intelligible support vector machines for diagnosis of diabetes mellitus. IEEE Transactions on Information Technology in Biomedicine, 14(4), 1114\u20131120. https:\/\/doi.org\/10.1109\/TITB.2009.2039485","journal-title":"IEEE Transactions on Information Technology in Biomedicine"},{"issue":"1","key":"644_CR18","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-020-01276-x","volume":"20","author":"AJ Barda","year":"2020","unstructured":"Barda, A. J., Horvat, C. M., & Hochheiser, H. (2020). A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare. BMC Medical Informatics and Decision Making, 20(1), 1\u201316. https:\/\/doi.org\/10.1186\/s12911-020-01276-x","journal-title":"BMC Medical Informatics and Decision Making"},{"key":"644_CR19","doi-asserted-by":"publisher","unstructured":"Barrera Ferro,\u00a0D., Brailsford,\u00a0S., Bravo,\u00a0C., & Smith,\u00a0H. (2020). Improving healthcare access management by predicting patient no-show behaviour. Decision Support Systems, 138(113398). 
https:\/\/doi.org\/10.1016\/j.dss.2020.113398","DOI":"10.1016\/j.dss.2020.113398"},{"issue":"1","key":"644_CR20","doi-asserted-by":"publisher","first-page":"386","DOI":"10.1016\/j.ejor.2021.11.009","volume":"301","author":"JA Bastos","year":"2021","unstructured":"Bastos, J. A., & Matos, S. M. (2021). Explainable models of credit losses. European Journal of Operational Research, 301(1), 386\u2013394. https:\/\/doi.org\/10.1016\/j.ejor.2021.11.009","journal-title":"European Journal of Operational Research"},{"issue":"2","key":"644_CR21","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s12525-019-00368-5","volume":"30","author":"I Bauer","year":"2020","unstructured":"Bauer, I., Zavolokina, L., & Schwabe, G. (2020). Is there a market for trusted car data? Electronic Markets, 30(2), 211\u2013225. https:\/\/doi.org\/10.1007\/s12525-019-00368-5","journal-title":"Electronic Markets"},{"key":"644_CR22","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1007\/s12599-021-00683-2","volume":"63","author":"K Bauer","year":"2021","unstructured":"Bauer, K., Hinz, O., van der Aalst, W., & Weinhardt, C. (2021). Expl(AI)n it to me \u2013 Explainable AI and information systems research. Business & Information Systems Engineering, 63, 79\u201382. https:\/\/doi.org\/10.1007\/s12599-021-00683-2","journal-title":"Business & Information Systems Engineering"},{"key":"644_CR23","doi-asserted-by":"publisher","unstructured":"Bayer,\u00a0S., Gimpel,\u00a0H., & Markgraf,\u00a0M. (2021). The role of domain expertise in trusting and following explainable AI decision support systems. Journal of Decision Systems, 1\u201329. https:\/\/doi.org\/10.1080\/12460125.2021.1958505","DOI":"10.1080\/12460125.2021.1958505"},{"issue":"4","key":"644_CR24","doi-asserted-by":"publisher","first-page":"503","DOI":"10.1007\/s12599-018-0529-1","volume":"61","author":"J Beese","year":"2019","unstructured":"Beese, J., Haki, M. K., Aier, S., & Winter, R. (2019). 
Simulation-based research in information systems. Business & Information Systems Engineering, 61(4), 503\u2013521. https:\/\/doi.org\/10.1007\/s12599-018-0529-1","journal-title":"Business & Information Systems Engineering"},{"issue":"3","key":"644_CR25","doi-asserted-by":"publisher","first-page":"1433","DOI":"10.25300\/MISQ\/2021\/16274","volume":"45","author":"N Berente","year":"2021","unstructured":"Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. MIS Quarterly, 45(3), 1433\u20131450. https:\/\/doi.org\/10.25300\/MISQ\/2021\/16274","journal-title":"MIS Quarterly"},{"key":"644_CR26","doi-asserted-by":"crossref","unstructured":"Bertrand, A., Belloum, R., Eagan, J. R., & Maxwell, W. (2022). How cognitive biases affect XAI-assisted decision-making: A systematic review. Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society (pp. 78\u201391). https:\/\/hal.telecom-paris.fr\/hal-03684457","DOI":"10.1145\/3514094.3534164"},{"issue":"5","key":"644_CR27","doi-asserted-by":"publisher","first-page":"105532","DOI":"10.1016\/j.knosys.2020.105532","volume":"194","author":"A Blanco-Justicia","year":"2020","unstructured":"Blanco-Justicia, A., Domingo-Ferrer, J., Martinez, S., & Sanchez, D. (2020). Machine learning explainability via microaggregation and shallow decision trees. Knowledge-Based Systems, 194(5), 105532. https:\/\/doi.org\/10.1016\/j.knosys.2020.105532","journal-title":"Knowledge-Based Systems"},{"issue":"0957\u20134174","key":"644_CR28","doi-asserted-by":"publisher","first-page":"416","DOI":"10.1016\/j.eswa.2016.11.010","volume":"71","author":"M Bohanec","year":"2017","unstructured":"Bohanec, M., Kljaji\u0107 Bor\u0161tnar, M., & Robnik-\u0160ikonja, M. (2017). Explaining machine learning models in sales predictions. Expert Systems with Applications, 71(0957\u20134174), 416\u2013428. 
https:\/\/doi.org\/10.1016\/j.eswa.2016.11.010","journal-title":"Expert Systems with Applications"},{"key":"644_CR29","doi-asserted-by":"publisher","unstructured":"Bresso, E., Monnin, P., Bousquet, C., Calvier, F.-E., Ndiaye, N.-C., Petitpain, N., Sma\u00efl-Tabbone, M., & Coulet, A. (2021). Investigating ADR mechanisms with explainable AI: A feasibility study with knowledge graph mining. BMC Medical Informatics and Decision Making, 21(1), 1\u201314.\u00a0https:\/\/doi.org\/10.1186\/s12911-021-01518-6","DOI":"10.1186\/s12911-021-01518-6"},{"key":"644_CR30","unstructured":"Bughin,\u00a0J., Seong,\u00a0J., Manyika,\u00a0J., Chui,\u00a0M., & Joshi,\u00a0R. (2018). Notes from the AI frontier: Modeling the impact of AI on the world economy. https:\/\/www.mckinsey.com\/featured-insights\/artificial-intelligence\/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy"},{"key":"644_CR31","doi-asserted-by":"publisher","unstructured":"Bunde, E. (2021). AI-assisted and explainable hate speech detection for social media moderators \u2013 A design science approach. Proceedings of the 2021 Annual Hawaii International Conference on System Sciences (HICSS) (pp. 1264\u20131274). https:\/\/doi.org\/10.24251\/HICSS.2021.154","DOI":"10.24251\/HICSS.2021.154"},{"key":"644_CR32","doi-asserted-by":"publisher","first-page":"182","DOI":"10.1016\/j.eswa.2019.05.023","volume":"133","author":"SG Burdisso","year":"2019","unstructured":"Burdisso, S. G., Errecalde, M., & Montes-y-G\u00f3mez, M. (2019). A text classification framework for simple and effective early depression detection over social media streams. Expert Systems with Applications, 133, 182\u2013197. https:\/\/doi.org\/10.1016\/j.eswa.2019.05.023","journal-title":"Expert Systems with Applications"},{"issue":"1","key":"644_CR33","doi-asserted-by":"publisher","first-page":"e12577","DOI":"10.1111\/exsy.12577","volume":"38","author":"N Burkart","year":"2021","unstructured":"Burkart, N., Robert, S., & Huber, M. F. 
(2021). Are you sure? Prediction revision in automated decision-making. Expert Systems, 38(1), e12577. https:\/\/doi.org\/10.1111\/exsy.12577","journal-title":"Expert Systems"},{"key":"644_CR34","doi-asserted-by":"publisher","unstructured":"Chakraborty,\u00a0D., Ba\u015fa\u011fao\u011flu,\u00a0H., & Winterle,\u00a0J. (2021). Interpretable vs. noninterpretable machine learning models for data-driven hydro-climatological process modeling. Expert Systems with Applications, 170(114498). https:\/\/doi.org\/10.1016\/j.eswa.2020.114498","DOI":"10.1016\/j.eswa.2020.114498"},{"key":"644_CR35","unstructured":"Chakrobartty, S., & El-Gayar, O. (2021). Explainable artificial intelligence in the medical domain: a systematic review.\u00a0AMCIS 2021 Proceedings (p. 1). https:\/\/scholar.dsu.edu\/cgi\/viewcontent.cgi?article=1265&context=bispapers"},{"issue":"8","key":"644_CR36","doi-asserted-by":"publisher","first-page":"2696","DOI":"10.1109\/TVCG.2020.2986996","volume":"26","author":"A Chatzimparmpas","year":"2020","unstructured":"Chatzimparmpas, A., Martins, R. M., & Kerren, A. (2020). T-viSNE: Interactive assessment and interpretation of t-SNE projections. IEEE Transactions on Visualization and Computer Graphics, 26(8), 2696\u20132714. https:\/\/doi.org\/10.1109\/TVCG.2020.2986996","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"issue":"2","key":"644_CR37","doi-asserted-by":"publisher","first-page":"1438","DOI":"10.1109\/TVCG.2020.3030342","volume":"27","author":"F Cheng","year":"2021","unstructured":"Cheng, F., Ming, Y., & Qu, H. (2021). Dece: Decision explorer with counterfactual explanations for machine learning models. IEEE Transactions on Visualization and Computer Graphics, 27(2), 1438\u20131447. 
https:\/\/doi.org\/10.1109\/TVCG.2020.3030342","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"key":"644_CR38","doi-asserted-by":"publisher","unstructured":"Cheng, H.\u2011F., Wang, R., Zhang, Z., O\u2018Connell, F., Gray, T., Harper, F. M., & Zhu, H. (2019). Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI) (pp. 1\u201312). https:\/\/doi.org\/10.1145\/3290605.3300789","DOI":"10.1145\/3290605.3300789"},{"key":"644_CR39","doi-asserted-by":"publisher","unstructured":"Chromik, M., & Butz, A. (2021). Human-XAI interaction: A review and design principles for explanation user interfaces. 2021 IFIP Conference on Human-Computer Interaction (INTERACT) (pp. 619\u2013640). https:\/\/doi.org\/10.1007\/978-3-030-85616-8_36","DOI":"10.1007\/978-3-030-85616-8_36"},{"key":"644_CR40","unstructured":"Chromik, M., & Schuessler, M. (2020). A taxonomy for human subject evaluation of black-box explanations in XAI.\u00a0Proceedings of the IUI workshop on explainable smart systems and algorithmic transparency in emerging technologies (ExSS-ATEC\u201920) (p. 7). Cagliari, Italy. https:\/\/ceur-ws.org\/Vol-2582\/paper9.pdf"},{"key":"644_CR41","doi-asserted-by":"publisher","first-page":"102383","DOI":"10.1016\/j.ijinfomgt.2021.102383","volume":"60","author":"C Collins","year":"2021","unstructured":"Collins, C., Dennehy, D., Conboy, K., & Mikalef, P. (2021). Artificial intelligence in information systems research: A systematic literature review and research agenda. International Journal of Information Management, 60, 102383. https:\/\/doi.org\/10.1016\/j.ijinfomgt.2021.102383","journal-title":"International Journal of Information Management"},{"key":"644_CR42","doi-asserted-by":"publisher","unstructured":"Conati, C., Barral, O., Putnam, V., & Rieger, L. (2021). Toward personalized XAI: A case study in intelligent tutoring systems. 
Artificial Intelligence, 298, 1\u201323.\u00a0https:\/\/doi.org\/10.1016\/j.artint.2021.103503","DOI":"10.1016\/j.artint.2021.103503"},{"issue":"1","key":"644_CR43","doi-asserted-by":"publisher","first-page":"104","DOI":"10.1007\/BF03177550","volume":"1","author":"HM Cooper","year":"1988","unstructured":"Cooper, H. M. (1988). Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society, 1(1), 104\u2013126. https:\/\/doi.org\/10.1007\/BF03177550","journal-title":"Knowledge in Society"},{"key":"644_CR44","unstructured":"Cooper, A. (2004). The inmates are running the asylum. Why high-tech products drive us crazy and how to restore the sanity (2nd ed.). Sams Publishing."},{"key":"644_CR45","unstructured":"Cui, X., Lee, J. M., & Hsieh, J. P. A. (2019). An integrative 3C evaluation framework for explainable artificial intelligence.\u00a0Proceedings of the twenty-fifth Americas conference on information systems (AMCIS), Cancun, 2019. https:\/\/aisel.aisnet.org\/amcis2019\/ai_semantic_for_intelligent_info_systems\/ai_semantic_for_intelligent_info_systems\/10"},{"key":"644_CR46","unstructured":"DARPA. (2018). Explainable artificial intelligence. https:\/\/www.darpa.mil\/program\/explainable-artificial-intelligence. Accessed 02 Feb 2023"},{"issue":"2","key":"644_CR47","doi-asserted-by":"publisher","first-page":"101666","DOI":"10.1016\/j.giq.2021.101666","volume":"39","author":"H de Bruijn","year":"2021","unstructured":"de Bruijn, H., Warnier, M., & Janssen, M. (2021). The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making. Government Information Quarterly, 39(2), 101666. https:\/\/doi.org\/10.1016\/j.giq.2021.101666","journal-title":"Government Information Quarterly"},{"key":"644_CR48","doi-asserted-by":"publisher","first-page":"91","DOI":"10.1016\/j.datak.2006.10.005","volume":"63","author":"\u00c1L de Santana","year":"2007","unstructured":"de Santana, \u00c1. L., Franc\u00eas, C. R., Rocha, C. 
A., Carvalho, S. V., Vijaykumar, N. L., Rego, L. P., & Costa, J. C. (2007). Strategies for improving the modeling and interpretability of Bayesian networks. Data & Knowledge Engineering, 63, 91\u2013107. https:\/\/doi.org\/10.1016\/j.datak.2006.10.005","journal-title":"Data & Knowledge Engineering"},{"key":"644_CR49","doi-asserted-by":"publisher","unstructured":"Dodge, J., Penney, S., Hilderbrand, C., Anderson, A., & Burnett, M. (2018). How the experts do it: Assessing and explaining agent behaviors in real-time strategy games. Proceedings of the 36th International Conference on Human Factors in Computing Systems (CHI) (pp. 1\u201312). Association for Computing. https:\/\/doi.org\/10.1145\/3173574.3174136","DOI":"10.1145\/3173574.3174136"},{"key":"644_CR50","unstructured":"Doran, D., Schulz, S., & Besold, T. R. (2018). What does explainable AI really mean? A new conceptualization of perspectives. In T. R. Besold & O. Kutz (Chairs), Proceedings of the first international workshop on comprehensibility and explanation in AI and ML 2017. https:\/\/ceur-ws.org\/Vol-2071\/CExAIIA_2017_paper_2.pdf"},{"key":"644_CR51","doi-asserted-by":"publisher","unstructured":"Doshi-Velez, F., & Kim, B. (2018). Considerations for evaluation and generalization in interpretable machine learning. In Explainable and Interpretable Models in Computer Vision and Machine Learning (pp. 3\u201317). Springer. https:\/\/doi.org\/10.1007\/978-3-319-98131-4_1","DOI":"10.1007\/978-3-319-98131-4_1"},{"key":"644_CR52","doi-asserted-by":"publisher","unstructured":"Eiras-Franco,\u00a0C., Guijarro-Berdi\u00f1as,\u00a0B., Alonso-Betanzos,\u00a0A., & Bahamonde,\u00a0A. (2019). A scalable decision-tree-based method to explain interactions in dyadic data. Decision Support Systems, 127(113141). https:\/\/doi.org\/10.1016\/j.dss.2019.113141","DOI":"10.1016\/j.dss.2019.113141"},{"key":"644_CR53","doi-asserted-by":"publisher","unstructured":"Elshawi,\u00a0R., Al-Mallah,\u00a0M.\u00a0H., & Sakr,\u00a0S. (2019). 
On the interpretability of machine learning-based model for predicting hypertension. BMC Medical Informatics and Decision Making, 19(146). https:\/\/doi.org\/10.1186\/s12911-019-0874-0","DOI":"10.1186\/s12911-019-0874-0"},{"key":"644_CR54","unstructured":"European Commission (Ed.). (2021). Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/HTML\/?uri=CELEX:52021PC0206&from=EN. Accessed 02 Feb 2023"},{"key":"644_CR55","doi-asserted-by":"publisher","unstructured":"Fang, H. S. A., Tan, N. C., Tan, W. Y., Oei, R. W., Lee, M. L., & Hsu, W. (2021). Patient similarity analytics for explainable clinical risk prediction. BMC Medical Informatics and Decision Making, 21(1), 1\u201312.\u00a0https:\/\/doi.org\/10.1186\/s12911-021-01566-y","DOI":"10.1186\/s12911-021-01566-y"},{"key":"644_CR56","unstructured":"Fernandez, C., Provost, F., & Han, X. (2019). Counterfactual explanations for data-driven decisions.\u00a0Proceedings of the fortieth international conference on information systems (ICIS). https:\/\/aisel.aisnet.org\/icis2019\/data_science\/data_science\/8"},{"key":"644_CR57","doi-asserted-by":"publisher","unstructured":"Ferreira, J. J., & Monteiro, M. S. (2020). What are people doing about XAI user experience? A survey on AI explainability research and practice. 2020 International Conference on Human-Computer Interaction (HCII) (pp. 56\u201373). https:\/\/doi.org\/10.1007\/978-3-030-49760-6_4","DOI":"10.1007\/978-3-030-49760-6_4"},{"key":"644_CR58","unstructured":"Flei\u00df, J., B\u00e4ck, E., & Thalmann, S. (2020). Explainability and the intention to use AI-based conversational agents. An empirical investigation for the case of recruiting. CEUR Workshop Proceedings (CEUR-WS.Org) (vol 2796, pp. 1\u20135). 
https:\/\/ceur-ws.org\/Vol-2796\/xi-ml-2020_fleiss.pdf"},{"issue":"13","key":"644_CR59","doi-asserted-by":"publisher","first-page":"5737","DOI":"10.1016\/j.eswa.2015.02.042","volume":"42","author":"R Florez-Lopez","year":"2015","unstructured":"Florez-Lopez, R., & Ramon-Jeronimo, J. M. (2015). Enhancing accuracy and interpretability of ensemble strategies in credit risk assessment. A correlated-adjusted decision forest proposal. Expert Systems with Applications, 42(13), 5737\u20135753. https:\/\/doi.org\/10.1016\/j.eswa.2015.02.042","journal-title":"Expert Systems with Applications"},{"key":"644_CR60","unstructured":"F\u00f6rster, M., Klier, M., Kluge, K., & Sigler, I. (2020a). Evaluating explainable artificial intelligence \u2013 what users really appreciate.\u00a0Proceedings of the 2020 European Conference on Information Systems (ECIS). A Virtual AIS Conference. https:\/\/web.archive.org\/web\/20220803134652id_\/https:\/\/aisel.aisnet.org\/cgi\/viewcontent.cgi?article=1194&context=ecis2020_rp"},{"key":"644_CR61","unstructured":"F\u00f6rster, M., Klier, M., Kluge, K., & Sigler, I. (2020b). Fostering human agency: a process for the design of user-centric XAI systems. In Proceedings of the Forty-First International Conference on Information Systems (ICIS). A Virtual AIS Conference. https:\/\/aisel.aisnet.org\/icis2020\/hci_artintel\/hci_artintel\/12"},{"key":"644_CR62","unstructured":"F\u00f6rster, M., H\u00fchn, P., Klier, M., & Kluge, K. (2021). Capturing users\u2019 reality: a novel approach to generate coherent counterfactual explanations.\u00a0Proceedings of the 54th Hawaii International Conference on System Sciences (HICSS). A Virtual AIS Conference. https:\/\/scholarspace.manoa.hawaii.edu\/server\/api\/core\/bitstreams\/947e7f6b-c7b0-4dba-afcc-95c4edef0a27\/content"},{"key":"644_CR63","unstructured":"Ganeshkumar, M., Ravi, V., Sowmya, V., Gopalakrishnan, E. A., & Soman, K. P. (2021). 
Explainable deep learning-based approach for multilabel classification of electrocardiogram. IEEE Transactions on Engineering Management, 1\u201313. https:\/\/ieeexplore.ieee.org\/stamp\/stamp.jsp?arnumber=9537612&casa_token=6VeV8vXBRT0AAAAA:cVhYpdlNbD1BgRH_9GBDQofEVy38quzW6zs3v3doJzJ2Fx2MP02wy0YqLcoAeC8y2GekDshY0bg&tag=1"},{"key":"644_CR64","doi-asserted-by":"publisher","unstructured":"Gerlings, J., Shollo, A., & Constantiou, I. (2021). Reviewing the need for explainable artificial intelligence (XAI). Proceedings of the 54th Hawaii International Conference on System Sciences (HICSS) (pp. 1284\u20131293). https:\/\/doi.org\/10.48550\/arXiv.2012.01007","DOI":"10.48550\/arXiv.2012.01007"},{"issue":"11","key":"644_CR65","doi-asserted-by":"publisher","first-page":"1544","DOI":"10.1001\/jamainternmed.2018.3763","volume":"178","author":"MA Gianfrancesco","year":"2018","unstructured":"Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). Potential biases in machine learning algorithms using electronic health record data. JAMA Internal Medicine, 178(11), 1544\u20131547. https:\/\/doi.org\/10.1001\/jamainternmed.2018.3763","journal-title":"JAMA Internal Medicine"},{"key":"644_CR66","doi-asserted-by":"publisher","unstructured":"Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA) (pp. 80\u201389). https:\/\/doi.org\/10.48550\/arXiv.1806.00069","DOI":"10.48550\/arXiv.1806.00069"},{"key":"644_CR67","doi-asserted-by":"publisher","unstructured":"Giudici,\u00a0P., & Raffinetti,\u00a0E. (2021). Shapley-Lorenz eXplainable Artificial Intelligence. Expert Systems with Applications, 167(114104). https:\/\/doi.org\/10.1016\/j.eswa.2020.114104","DOI":"10.1016\/j.eswa.2020.114104"},{"key":"644_CR68","unstructured":"Gonzalez, G. (2018). 
How Amazon accidentally invented a sexist hiring algorithm: A company experiment to use artificial intelligence in hiring inadvertently favored male candidates. https:\/\/www.inc.com\/guadalupe-gonzalez\/amazon-artificial-intelligence-ai-hiring-tool-hr.html"},{"key":"644_CR69","unstructured":"Google (Ed.). (2022). Explainable AI. https:\/\/cloud.google.com\/explainable-ai. Accessed 02 Feb 2023"},{"issue":"2","key":"644_CR70","doi-asserted-by":"publisher","first-page":"207","DOI":"10.1287\/isre.1090.0249","volume":"21","author":"N Granados","year":"2010","unstructured":"Granados, N., Gupta, A., & Kauffman, R. J. (2010). Information transparency in business-to-consumer markets: Concepts, framework, and research agenda. Information Systems Research, 21(2), 207\u2013226. https:\/\/doi.org\/10.1287\/isre.1090.0249","journal-title":"Information Systems Research"},{"issue":"4","key":"644_CR71","doi-asserted-by":"publisher","first-page":"497","DOI":"10.2307\/249487","volume":"23","author":"S Gregor","year":"1999","unstructured":"Gregor, S., & Benbasat, I. (1999). Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS Quarterly, 23(4), 497\u2013530. https:\/\/doi.org\/10.2307\/249487","journal-title":"MIS Quarterly"},{"key":"644_CR72","doi-asserted-by":"publisher","first-page":"111","DOI":"10.1016\/j.ins.2021.01.052","volume":"559","author":"BI Grisci","year":"2021","unstructured":"Grisci, B. I., Krause, M. J., & Dorn, M. (2021). Relevance aggregation for neural networks interpretability and knowledge discovery on tabular data. Information Sciences, 559, 111\u2013129. https:\/\/doi.org\/10.1016\/j.ins.2021.01.052","journal-title":"Information Sciences"},{"issue":"6","key":"644_CR73","doi-asserted-by":"publisher","first-page":"205","DOI":"10.1016\/j.ipl.2007.07.002","volume":"104","author":"I Gronau","year":"2007","unstructured":"Gronau, I., & Moran, S. (2007). Optimal implementations of UPGMA and other common clustering algorithms. 
Information Processing Letters, 104(6), 205\u2013210. https:\/\/doi.org\/10.1016\/j.ipl.2007.07.002","journal-title":"Information Processing Letters"},{"issue":"7","key":"644_CR74","doi-asserted-by":"publisher","first-page":"1720","DOI":"10.1109\/TMM.2020.2971170","volume":"22","author":"D Gu","year":"2020","unstructured":"Gu, D., Li, Y., Jiang, F., Wen, Z., Liu, S., Shi, W., Lu, G., & Zhou, C. (2020). VINet: A visually interpretable image diagnosis network. IEEE Transactions on Multimedia, 22(7), 1720\u20131729. https:\/\/doi.org\/10.1109\/TMM.2020.2971170","journal-title":"IEEE Transactions on Multimedia"},{"issue":"5","key":"644_CR75","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3236009","volume":"51","author":"R Guidotti","year":"2019","unstructured":"Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1\u201342. https:\/\/doi.org\/10.1145\/3236009","journal-title":"ACM Computing Surveys"},{"issue":"3","key":"644_CR76","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3487048","volume":"16","author":"M Guo","year":"2021","unstructured":"Guo, M., Xu, Z., Zhang, Q., Liao, X., & Liu, J. (2021). Deciphering feature effects on decision-making in ordinal regression problems: An explainable ordinal factorization model. ACM Transactions on Knowledge Discovery from Data (TKDD), 16(3), 1\u201326. https:\/\/doi.org\/10.1145\/3487048","journal-title":"ACM Transactions on Knowledge Discovery from Data (TKDD)"},{"issue":"5","key":"644_CR77","doi-asserted-by":"publisher","first-page":"946","DOI":"10.1080\/0144929X.2020.1846789","volume":"41","author":"T Ha","year":"2022","unstructured":"Ha, T., Sah, Y. J., Park, Y., & Lee, S. (2022). Examining the effects of power status of an explainable artificial intelligence system on users\u2019 perceptions. Behaviour & Information Technology, 41(5), 946\u2013958. 
https:\/\/doi.org\/10.1080\/0144929X.2020.1846789","journal-title":"Behaviour & Information Technology"},{"key":"644_CR78","first-page":"1","volume":"9","author":"P Hamm","year":"2021","unstructured":"Hamm, P., Wittmann, H. F., & Klesel, M. (2021). Explain it to me and I will use it: A proposal on the impact of explainable AI on use behavior. ICIS 2021 Proceedings, 9, 1\u20139.","journal-title":"ICIS 2021 Proceedings"},{"key":"644_CR79","doi-asserted-by":"crossref","unstructured":"Hardt, M., Chen, X., Cheng, X., Donini, M., Gelman, J., Gollaprolu, S., He, J., Larroy, P., Liu, X., McCarthy, N., Rathi, A., Rees, S., Siva, A., Tsai, E., Vasist, K., Yilmaz, P., Zafar, M. B., Das, S., Haas, K., Hill, T., Kenthapadi, K. (2021). Amazon SageMaker clarify: machine learning bias detection and explainability in the cloud. In 2021 ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) (pp. 2974\u20132983). https:\/\/arxiv.org\/pdf\/2109.03285.pdf","DOI":"10.1145\/3447548.3467177"},{"issue":"250","key":"644_CR80","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-020-01201-2","volume":"20","author":"J Hatwell","year":"2020","unstructured":"Hatwell, J., Gaber, M. M., & Atif Azad, R. M. (2020). Ada-WHIPS: Explaining AdaBoost classification with applications in the health sciences. BMC Medical Informatics and Decision Making, 20(250), 1\u201325. https:\/\/doi.org\/10.1186\/s12911-020-01201-2","journal-title":"BMC Medical Informatics and Decision Making"},{"key":"644_CR81","doi-asserted-by":"publisher","first-page":"64","DOI":"10.1016\/j.eswa.2005.09.045","volume":"30","author":"J He","year":"2006","unstructured":"He, J., Hu, H.-J., Harrison, R., Tai, P. C., & Pan, Y. (2006). Transmembrane segments prediction and understanding using support vector machine and decision tree. Expert Systems with Applications, 30, 64\u201372. 
https:\/\/doi.org\/10.1016\/j.eswa.2005.09.045","journal-title":"Expert Systems with Applications"},{"issue":"3\u20134","key":"644_CR82","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3444369","volume":"11","author":"S Hepenstal","year":"2021","unstructured":"Hepenstal, S., Zhang, L., Kodagoda, N., & Wong, B. L. W. (2021). Developing conversational agents for use in criminal investigations. ACM Transactions on Interactive Intelligent Systems (TiiS), 11(3\u20134), 1\u201335. https:\/\/doi.org\/10.1145\/3444369","journal-title":"ACM Transactions on Interactive Intelligent Systems (TiiS)"},{"key":"644_CR83","doi-asserted-by":"crossref","unstructured":"Herse, S., Vitale, J., Tonkin, M., Ebrahimian, D., Ojha, S., Johnston, B., Judge, W., & Williams, M. (2018). Do you trust me, blindly? Factors influencing trust towards a robot recommender system. Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). https:\/\/ieeexplore.ieee.org\/document\/8525581\/","DOI":"10.1109\/ROMAN.2018.8525581"},{"key":"644_CR84","doi-asserted-by":"publisher","unstructured":"Heuillet, A., Couthouis, F., & D\u00edaz-Rodr\u00edguez, N. (2021). Explainability in deep reinforcement learning. Knowledge-Based Systems, 214, 106685.\u00a0https:\/\/doi.org\/10.1016\/j.knosys.2020.106685","DOI":"10.1016\/j.knosys.2020.106685"},{"issue":"1","key":"644_CR85","doi-asserted-by":"publisher","first-page":"75","DOI":"10.2307\/25148625","volume":"28","author":"AR Hevner","year":"2004","unstructured":"Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75\u2013105. https:\/\/doi.org\/10.2307\/25148625","journal-title":"MIS Quarterly"},{"key":"644_CR86","doi-asserted-by":"publisher","unstructured":"Hong, S. R., Hullman, J., & Bertini, E. (2020). Human factors in model interpretability: Industry practices, challenges, and needs.
Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1, Article 68). https:\/\/doi.org\/10.1145\/3392878","DOI":"10.1145\/3392878"},{"issue":"1","key":"644_CR87","doi-asserted-by":"publisher","first-page":"141","DOI":"10.1016\/j.dss.2010.12.003","volume":"51","author":"J Huysmans","year":"2011","unstructured":"Huysmans, J., Dejaeger, K., Mues, C., Vanthienen, J., & Baesens, B. (2011). An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decision Support Systems, 51(1), 141\u2013154. https:\/\/doi.org\/10.1016\/j.dss.2010.12.003","journal-title":"Decision Support Systems"},{"key":"644_CR88","doi-asserted-by":"publisher","unstructured":"Iadarola, G., Martinelli, F., Mercaldo, F., & Santone, A. (2021). Towards an interpretable deep learning model for mobile malware detection and family identification. Computers & Security, 105, 1\u201315.\u00a0https:\/\/doi.org\/10.1016\/j.cose.2021.102198","DOI":"10.1016\/j.cose.2021.102198"},{"key":"644_CR89","unstructured":"IBM (Ed.). (2022). IBM Watson OpenScale - Overview. https:\/\/www.ibm.com\/docs\/en\/cloud-paks\/cp-data\/3.5.0?topic=services-watson-openscale"},{"key":"644_CR90","doi-asserted-by":"publisher","unstructured":"Irarr\u00e1zaval, M. E., Maldonado, S., P\u00e9rez, J., & Vairetti, C. (2021). Telecom traffic pumping analytics via explainable data science. Decision Support Systems, 150, 1\u201314.\u00a0https:\/\/doi.org\/10.1016\/j.dss.2021.113559","DOI":"10.1016\/j.dss.2021.113559"},{"issue":"7","key":"644_CR91","doi-asserted-by":"publisher","first-page":"1291","DOI":"10.1109\/TFUZZ.2019.2917124","volume":"28","author":"MA Islam","year":"2020","unstructured":"Islam, M. A., Anderson, D. T., Pinar, A., Havens, T. C., Scott, G., & Keller, J. M. (2020). Enabling explainable fusion in deep learning with fuzzy integral neural networks. IEEE Transactions on Fuzzy Systems, 28(7), 1291\u20131300. 
https:\/\/doi.org\/10.1109\/TFUZZ.2019.2917124","journal-title":"IEEE Transactions on Fuzzy Systems"},{"key":"644_CR92","doi-asserted-by":"publisher","unstructured":"Jakulin, A., Mo\u017eina, M., Dem\u0161ar, J., Bratko, I., & Zupan, B. (2005). Nomograms for visualizing support vector machines. Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining (KDD) (pp. 108\u2013117). https:\/\/doi.org\/10.1145\/1081870.1081886","DOI":"10.1145\/1081870.1081886"},{"issue":"1","key":"644_CR93","doi-asserted-by":"publisher","first-page":"451","DOI":"10.25300\/MISQ\/2020\/15108","volume":"44","author":"J Jiang","year":"2020","unstructured":"Jiang, J., & Cameron, A.-F. (2020). IT-enabled self-monitoring for chronic disease self-management: An interdisciplinary review. MIS Quarterly, 44(1), 451\u2013508. https:\/\/doi.org\/10.25300\/MISQ\/2020\/15108","journal-title":"MIS Quarterly"},{"key":"644_CR94","doi-asserted-by":"publisher","DOI":"10.1080\/10447318.2022.2093863","author":"J Jiang","year":"2022","unstructured":"Jiang, J., Karran, A. J., Coursaris, C. K., L\u00e9ger, P. M., & Beringer, J. (2022). A situation awareness perspective on human-AI interaction: Tensions and opportunities. International Journal of Human-Computer Interaction. https:\/\/doi.org\/10.1080\/10447318.2022.2093863","journal-title":"International Journal of Human-Computer Interaction"},{"key":"644_CR95","unstructured":"Jussupow, E., Meza Mart\u00ednez, M. A., M\u00e4dche, A., & Heinzl, A. (2021). Is this system biased? \u2013 How users react to gender bias in an explainable AI System.\u00a0Proceedings of the 42nd International Conference on Information Systems (ICIS) (pp. 1\u201317). https:\/\/aisel.aisnet.org\/icis2021\/hci_robot\/hci_robot\/11"},{"issue":"3\u20134","key":"644_CR96","first-page":"1","volume":"11","author":"C Kim","year":"2021","unstructured":"Kim, C., Lin, X., Collins, C., Taylor, G. W., & Amer, M. R. (2021). 
Learn, generate, rank, explain: A case study of visual explanation by generative machine learning. ACM Transactions on Interactive Intelligent Systems (TiiS), 11(3\u20134), 1\u201334.","journal-title":"ACM Transactions on Interactive Intelligent Systems (TiiS)"},{"key":"644_CR97","doi-asserted-by":"publisher","unstructured":"Kim,\u00a0B., Park,\u00a0J., & Suh,\u00a0J. (2020a). Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information. Decision Support Systems, 134(113302). https:\/\/doi.org\/10.1016\/j.dss.2020.113302","DOI":"10.1016\/j.dss.2020.113302"},{"key":"644_CR98","doi-asserted-by":"publisher","unstructured":"Kim,\u00a0J., Lee,\u00a0S., Hwang,\u00a0E., Ryu,\u00a0K.\u00a0S., Jeong,\u00a0H., Lee,\u00a0J.\u00a0W., Hwangbo,\u00a0Y., Choi,\u00a0K.\u00a0S., & Cha,\u00a0H.\u00a0S. (2020b). Limitations of deep learning attention mechanisms in clinical research: Empirical case study based on the Korean diabetic disease setting. Journal of Medical Internet Research, 22(12). https:\/\/doi.org\/10.2196\/18418","DOI":"10.2196\/18418"},{"key":"644_CR99","doi-asserted-by":"publisher","first-page":"103458","DOI":"10.1016\/j.artint.2021.103458","volume":"295","author":"T Kliegr","year":"2021","unstructured":"Kliegr, T., Bahn\u00edk, \u0160, & F\u00fcrnkranz, J. (2021). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Artificial Intelligence, 295, 103458. https:\/\/doi.org\/10.1016\/j.artint.2021.103458","journal-title":"Artificial Intelligence"},{"key":"644_CR100","doi-asserted-by":"publisher","unstructured":"Kline,\u00a0A., Kline,\u00a0T., Shakeri Hossein Abad,\u00a0Z., & Lee,\u00a0J. (2020). Using item response theory for explainable machine learning in predicting mortality in the intensive care unit: Case-based approach. Journal of Medical Internet Research, 22(9). 
https:\/\/doi.org\/10.2196\/20268","DOI":"10.2196\/20268"},{"key":"644_CR101","unstructured":"Knowles,\u00a0T. (2021). AI will have a bigger impact than fire, says Google boss Sundar Pichai. https:\/\/www.thetimes.co.uk\/article\/ai-will-have-a-bigger-impact-than-fire-says-google-boss-sundar-pichai-rk8bdst7r"},{"key":"644_CR102","doi-asserted-by":"publisher","unstructured":"Kou, Y., & Gui, X. (2020). Mediating community-AI interaction through situated explanation. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2, Article 102). https:\/\/doi.org\/10.1145\/3415173","DOI":"10.1145\/3415173"},{"issue":"4","key":"644_CR103","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3365843","volume":"10","author":"P Kouki","year":"2020","unstructured":"Kouki, P., Schaffer, J., Pujara, J., O\u2019Donovan, J., & Getoor, L. (2020). Generating and understanding personalized explanations in hybrid recommender systems. ACM Transactions on Interactive Intelligent Systems (TiiS), 10(4), 1\u201340.","journal-title":"ACM Transactions on Interactive Intelligent Systems (TiiS)"},{"issue":"3s","key":"644_CR104","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3457187","volume":"17","author":"A Kumar","year":"2021","unstructured":"Kumar, A., Manikandan, R., Kose, U., Gupta, D., & Satapathy, S. C. (2021). Doctor\u2019s dilemma: Evaluating an explainable subtractive spatial lightweight convolutional neural network for brain tumor diagnosis. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 17(3s), 1\u201326.","journal-title":"ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)"},{"key":"644_CR105","doi-asserted-by":"publisher","first-page":"82300","DOI":"10.1109\/ACCESS.2021.3086230","volume":"9","author":"DV Kute","year":"2021","unstructured":"Kute, D. V., Pradhan, B., Shukla, N., & Alamri, A. (2021). 
Deep learning and explainable artificial intelligence techniques applied for detecting money laundering \u2013 A critical review. IEEE Access, 9, 82300\u201382317.","journal-title":"IEEE Access"},{"key":"644_CR106","doi-asserted-by":"publisher","unstructured":"Kwon,\u00a0B.\u00a0C., Choi,\u00a0M.\u2011J., Kim,\u00a0J.\u00a0T., Choi,\u00a0E., Kim,\u00a0Y.\u00a0B., Kwon,\u00a0S., Sun,\u00a0J., & Choo,\u00a0J. (2019). Retainvis: Visual analytics with interpretable and interactive recurrent neural networks on electronic medical records. IEEE Transactions on Visualization and Computer Graphics, 25(1). https:\/\/doi.org\/10.1109\/TVCG.2018.2865027","DOI":"10.1109\/TVCG.2018.2865027"},{"issue":"1","key":"644_CR107","doi-asserted-by":"publisher","first-page":"159","DOI":"10.2307\/2529310","volume":"33","author":"JR Landis","year":"1977","unstructured":"Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159\u2013174. https:\/\/doi.org\/10.2307\/2529310","journal-title":"Biometrics"},{"key":"644_CR108","doi-asserted-by":"publisher","unstructured":"Langer, M., Oster, D., Speith, T., Hermanns, H., K\u00e4stner, L., Schmidt, E., Seeing, A., & Baum, K. (2021). What do we want from explainable artificial intelligence (XAI)?\u2013A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296. https:\/\/doi.org\/10.1016\/j.artint.2021.103473","DOI":"10.1016\/j.artint.2021.103473"},{"key":"644_CR109","doi-asserted-by":"publisher","unstructured":"Levy,\u00a0Y., & Ellis,\u00a0T.\u00a0J. (2006). A systems approach to conduct an effective literature review in support of information systems research. Informing Science, 9. https:\/\/doi.org\/10.28945\/479","DOI":"10.28945\/479"},{"key":"644_CR110","doi-asserted-by":"publisher","unstructured":"Li, J., Shi, H., & Hwang, K. S. (2021). An explainable ensemble feedforward method with Gaussian convolutional filter. 
Knowledge-Based Systems, 225. https:\/\/doi.org\/10.1016\/j.knosys.2021.107103","DOI":"10.1016\/j.knosys.2021.107103"},{"key":"644_CR111","doi-asserted-by":"publisher","unstructured":"Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing design practices for explainable AI user experiences. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI) (pp. 1\u201315) https:\/\/doi.org\/10.1145\/3313831.3376590","DOI":"10.1145\/3313831.3376590"},{"key":"644_CR112","doi-asserted-by":"publisher","unstructured":"Lim, B. Y., Dey, A. K., & Avrahami, D. (2009). Why and why not explanations improve the intelligibility of context-aware intelligent systems. Proceedings of the 2009 SIGCHI Conference on Human Factors in Computing Systems (CHI) (pp. 2119\u20132128). https:\/\/doi.org\/10.1145\/1518701.1519023","DOI":"10.1145\/1518701.1519023"},{"key":"644_CR113","doi-asserted-by":"publisher","first-page":"186","DOI":"10.1016\/j.knosys.2016.12.013","volume":"119","author":"I Lopez-Gazpio","year":"2017","unstructured":"Lopez-Gazpio, I., Maritxalar, M., Gonzalez-Agirre, A., Rigau, G., Uria, L., & Agirre, E. (2017). Interpretable semantic textual similarity: Finding and explaining differences between sentences. Knowledge-Based Systems, 119, 186\u2013199. https:\/\/doi.org\/10.1016\/j.knosys.2016.12.013","journal-title":"Knowledge-Based Systems"},{"key":"644_CR114","doi-asserted-by":"publisher","unstructured":"Lukyanenko,\u00a0R., Castellanos,\u00a0A., Storey,\u00a0V.\u00a0C., Castillo,\u00a0A., Tremblay,\u00a0M.\u00a0C., & Parsons,\u00a0J. (2020). Superimposition: Augmenting machine learning outputs with conceptual models for explainable AI. In G. Grossmann & S. Ram (Eds.), Lecture notes in computer science. Advances in conceptual modeling (pp.\u00a026\u201334). Springer International Publishing. 
https:\/\/doi.org\/10.1007\/978-3-030-65847-2_3","DOI":"10.1007\/978-3-030-65847-2_3"},{"key":"644_CR115","doi-asserted-by":"publisher","first-page":"46","DOI":"10.1016\/j.futures.2017.03.006","volume":"90","author":"S Makridakis","year":"2017","unstructured":"Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46\u201360.\u00a0https:\/\/doi.org\/10.1016\/j.futures.2017.03.006","journal-title":"Futures"},{"key":"644_CR116","unstructured":"Malle, B. F. (2006). How the mind explains behavior: Folk explanations, meaning, and social interaction. MIT press."},{"issue":"2","key":"644_CR117","doi-asserted-by":"publisher","first-page":"259","DOI":"10.1007\/s12525-019-00392-5","volume":"30","author":"V Marella","year":"2020","unstructured":"Marella, V., Upreti, B., Merikivi, J., & Tuunainen, V. K. (2020). Understanding the creation of trust in cryptocurrencies: The case of Bitcoin. Electronic Markets, 30(2), 259\u2013271. https:\/\/doi.org\/10.1007\/s12525-019-00392-5","journal-title":"Electronic Markets"},{"issue":"1","key":"644_CR118","doi-asserted-by":"publisher","first-page":"73","DOI":"10.25300\/MISQ\/2014\/38.1.04","volume":"38","author":"D Martens","year":"2014","unstructured":"Martens, D., & Provost, F. (2014). Explaining data-driven document classifications. MIS Quarterly, 38(1), 73\u201399. https:\/\/doi.org\/10.25300\/MISQ\/2014\/38.1.04","journal-title":"MIS Quarterly"},{"issue":"2","key":"644_CR119","doi-asserted-by":"publisher","first-page":"178","DOI":"10.1109\/TKDE.2008.131","volume":"21","author":"D Martens","year":"2009","unstructured":"Martens, D., Baesens, B., & van Gestel, T. (2009). Decompositional rule extraction from support vector machines by active learning. IEEE Transactions on Knowledge and Data Engineering, 21(2), 178\u2013191. 
https:\/\/doi.org\/10.1109\/TKDE.2008.131","journal-title":"IEEE Transactions on Knowledge and Data Engineering"},{"key":"644_CR120","doi-asserted-by":"publisher","unstructured":"Martens,\u00a0D., Baesens,\u00a0B., van Gestel,\u00a0T., & Vanthienen,\u00a0J. (2007). Comprehensible credit scoring models using rule extraction from support vector machines. SSRN Electronic Journal. Advance online publication.https:\/\/doi.org\/10.2139\/ssrn.878283","DOI":"10.2139\/ssrn.878283"},{"issue":"7788","key":"644_CR121","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1038\/s41586-019-1799-6","volume":"577","author":"SM McKinney","year":"2020","unstructured":"McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M.,\u00a0Corrado, G. S., Darzi, A., Etemadi, M., Garcia-Vicente, F., Gilbert, F. J., Halling-Brown, M., Hassabis, D., Jansen, S., Karthikesalingam, A., Kelly, C. J., King, D., Ledsam, J. R., Melnick, D., Mostofi, H., Peng, L., Reicher, J. J., Romera-Paredes, B., Sidebottom, R., Suleyman, M., Tse, D., Young, K. C., De Fauw, J. & Shetty, S. (2020). International evaluation of an AI system for breast cancer screening. Nature,\u00a0577\u00a0(7788), 89\u201394. https:\/\/doi.org\/10.1038\/s41586-019-1799-6","journal-title":"Nature"},{"key":"644_CR122","unstructured":"Mehdiyev, N., & Fettke, P. (2020). Prescriptive process analytics with deep learning and explainable artificial intelligence.\u00a0Proceedings of the 28th European Conference on Information Systems (ECIS). An Online AIS Conference. https:\/\/aisel.aisnet.org\/ecis2020_rp\/122"},{"key":"644_CR123","doi-asserted-by":"publisher","unstructured":"Mensa,\u00a0E., Colla,\u00a0D., Dalmasso,\u00a0M., Giustini,\u00a0M., Mamo,\u00a0C., Pitidis,\u00a0A., & Radicioni,\u00a0D.\u00a0P. (2020). Violence detection explanation via semantic roles embeddings. BMC Medical Informatics and Decision Making, 20(263). 
https:\/\/doi.org\/10.1186\/s12911-020-01237-4","DOI":"10.1186\/s12911-020-01237-4"},{"issue":"1","key":"644_CR124","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-021-01703-7","volume":"21","author":"M Merry","year":"2021","unstructured":"Merry, M., Riddle, P., & Warren, J. (2021). A mental models approach for defining explainable artificial intelligence. BMC Medical Informatics and Decision Making, 21(1), 1\u201312. https:\/\/doi.org\/10.1186\/s12911-021-01703-7","journal-title":"BMC Medical Informatics and Decision Making"},{"issue":"1","key":"644_CR125","doi-asserted-by":"publisher","first-page":"53","DOI":"10.1080\/10580530.2020.1849465","volume":"39","author":"C Meske","year":"2020","unstructured":"Meske, C., Bunde, E., Schneider, J., & Gersch, M. (2020). Explainable artificial intelligence: Objectives, stakeholders, and future research opportunities. Information Systems Management, 39(1), 53\u201363. https:\/\/doi.org\/10.1080\/10580530.2020.1849465","journal-title":"Information Systems Management"},{"key":"644_CR126","doi-asserted-by":"publisher","first-page":"2103","DOI":"10.1007\/s12525-022-00607-2","volume":"32","author":"C Meske","year":"2022","unstructured":"Meske, C., Abedin, B., Klier, M., & Rabhi, F. (2022). Explainable and responsible artificial intelligence. Electronic Markets, 32(4), 2103\u20132106. https:\/\/doi.org\/10.1007\/s12525-022-00607-2","journal-title":"Electronic Markets"},{"key":"644_CR127","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","volume":"267","author":"T Miller","year":"2019","unstructured":"Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1\u201338. https:\/\/doi.org\/10.1016\/j.artint.2018.07.007","journal-title":"Artificial Intelligence"},{"key":"644_CR128","unstructured":"Miller, T., Howe, P., & Sonenberg, L. (2017). 
Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences. ArXiv. arXiv:1712.00547. https:\/\/arxiv.org\/pdf\/1712.00547.pdf"},{"issue":"1","key":"644_CR129","doi-asserted-by":"publisher","first-page":"342","DOI":"10.1109\/TVCG.2018.2864812","volume":"25","author":"Y Ming","year":"2019","unstructured":"Ming, Y., Qu, H., & Bertini, E. (2019). RuleMatrix: Visualizing and understanding classifiers with rules. IEEE Transactions on Visualization and Computer Graphics, 25(1), 342\u2013352. https:\/\/doi.org\/10.1109\/TVCG.2018.2864812","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"issue":"1","key":"644_CR130","doi-asserted-by":"publisher","first-page":"38","DOI":"10.17705\/1CAIS.05034","volume":"50","author":"M Mirbabaie","year":"2022","unstructured":"Mirbabaie, M., Brendel, A. B., & Hofeditz, L. (2022). Ethics and AI in information systems research. Communications of the Association for Information Systems, 50(1), 38. https:\/\/doi.org\/10.17705\/1CAIS.05034","journal-title":"Communications of the Association for Information Systems"},{"key":"644_CR131","doi-asserted-by":"publisher","unstructured":"Mitra, S., & Hayashi, Y. (2000). Neuro-fuzzy rule generation: Survey in soft computing framework. IEEE Transactions on Neural Networks, 11(3), 748\u2013768.\u00a0https:\/\/doi.org\/10.1109\/72.846746","DOI":"10.1109\/72.846746"},{"key":"644_CR132","doi-asserted-by":"publisher","unstructured":"Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (FAT) (pp. 279\u2013288). https:\/\/doi.org\/10.1145\/3287560.3287574","DOI":"10.1145\/3287560.3287574"},{"key":"644_CR133","first-page":"1","volume":"18","author":"H Mombini","year":"2021","unstructured":"Mombini, H., Tulu, B., Strong, D., Agu, E. O., Lindsay, C., Loretz, L., Pedersen, P., & Dunn, R. (2021).
An explainable machine learning model for chronic wound management decisions. AMCIS 2021 Proceedings, 18, 1\u201310.","journal-title":"AMCIS 2021 Proceedings"},{"key":"644_CR134","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.dsp.2017.10.011","volume":"73","author":"G Montavon","year":"2018","unstructured":"Montavon, G., Samek, W., & M\u00fcller, K. R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing: A Review Journal, 73, 1\u201315. https:\/\/doi.org\/10.1016\/j.dsp.2017.10.011","journal-title":"Digital Signal Processing: A Review Journal"},{"key":"644_CR135","doi-asserted-by":"publisher","unstructured":"Moradi,\u00a0M., & Samwald,\u00a0M. (2021). Post-hoc explanation of black-box classifiers using confident itemsets. Expert Systems with Applications, 165(113941). https:\/\/doi.org\/10.1016\/j.eswa.2020.113941","DOI":"10.1016\/j.eswa.2020.113941"},{"key":"644_CR136","doi-asserted-by":"publisher","unstructured":"Moreira, C., Chou, Y.-L., Velmurugan, M., Ouyang, C., Sindhgatta, R., & Bruza, P. (2021). LINDA-BN: An interpretable probabilistic approach for demystifying black-box predictive models. Decision Support Systems, 150, 1\u201316.\u00a0https:\/\/doi.org\/10.1016\/j.dss.2021.113561","DOI":"10.1016\/j.dss.2021.113561"},{"key":"644_CR137","doi-asserted-by":"publisher","unstructured":"Moscato, V., Picariello, A., & Sperl\u00ed, G. (2021). A benchmark of machine learning approaches for credit score prediction. Expert Systems with Applications, 165, 1\u20138.\u00a0https:\/\/doi.org\/10.1016\/j.eswa.2020.113986","DOI":"10.1016\/j.eswa.2020.113986"},{"key":"644_CR138","unstructured":"Mueller,\u00a0S.\u00a0T., Hoffman,\u00a0R.\u00a0R., Clancey,\u00a0W., Emrey,\u00a0A., & Klein,\u00a0G. (2019). Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI. ArXiv. 
https:\/\/arxiv.org\/pdf\/1902.01876"},{"issue":"4","key":"644_CR139","doi-asserted-by":"publisher","first-page":"520","DOI":"10.1109\/TETCI.2020.3005682","volume":"5","author":"BJ Murray","year":"2021","unstructured":"Murray, B. J., Islam, M. A., Pinar, A. J., Anderson, D. T., Scott, G. J., Havens, T. C., & Keller, J. M. (2021). Explainable AI for the Choquet integral. IEEE Transactions on Emerging Topics in Computational Intelligence, 5(4), 520\u2013529. https:\/\/doi.org\/10.1109\/TETCI.2020.3005682","journal-title":"IEEE Transactions on Emerging Topics in Computational Intelligence"},{"key":"644_CR140","doi-asserted-by":"publisher","unstructured":"Narayanan, M., Chen, E., He, J., Kim, B., Gershman, S., & Doshi-Velez, F. (2018). How do humans understand explanations from machine learning systems? An evaluation of the human-interpretability of explanation. ArXiv, 1802.00682. https:\/\/doi.org\/10.48550\/arXiv.1802.00682","DOI":"10.48550\/arXiv.1802.00682"},{"issue":"4","key":"644_CR141","doi-asserted-by":"publisher","first-page":"4225","DOI":"10.1109\/TNSM.2021.3098157","volume":"18","author":"A Nascita","year":"2021","unstructured":"Nascita, A., Montieri, A., Aceto, G., Ciuonzo, D., Persico, V., & Pescap\u00e9, A. (2021). XAI meets mobile traffic classification: Understanding and improving multimodal deep learning architectures. IEEE Transactions on Network and Service Management, 18(4), 4225\u20134246. https:\/\/doi.org\/10.1109\/TNSM.2021.3098157","journal-title":"IEEE Transactions on Network and Service Management"},{"issue":"2","key":"644_CR142","doi-asserted-by":"publisher","first-page":"1427","DOI":"10.1109\/TVCG.2020.3030354","volume":"27","author":"MP Neto","year":"2021","unstructured":"Neto, M. P., & Paulovich, F. V. (2021). Explainable matrix - visualization for global and local interpretability of random forest classification ensembles. IEEE Transactions on Visualization and Computer Graphics, 27(2), 1427\u20131437.
https:\/\/doi.org\/10.1109\/TVCG.2020.3030354","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"issue":"3","key":"644_CR143","doi-asserted-by":"publisher","first-page":"393","DOI":"10.1007\/s11257-017-9195-0","volume":"27","author":"I Nunes","year":"2017","unstructured":"Nunes, I., & Jannach, D. (2017). A systematic review and taxonomy of explanations in decision support and recommender systems. User Modeling and User-Adapted Interaction, 27(3), 393\u2013444. https:\/\/doi.org\/10.1007\/s11257-017-9195-0","journal-title":"User Modeling and User-Adapted Interaction"},{"key":"644_CR144","unstructured":"Omeiza, D., Webb, H., Jirotka, M., & Kunze, L. (2021). Explanations in autonomous driving: A survey. IEEE Transactions on Intelligent Transportation Systems, 23(8), 10142\u201310162.\u00a0https:\/\/ieeexplore.ieee.org\/stamp\/stamp.jsp?arnumber=9616449&casa_token=pCkvj82hzqwAAAAA:yYPZ8qTUP7U8tLQj793sviDzuwLewzQZCvBPza4SHtG_P-eSlpp0Te5X9aF1OuVt35wT6EMfP1w&tag=1"},{"issue":"7","key":"644_CR145","doi-asserted-by":"publisher","first-page":"1173","DOI":"10.1093\/jamia\/ocaa053","volume":"27","author":"SN Payrovnaziri","year":"2020","unstructured":"Payrovnaziri, S. N., Chen, Z., Rengifo-Moreno, P., Miller, T., Bian, J., Chen, J. H., Liu, X., & He, Z. (2020). Explainable artificial intelligence models using real-world electronic health record data: A systematic scoping review. Journal of the American Medical Informatics Association: JAMIA, 27(7), 1173\u20131185. https:\/\/doi.org\/10.1093\/jamia\/ocaa053","journal-title":"Journal of the American Medical Informatics Association: JAMIA"},{"issue":"113262","key":"644_CR146","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.eswa.2020.113262","volume":"148","author":"S Pe\u00f1afiel","year":"2020","unstructured":"Pe\u00f1afiel, S., Baloian, N., Sanson, H., & Pino, J. A. (2020). Applying Dempster-Shafer theory for developing a flexible, accurate and interpretable classifier. 
Expert Systems with Applications, 148(113262), 1\u201312. https:\/\/doi.org\/10.1016\/j.eswa.2020.113262","journal-title":"Expert Systems with Applications"},{"key":"644_CR147","doi-asserted-by":"publisher","unstructured":"Pessach,\u00a0D., Singer,\u00a0G., Avrahami,\u00a0D., Chalutz Ben-Gal,\u00a0H., Shmueli,\u00a0E., & Ben-Gal,\u00a0I. (2020). Employees recruitment: A prescriptive analytics approach via machine learning and mathematical programming. Decision Support Systems, 134(113290). https:\/\/doi.org\/10.1016\/j.dss.2020.113290","DOI":"10.1016\/j.dss.2020.113290"},{"key":"644_CR148","doi-asserted-by":"publisher","unstructured":"Pierrard,\u00a0R., Poli,\u00a0J.\u2011P., & Hudelot,\u00a0C. (2021). Spatial relation learning for explainable image classification and annotation in critical applications. Artificial Intelligence, 292(103434). https:\/\/doi.org\/10.1016\/j.artint.2020.103434","DOI":"10.1016\/j.artint.2020.103434"},{"issue":"3","key":"644_CR149","doi-asserted-by":"publisher","first-page":"179","DOI":"10.1007\/s12599-013-0263-7","volume":"5","author":"F Probst","year":"2013","unstructured":"Probst, F., Grosswiele, L., & Pfleger, R. (2013). Who will lead and who will follow: Identifying Influential Users in Online Social Networks. Business & Information Systems Engineering, 5(3), 179\u2013193. https:\/\/doi.org\/10.1007\/s12599-013-0263-7","journal-title":"Business & Information Systems Engineering"},{"key":"644_CR150","doi-asserted-by":"publisher","unstructured":"Rader, E., & Gray, R. (2015). Understanding user beliefs about algorithmic curation in the Facebook news feed. Proceedings of the 33rd International Conference on Human Factors in Computing Systems (CHI) (pp. 173\u2013182). 
https:\/\/doi.org\/10.1145\/2702123.2702174","DOI":"10.1145\/2702123.2702174"},{"key":"644_CR151","doi-asserted-by":"publisher","first-page":"368","DOI":"10.1016\/j.eswa.2017.11.045","volume":"95","author":"A Ragab","year":"2018","unstructured":"Ragab, A., El-Koujok, M., Poulin, B., Amazouz, M., & Yacout, S. (2018). Fault diagnosis in industrial chemical processes using interpretable patterns based on Logical Analysis of Data. Expert Systems with Applications, 95, 368\u2013383. https:\/\/doi.org\/10.1016\/j.eswa.2017.11.045","journal-title":"Expert Systems with Applications"},{"issue":"3","key":"644_CR152","doi-asserted-by":"publisher","first-page":"364","DOI":"10.1080\/0960085X.2021.1955628","volume":"31","author":"NP Rana","year":"2022","unstructured":"Rana, N. P., Chatterjee, S., Dwivedi, Y. K., & Akter, S. (2022). Understanding dark side of artificial intelligence (AI) integrated business analytics: Assessing firm\u2019s operational inefficiency and competitiveness. European Journal of Information Systems, 31(3), 364\u2013387. https:\/\/doi.org\/10.1080\/0960085X.2021.1955628","journal-title":"European Journal of Information Systems"},{"issue":"01","key":"644_CR153","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/TAI.2021.3133846","volume":"1","author":"A Rawal","year":"2021","unstructured":"Rawal, A., McCoy, J., Rawat, D., Sadler, B., & Amant, R. (2021). Recent advances in trustworthy explainable artificial intelligence: Status, challenges and perspectives. IEEE Transactions on Artificial Intelligence, 1(01), 1\u20131. https:\/\/doi.org\/10.1109\/TAI.2021.3133846","journal-title":"IEEE Transactions on Artificial Intelligence"},{"key":"644_CR154","doi-asserted-by":"publisher","unstructured":"Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). \u201cWhy should I trust you?\u201d: Explaining the predictions of any classifier. Proceedings of the 22nd International Conference on Knowledge Discovery and Data Mining (KDD) (pp. 1135\u20131144). 
https:\/\/doi.org\/10.1145\/2939672.2939778","DOI":"10.1145\/2939672.2939778"},{"key":"644_CR155","unstructured":"Ribera,\u00a0M., & Lapedriza,\u00a0A. (2019). Can we do better explanations? A proposal of user-centered explainable AI. In C. Trattner, D. Parra, & N. Riche (Chairs), Joint Proceedings of the ACM IUI 2019 Workshops. http:\/\/ceur-ws.org\/Vol-2327\/IUI19WS-ExSS2019-12.pdf"},{"key":"644_CR156","unstructured":"Rissler, R., Nadj, M., Adam, M., & Maedche, A. (2017). Towards an integrative theoretical framework of IT-mediated interruptions.\u00a0Proceedings of the 25th European Conference on Information Systems (ECIS). http:\/\/aisel.aisnet.org\/ecis2017_rp\/125"},{"issue":"2","key":"644_CR157","doi-asserted-by":"publisher","first-page":"96","DOI":"10.17705\/1thci.00130","volume":"12","author":"LP Robert","year":"2020","unstructured":"Robert, L. P., Bansal, G., & L\u00fctge, C. (2020). ICIS 2019 SIGHCI Workshop Panel Report: Human\u2013computer interaction challenges and opportunities for fair, trustworthy and ethical artificial intelligence. AIS Transactions on Human-Computer Interaction, 12(2), 96\u2013108. https:\/\/doi.org\/10.17705\/1thci.00130","journal-title":"AIS Transactions on Human-Computer Interaction"},{"issue":"3","key":"644_CR158","doi-asserted-by":"publisher","first-page":"241","DOI":"10.1057\/ejis.2014.7","volume":"23","author":"F Rowe","year":"2014","unstructured":"Rowe, F. (2014). What literature review is not: Diversity, boundaries and recommendations. European Journal of Information Systems, 23(3), 241\u2013255. https:\/\/doi.org\/10.1057\/ejis.2014.7","journal-title":"European Journal of Information Systems"},{"key":"644_CR159","unstructured":"Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson."},{"key":"644_CR160","unstructured":"Rzepka, C., & Berger, B. (2018). 
User interaction with AI-enabled systems: A systematic review of IS research.\u00a0Proceedings of the Thirty-Ninth International Conference on Information Systems (ICIS). https:\/\/aisel.aisnet.org\/icis2018\/general\/Presentations\/7"},{"issue":"113100","key":"644_CR161","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.eswa.2019.113100","volume":"144","author":"S Sachan","year":"2020","unstructured":"Sachan, S., Yang, J.-B., Xu, D.-L., Benavides, D. E., & Li, Y. (2020). An explainable AI decision-support-system to automate loan underwriting. Expert Systems with Applications, 144(113100), 1\u201349. https:\/\/doi.org\/10.1016\/j.eswa.2019.113100","journal-title":"Expert Systems with Applications"},{"key":"644_CR162","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.chb.2021.106837","volume":"122","author":"N Schlicker","year":"2021","unstructured":"Schlicker, N., Langer, M., \u00d6tting, S. K., Baum, K., K\u00f6nig, C. J., & Wallach, D. (2021). What to expect from opening up \u2018black boxes\u2019? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior, 122, 1\u201316. https:\/\/doi.org\/10.1016\/j.chb.2021.106837","journal-title":"Computers in Human Behavior"},{"key":"644_CR163","doi-asserted-by":"publisher","unstructured":"Schmidt,\u00a0P., Biessmann,\u00a0F., & Teubner,\u00a0T. (2020). Transparency and trust in artificial intelligence systems. Journal of Decision Systems. Advance online publication. https:\/\/doi.org\/10.1080\/12460125.2020.1819094","DOI":"10.1080\/12460125.2020.1819094"},{"key":"644_CR164","unstructured":"Schneider, J., & Handali, J. P. (2019). Personalized explanation for machine learning: A conceptualization.\u00a0Proceedings of the Twenty-Seventh European Conference on Information Systems (ECIS 2019). Stockholm-Uppsala, Sweden. 
https:\/\/arxiv.org\/ftp\/arxiv\/papers\/1901\/1901.00770.pdf"},{"issue":"5","key":"644_CR165","doi-asserted-by":"publisher","first-page":"2239","DOI":"10.1016\/j.eswa.2013.09.022","volume":"41","author":"M Seera","year":"2014","unstructured":"Seera, M., & Lim, C. P. (2014). A hybrid intelligent system for medical data classification. Expert Systems with Applications, 41(5), 2239\u20132249. https:\/\/doi.org\/10.1016\/j.eswa.2013.09.022","journal-title":"Expert Systems with Applications"},{"issue":"4","key":"644_CR166","doi-asserted-by":"publisher","first-page":"233","DOI":"10.1093\/idpl\/ipx022","volume":"7","author":"AD Selbst","year":"2017","unstructured":"Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233\u2013242. https:\/\/doi.org\/10.1093\/idpl\/ipx022","journal-title":"International Data Privacy Law"},{"issue":"3\u20134","key":"644_CR167","first-page":"1","volume":"11","author":"R Sevastjanova","year":"2021","unstructured":"Sevastjanova, R., Jentner, W., Sperrle, F., Kehlbeck, R., Bernard, J., & El-Assady, M. (2021). QuestionComb: A gamification approach for the visual explanation of linguistic phenomena through interactive labeling. ACM Transactions on Interactive Intelligent Systems (TiiS), 11(3\u20134), 1\u201338.","journal-title":"ACM Transactions on Interactive Intelligent Systems (TiiS)"},{"key":"644_CR168","doi-asserted-by":"publisher","unstructured":"Shahapure, K. R., & Nicholas, C. (2020). Cluster quality analysis using silhouette score. 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA) (pp. 747\u2013748). https:\/\/doi.org\/10.1109\/DSAA49011.2020.00096","DOI":"10.1109\/DSAA49011.2020.00096"},{"key":"644_CR169","doi-asserted-by":"publisher","unstructured":"Sharma,\u00a0P., Mirzan,\u00a0S.\u00a0R., Bhandari,\u00a0A., Pimpley,\u00a0A., Eswaran,\u00a0A., Srinivasan,\u00a0S., & Shao,\u00a0L. (2020). 
Evaluating tree explanation methods for anomaly reasoning: A case study of SHAP TreeExplainer and TreeInterpreter. In G. Grossmann & S. Ram (Eds.), Lecture notes in computer science. Advances in conceptual modeling (pp.\u00a035\u201345). Springer International Publishing. https:\/\/doi.org\/10.1007\/978-3-030-65847-2_4","DOI":"10.1007\/978-3-030-65847-2_4"},{"issue":"CSCW2","key":"644_CR170","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3415224","volume":"4","author":"H Shen","year":"2020","unstructured":"Shen, H., Jin, H., Cabrera, \u00c1. A., Perer, A., Zhu, H., & Hong, J. I. (2020). Designing alternative representations of confusion matrices to support non-expert public understanding of algorithm performance. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1\u201322. https:\/\/doi.org\/10.1145\/3415224","journal-title":"Proceedings of the ACM on Human-Computer Interaction"},{"key":"644_CR171","doi-asserted-by":"publisher","unstructured":"Shin,\u00a0D. (2021a). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146(102551). https:\/\/doi.org\/10.1016\/j.ijhcs.2020.102551","DOI":"10.1016\/j.ijhcs.2020.102551"},{"key":"644_CR172","doi-asserted-by":"publisher","unstructured":"Shin,\u00a0D. (2021b). Embodying algorithms, enactive artificial intelligence and the extended cognition: You can see as much as you know about algorithm. Journal of Information Science, 1\u201314. https:\/\/doi.org\/10.1177\/0165551520985495","DOI":"10.1177\/0165551520985495"},{"key":"644_CR173","doi-asserted-by":"crossref","unstructured":"Sidorova, A., Evangelopoulos, N., Valacich, J. S., & Ramakrishnan, T. (2008). Uncovering the intellectual core of the information systems discipline. MIS Quarterly, 467\u2013482. 
https:\/\/www.jstor.org\/stable\/25148852","DOI":"10.2307\/25148852"},{"key":"644_CR174","doi-asserted-by":"publisher","first-page":"188","DOI":"10.1016\/j.eswa.2019.04.029","volume":"130","author":"N Singh","year":"2019","unstructured":"Singh, N., Singh, P., & Bhagat, D. (2019). A rule extraction approach from support vector machines for diagnosing hypertension among diabetics. Expert Systems with Applications, 130, 188\u2013205. https:\/\/doi.org\/10.1016\/j.eswa.2019.04.029","journal-title":"Expert Systems with Applications"},{"key":"644_CR175","doi-asserted-by":"publisher","unstructured":"Soares, E., Angelov, P. P., Costa, B., Castro, M. P. G., Nageshrao, S., & Filev, D. (2021). Explaining deep learning models through rule-based approximation and visualization. IEEE Transactions on Fuzzy Systems, 29(8), 2399\u20132407.\u00a0https:\/\/doi.org\/10.1109\/TFUZZ.2020.2999776","DOI":"10.1109\/TFUZZ.2020.2999776"},{"issue":"1","key":"644_CR176","doi-asserted-by":"publisher","first-page":"1064","DOI":"10.1109\/TVCG.2019.2934629","volume":"26","author":"T Spinner","year":"2020","unstructured":"Spinner, T., Schlegel, U., Schafer, H., & El-Assady, M. (2020). Explainer: A visual analytics framework for interactive and explainable machine learning. IEEE Transactions on Visualization and Computer Graphics, 26(1), 1064\u20131074. https:\/\/doi.org\/10.1109\/TVCG.2019.2934629","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"key":"644_CR177","doi-asserted-by":"publisher","unstructured":"Springer, A., & Whittaker, S. (2020). Progressive disclosure: When, why, and how do users want algorithmic transparency information? 
ACM Transactions on Interactive Intelligent Systems (TiiS), 10(4), 1\u201332.\u00a0https:\/\/doi.org\/10.1145\/3374218","DOI":"10.1145\/3374218"},{"key":"644_CR178","doi-asserted-by":"publisher","first-page":"2677","DOI":"10.1016\/j.eswa.2012.11.007","volume":"40","author":"R Stoean","year":"2013","unstructured":"Stoean, R., & Stoean, C. (2013). Modeling medical decision making by support vector machines, explaining by rules of evolutionary algorithms with feature selection. Expert Systems with Applications, 40, 2677\u20132686. https:\/\/doi.org\/10.1016\/j.eswa.2012.11.007","journal-title":"Expert Systems with Applications"},{"key":"644_CR179","doi-asserted-by":"publisher","first-page":"647","DOI":"10.1007\/s10115-013-0679-x","volume":"41","author":"E \u0160trumbelj","year":"2014","unstructured":"\u0160trumbelj, E., & Kononenko, I. (2014). Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41, 647\u2013665. https:\/\/doi.org\/10.1007\/s10115-013-0679-x","journal-title":"Knowledge and Information Systems"},{"key":"644_CR180","doi-asserted-by":"publisher","unstructured":"Su, G., Lin, B., Luo, W., Yin, J., Deng, S., Gao, H., & Xu, R. (2021). Hypomimia recognition in Parkinson\u2019s disease with semantic features. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 17(3), 1\u201320.\u00a0https:\/\/doi.org\/10.1145\/3476778","DOI":"10.1145\/3476778"},{"key":"644_CR181","unstructured":"Sultana, T., & Nemati, H. (2021). Impact of explainable AI and task complexity on human-machine symbiosis.\u00a0Proceedings of the Twenty-Seventh Americas Conference on Information Systems (AMCIS). https:\/\/aisel.aisnet.org\/amcis2021\/sig_hci\/sig_hci\/20"},{"issue":"1","key":"644_CR182","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-021-01662-z","volume":"21","author":"C Sun","year":"2021","unstructured":"Sun, C., Dui, H., & Li, H. (2021). 
Interpretable time-aware and co-occurrence-aware network for medical prediction. BMC Medical Informatics and Decision Making, 21(1), 1\u201312.","journal-title":"BMC Medical Informatics and Decision Making"},{"issue":"1","key":"644_CR183","doi-asserted-by":"publisher","first-page":"74","DOI":"10.1093\/jcmc\/zmz026","volume":"25","author":"SS Sundar","year":"2020","unstructured":"Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human\u2013AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74\u201388. https:\/\/doi.org\/10.1093\/jcmc\/zmz026","journal-title":"Journal of Computer-Mediated Communication"},{"key":"644_CR184","unstructured":"Tabankov, S. S., & M\u00f6hlmann, M. (2021). Artificial intelligence for in-flight services: How the Lufthansa group managed explainability and accuracy concerns. Proceedings of the International Conference on Information Systems\u00a0(ICIS), 16, 1\u20139."},{"issue":"3","key":"644_CR185","doi-asserted-by":"publisher","first-page":"448","DOI":"10.1109\/69.774103","volume":"11","author":"IA Taha","year":"1999","unstructured":"Taha, I. A., & Ghosh, J. (1999). Symbolic interpretation of artificial neural networks. IEEE Transactions on Knowledge and Data Engineering, 11(3), 448\u2013463. https:\/\/doi.org\/10.1109\/69.774103","journal-title":"IEEE Transactions on Knowledge and Data Engineering"},{"issue":"2","key":"644_CR186","doi-asserted-by":"publisher","first-page":"447","DOI":"10.1007\/s12525-020-00441-4","volume":"31","author":"S Thiebes","year":"2021","unstructured":"Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31(2), 447\u2013464. https:\/\/doi.org\/10.1007\/s12525-020-00441-4","journal-title":"Electronic Markets"},{"issue":"11","key":"644_CR187","doi-asserted-by":"publisher","first-page":"4793","DOI":"10.1109\/TNNLS.2020.3027314","volume":"32","author":"E Tjoa","year":"2021","unstructured":"Tjoa, E., & Guan, C. 
(2021). A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793\u20134813. https:\/\/doi.org\/10.1109\/TNNLS.2020.3027314","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},{"key":"644_CR188","doi-asserted-by":"publisher","unstructured":"van der Waa,\u00a0J., Schoonderwoerd,\u00a0T., van Diggelen,\u00a0J., & Neerincx,\u00a0M. (2020). Interpretable confidence measures for decision support systems. International Journal of Human-Computer Studies, 144(102493). https:\/\/doi.org\/10.1016\/j.ijhcs.2020.102493","DOI":"10.1016\/j.ijhcs.2020.102493"},{"key":"644_CR189","unstructured":"Vilone,\u00a0G., & Longo,\u00a0L. (2020). Explainable artificial intelligence: A systematic review. ArXiv. https:\/\/arxiv.org\/pdf\/2006.00093"},{"key":"644_CR190","doi-asserted-by":"publisher","unstructured":"van der Waa,\u00a0J., Nieuwburg,\u00a0E., Cremers,\u00a0A., & Neerincx,\u00a0M. (2021). Evaluating XAI: A comparison of rule-based and example-based explanations. Artificial Intelligence, 291(103404). https:\/\/doi.org\/10.1016\/j.artint.2020.103404","DOI":"10.1016\/j.artint.2020.103404"},{"issue":"1","key":"644_CR191","doi-asserted-by":"publisher","first-page":"77","DOI":"10.1057\/ejis.2014.36","volume":"25","author":"J Venable","year":"2016","unstructured":"Venable, J., Pries-Heje, J., & Baskerville, R. (2016). FEDS: A framework for evaluation in design science research. European Journal of Information Systems, 25(1), 77\u201389. https:\/\/doi.org\/10.1057\/ejis.2014.36","journal-title":"European Journal of Information Systems"},{"key":"644_CR192","unstructured":"vom Brocke, J., Simons, A., Niehaves, B., Reimer, K., Plattfaut, R., & Cleven, A. (2009). Reconstructing the giant: On the importance of rigour in documenting the literature search process. ECIS 2009 Proceedings (161). 
http:\/\/aisel.aisnet.org\/ecis2009\/161"},{"issue":"2","key":"644_CR193","first-page":"841","volume":"31","author":"S Wachter","year":"2018","unstructured":"Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841\u2013887.","journal-title":"Harvard Journal of Law & Technology"},{"key":"644_CR194","doi-asserted-by":"crossref","unstructured":"Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI. Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI). http:\/\/dl.acm.org\/citation.cfm?doid=3290605.3300831","DOI":"10.1145\/3290605.3300831"},{"key":"644_CR195","unstructured":"Wanner, J., Heinrich, K., Janiesch, C., & Zschech, P. (2020a). How much AI do you require? Decision factors for adopting AI technology.\u00a0Proceedings of the Forty-First International Conference on Information Systems (ICIS). https:\/\/aisel.aisnet.org\/icis2020\/implement_adopt\/implement_adopt\/10"},{"key":"644_CR196","unstructured":"Wanner, J., Herm, L. V., & Janiesch, C. (2020b). How much is the black box? The value of explainability in machine learning models. ECIS 2020 Research-in-Progress. https:\/\/aisel.aisnet.org\/ecis2020_rip\/85"},{"issue":"2","key":"644_CR197","first-page":"xiii","volume":"26","author":"J Webster","year":"2002","unstructured":"Webster, J., & Watson, R. T. (2002). Analyzing the past to prepare for the future: Writing a literature review. MIS Quarterly, 26(2), xiii\u2013xxiii.","journal-title":"MIS Quarterly"},{"key":"644_CR198","unstructured":"Xiong, J., Qureshi, S., & Najjar, L. (2014). A cluster analysis of research in information technology for global development: Where to from here?\u00a0Proceedings of the SIG GlobDev Seventh Annual Workshop. 
https:\/\/aisel.aisnet.org\/globdev2014\/1"},{"issue":"1","key":"644_CR199","doi-asserted-by":"publisher","first-page":"138","DOI":"10.1108\/FS-04-2018-0034","volume":"21","author":"RV Yampolskiy","year":"2019","unstructured":"Yampolskiy, R. V. (2019). Predicting future AI failures from historic examples. Foresight, 21(1), 138\u2013152. https:\/\/doi.org\/10.1108\/FS-04-2018-0034","journal-title":"Foresight"},{"key":"644_CR200","unstructured":"Yan, A., & Xu, D. (2021). AI for depression treatment: Addressing the paradox of privacy and trust with empathy, accountability, and explainability.\u00a0Proceedings of the Forty-Second International Conference on Information Systems (ICIS). https:\/\/aisel.aisnet.org\/icis2021\/is_health\/is_health\/15\/"},{"issue":"6","key":"644_CR201","doi-asserted-by":"publisher","first-page":"2610","DOI":"10.1109\/TNNLS.2020.3007259","volume":"32","author":"Z Yang","year":"2021","unstructured":"Yang, Z., Zhang, A., & Sudjianto, A. (2021). Enhancing explainability of neural networks through architecture constraints. IEEE Transactions on Neural Networks and Learning Systems, 32(6), 2610\u20132621. https:\/\/doi.org\/10.1109\/TNNLS.2020.3007259","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},{"key":"644_CR202","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.eswa.2021.115430","volume":"183","author":"S Yoo","year":"2021","unstructured":"Yoo, S., & Kang, N. (2021). Explainable artificial intelligence for manufacturing cost estimation and machining feature visualization. Expert Systems with Applications, 183, 1\u201314. https:\/\/doi.org\/10.1016\/j.eswa.2021.115430","journal-title":"Expert Systems with Applications"},{"key":"644_CR203","doi-asserted-by":"publisher","unstructured":"Zeltner, D., Schmid, B., Csisz\u00e1r, G., & Csisz\u00e1r, O. (2021). Squashing activation functions in benchmark tests: Towards a more eXplainable Artificial Intelligence using continuous-valued logic. 
Knowledge-Based Systems, 218. https:\/\/doi.org\/10.1016\/j.knosys.2021.106779","DOI":"10.1016\/j.knosys.2021.106779"},{"issue":"1","key":"644_CR204","doi-asserted-by":"publisher","first-page":"27","DOI":"10.1631\/FITEE.1700808","volume":"19","author":"QS Zhang","year":"2018","unstructured":"Zhang, Q. S., & Zhu, S. C. (2018). Visual interpretability for deep learning: A survey. Frontiers of Information Technology & Electronic Engineering, 19(1), 27\u201339. https:\/\/doi.org\/10.1631\/FITEE.1700808","journal-title":"Frontiers of Information Technology & Electronic Engineering"},{"issue":"11","key":"644_CR205","doi-asserted-by":"publisher","first-page":"1","DOI":"10.2196\/11144","volume":"20","author":"K Zhang","year":"2018","unstructured":"Zhang, K., Liu, X., Liu, F., He, L., Zhang, L., Yang, Y., Li, W., Wang, S., Liu, L., Liu, Z., Wu, X., & Lin, H. (2018). An interpretable and expandable deep learning diagnostic system for multiple ocular diseases: Qualitative study. Journal of Medical Internet Research, 20(11), 1\u201313. https:\/\/doi.org\/10.2196\/11144","journal-title":"Journal of Medical Internet Research"},{"key":"644_CR206","doi-asserted-by":"publisher","first-page":"100572","DOI":"10.1016\/j.accinf.2022.100572","volume":"46","author":"CA Zhang","year":"2022","unstructured":"Zhang, C. A., Cho, S., & Vasarhelyi, M. (2022). Explainable Artificial Intelligence (XAI) in auditing. International Journal of Accounting Information Systems, 46, 100572. https:\/\/doi.org\/10.1016\/j.accinf.2022.100572","journal-title":"International Journal of Accounting Information Systems"},{"issue":"1","key":"644_CR207","doi-asserted-by":"publisher","first-page":"407","DOI":"10.1109\/TVCG.2018.2864475","volume":"25","author":"X Zhao","year":"2019","unstructured":"Zhao, X., Wu, Y., Lee, D. L., & Cui, W. (2019). Iforest: Interpreting random forests via visual analytics. IEEE Transactions on Visualization and Computer Graphics, 25(1), 407\u2013416. 
https:\/\/doi.org\/10.1109\/TVCG.2018.2864475","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"key":"644_CR208","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.dss.2021.113715","volume":"155","author":"D Zhdanov","year":"2021","unstructured":"Zhdanov, D., Bhattacharjee, S., & Bragin, M. (2021). Incorporating FAT and privacy aware AI modeling approaches into business decision making frameworks. Decision Support Systems, 155, 1\u201312. https:\/\/doi.org\/10.1016\/j.dss.2021.113715","journal-title":"Decision Support Systems"},{"key":"644_CR209","doi-asserted-by":"publisher","first-page":"42","DOI":"10.1016\/j.eswa.2018.09.038","volume":"117","author":"Q Zhong","year":"2019","unstructured":"Zhong, Q., Fan, X., Luo, X., & Toni, F. (2019). An explainable multi-attribute decision model based on argumentation. Expert Systems with Applications, 117, 42\u201361. https:\/\/doi.org\/10.1016\/j.eswa.2018.09.038","journal-title":"Expert Systems with Applications"},{"key":"644_CR210","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/TIM.2021.3084310","volume":"70","author":"C Zhu","year":"2021","unstructured":"Zhu, C., Chen, Z., Zhao, R., Wang, J., & Yan, R. (2021). Decoupled feature-temporal CNN: Explaining deep learning-based machine health monitoring. IEEE Transactions on Instrumentation and Measurement, 70, 1\u201313. https:\/\/doi.org\/10.1109\/TIM.2021.3084310","journal-title":"IEEE Transactions on Instrumentation and Measurement"},{"key":"644_CR211","doi-asserted-by":"publisher","unstructured":"Zytek, A., Liu, D., Vaithianathan, R., & Veeramachaneni, K. (2021). Sibyl: Explaining machine learning models for high-stakes decision making. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI) (pp. 1\u20136). 
https:\/\/doi.org\/10.1145\/3411763.3451743","DOI":"10.1145\/3411763.3451743"}],"container-title":["Electronic Markets"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s12525-023-00644-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s12525-023-00644-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s12525-023-00644-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,12,20]],"date-time":"2023-12-20T08:21:14Z","timestamp":1703060474000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s12525-023-00644-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,27]]},"references-count":211,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2023,12]]}},"alternative-id":["644"],"URL":"https:\/\/doi.org\/10.1007\/s12525-023-00644-5","relation":{},"ISSN":["1019-6781","1422-8890"],"issn-type":[{"value":"1019-6781","type":"print"},{"value":"1422-8890","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5,27]]},"assertion":[{"value":"31 July 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 March 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 May 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}],"article-number":"26"}}