Abstract
Machine learning has become almost synonymous with Artificial Intelligence (AI). However, it faces many challenges, one of the most important being explainable AI: providing human-understandable accounts of why a machine learning model produces specific outputs. To address this challenge, we propose superimposition, a concept that uses conceptual models to improve explainability by mapping the features that are important to a machine learning model’s decision outcomes onto a conceptual model of the application domain. Superimposition is a design method for supplementing machine learning models with the structural elements that humans use to reason about reality and generate explanations. To illustrate the potential of superimposition, we present the method and apply it to a churn prediction problem.
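The following is a minimal sketch of the superimposition idea as summarized in the abstract: feature importances from a churn classifier are mapped onto constructs of a conceptual model of the domain, so that the explanation is phrased in terms of entities and attributes rather than raw feature names. The data set, feature names, entity names, and the mapping itself are invented for illustration and are not taken from the paper; the sketch is not the authors' implementation.

```python
# Illustrative sketch: superimposing a (hypothetical) conceptual model of a
# telecom domain onto the feature importances of a churn classifier.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 500

# Synthetic churn data (all columns are illustrative assumptions).
X = pd.DataFrame({
    "monthly_charges": rng.normal(70, 20, n),
    "tenure_months": rng.integers(1, 72, n),
    "support_calls": rng.poisson(2, n),
    "contract_is_monthly": rng.integers(0, 2, n),
})
y = (X["support_calls"] + 2 * X["contract_is_monthly"]
     + rng.normal(0, 1, n) > 3).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Hypothetical mapping from ML features to conceptual-model constructs
# (Entity.attribute) -- the "superimposition" step in its simplest form.
conceptual_map = {
    "monthly_charges": ("Subscription", "monthly fee"),
    "tenure_months": ("Customer", "relationship length"),
    "support_calls": ("ServiceInteraction", "number of support calls"),
    "contract_is_monthly": ("Contract", "billing period"),
}

# Report importances in domain terms rather than raw feature names.
for feature, importance in sorted(
    zip(X.columns, model.feature_importances_), key=lambda p: -p[1]
):
    entity, attribute = conceptual_map[feature]
    print(f"{entity}.{attribute:<28} importance={importance:.3f}")
```

In a fuller treatment one would replace the global importances with a local attribution method such as LIME or SHAP (both cited in the references) and ground the mapping in an explicit entity-relationship model, but the dictionary above conveys the basic design move: explanations are expressed against domain structure, not model internals.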