Abstract
Breast cancer (BC) is the most common cancer among women. Fortunately, early detection and improved treatments have reduced its mortality. Data mining (DM) techniques, which discover hidden and potentially useful patterns in data, particularly for breast cancer diagnosis, are entering a new era: the main objective is no longer to replace humans, or merely to assist them in their tasks, but to enhance and augment their capabilities, and this is where interpretability comes into play. This paper investigates the Local Interpretable Model-agnostic Explanations (LIME) technique to interpret a multilayer perceptron (MLP) trained on the Wisconsin Original data set. The results show that LIME explanations provide a form of real-time interpretation that helps users understand how the trained neural network “thinks”, and can thus increase trust and help oncologists, as the domain experts, learn new patterns.
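To make the setup concrete, the following is a minimal, hypothetical sketch of how such an experiment can be wired together with scikit-learn and the lime package. The network topology, LIME parameters, and data split are illustrative assumptions rather than the authors' configuration, and scikit-learn's bundled data set is the Diagnostic (WDBC) variant rather than the Original Wisconsin data set used in the paper.

```python
# Illustrative sketch (not the paper's exact pipeline): train an MLP on the
# Wisconsin Breast Cancer data and explain one of its predictions with LIME.
# Note: scikit-learn ships the *Diagnostic* variant (WDBC); the paper uses the
# *Original* data set from the UCI repository.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# The hidden-layer size is an arbitrary choice, not the paper's tuned topology.
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(20,),
                                  max_iter=1000, random_state=0))
mlp.fit(X_train, y_train)

# LIME perturbs the instance, queries the black-box MLP, and fits a local
# linear surrogate whose weights serve as the explanation.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True)

# Explain a single test instance: each (feature, weight) pair indicates how
# strongly that feature pushed the MLP towards "malignant" or "benign".
exp = explainer.explain_instance(X_test[0], mlp.predict_proba, num_features=5)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The weights of the local linear surrogate are exactly the kind of per-prediction, real-time explanation the abstract describes: a domain expert can inspect, case by case, which features drove the network's decision.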
Copyright information
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hakkoum, H., Idri, A., Abnane, I. (2020). Artificial Neural Networks Interpretation Using LIME for Breast Cancer Diagnosis. In: Rocha, Á., Adeli, H., Reis, L., Costanzo, S., Orovic, I., Moreira, F. (eds) Trends and Innovations in Information Systems and Technologies. WorldCIST 2020. Advances in Intelligent Systems and Computing, vol 1161. Springer, Cham. https://doi.org/10.1007/978-3-030-45697-9_2
Print ISBN: 978-3-030-45696-2
Online ISBN: 978-3-030-45697-9
eBook Packages: Intelligent Technologies and Robotics