
Artificial Neural Networks Interpretation Using LIME for Breast Cancer Diagnosis

  • Conference paper
  • In: Trends and Innovations in Information Systems and Technologies (WorldCIST 2020)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1161)

Abstract

Breast Cancer (BC) is the most common type of cancer among women. Fortunately, early detection and improvements in treatment have helped decrease the number of deaths it causes. Data Mining (DM) techniques, which discover hidden and potentially useful patterns in data, are entering a new era in breast cancer diagnosis: the main objective is no longer to replace humans, or merely to assist them in their tasks, but to enhance and augment their capabilities, and this is where interpretability comes into play. This paper investigates the Local Interpretable Model-agnostic Explanations (LIME) technique for interpreting a Multilayer Perceptron (MLP) trained on the Wisconsin Original dataset. The results show that LIME explanations provide a form of real-time interpretation that helps in understanding how the constructed neural network “thinks”, and can thus increase trust and help oncologists, as the domain experts, learn new patterns.
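The core idea described in the abstract can be sketched in code. The snippet below is a minimal, illustrative approximation of the LIME procedure, not the authors' implementation: it trains a small MLP on scikit-learn's built-in breast-cancer data (the Wisconsin Diagnostic variant, standing in here for the Wisconsin Original dataset used in the paper), then explains a single prediction by fitting a proximity-weighted linear surrogate around that instance. The kernel, sample count, and model sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Train a black-box MLP on standardized features.
data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
mlp.fit(X, y)

def explain_instance(x, model, n_samples=2000, kernel_width=0.75, seed=0):
    """LIME-style local surrogate: perturb x, query the model,
    fit a proximity-weighted linear model, return its coefficients."""
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    # 1. Perturb the instance with Gaussian noise (features are standardized).
    Z = x + rng.normal(size=(n_samples, n_features))
    # 2. Query the black-box model for class-1 probabilities.
    p = model.predict_proba(Z)[:, 1]
    # 3. Weight perturbed samples by proximity to x (exponential kernel;
    #    a simplified version of LIME's default kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2 * n_features))
    # 4. The weighted linear model's coefficients are the local explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, p, sample_weight=w)
    return surrogate.coef_

coefs = explain_instance(X[0], mlp)
# Show the three features that most influence this one prediction.
for i in np.argsort(np.abs(coefs))[::-1][:3]:
    print(f"{data.feature_names[i]}: {coefs[i]:+.4f}")
```

In practice the `lime` Python package (`lime.lime_tabular.LimeTabularExplainer`) wraps this procedure with discretization and feature-selection steps; the sketch above only conveys the "local linear surrogate" idea that makes such explanations readable per patient.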



Author information

Corresponding author: Ali Idri.

Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Hakkoum, H., Idri, A., Abnane, I. (2020). Artificial Neural Networks Interpretation Using LIME for Breast Cancer Diagnosis. In: Rocha, Á., Adeli, H., Reis, L., Costanzo, S., Orovic, I., Moreira, F. (eds) Trends and Innovations in Information Systems and Technologies. WorldCIST 2020. Advances in Intelligent Systems and Computing, vol 1161. Springer, Cham. https://doi.org/10.1007/978-3-030-45697-9_2
