A Generic Approach to Extend Interpretability of Deep Networks | SpringerLink
A Generic Approach to Extend Interpretability of Deep Networks

  • Conference paper
  • First Online:
Progress in Artificial Intelligence (EPIA 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13566)

Abstract

The recent advent of machine learning as a transformative technology has sparked fears about the human inability to comprehend the rationale of increasingly complex approaches. Interpretable Machine Learning (IML) emerged from such concerns, with the purpose of enabling different actors to grasp application scenarios, including trustworthiness and decision support in highly regulated sectors such as health and public services. YOLO (You Only Look Once) models, like other deep Convolutional Neural Network (CNN) approaches, have recently shown remarkable performance in several object detection tasks. However, the interpretability of these models remains an open issue. In this work we therefore extend the LIME (Local Interpretable Model-agnostic Explanations) framework to be used with YOLO models. The main contribution is a public add-on to LIME that can effectively improve YOLO interpretability. Results on complex images show the potential of the approach.
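The core idea behind applying LIME to a detector can be sketched independently of the paper's add-on: wrap the confidence of one target detection behind a classifier-like scoring function, perturb superpixels of the input, and fit a weighted linear surrogate locally. The sketch below is a minimal NumPy-only illustration of that LIME recipe, not the authors' implementation; `detector_score` is a hypothetical stand-in for one YOLO box-confidence function, and the fixed kernel width is an assumed default.

```python
import numpy as np

def lime_for_detector(image, detector_score, segments, n_samples=200, seed=0):
    """Fit a local linear surrogate explaining one detection score.

    image: H x W float array; segments: H x W int array of superpixel labels;
    detector_score: callable mapping an image to the confidence of one target
    detection (a stand-in for a single YOLO box score).
    Returns one weight per superpixel label: positive weights mark regions
    that support the detection.
    """
    rng = np.random.default_rng(seed)
    labels = np.unique(segments)
    # Binary interpretable samples: each row keeps a random subset of regions.
    z = rng.integers(0, 2, size=(n_samples, labels.size))
    z[0] = 1  # always include the unperturbed image
    scores = np.empty(n_samples)
    for i in range(n_samples):
        perturbed = image.copy()
        for j, lab in enumerate(labels):
            if z[i, j] == 0:
                perturbed[segments == lab] = 0.0  # grey out the region
        scores[i] = detector_score(perturbed)
    # Exponential kernel: samples closer to the original image count more.
    dist = 1.0 - z.mean(axis=1)
    w = np.exp(-(dist ** 2) / 0.25)
    # Weighted least squares over the binary features.
    coef, *_ = np.linalg.lstsq(z * w[:, None], scores * w, rcond=None)
    return dict(zip(labels.tolist(), coef))
```

On a toy image whose detection score depends only on its left half, the surrogate assigns a large weight to the left superpixel and a near-zero weight to the right one, which is exactly the kind of region-level evidence map the paper's YOLO add-on produces for real detections.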


Notes

  1. https://fireloc.org/?lang=en.

  2. https://github.com/AlexeyAB/darknet.

  3. https://github.com/AntMorais/yolime.

  4. https://github.com/DeepQuestAI/Fire-Smoke-Dataset.

  5. https://github.com/AlexeyAB/darknet.

  6. https://github.com/AntMorais/yolime.


Author information

Corresponding author

Correspondence to Catarina Silva.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Silva, C., Morais, A., Ribeiro, B. (2022). A Generic Approach to Extend Interpretability of Deep Networks. In: Marreiros, G., Martins, B., Paiva, A., Ribeiro, B., Sardinha, A. (eds) Progress in Artificial Intelligence. EPIA 2022. Lecture Notes in Computer Science, vol 13566. Springer, Cham. https://doi.org/10.1007/978-3-031-16474-3_40

  • DOI: https://doi.org/10.1007/978-3-031-16474-3_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16473-6

  • Online ISBN: 978-3-031-16474-3

  • eBook Packages: Computer Science (R0)
