Abstract
The recent advent of machine learning as a transformative technology has raised concerns about our inability to comprehend the rationale of increasingly complex models. Interpretable Machine Learning (IML) emerged from such concerns, with the purpose of enabling different actors to understand how models behave in their application scenarios, including trustworthiness and decision support in highly regulated sectors such as health and public services. YOLO (You Only Look Once) models, like other deep Convolutional Neural Network (CNN) approaches, have recently shown remarkable performance in several object detection tasks. However, the interpretability of these models remains an open issue. In this work we therefore extend the LIME (Local Interpretable Model-agnostic Explanations) framework to be used with YOLO models. The main contribution is a public add-on to LIME that can effectively improve YOLO interpretability. Results on complex images show the potential of the approach.
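The core of the extension can be illustrated with a short sketch: LIME's image explainer expects a classifier-style function that returns a fixed-size vector of per-class scores, while a YOLO detector returns a variable-length list of detections, so the adaptation amounts to a wrapper that maps detections to class scores. The Python snippet below is a minimal illustration of this idea under stated assumptions, not the paper's actual add-on: run_yolo is a hypothetical stand-in for any YOLO inference call, and only the publicly documented lime.lime_image API is used.

# Minimal sketch (not the authors' add-on): adapting LIME's image
# explainer to a YOLO-style detector. run_yolo is a hypothetical
# placeholder for any call returning (class_id, confidence, box)
# detections; only the public lime.lime_image API is assumed.
import numpy as np
from lime import lime_image

NUM_CLASSES = 80  # e.g. the COCO label set used by many YOLO models


def run_yolo(image):
    """Placeholder for YOLO inference on a single H x W x 3 image.

    Expected to return an iterable of (class_id, confidence, box)
    tuples; plug in the detector of your choice here.
    """
    return []  # stub so the sketch is self-contained


def detection_scores(images):
    """Classifier-style wrapper that LIME can call on perturbed images.

    For each image, the score of class c is the highest confidence
    among detections of class c (0 if the class is not detected),
    turning variable-length detector output into a fixed-size vector.
    """
    scores = np.zeros((len(images), NUM_CLASSES))
    for i, img in enumerate(images):
        for class_id, confidence, _box in run_yolo(img):
            scores[i, class_id] = max(scores[i, class_id], confidence)
    return scores


# Stand-in input image; replace with the image whose detections
# are to be explained.
image = np.random.randint(0, 255, (416, 416, 3), dtype=np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    detection_scores,   # LIME perturbs superpixels and queries this
    top_labels=1,       # explain the strongest detected class
    hide_color=0,       # perturbed superpixels are blacked out
    num_samples=1000,   # number of perturbed copies of the image
)

# Binary mask over the superpixels that most support the top detection.
label = explanation.top_labels[0]
_, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)

Taking the per-class maximum over detection confidences is only one possible mapping; averaging confidences, or restricting to detections that overlap a chosen bounding box, are natural variants when a single object rather than a whole class is to be explained.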
References
Bochkovskiy, A., Wang, C.Y., Liao, H.Y.M.: YOLOv4: optimal speed and accuracy of object detection. arXiv:2004.10934 [cs, eess] (2020)
Carvalho, D.V., Pereira, E.M., Cardoso, J.S.: Machine learning interpretability: a survey on methods and metrics. Electronics 8(8), 832 (2019). https://www.mdpi.com/2079-9292/8/8/832
Castelli, M., Vanneschi, L., Popovič, A.: Predicting burned areas of forest fires: an artificial intelligence approach. Fire Ecol. 11(1), 106–118 (2015). https://fireecology.springeropen.com/articles/10.4996/fireecology.1101106
Das, A., Rad, P.: Opportunities and challenges in explainable artificial intelligence (XAI): a survey. arXiv:2006.11371 [cs] (2020)
Gade, K., Geyik, S.C., Kenthapadi, K., Mithal, V., Taly, A.: Explainable AI in industry: practical challenges and lessons learned: implications tutorial. In: FAT 2020: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, p. 699 (2020)
Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., Kagal, L.: Explaining explanations: an overview of interpretability of machine learning. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pp. 80–89 (2018)
Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. Technical report, UC Berkeley (2014). https://arxiv.org/pdf/1311.2524.pdf
Islam, S.R., Eberle, W., Ghafoor, S.K., Ahmed, M.: Explainable artificial intelligence approaches: a survey. arXiv:2101.09429 [cs] (2021)
Jang, E., Kang, Y., Im, J., Lee, D.W., Yoon, J., Kim, S.K.: Detection and monitoring of forest fires using Himawari-8 geostationary satellite data in South Korea. Remote Sensing 11(3), 271 (2019). https://www.mdpi.com/2072-4292/11/3/271
Jiao, L., et al.: A survey of deep learning-based object detection. IEEE Access 7, 128837–128868 (2019). arXiv:1907.09408
Kinaneva, D., Hristov, G., Raychev, J., Zahariev, P.: Early forest fire detection using drones and artificial intelligence. In: 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1060–1065 (2019). ISSN 2623-8764
Liu, W., et al.: SSD: single shot multibox detector. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9905, pp. 21–37. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46448-0_2
Longo, L., Goebel, R., Lecue, F., Kieseberg, P., Holzinger, A.: Explainable artificial intelligence: concepts, applications, research challenges and visions. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds.) CD-MAKE 2020. LNCS, vol. 12279, pp. 1–16. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-57321-8_1
Madeira, A.: Intelligent system for fire detection. Master’s thesis, University of Coimbra, Coimbra, Portugal (2020)
Mateus, P., Fernandes, P.M.: Forest fires in Portugal: dynamics, causes and policies. In: Reboredo, F. (ed.) Forest Context and Policies in Portugal. WF, vol. 19, pp. 97–115. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08455-8_4
Nowicki, M.R., Cwian, K., Skrzypczynski, P.: How to improve object detection in a driver assistance system applying explainable deep learning, pp. 226–231 (2019). ISSN 2642-7214
Petsiuk, V., et al.: Black-box explanation of object detectors via saliency maps. arXiv:2006.03204 [cs] (2020)
Redmon, J., Divvala, S.K., Girshick, R.B., Farhadi, A.: You only look once: unified, real-time object detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788 (2016)
Ren, S., He, K., Girshick, R.B., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds.) NIPS, pp. 91–99 (2015)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. arXiv:1602.04938 [cs, stat] (2016)
Tjoa, E., Guan, C.: A survey on explainable artificial intelligence (XAI): towards medical XAI. IEEE Trans. Neural Netw. Learn. Syst. 1–21 (2020). arXiv:1907.07374
Wang, M., Zheng, K., Yang, Y., Wang, X.: An explainable machine learning framework for intrusion detection systems. IEEE Access 8, 73127–73141 (2020)
Zablocki, E., Ben-Younes, H., Perez, P., Cord, M.: Explainability of vision-based autonomous driving systems: review and challenges. arXiv:2101.05307 [cs] (2021)
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Silva, C., Morais, A., Ribeiro, B. (2022). A Generic Approach to Extend Interpretability of Deep Networks. In: Marreiros, G., Martins, B., Paiva, A., Ribeiro, B., Sardinha, A. (eds.) Progress in Artificial Intelligence. EPIA 2022. Lecture Notes in Computer Science, vol. 13566. Springer, Cham. https://doi.org/10.1007/978-3-031-16474-3_40
Print ISBN: 978-3-031-16473-6
Online ISBN: 978-3-031-16474-3