Abstract
Most machine learning and deep learning models lack a built-in way of explaining and interpreting their results. Because deep learning models are complex and new state-of-the-art architectures appear constantly, model evaluation is typically reduced to accuracy scores, which makes these models black boxes. This leads to a lack of confidence in deploying a model and a lack of trust in its predictions. Several libraries, such as SHAP and LIME, help explain models trained on structured data. This chapter shows how to explain the outputs of computer vision models.
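The idea carries over from tabular data to images: a perturbation-based explainer such as LIME segments an image into superpixels, hides subsets of them, and measures how the model's prediction changes. What follows is a minimal sketch of that approach for a PyTorch classifier, assuming LIME's lime_image module, a pretrained torchvision ResNet, and a 224x224 RGB uint8 NumPy array named image; the model choice and the image are illustrative placeholders, not taken from the chapter.

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from lime import lime_image

# Placeholder model: any PyTorch image classifier would work here.
model = models.resnet18(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_fn(images):
    # LIME passes a batch of perturbed images as a NumPy array (N, H, W, C)
    # and expects class probabilities of shape (N, num_classes) back.
    batch = torch.stack([preprocess(img) for img in images])
    with torch.no_grad():
        probs = F.softmax(model(batch), dim=1)
    return probs.numpy()

# `image` is assumed to be a 224x224x3 uint8 NumPy array (placeholder).
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, hide_color=0, num_samples=1000)

# Recover the superpixels that most support the top predicted class;
# `mask` marks the regions LIME found most influential.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)

Because the explanation is built purely from input perturbations and the model's output probabilities, the same sketch applies to any black-box image classifier, regardless of its architecture.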