Abstract
Interpretability is often an essential requirement in medical imaging. Advanced deep learning methods are needed to meet this demand for explainability alongside high performance. In this work, we investigate whether additional information available during the training process can be used to create an understandable and powerful model. We propose an innovative solution called Proto-Caps that leverages the benefits of capsule networks, prototype learning, and the use of privileged information [1]. This hierarchical architecture establishes a basis for inherent interpretability. The capsule layers allow human-defined visual attributes to be mapped onto encapsulated representations of the high-level features. Furthermore, an active prototype learning algorithm adds further interpretability. As a result, Proto-Caps provides case-based reasoning with attribute-specific prototypes. Applied to the LIDC-IDRI dataset [2], Proto-Caps predicts the malignancy of lung nodules and also provides prototypical samples that are similar with regard to the nodules' spiculation, calcification, and six more visual features. Besides the additional interpretability, the proposed solution shows above state-of-the-art prediction performance. Compared to the explainable baseline model, our method achieves more than 6 % higher accuracy in predicting both malignancy (93.0 %) and the mean characteristic features of lung nodules. Relatively good results can also be achieved when using only 1 % of the attribute labels during training. This result motivates further research, as it shows that Proto-Caps requires only a few additional annotations of human-defined attributes to yield an interpretable decision-making process. The code is publicly available at https://github.com/XRad-Ulm/Proto-Caps.
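To make the described interaction of attribute capsules and prototypes more concrete, the following is a minimal PyTorch sketch. It is an illustration only, not the authors' released implementation (see the repository linked above): the class and member names (ProtoCapsSketch, attribute_heads, prototypes), the stand-in convolutional encoder, and the cosine-similarity prototype matching are assumptions chosen for brevity.

```python
# Minimal sketch of the Proto-Caps idea: per-attribute capsule vectors are
# supervised with privileged attribute labels, compared against learnable
# prototypes (case-based explanations), and pooled for malignancy prediction.
# Hypothetical names and a simplified encoder; not the released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProtoCapsSketch(nn.Module):
    def __init__(self, n_attributes=8, capsule_dim=16, n_prototypes=5, n_classes=2):
        super().__init__()
        # Stand-in encoder; the real model uses capsule layers with routing.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_attributes * capsule_dim),
        )
        self.n_attributes = n_attributes
        self.capsule_dim = capsule_dim
        # One prototype bank per attribute, living in the capsule space.
        self.prototypes = nn.Parameter(torch.randn(n_attributes, n_prototypes, capsule_dim))
        # Per-attribute heads, supervised by the human-defined attribute labels.
        self.attribute_heads = nn.ModuleList([nn.Linear(capsule_dim, 1) for _ in range(n_attributes)])
        # Malignancy is predicted from the concatenated capsule vectors.
        self.malignancy_head = nn.Linear(n_attributes * capsule_dim, n_classes)

    def forward(self, x):
        caps = self.encoder(x).view(-1, self.n_attributes, self.capsule_dim)
        # One score per attribute (e.g. spiculation, calcification, ...).
        attr_scores = torch.cat([head(caps[:, i]) for i, head in enumerate(self.attribute_heads)], dim=1)
        # Cosine similarity of each capsule vector to its attribute's prototypes;
        # the most similar prototype indexes the explanatory training case.
        sims = F.cosine_similarity(caps.unsqueeze(2), self.prototypes.unsqueeze(0), dim=-1)
        closest_prototype = sims.argmax(dim=-1)  # shape: (batch, n_attributes)
        malignancy_logits = self.malignancy_head(caps.flatten(1))
        return malignancy_logits, attr_scores, closest_prototype


model = ProtoCapsSketch()
logits, attrs, proto_idx = model(torch.randn(4, 1, 32, 32))  # toy 2D patches
```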
References
Gallée L, Beer M, Götz M. Interpretable medical image classification using prototype learning and privileged information. International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. 2023:435–45.
Armato SG III, McLennan G, Bidaut L, McNitt-Gray MF, Meyer CR, Reeves AP, et al. Data from LIDC-IDRI. The Cancer Imaging Archive. 2015.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature
About this paper
Cite this paper
Gallée, L., Beer, M., Götz, M. (2024). Abstract: Interpretable Medical Image Classification Using Prototype Learning and Privileged Information. In: Maier, A., Deserno, T.M., Handels, H., Maier-Hein, K., Palm, C., Tolxdorff, T. (eds) Bildverarbeitung für die Medizin 2024. BVM 2024. Informatik aktuell. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-44037-4_10
DOI: https://doi.org/10.1007/978-3-658-44037-4_10
Publisher Name: Springer Vieweg, Wiesbaden
Print ISBN: 978-3-658-44036-7
Online ISBN: 978-3-658-44037-4