{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,12,30]],"date-time":"2024-12-30T19:12:18Z","timestamp":1735585938907},"publisher-location":"Cham","reference-count":47,"publisher":"Springer International Publishing","isbn-type":[{"type":"print","value":"9783031263866"},{"type":"electronic","value":"9783031263873"}],"license":[{"start":{"date-parts":[[2023,1,1]],"date-time":"2023-01-01T00:00:00Z","timestamp":1672531200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,3,17]],"date-time":"2023-03-17T00:00:00Z","timestamp":1679011200000},"content-version":"vor","delay-in-days":75,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023]]},"abstract":"Abstract<\/jats:title>The rapid development of histopathology scanners allowed the digital transformation of pathology. Current devices fastly and accurately digitize histology slides on many magnifications, resulting in whole slide images (WSI). However, direct application of supervised deep learning methods to WSI highest magnification is impossible due to hardware limitations. That is why WSI classification is usually analyzed using standard Multiple Instance Learning (MIL) approaches, that do not explain their predictions, which is crucial for medical applications. In this work, we fill this gap by introducing ProtoMIL, a novel self-explainable MIL method inspired by the case-based reasoning process that operates on visual prototypes. 
By incorporating prototypical features into the object description, ProtoMIL achieves an unprecedented combination of model accuracy and fine-grained interpretability, as confirmed by experiments conducted on five recognized whole-slide image datasets.<\/jats:p>","DOI":"10.1007\/978-3-031-26387-3_26","type":"book-chapter","created":{"date-parts":[[2023,3,16]],"date-time":"2023-03-16T15:03:10Z","timestamp":1678978990000},"page":"421-436","update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["ProtoMIL: Multiple Instance Learning with\u00a0Prototypical Parts for\u00a0Whole-Slide Image Classification"],"prefix":"10.1007","author":[{"ORCID":"http:\/\/orcid.org\/0000-0002-8543-5200","authenticated-orcid":false,"given":"Dawid","family":"Rymarczyk","sequence":"first","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0002-3406-6732","authenticated-orcid":false,"given":"Adam","family":"Pardyl","sequence":"additional","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0001-6904-1351","authenticated-orcid":false,"given":"Jaros\u0142aw","family":"Kraus","sequence":"additional","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0001-7571-8357","authenticated-orcid":false,"given":"Aneta","family":"Kaczy\u0144ska","sequence":"additional","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0002-1215-4379","authenticated-orcid":false,"given":"Marek","family":"Skomorowski","sequence":"additional","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0002-3063-3621","authenticated-orcid":false,"given":"Bartosz","family":"Zieli\u0144ski","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,3,17]]},"reference":[{"key":"26_CR1","unstructured":"Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., Kim, B.: Sanity checks for saliency maps. In: Advances in Neural Information Processing Systems, pp. 
9505\u20139515 (2018)"},{"key":"26_CR2","unstructured":"Akin, O., et al.: Radiology data from the cancer genome atlas kidney renal clear cell carcinoma [tcga-kirc] collection. Cancer Imaging Arch. (2016)"},{"key":"26_CR3","unstructured":"Andrews, S., Tsochantaridis, I., Hofmann, T.: Support vector machines for multiple-instance learning. In: Advances in neural information processing systems. vol. 2, p. 7 (2002)"},{"key":"26_CR4","unstructured":"Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. arXiv preprint arXiv:1909.03012 (2019)"},{"issue":"1","key":"26_CR5","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/sdata.2018.202","volume":"5","author":"S Bakr","year":"2018","unstructured":"Bakr, S., et al.: A radiogenomic dataset of non-small cell lung cancer. Sci. Data 5(1), 1\u20139 (2018)","journal-title":"Sci. Data"},{"key":"26_CR6","doi-asserted-by":"crossref","unstructured":"Barnett, A.J., et al.: Iaia-bl: a case-based interpretable deep learning model for classification of mass lesions in digital mammography. arXiv preprint arXiv:2103.12308 (2021)","DOI":"10.1038\/s42256-021-00423-x"},{"key":"26_CR7","unstructured":"Borowa, A., Rymarczyk, D., Ocho\u0144ska, D., Brzychczy-W\u0142och, M., Zieli\u0144ski, B.: Classifying bacteria clones using attention-based deep multiple instance learning interpreted by persistence homology. In: International Joint Conference on Neural Networks (2021)"},{"key":"26_CR8","unstructured":"Chen, C., Li, O., Tao, C., Barnett, A.J., Su, J., Rudin, C.: This looks like that: deep learning for interpretable image recognition. arXiv preprint arXiv:1806.10574 (2018)"},{"issue":"12","key":"26_CR9","doi-asserted-by":"publisher","first-page":"772","DOI":"10.1038\/s42256-020-00265-z","volume":"2","author":"Z Chen","year":"2020","unstructured":"Chen, Z., Bei, Y., Rudin, C.: Concept whitening for interpretable image recognition. Nat. Mach. Intell. 
2(12), 772\u2013782 (2020)","journal-title":"Nat. Mach. Intell."},{"key":"26_CR10","unstructured":"Ciga, O., Martel, A.L., Xu, T.: Self supervised contrastive learning for digital histopathology. arXiv preprint arXiv:2011.13971 (2020)"},{"issue":"3","key":"26_CR11","doi-asserted-by":"publisher","first-page":"231","DOI":"10.5566\/ias.1155","volume":"33","author":"E Decenci\u00e8re","year":"2014","unstructured":"Decenci\u00e8re, E., et al.: Feedback on a publicly distributed image database: the Messidor database. Image Anal. Stereol. 33(3), 231\u2013234 (2014)","journal-title":"Image Anal. Stereol."},{"issue":"1\u20132","key":"26_CR12","doi-asserted-by":"publisher","first-page":"31","DOI":"10.1016\/S0004-3702(96)00034-3","volume":"89","author":"TG Dietterich","year":"1997","unstructured":"Dietterich, T.G., Lathrop, R.H., Lozano-P\u00e9rez, T.: Solving the multiple instance problem with axis-parallel rectangles. Artif. Intell. 89(1\u20132), 31\u201371 (1997)","journal-title":"Artif. Intell."},{"key":"26_CR13","doi-asserted-by":"publisher","unstructured":"Ehteshami Bejnordi, B., et al.: Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318(22), 2199\u20132210 (2017). https:\/\/doi.org\/10.1001\/jama.2017.14585","DOI":"10.1001\/jama.2017.14585"},{"key":"26_CR14","doi-asserted-by":"crossref","unstructured":"Feng, J., Zhou, Z.H.: Deep miml network. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31 (2017)","DOI":"10.1609\/aaai.v31i1.10890"},{"issue":"1","key":"26_CR15","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1017\/S026988890999035X","volume":"25","author":"J Foulds","year":"2010","unstructured":"Foulds, J., Frank, E.: A review of multi-instance learning assumptions. Knowl. Eng. Rev. 25(1), 1\u201325 (2010)","journal-title":"Knowl. Eng. 
Rev."},{"key":"26_CR16","unstructured":"Gelasca, E.D., Byun, J., Obara, B., Manjunath, B.: Evaluation and benchmark for biological image segmentation. In: 2008 15th IEEE International Conference on Image Processing, pp. 1816\u20131819. IEEE (2008)"},{"key":"26_CR17","unstructured":"Ghorbani, A., Wexler, J., Zou, J.Y., Kim, B.: Towards automatic concept-based explanations. In: Advances in Neural Information Processing Systems, pp. 9277\u20139286 (2019)"},{"key":"26_CR18","doi-asserted-by":"crossref","unstructured":"Hase, P., Chen, C., Li, O., Rudin, C.: Interpretable image recognition with hierarchical prototypes. In: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, vol. 7, pp. 32\u201340 (2019)","DOI":"10.1609\/hcomp.v7i1.5265"},{"key":"26_CR19","unstructured":"Hoffmann, A., Fanconi, C., Rade, R., Kohler, J.: This looks like that... does it? Shortcomings of latent space prototype interpretability in deep networks. arXiv preprint arXiv:2105.02968 (2021)"},{"key":"26_CR20","unstructured":"Ilse, M., Tomczak, J., Welling, M.: Attention-based deep multiple instance learning. In: International Conference on Machine Learning, pp. 2127\u20132136. PMLR (2018)"},{"key":"26_CR21","unstructured":"Kim, B., et al.: Interpretability beyond feature attribution: quantitative testing with concept activation vectors (TCAV). In: International Conference on Machine Learning, pp. 2668\u20132677. PMLR (2018)"},{"key":"26_CR22","doi-asserted-by":"crossref","unstructured":"Kim, E., Kim, S., Seo, M., Yoon, S.: Xprotonet: diagnosis in chest radiography with global and local explanations. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 15719\u201315728 (2021)","DOI":"10.1109\/CVPR46437.2021.01546"},{"key":"26_CR23","unstructured":"Kolodner, J.: Case-Based Reasoning. 
Morgan Kaufmann, Burlington (2014)"},{"key":"26_CR24","doi-asserted-by":"crossref","unstructured":"Li, B., Li, Y., Eliceiri, K.W.: Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 14318\u201314328 (2021)","DOI":"10.1109\/CVPR46437.2021.01409"},{"key":"26_CR25","doi-asserted-by":"crossref","unstructured":"Li, G., Li, C., Wu, G., Ji, D., Zhang, H.: Multi-view attention-guided multiple instance detection network for interpretable breast cancer histopathological image diagnosis. IEEE Access (2021)","DOI":"10.1109\/ACCESS.2021.3084360"},{"key":"26_CR26","doi-asserted-by":"crossref","unstructured":"Li, O., Liu, H., Chen, C., Rudin, C.: Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)","DOI":"10.1609\/aaai.v32i1.11771"},{"issue":"6","key":"26_CR27","doi-asserted-by":"publisher","first-page":"555","DOI":"10.1038\/s41551-020-00682-w","volume":"5","author":"MY Lu","year":"2021","unstructured":"Lu, M.Y., Williamson, D.F., Chen, T.Y., Chen, R.J., Barbieri, M., Mahmood, F.: Data-efficient and weakly supervised computational pathology on whole-slide images. Nat. Biomed. Eng. 5(6), 555\u2013570 (2021)","journal-title":"Nat. Biomed. Eng."},{"key":"26_CR28","doi-asserted-by":"crossref","unstructured":"Nauta, M., van Bree, R., Seifert, C.: Neural prototype trees for interpretable fine-grained image recognition. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 14933\u201314943 (2021)","DOI":"10.1109\/CVPR46437.2021.01469"},{"key":"26_CR29","doi-asserted-by":"publisher","unstructured":"Nauta, M., Jutte, A., Provoost, J., Seifert, C.: This looks like that, because... explaining prototypes for interpretable image recognition. 
In: Kamp, M., et al. (eds.) ECML PKDD 2021. CCIS, vol. 1524, pp. 441\u2013456. Springer, Cham (2021). https:\/\/doi.org\/10.1007\/978-3-030-93736-2_34","DOI":"10.1007\/978-3-030-93736-2_34"},{"issue":"6","key":"26_CR30","doi-asserted-by":"publisher","first-page":"1228","DOI":"10.1016\/j.media.2012.06.003","volume":"16","author":"G Quellec","year":"2012","unstructured":"Quellec, G., et al.: A multiple-instance learning framework for diabetic retinopathy screening. Med. Image Anal. 16(6), 1228\u20131240 (2012)","journal-title":"Med. Image Anal."},{"key":"26_CR31","first-page":"451","volume":"9","author":"P Rani","year":"2016","unstructured":"Rani, P., Elagiri Ramalingam, R., Rajamani, K.T., Kandemir, M., Singh, D.: Multiple instance learning: robust validation on retinopathy of prematurity. Int. J. Ctrl. Theory Appl. 9, 451\u2013459 (2016)","journal-title":"Int. J. Ctrl. Theory Appl."},{"key":"26_CR32","doi-asserted-by":"crossref","unstructured":"Rebuffi, S.A., Fong, R., Ji, X., Vedaldi, A.: There and back again: revisiting backpropagation saliency methods. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 8839\u20138848 (2020)","DOI":"10.1109\/CVPR42600.2020.00886"},{"issue":"5","key":"26_CR33","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","volume":"1","author":"C Rudin","year":"2019","unstructured":"Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206\u2013215 (2019)","journal-title":"Nat. Mach. Intell."},{"key":"26_CR34","doi-asserted-by":"crossref","unstructured":"Rymarczyk, D., Borowa, A., Tabor, J., Zielinski, B.: Kernel self-attention for weakly-supervised image classification using deep multiple instance learning. In: Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, pp. 
1721\u20131730 (2021)","DOI":"10.1109\/WACV48630.2021.00176"},{"key":"26_CR35","doi-asserted-by":"publisher","unstructured":"Rymarczyk, D., Struski, \u0141., Tabor, J., Zieli\u0144ski, B.: Protopshare: prototype sharing for interpretable image classification and similarity discovery. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2021) (2021). https:\/\/doi.org\/10.1145\/3447548.3467245","DOI":"10.1145\/3447548.3467245"},{"key":"26_CR36","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 618\u2013626 (2017)","DOI":"10.1109\/ICCV.2017.74"},{"key":"26_CR37","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., et al.: Taking a hint: leveraging explanations to make vision and language models more grounded. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 2591\u20132600 (2019)","DOI":"10.1109\/ICCV.2019.00268"},{"key":"26_CR38","unstructured":"Shao, Z., et al.: Transmil: transformer based correlated multiple instance learning for whole slide image classification. arXiv preprint arXiv:2106.00908 (2021)"},{"key":"26_CR39","doi-asserted-by":"crossref","unstructured":"Shi, X., Xing, F., Xie, Y., Zhang, Z., Cui, L., Yang, L.: Loss-based attention for deep multiple instance learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 5742\u20135749 (2020)","DOI":"10.1609\/aaai.v34i04.6030"},{"key":"26_CR40","unstructured":"Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: visualising image classification models and saliency maps. 
arXiv:1312.6034 (2013)"},{"issue":"5","key":"26_CR41","doi-asserted-by":"publisher","first-page":"1196","DOI":"10.1109\/TMI.2016.2525803","volume":"35","author":"K Sirinukunwattana","year":"2016","unstructured":"Sirinukunwattana, K., Raza, S.E.A., Tsang, Y.W., Snead, D.R., Cree, I.A., Rajpoot, N.M.: Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 35(5), 1196\u20131206 (2016)","journal-title":"IEEE Trans. Med. Imaging"},{"key":"26_CR42","doi-asserted-by":"crossref","unstructured":"Straehle, C., Kandemir, M., Koethe, U., Hamprecht, F.A.: Multiple instance learning with response-optimized random forests. In: 2014 22nd International Conference on Pattern Recognition, pp. 3768\u20133773. IEEE (2014)","DOI":"10.1109\/ICPR.2014.647"},{"key":"26_CR43","unstructured":"Tu, M., Huang, J., He, X., Zhou, B.: Multiple instance learning with graph neural networks. arXiv preprint arXiv:1906.04881 (2019)"},{"key":"26_CR44","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1016\/j.patcog.2017.08.026","volume":"74","author":"X Wang","year":"2018","unstructured":"Wang, X., Yan, Y., Tang, P., Bai, X., Liu, W.: Revisiting multiple instance neural networks. Pattern Recogn. 74, 15\u201324 (2018)","journal-title":"Pattern Recogn."},{"key":"26_CR45","unstructured":"Yan, Y., Wang, X., Guo, X., Fang, J., Liu, W., Huang, J.: Deep multi-instance learning with dynamic pooling. In: Asian Conference on Machine Learning, pp. 662\u2013677. PMLR (2018)"},{"key":"26_CR46","unstructured":"Yeh, C.K., Kim, B., Arik, S.O., Li, C.L., Pfister, T., Ravikumar, P.: On completeness-aware concept-based explanations in deep neural networks. In: Advances in Neural Information Processing Systems (2019)"},{"key":"26_CR47","doi-asserted-by":"crossref","unstructured":"Zhao, Z., et al.: Drug activity prediction using multiple-instance learning via joint instance and feature selection. BMC Bioinform. 14, S16 (2013). 
Springer","DOI":"10.1186\/1471-2105-14-S14-S16"}],"container-title":["Lecture Notes in Computer Science","Machine Learning and Knowledge Discovery in Databases"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-26387-3_26","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,3,16]],"date-time":"2023-03-16T15:15:08Z","timestamp":1678979708000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-26387-3_26"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023]]},"ISBN":["9783031263866","9783031263873"],"references-count":47,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-26387-3_26","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"type":"print","value":"0302-9743"},{"type":"electronic","value":"1611-3349"}],"subject":[],"published":{"date-parts":[[2023]]},"assertion":[{"value":"17 March 2023","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"ECML PKDD","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Joint European Conference on Machine Learning and Knowledge Discovery in Databases","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Grenoble","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"France","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2022","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"19 September 
2022","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"23 September 2022","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"22","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"ecml2022","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/2022.ecmlpkdd.org\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Double-blind","order":1,"name":"type","label":"Type","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"CMT","order":2,"name":"conference_management_system","label":"Conference Management System","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"1060","order":3,"name":"number_of_submissions_sent_for_review","label":"Number of Submissions Sent for Review","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"236","order":4,"name":"number_of_full_papers_accepted","label":"Number of Full Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"0","order":5,"name":"number_of_short_papers_accepted","label":"Number of Short Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"22% - The value is computed by the equation \"Number of Full Papers Accepted \/ Number of 
Submissions Sent for Review * 100\" and then rounded to a whole number.","order":6,"name":"acceptance_rate_of_full_papers","label":"Acceptance Rate of Full Papers","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"3-4","order":7,"name":"average_number_of_reviews_per_paper","label":"Average Number of Reviews per Paper","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"3-4","order":8,"name":"average_number_of_papers_per_reviewer","label":"Average Number of Papers per Reviewer","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"No","order":9,"name":"external_reviewers_involved","label":"External Reviewers Involved","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"17 demo track papers have been accepted from 28 submissions","order":10,"name":"additional_info_on_review_process","label":"Additional Info on Review Process","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}}]}}