Enhancing deep learning based classifiers with inpainting anatomical side markers (L/R markers) for multi-center trials
- PMID: 35462346
- DOI: 10.1016/j.cmpb.2022.106705
Abstract
Background and objective: The protocol for placing anatomical side markers (L/R markers) in chest radiographs varies from one hospital or department to another. However, the markers carry strong signals that a deep learning-based classifier can exploit to predict diseases. We aimed to enhance the performance of deep learning-based classifiers on multi-center datasets by inpainting the L/R markers.
Methods: The L/R marker was detected using the EfficientDet detection network, and only the detected regions were inpainted using a generative adversarial network (GAN). To analyze the effect of the inpainting in detail, deep learning-based classifiers were trained using the original images, the marker-inpainted images, and the original images clipped to the min-max range of the marker-inpainted images. Binary classification, multi-class classification, and multi-task learning (segmentation and classification) models were developed and evaluated. Furthermore, the performance of each network on internal and external validation datasets was compared using DeLong's test for two correlated receiver operating characteristic (ROC) curves in binary classification and the Stuart-Maxwell test for marginal homogeneity in multi-class classification and multi-task learning. In addition, the qualitative results of the activation maps were evaluated using gradient-weighted class activation mapping (Grad-CAM).
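A minimal sketch of the preprocessing pipeline described above, assuming the pre-trained EfficientDet marker detector and GAN inpainter are wrapped behind hypothetical `detect_marker` and `inpaint_region` callables (the authors' actual implementations live in the repositories linked in the Conclusions); the third training variant clips the original image to the intensity range of its inpainted counterpart:

```python
import numpy as np

def preprocess_variants(image, detect_marker, inpaint_region):
    """Produce the three training inputs compared in the study.

    image:          2-D numpy array (chest radiograph).
    detect_marker:  callable returning an (x0, y0, x1, y1) L/R-marker box
                    or None (e.g. an EfficientDet wrapper -- assumed interface).
    inpaint_region: callable(image, box) -> image with the box inpainted
                    (e.g. a GAN inpainter -- assumed interface).
    """
    original = image.astype(np.float32)

    box = detect_marker(original)
    if box is None:                        # no marker found: nothing to inpaint
        inpainted = original.copy()
    else:
        inpainted = inpaint_region(original, box)

    # Third variant: the original image clipped to the min/max of the
    # inpainted image, so the marker's extreme intensities no longer
    # dominate the intensity scaling.
    lo, hi = inpainted.min(), inpainted.max()
    clipped = np.clip(original, lo, hi)

    return original, inpainted, clipped
```

The three returned arrays correspond to the three training configurations compared in the study: original, marker-inpainted, and min-max clipped images.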
Results: Marker-inpainting preprocessing improved classification performance. In binary classification on the internal validation set, the areas under the curve (AUCs) and accuracies were 0.950 and 0.900 for the model trained on the min-max clipped images and 0.911 and 0.850 for the model trained on the original images, respectively (P-value = 0.006). On the external validation set, the AUCs and accuracies were 0.858 and 0.677 for the model trained on the inpainted images and 0.723 and 0.568 for the model trained on the original images, respectively (P-value < 0.001). In addition, the models trained on the marker-inpainted images showed the best performance in multi-class classification and multi-task learning. Furthermore, the activation maps obtained with Grad-CAM improved with the proposed method. The 5-fold validation results also showed a consistent improvement trend across the preprocessing strategies.
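The P-values above come from DeLong's test for two correlated ROC curves. The sketch below is a generic NumPy/SciPy implementation of that test (not the authors' code), assuming both models are scored on the same validation samples with binary labels:

```python
import numpy as np
from scipy.stats import norm

def _placement_values(pos_scores, neg_scores):
    """Per-positive (V10) and per-negative (V01) components of the AUC."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    psi = (diff > 0).astype(float) + 0.5 * (diff == 0)
    return psi.mean(axis=1), psi.mean(axis=0)   # V10 (len m), V01 (len n)

def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test comparing the correlated AUCs of two models
    evaluated on the same samples (y_true contains 0/1 labels)."""
    y_true = np.asarray(y_true).astype(bool)
    pos_a, neg_a = np.asarray(scores_a)[y_true], np.asarray(scores_a)[~y_true]
    pos_b, neg_b = np.asarray(scores_b)[y_true], np.asarray(scores_b)[~y_true]

    v10_a, v01_a = _placement_values(pos_a, neg_a)
    v10_b, v01_b = _placement_values(pos_b, neg_b)
    auc_a, auc_b = v10_a.mean(), v10_b.mean()

    m, n = y_true.sum(), (~y_true).sum()
    s10 = np.cov(np.vstack([v10_a, v10_b]))     # 2x2 covariance over positives
    s01 = np.cov(np.vstack([v01_a, v01_b]))     # 2x2 covariance over negatives
    var = (s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / m \
        + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n

    z = (auc_a - auc_b) / np.sqrt(var)
    p_value = 2 * norm.sf(abs(z))
    return auc_a, auc_b, p_value
```

Passing the two models' predicted probabilities together with the shared ground-truth labels returns both AUCs and the two-sided P-value for their difference.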
Conclusions: Inpainting the L/R marker significantly enhanced the classifier's performance and robustness in both internal and external validation, which could be useful in developing more robust and accurate deep learning-based classifiers for multi-center trials. The code for detection is available at https://github.com/mi2rl/MI2RLNet, and the code for inpainting at https://github.com/mi2rl/L-R-marker-inpainting.
Keywords: Anatomical side marker (L/R marker); Chest radiograph; Deep learning; Generative adversarial network (GAN); Gradient-class activation map (Grad-CAM); Inpainting.
Copyright © 2022. Published by Elsevier B.V.
Conflict of interest statement
Declaration of Competing Interest: The authors report no conflicts of interest.