Journal of Digital Imaging
. 2022 Mar 18;35(4):1061–1068. doi: 10.1007/s10278-022-00608-9

Utilizing Synthetic Nodules for Improving Nodule Detection in Chest Radiographs

Minki Chung 1,#, Seo Taek Kong 1,#, Beomhee Park 1, Younjoon Chung 1, Kyu-Hwan Jung 1, Joon Beom Seo 2
PMCID: PMC9485384  PMID: 35304676

Abstract

Algorithms that automatically identify nodular patterns in chest X-ray (CXR) images could benefit radiologists by reducing reading time and improving accuracy. A promising approach is to use deep learning, where a deep neural network (DNN) is trained to classify and localize nodular patterns (including mass) in CXR images. Such algorithms, however, require enough abnormal cases to learn representations of nodular patterns arising in practical clinical settings. Obtaining large amounts of high-quality data is impractical in medical imaging, where (1) acquiring labeled images is extremely expensive, (2) annotations are subject to inaccuracies due to the inherent difficulty in interpreting images, and (3) normal cases occur far more frequently than abnormal cases. In this work, we devise a framework to generate realistic nodules and demonstrate how they can be used to train a DNN to identify and localize nodular patterns in CXR images. While most previous research applying generative models to medical imaging is limited to generating visually plausible abnormalities and using these patterns for augmentation, we go a step further and show how the training algorithm can be adjusted to maximally benefit from synthetic abnormal patterns. A high-precision detection model was first developed and tested on internal and external datasets, and the proposed method was shown to enhance the model’s recall while retaining the low level of false positives.

Keywords: Generative adversarial networks, Online data augmentation, Chest radiographs, Computer-aided detection

Introduction

Chest X-rays (CXRs) are widely used to screen and diagnose pulmonary abnormalities. Among the various abnormalities that can be identified from CXRs, nodules appear as increased attenuation (opacification) patterns and are difficult to interpret, causing high reader variation [1]. With the success of deep learning (DL), interest has surged in developing algorithms to detect nodular patterns from CXRs [2, 3]. However, annotated radiographs are hard to collect: only expert radiologists can label the data, and deep neural networks (DNNs) that require large amounts of labeled data are either expensive to develop or, having been trained on a small training set, perform below the level of human experts.

Strong augmentation schemes applied in large quantities translate directly to better performance as long as the underlying classes are not perturbed [4, 5]. To preserve objects present in an image, most augmentations transform or distort the background and control the level of noise so that contents are unaffected. In classifying natural images, more sophisticated techniques involving generative algorithms [6] have been devised to augment the training set with synthetic samples [7]. When certain classes are scarce, however, these techniques do not suffice, because of either a lack of diversity in the particular object class or class imbalance. Both issues are prevalent in medical imaging, but additional domain-specific knowledge enables the use of targeted augmentation.

In this work, we build upon the above motivations and devise a training scheme to augment the dataset to better localize nodular patterns in CXRs. By incorporating the knowledge that nodule lesions manifest as increased opacity patterns with sizes less than 3 cm and are positioned in the interior of the lung parenchyma, we develop a synthetic nodule generation algorithm to augment the training set online. While this work is not the first to apply generative algorithms to medical imaging [8–15], prior works focus on the visual plausibility of abnormalities and have not gone the extra step to investigate how training schemes can be modified accordingly. Our augmentation algorithm is developed without sacrificing the visual plausibility of synthetically generated patterns (see Fig. 1), and we systematically analyze how synthetic nodules can be used to maximize detection performance.

Fig. 1. Examples of synthetically generated nodular patterns: (top) normal image templates, (middle) masks extracted from real nodular patterns, and (bottom) synthetically generated nodular cases

We first developed a standard nodule detection model that achieves high recall at false positives per image (FPPI) comparable to general radiologists when tested on the Japanese Society of Radiological Technology (JSRT) [16] dataset. Other learning-based algorithms yield 25× more FPPI when tested on the same dataset. We then show that the augmentation scheme consistently enhances the performance of a detection network across different dataset sizes, as well as levels of class imbalance within each batch. The latter observation sheds light on how to perform batch sampling for datasets with high class imbalance. The augmentation scheme developed in this work is then shown to improve recall while maintaining the same level of false positives.

Materials and Method

Chest X-ray Images

The institutional review board for human investigation at the anonymized hospital approved the study protocol, removed all patient identifiers, and waived informed consent requirements because of the retrospective design of this study. Posteroanterior chest X-rays with identified pulmonary nodules and without lesions were retrospectively collected between January 2010 and November 2016 at anonymized hospitals, for a total of 1958 nodular and 16,531 non-nodular images. The dataset was split to obtain an internal validation set with 175 nodular and 2065 non-nodular images, without patient overlap with the training set. One hundred fifty-four nodular and 93 normal high-resolution, digitized film CXR images in the Japanese Society of Radiological Technology (JSRT) dataset [16] were used for external validation. Images in this dataset were collected across 13 medical centers in Japan and one institution in the USA, and every abnormal image contains one nodule object. All images were resized to 1024×1024 using bi-linear interpolation unless stated otherwise, and normalized after applying windowing to remove irrelevant outlier pixels.
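
A minimal sketch of this preprocessing step, assuming percentile-based window bounds (the text specifies only that outlier pixels were removed); `preprocess_cxr` is an illustrative helper, not code from the paper:

```python
import cv2  # OpenCV, used here for bi-linear resizing
import numpy as np

def preprocess_cxr(image: np.ndarray, size: int = 1024,
                   lo_pct: float = 1.0, hi_pct: float = 99.0) -> np.ndarray:
    """Resize a raw CXR to size x size, window out outlier pixels, normalize.

    The percentile bounds (lo_pct, hi_pct) are assumptions for illustration.
    """
    image = cv2.resize(image.astype(np.float32), (size, size),
                       interpolation=cv2.INTER_LINEAR)
    lo, hi = np.percentile(image, [lo_pct, hi_pct])
    image = np.clip(image, lo, hi)              # windowing: remove outliers
    return (image - lo) / max(hi - lo, 1e-8)    # normalize to [0, 1]
```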

Nodule Synthesizer

Overview

Nodular patterns can be diverse, and the goal is to present a real-time augmentation strategy to generate diverse yet realistic patterns across regions where nodules occur. This section presents our development of a nodule synthesis model that can produce synthetic images in real time, and describes how the network is used to augment the training set online while training, i.e., without inferring and saving synthetic images prior to training.

A lung segmentation model $S$ was first trained to segment the lung parenchyma in CXR images using a linear combination of cross-entropy and Dice losses. Its predictions $s_N = S(I_N)$ on normal images were eroded, $s_N \mapsto \tilde{s}_N$, to serve as background templates on which the generator's outputs lie. We denote a crop centered at a point in $\{p : \tilde{s}_N(p) = 1\}$ as $x$, and sample a mask $M \in \Psi$ from the set $\Psi$ of all nodule masks corresponding to abnormal images in the training set. The crop is masked, $x \mapsto x_M$, and then processed through the nodule generator to obtain a synthesized patch $G(x_M)$ before being reverted into a synthesized image. The overall procedure of generating nodular patterns from normal images and real nodular masks is pictorially summarized in Fig. 2.
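
A simplified sketch of this template-preparation step; the crop size, erosion depth, and the `make_masked_crop` interface are illustrative assumptions, and boundary handling (zero-padding near image edges) is omitted:

```python
import numpy as np
from scipy import ndimage

def make_masked_crop(normal_img, lung_mask, nodule_mask,
                     crop=256, erode_iters=8):
    """Place a real nodule mask M inside the eroded lung field of a
    normal image and return (x, M, x_M)."""
    # Erode the predicted lung segment so sampled centers lie inside the lung.
    eroded = ndimage.binary_erosion(lung_mask, iterations=erode_iters)
    ys, xs = np.nonzero(eroded)                 # candidate centers {p : s~_N(p) = 1}
    i = np.random.randint(len(ys))
    cy, cx = int(ys[i]), int(xs[i])
    h = crop // 2
    x = normal_img[cy - h:cy + h, cx - h:cx + h].copy()  # crop centered at p
    M = np.zeros_like(x)
    mh, mw = nodule_mask.shape                  # assumes the mask fits in the crop
    top, left = h - mh // 2, h - mw // 2
    M[top:top + mh, left:left + mw] = nodule_mask
    x_M = x * (1.0 - M)                         # masked crop fed to the generator
    return x, M, x_M
```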

Fig. 2. Nodular pattern generation schematic diagram. After eroding the lung templates extracted from a lung segmentation model $S$, the nodule synthesizer $G$ produces artificial nodular patterns on a masked input $x_M$. Masks retrieved from real nodular patterns are randomly placed to fit the lung template

Generator Network

The nodule generator operates in two steps: a coarse synthesizer with a ResNet-34 backbone followed by a refinement synthesizer. Synthesizing nodular patterns in normal CXR images is extremely difficult because fine details may be key factors distinguishing nodular patterns from morphological features in the lung. Nodular patterns (including mass) in CXR images can be non-local, with features exhibiting large spatial dependencies. A generator must therefore be able to capture distant patterns, but traditional convolution layers encode only local dependencies and do not suffice. Furthermore, the masking operation $x \mapsto x_M$ loses information such as vein and bone locations.

The former issue was resolved using contextual attention modules, known to capture long-range spatial dependencies, dedicated to refining coarse outputs [17]. Contextual attention is computed over the input feature with its masked location as the foreground object, and can therefore incorporate global pathological features when generating nodular patterns. It has been used where capturing non-local background features is pivotal to generating fine-detailed content. We handled the latter issue by replacing all convolutional layers with gated convolutions [18]. Gated convolution mitigates the uniform weighting imposed by binary masks by adaptively updating masks with soft targets and was originally proposed to handle random occlusions; here, this module is used to realize arbitrary nodular patterns. The training diagram of the resulting nodule synthesizer $G : (x, M) \mapsto \hat{x}$ is visualized in Fig. 3.
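
To make the gated-convolution building block concrete, a minimal PyTorch sketch in the spirit of [18] follows; the contextual attention module is omitted, and the layer shown is an illustration rather than the exact layer used in our network:

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution [18]: a learned soft gate replaces the hard binary mask.

    A single convolution produces 2x channels; one half provides features,
    the other half (after a sigmoid) the soft gating values.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, dilation=1):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2
        self.conv = nn.Conv2d(in_ch, 2 * out_ch, kernel_size,
                              stride=stride, padding=pad, dilation=dilation)
        self.act = nn.ELU()

    def forward(self, x):
        feat, gate = self.conv(x).chunk(2, dim=1)     # split features / gates
        return self.act(feat) * torch.sigmoid(gate)   # adaptive soft masking
```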

Fig. 3. Nodule synthesizer network $G$ and its training procedure

Lung segments $s_N$ are extracted from a normal image and eroded, denoted $\tilde{s}_N$. A nodule synthesizer was then trained to generate synthetic nodular patterns $G(x_M)$ at the desired position indicated by a mask $M$, using the LS-GAN loss [19]. The nodule synthesizer architecture consists of an encoder and two decoders, with features encoded by the convolutional layers of ResNet-34 and decoders composed of nearest-neighbor up-sampling layers for fast inference.
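
As a rough PyTorch illustration of this encoder/two-decoder layout, the sketch below wires ResNet-34 convolutional features to two nearest-neighbor up-sampling decoders. The channel widths, the re-encoding of the coarse output for refinement, and the omission of the gated-convolution and contextual attention modules are simplifications, not the exact architecture:

```python
import torch.nn as nn
from torchvision.models import resnet34

class NoduleSynthesizerSketch(nn.Module):
    """Coarse-to-refine synthesizer sketch: ResNet-34 encoder, two decoders."""
    def __init__(self):
        super().__init__()
        backbone = resnet34(weights=None)
        # Encoder: ResNet-34 convolutional stages (output stride 32, 512 ch).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.coarse_dec = self._decoder()   # first pass: coarse fill
        self.refine_dec = self._decoder()   # second pass: refine details

    @staticmethod
    def _decoder(in_ch=512):
        layers, ch = [], in_ch
        for _ in range(5):                  # five x2 NN upsamplings undo stride 32
            layers += [nn.Upsample(scale_factor=2, mode='nearest'),
                       nn.Conv2d(ch, max(ch // 2, 16), 3, padding=1), nn.ELU()]
            ch = max(ch // 2, 16)
        layers += [nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid()]
        return nn.Sequential(*layers)

    def forward(self, x_masked):            # x_masked: (B, 3, H, W), H, W % 32 == 0
        coarse = self.coarse_dec(self.encoder(x_masked))
        # Re-encoding the coarse result for refinement is our assumption here.
        refined = self.refine_dec(self.encoder(coarse.repeat(1, 3, 1, 1)))
        return coarse, refined
```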

Online Nodule Augmentation

Synthetic nodular patterns are generated to augment the dataset while training. Offline generation of synthetic patterns limits the diversity of patterns and locations of synthetic nodular patterns because the number of synthetic nodules that help train the detection network must be known in advance. This reduces the flexibility of modifying sampling ratios among normal, real abnormal, and synthetic images. An overview of the algorithm is presented in Algorithm 1.

Algorithm 1 (figure): online nodule augmentation procedure

At every SGD step, a batch $B = \{I_n, I_{ab}\}$ of normal images $I_n$ and abnormal images $I_{ab}$ is sampled, and synthetic nodules are generated on top of background images within the batch to obtain an augmented batch $B_{syn} = \{I_n, I_{ab}, I_{syn}\}$. The number of objects $K \sim \mathrm{Geom}(1/\mu)$ to be generated on a template $I_n \in B$ is first randomly sampled for each candidate synthetic image. Masks $M = \{M_k\}_{k=1}^{K}$ are then drawn uniformly at random from the set of real nodular masks $\Psi$, with each mask centered at a point $p_k$ randomly drawn within the lung such that masks do not overlap. Lung regions $\tilde{s}$, extracted from the reference template $I_n$ by the segmentation model $S : I_n \mapsto \tilde{s}$, are eroded recursively according to each mask's size $|M_k|$ to obtain the masked nodule patch $x_M$ with objects $y = \bigcup_k M_k$. The nodule synthesizer then generates the synthetic image $G(x_M)$. Together, synthetic patterns are guaranteed to lie in locations where actual nodular patterns occur and are diverse enough to help the detection network generalize to unseen patterns.
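
A compact Python sketch of Algorithm 1; the `synth(template, masks)` callable, which wraps segmentation, erosion, mask placement, and the generator $G$, is an assumed interface for illustration:

```python
import numpy as np

def augment_batch(normals, abnormals, mask_bank, synth, mu=1.25):
    """Build B_syn = {I_n, I_ab, I_syn} from a sampled batch at each SGD step."""
    synthetic = []
    for template in normals:                  # each normal image is a candidate
        k = np.random.geometric(1.0 / mu)     # K ~ Geom(1/mu) nodules to place
        idx = np.random.randint(len(mask_bank), size=k)
        masks = [mask_bank[i] for i in idx]   # masks drawn uniformly from Psi
        synthetic.append(synth(template, masks))  # place masks, run G(x_M)
    return list(normals) + list(abnormals) + synthetic
```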

Experiments

Implementation Details

RetinaNet [20] with a ResNet-50 backbone was trained with the focal loss to localize nodular (abnormal) lesions in patches. When training both the standard and synthetic-augmentation models, rotation (±10°), scaling within (0.9, 1.1), blurring, and sharpening augmentations were applied. The nodule synthesizer and critic networks were trained using alternating descent with the Adam optimizer at learning rates $10^{-4}$ and $4 \times 10^{-4}$, respectively, decayed by a factor of 2 every 10 epochs.
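
The optimizer and decay schedule above map directly onto standard PyTorch utilities, as in this sketch (the single-layer placeholder modules merely stand in for the synthesizer and critic networks):

```python
import torch
import torch.nn as nn

generator = nn.Conv2d(1, 1, 3, padding=1)   # placeholder for the synthesizer G
critic = nn.Conv2d(1, 1, 3, padding=1)      # placeholder for the critic D

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(critic.parameters(), lr=4e-4)
# Halve both learning rates every 10 epochs, per the schedule above.
g_sched = torch.optim.lr_scheduler.StepLR(g_opt, step_size=10, gamma=0.5)
d_sched = torch.optim.lr_scheduler.StepLR(d_opt, step_size=10, gamma=0.5)
```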

When training the synthesizer, we rescaled the images by a factor of 2 in both width and height, applying zero-padding whenever the patterns' centers were located near image boundaries. The training objective for each sample $X$ was a combination of reconstruction, perceptual, and adversarial losses to solve

$$G^{*} = \arg\min_{G} \max_{D : I \to [0,1]} \; \lambda_r \lVert G(M \odot I) - X \rVert_2 + \lambda_p \lVert \phi(G(M \odot I)) - \phi(I) \rVert_2^2 + \lambda_a \, \mathbb{E}_{I \sim P_{\mathrm{real}}} \left[ D(G(M \odot I)) + \big(1 - D(I)\big) \right]$$

with coefficients $\lambda_r = 1$, $\lambda_p = 10$, and $\lambda_a = 0.01$, respectively. Here, $\phi$ is a normalized feature extraction network as in [21–23]. The detection network was trained using the Adam optimizer with learning rate $10^{-5}$ to minimize focal and L1 losses for the classification and regression heads, respectively. The mean number of objects in synthetic images was set to $\mu = 1.25$.
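
A direct transcription of the generator objective into PyTorch might look as follows; taking the target $X$ equal to the unmasked input $I$ and the exact form of the masking are our assumptions:

```python
import torch
import torch.nn.functional as F

def generator_loss(G, D, phi, I, M, lam_r=1.0, lam_p=10.0, lam_a=0.01):
    """Reconstruction + perceptual + adversarial loss, following the
    objective above; phi is a fixed, normalized feature extractor [21-23]."""
    fake = G(M * I)                             # G(M . I)
    rec = torch.norm(fake - I, p=2)             # lam_r * ||G(M . I) - X||_2
    perc = F.mse_loss(phi(fake), phi(I))        # lam_p * ||phi(G) - phi(I)||_2^2
    adv = (D(fake) + (1.0 - D(I))).mean()       # lam_a * adversarial term
    return lam_r * rec + lam_p * perc + lam_a * adv
```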

Results

Here, we compare the detection model we developed, without and with synthetic augmentation, to several data-driven algorithms that report performance on the JSRT dataset in Table 1. It is difficult to isolate performance differences due to algorithmic choices because the baselines were developed on different train/validation sets and a few were tested only on a subset of the JSRT dataset, but we list them for reference. General and chest radiologists reportedly achieved 64% and 77% recall at 0.072 and 0.076 false positives per image, respectively [16]. In comparison, the other baselines we found [24–29] report 25× more FPPI than human experts, whereas both of our detection models attain 0.08 FPPI. This low false positive level is necessary for practical deployment of computer-aided diagnostic systems [1, 30]. At this level of false positives, the detection model detected nodular patterns at 49% recall without synthetic augmentation and 52% recall with it. This enhancement shows that synthetic augmentation can improve recall while retaining the level of false positives.

Table 1.

Recall and false positives per image reported on the JSRT dataset, listed in order of increasing false positive rates. All data-driven methods other than the one developed in this work exceed the false positives per image of our model and of radiologists by a factor of 25. Note that our work and others' used different training datasets and evaluation schemes (e.g., K-fold validation), so the numbers are listed only as references and are not directly comparable

Methods FP/image Recall (%) Training database
General radiologists [16] 0.072 64 -
Chest radiologists [16] 0.076 77 -
Standard (ours) 0.08 49.4 In-house (w/o JSRT)
Synthetic Aug. (ours) 0.08 52.0 In-house
Schilham et al. [28] 2.0 51 Nodular JSRT
Hardie et al. [26] 2.0 63 140 nodular JSRT
Chen et al. [24] 2.0 64.9 140 normal and nodular JSRT
Li et al. (Single model) [27] 2.1 57 Full JSRT
Coppini et al. [25] 4.3 60 Nodular JSRT
Wei et al. [29] 5.4 80 Full JSRT

Dataset Size

To understand the relationship between dataset size and how synthesized nodules affect training a detection model, we fixed the normal:abnormal:synthetic data ratio within a batch to 2 : 1 : 1 and varied the training set size $N$ used to train both the generative and detection models. The generative capability of the nodule synthesizer is shown in Fig. 4, visualizing how the generator becomes more capable of producing realistic synthesized nodules as the number of abnormal training samples increases. The detection model's average precision (AP) changes when using standard versus synthetic augmentation on the internal and JSRT validation sets are shown in Table 2, where synthetic augmentation is shown to consistently help generalization across dataset sizes and on both internal and external sets.

Fig. 4. Synthesized images when the nodule synthesis network was trained on (left) 250, (middle) 500, and (right) 1958 abnormal images. Synthetic examples become more visually realistic as the training set size increases

Table 2.

Average precision improvements as training dataset size varies. Synthetic generation of abnormal data consistently enhances the detection network’s precision in both low-data and large-data regimes

n, ab Internal validation JSRT
16,531, 1958 75.1 → 77.8 47.0 → 52.3
2500, 500 64.3 → 70.8 39.8 → 47.8
1250, 250 60.5 → 64.3 32.9 → 36.3

Batch Sampling Ratio

While there are no restrictions on the class of nodular patterns our generator can produce, the amount of real data (templates and nodular patterns) available in the training set always remains fixed. To observe the extent to which a discriminative network can benefit from synthetic nodules, we incrementally increased the ratio of synthetic (syn) to real (ab) abnormal images while keeping the number of normal (n) images equal to their sum. The model's performances on the in-house dataset are shown in Table 3. A DNN trained using the smallest ratio of synthetic images already outperforms the standard model trained using only real images, but performance starts to degrade once the number of synthetic nodules greatly exceeds the number of real abnormal samples. This concave trend indicates that a small number of synthetic patterns is helpful, though only modestly so, and that merging excessively many synthetic images into the real dataset hinders training. The relatively low precision on the internal dataset is due to non-nodular lesions, including focal interstitial opacity or consolidation, which are considered normal for our nodule detection task.

Table 3.

Ablation study: performance when trained using different sampling ratios within a batch. Note that recall and precision are measured at 0.08 FPPI. Batches containing synthetic samples enhance performance, while batches with only synthetic samples show a drastic performance drop. Different batch ratio strategies should be adopted depending on the target metric. Bold numbers denote the highest performance for each metric

n : ab : syn AP Recall Precision
Standard 75.1 82.3 47.7
3 : 2 : 1 75.4 82.3 47.7
2 : 1 : 1 77.8 83.3 48.0
3 : 1 : 2 75.2 84.3 48.3
1 : 0 : 1 23.9 29.8 24.8

We evaluate the performance of a model trained using only real normal and synthetic abnormal images to show a limiting case in Table 3. While synthetic images can aid in learning discriminative features, this experiment shows that synthetic samples cannot serve as a replacement for real abnormalities. This is consistent with the observation that a generative model learns a compressed representation of the real dataset.
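
For concreteness, a minimal sketch of how ratio-controlled batches like those in Table 3 could be drawn; the function and its interface are illustrative, not the exact sampler used in our experiments:

```python
import numpy as np

def sample_batch_ids(normal_ids, abnormal_ids, batch_size=32, ratio=(2, 1, 1)):
    """Draw image indices for one batch under an n : ab : syn ratio.

    Synthetic images are generated online on normal templates, so the
    'syn' share is drawn from the normal pool and flagged for synthesis.
    """
    total = sum(ratio)
    n_n, n_ab, n_syn = (batch_size * r // total for r in ratio)
    normals = np.random.choice(normal_ids, n_n, replace=False)
    abnormals = np.random.choice(abnormal_ids, n_ab, replace=False)
    templates = np.random.choice(normal_ids, n_syn, replace=False)
    return normals, abnormals, templates
```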

Conclusion

We proposed a generative framework which can be used for real-time augmentation to generate synthetic nodules. With the use of synthetic nodular patterns, batch sampling can be performed with nearly even classes, which was shown to enhance the recall of a highly precise pulmonary nodule detection model. Our experiments illustrate the factors that must be considered when training a detection network to localize nodules using synthesized images, and showed the effectiveness of our approach on internal and external datasets. By varying the dataset sizes used to train the generative and detection networks, we showed the trade-off between generative capacity and the detection performance enhancements resulting from synthesized nodules. We hope that our experiments can guide future research in using synthetic images to aid multi-label classification and localization in CXR images, which are known to suffer severely from class imbalance.

Funding

Not applicable.

Availability of Data and Material

Not applicable.

Code Availability

Not applicable.

Declarations

Conflict of Interest

Minki Chung, Seo Taek Kong, Beomhee Park, Younjoon Chung and Kyu-Hwan Jung are employees of VUNO Inc. Kyu-Hwan Jung is an equity holder of VUNO Inc.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Minki Chung and Seo Taek Kong contributed equally to this work.

References

1. Sung J, Park S, Lee SM, Bae W, Park B, Jung E, Seo JB, Jung KH. Added value of deep learning-based detection system for multiple major findings on chest radiographs: A randomized crossover study. Radiology. 2021;299(2):450–459. doi: 10.1148/radiol.2021202818.
2. Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., Marklund, H., Haghgoo, B., Ball, R., Shpanskaya, K., et al.: Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 590–597 (2019)
3. Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2097–2106 (2017)
4. Berthelot, D., Carlini, N., Cubuk, E.D., Kurakin, A., Sohn, K., Zhang, H., Raffel, C.: Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In: International Conference on Learning Representations (2020)
5. Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. pp. 702–703 (2020)
6. Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2. pp. 2672–2680. NIPS'14, MIT Press, Cambridge, MA, USA (2014)
7. Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., Webb, R.: Learning from simulated and unsupervised images through adversarial training. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2107–2116 (2017)
8. Chuquicusma, M.J., Hussein, S., Burt, J., Bagci, U.: How to fool radiologists with generative adversarial networks? A visual turing test for lung cancer diagnosis. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). pp. 240–244. IEEE (2018)
9. Frid-Adar, M., Amer, R., Greenspan, H.: Endotracheal tube detection and segmentation in chest radiographs using synthetic data. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 784–792. Springer (2019)
10. Madani, A., Moradi, M., Karargyris, A., Syeda-Mahmood, T.: Chest x-ray generation and data augmentation for cardiovascular abnormality classification. In: Medical Imaging 2018: Image Processing. vol. 10574, p. 105741M. International Society for Optics and Photonics (2018)
11. Madani, A., Moradi, M., Karargyris, A., Syeda-Mahmood, T.: Semi-supervised learning with generative adversarial networks for chest x-ray classification with ability of data domain adaptation. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). pp. 1038–1042. IEEE (2018)
12. Nie, D., Trullo, R., Lian, J., Petitjean, C., Ruan, S., Wang, Q., Shen, D.: Medical image synthesis with context-aware generative adversarial networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 417–425. Springer (2017)
13. Sandfort V, Yan K, Pickhardt PJ, Summers RM. Data augmentation using generative adversarial networks (cyclegan) to improve generalizability in ct segmentation tasks. Scientific Reports. 2019;9(1):1–9. doi: 10.1038/s41598-019-52737-x.
14. Wu, E., Wu, K., Cox, D., Lotter, W.: Conditional infilling gans for data augmentation in mammogram classification. In: Image Analysis for Moving Organ, Breast, and Thoracic Images, pp. 98–106. Springer (2018)
15. Xing, Y., Ge, Z., Zeng, R., Mahapatra, D., Seah, J., Law, M., Drummond, T.: Adversarial pulmonary pathology translation for pairwise chest x-ray data augmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 757–765. Springer (2019)
16. Shiraishi, J., Katsuragawa, S., Ikezoe, J., Matsumoto, T., Kobayashi, T., Komatsu, K.i., Matsui, M., Fujita, H., Kodera, Y., Doi, K.: Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules. American Journal of Roentgenology 174(1), 71–74 (2000)
17. Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., Huang, T.S.: Generative image inpainting with contextual attention. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5505–5514 (2018)
18. Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., Huang, T.S.: Free-form image inpainting with gated convolution. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 4471–4480 (2019)
19. Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2794–2802 (2017)
20. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2980–2988 (2017)
21. Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2016)
22. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision. pp. 694–711. Springer (2016)
23. Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4681–4690 (2017)
24. Chen S, Suzuki K, MacMahon H. Development and evaluation of a computer-aided diagnostic scheme for lung nodule detection in chest radiographs by means of two-stage nodule enhancement with support vector classification. Medical Physics. 2011;38(4):1844–1858. doi: 10.1118/1.3561504.
25. Coppini G, Diciotti S, Falchini M, Villari N, Valli G. Neural networks for computer-aided diagnosis: detection of lung nodules in chest radiograms. IEEE Transactions on Information Technology in Biomedicine. 2003;7(4):344–357. doi: 10.1109/TITB.2003.821313.
26. Hardie RC, Rogers SK, Wilson T, Rogers A. Performance analysis of a new computer aided detection system for identifying lung nodules on chest radiographs. Medical Image Analysis. 2008;12(3):240–258. doi: 10.1016/j.media.2007.10.004.
27. Li C, Zhu G, Wu X, Wang Y. False-positive reduction on lung nodules detection in chest radiographs by ensemble of convolutional neural networks. IEEE Access. 2018;6:16060–16067. doi: 10.1109/ACCESS.2018.2817023.
28. Schilham AM, Van Ginneken B, Loog M. A computer-aided diagnosis system for detection of lung nodules in chest radiographs with an evaluation on a public database. Medical Image Analysis. 2006;10(2):247–258. doi: 10.1016/j.media.2005.09.003.
29. Wei, J., Hagihara, Y., Shimizu, A., Kobatake, H.: Optimal image feature set for detecting lung nodules on chest x-ray images. In: CARS 2002 Computer Assisted Radiology and Surgery, pp. 706–711. Springer (2002)
30. Park, S., Park, G., Lee, S.M., Kim, W., Park, H., Jung, K., Seo, J.B.: Deep learning-based differentiation of invasive adenocarcinomas from preinvasive or minimally invasive lesions among pulmonary subsolid nodules. European Radiology pp. 1–9 (2021)
