J Digit Imaging. 2017 Feb;30(1):95-101. doi: 10.1007/s10278-016-9914-9.

High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks


Alvin Rajkomar et al. J Digit Imaging. 2017 Feb.

Abstract

The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.

Keywords: Artificial neural networks; Chest radiographs; Computer vision; Convolutional neural network; Deep learning; Machine learning; Radiography.
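
As a concrete illustration of the transfer-learning recipe described in the abstract (an ImageNet-pre-trained GoogLeNet whose classifier head is replaced and then fine-tuned on radiographs), the following is a minimal sketch in PyTorch/torchvision. It is an assumed reconstruction, not the authors' original code: the framework, learning rate, input size, and greyscale-to-3-channel handling are all illustrative choices.

# Minimal transfer-learning sketch: ImageNet-pre-trained GoogLeNet,
# 2-way frontal/lateral head, fine-tuned on radiographs.
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # frontal vs. lateral

# Radiographs are single-channel; replicating to 3 channels lets the
# pre-trained first convolution be reused unchanged.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One optimization step on a batch of labelled radiographs."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()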


Conflict of interest statement

Compliance with Ethical Standards

Competing Interests: Alvin Rajkomar reports having received fees as a research advisor from Google.

Funding: This research did not receive any specific grant from funding organizations.

Figures

Fig. 1
Methodology and data flow of image preparation and model creation. The flowchart outlines the pathway of image collection, selection, processing, and manual labeling (for the chest radiographs) for each image source; division of prepared images into test, training, and validation sets; and training and evaluation of the GoogLeNet model using image sets from sources as indicated in each step. White rectangular boxes indicate numbers of images at collection, selection, division, and post-processing steps, as well as numbers of categories (for ImageNet) and studies (for the UCSF chest radiographs) in which the images were contained. Image processing and labeling steps are in grey rectangular boxes.
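
The augmentation step summarized above (growing the training and validation sets to over 150,000 images using standard image manipulations) can be sketched as follows. The specific operations and parameter ranges here are illustrative assumptions; the abstract does not enumerate the exact manipulations used.

# Hedged sketch of "standard image manipulations" for augmentation;
# the chosen operations and ranges are assumptions.
from torchvision import transforms

augment = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.RandomRotation(degrees=10),                 # small rotations
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # crops/rescales
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # intensity jitter
    transforms.ToTensor(),
])

# Applied repeatedly, each source radiograph yields many distinct
# variants, which is how 1,885 radiographs can expand into over
# 150,000 training and validation images.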
Fig. 2
Validation accuracy for nine models. Classification accuracy, based on top prediction, of the UCSF chest radiograph validation set is graphed for the nine different models, grouped by pre-training and fine-tuning methods. Error bars mark 95 % confidence intervals. Labels above the graph and to the right of the legend show pooled comparisons and chi-square test results. For the models fine-tuned on original radiographs, validation sets consisted of 376 original radiographs. For models fine-tuned on augmented radiographs, validation sets consisted of 39,856 augmented radiographs.
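
The error bars and pooled comparisons in Fig. 2 rest on standard proportion statistics. As a hedged illustration (the counts below are placeholders, not values from the paper), an exact 95 % confidence interval for a model's accuracy and a chi-square comparison of two models' results can be computed as follows:

# Placeholder counts; illustrative only, not the paper's data.
from statsmodels.stats.proportion import proportion_confint
from scipy.stats import chi2_contingency

correct, total = 370, 376                      # hypothetical tallies
lo, hi = proportion_confint(correct, total, alpha=0.05, method="beta")
print(f"accuracy {correct / total:.3f}, 95 % CI [{lo:.3f}, {hi:.3f}]")

# 2x2 table of (correct, incorrect) counts for two models
table = [[370, 6], [360, 16]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")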
Fig. 3
Histogram of test set classification results. Graph shows frequency of images by predicted probability of frontal images, using bins of width 1 %. Shading indicates actual class of images. Images from both the UCSF test set and publicly available set were included.
Fig. 4
Example test images and classification results. Examples of frontal and lateral images are shown with their frontal and lateral prediction probabilities listed underneath each. Images are all from either the UCSF test set or the publicly available set.
Fig. 5
Classification cutoff determination using Youden Index. Youden Index is plotted against cutoffs determined using predicted probability of being a frontal image (PFI), incremented by 0.1 % in the range 0 to 100 %. Images from both the UCSF test set and publicly available set were included. Labels above the plot indicate true image labels: true lateral images in the test sets all had PFI < 14.3 %, true frontal images all had PFI ≥ 99.6 %, and no test images had PFI in the intermediate range.
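
The cutoff search in Fig. 5 follows the standard Youden Index definition, J = sensitivity + specificity - 1, maximized over candidate cutoffs. A minimal sketch, assuming NumPy arrays pfi (predicted probability of being frontal, on a 0-1 scale) and is_frontal (boolean true labels); both names are hypothetical:

import numpy as np

def youden_cutoff(pfi, is_frontal, step=0.001):
    """Return the PFI cutoff maximizing J = sensitivity + specificity - 1."""
    best_cutoff, best_j = 0.0, -1.0
    for c in np.arange(0.0, 1.0 + step, step):      # 0.1 % increments
        pred_frontal = pfi >= c
        sens = np.mean(pred_frontal[is_frontal])    # true positive rate
        spec = np.mean(~pred_frontal[~is_frontal])  # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_cutoff, best_j = c, j
    return best_cutoff, best_j

Because all true laterals scored PFI < 14.3 % and all true frontals scored PFI ≥ 99.6 %, any cutoff in the intermediate range achieves J = 1 on these test sets.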
Fig. 6
Transformed images and classification results. One frontal and one lateral image from the test sets and four transformed versions of each are shown with their frontal and lateral prediction probabilities listed underneath each. Transformations are described in the top row.
