Abstract
A novel algorithm for generating artificial training samples from triangulated three-dimensional (3D) surface models within the context of dental implant recognition is proposed. The proposed algorithm is based on the calculation of two-dimensional (2D) projections (from a number of different angles) of 3D volumetric representations of computer-aided design (CAD) surface models. A fully convolutional network (FCN) is subsequently trained on the artificially generated X-ray images for the purpose of automatically identifying the connection type associated with a specific dental implant in an actual X-ray image. Semi-automated and fully automated systems are proposed for segmenting questioned dental implants from the background in actual X-ray images. Within the context of the semi-automated system, suitable regions of interest (ROIs), which contain the dental implants, are manually specified. However, as part of the fully automated system, suitable ROIs are automatically detected. It is demonstrated that a segmentation/detection accuracy of 94.0% and a classification/recognition accuracy of 71.7% are attainable within the context of the proposed fully automated system. Since the proposed systems utilise an ensemble of techniques that has not been employed for the purpose of dental implant classification/recognition on any previous occasion, the above-mentioned results are very encouraging.
Keywords: Dental implant, X-ray image, Classification
Introduction
Due to their powerful ability to learn abstract and complex features, deep learning algorithms have been employed as the underlying architecture for many computer vision applications such as object detection, image segmentation and image classification. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images, thereby assisting in the diagnosis and treatment of different diseases.
Deep learning-based algorithms in biomedical imaging have produced impressive diagnostic and predictive results in radiology and pathology research [1, 2]. A number of deep learning-based algorithms have also been investigated in various medical image analysis tasks involving multiple organs, including the brain and pancreas, as well as breast cancer diagnosis and COVID-19 detection and diagnosis [3–8]. The well-documented success of deep learning in medical imaging suggests that it also has the potential to meet dental implant recognition needs.
Dental implant recognition is crucial to multiple dental specialties, such as forensic identification and dental reconstruction of broken connections. Within the context of implant dentistry, implants provide promising prosthetic restoration alternatives for patients. In clinical practice where the dental records of a patient are not readily available, reliable categorisation of a dental implant previously inserted into the aforementioned patient’s jaw is often challenging. Dentists often consider an X-ray image of the implant in question in order to discern the make, model, and dimensions of the implant. Based on this information, the connection type of the implant can be deduced. The dentist can subsequently order a suitable abutment and artificial tooth to replace the existing ones. Dentists may incur significant costs in scenarios where the wrong abutment or artificial tooth is ordered. A system that automates the classification of a dental implant based on an X-ray image of a patient’s jaw may therefore be of great assistance to dental practitioners.
The proficiency of deep learning for object detection and classification is well documented. However, deep learning-based models require a large number of training samples in order to effectively train the model parameters. Although large annotated image sets (such as Caltech 256, PASCAL VOC and ImageNet) exist, the generation and annotation of a large number of training images for a variety of new applications is labour intensive and expensive. Within the medical field, collecting a large amount of image data from medical facilities can be difficult. The limited availability of training data with accurate annotations is one of the challenges faced when using deep learning to create practical clinical applications in medical imaging. Hence, in this study, a strategy of artificially generating a large number of training samples is investigated.
In this study, a strategy to generate 2D projections (from a number of angles) of 3D volumetric representations of CAD surface models is proposed. The large number of freely available 3D surface models enables the generation of a large number of training samples very efficiently.
Related work
This research investigates the feasibility of deep learning techniques for the purpose of automatically assigning a questioned dental implant within an actual X-ray image to a specific connection type. In order to achieve the aforementioned objective, a deep learning-based model is trained on a very large number of simulated X-ray images. The simulated X-ray images are obtained by generating 2D projections of 3D volumetric representations of dental implants from a number of angles. The 3D volumetric representations are obtained from the triangulated coordinates of CAD surface models of the implants in question.
Generation of simulated data sets from three-dimensional models
The availability of large training sets is crucial in building proficient deep learning-based models. The use of synthetic data in a number of computer vision applications has provided a means of bridging the gap between simulated and actual training data. A number of techniques for generating training samples from 3D models have been investigated [9–11] within the context of object detection. Within the field of biomedical engineering, Teixeira et al. [12] proposed an algorithm for generating synthetic X-ray images of the human anatomy.
Moreira et al. [13] proposed a strategy to determine the pose of a dental implant. The proposed algorithm follows a three-step approach: (i) a ROI is first manually specified using two operator-defined points along the implant’s main axis, after which (ii) a simulated cone beam computed tomography (CBCT) volume of the known implant model is generated through Feldkamp-Davis-Kress (FDK) reconstruction and coarsely aligned to the defined axis, and finally (iii) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, after which the implant’s pose is extracted from the optimal transformation.
Although the aforementioned state-of-the-art synthetic data generation techniques are efficient and accurate, a fast, accurate and fully automated methodology is still lacking. The strategy proposed by Moreira et al. is based on the FDK algorithm, which constitutes an approximation of filtered backprojection from cone-beam projections with a circular orbit of the X-ray source.
In this paper a novel algorithm for generating artificial dental implant images based on 2D projections involving parallel beams is proposed. A more detailed description of the proposed strategy is provided in Section 4.1.
Dental implant detection
A number of semi-automated and fully automated systems have been proposed for the purpose of segmenting dental implants. Within the context of semi-automated segmentation various strategies have been investigated.
Morais et al. [14] proposed a dental implant segmentation approach which uses an active contour strategy for optimal definition of the dental implant’s boundaries. In order to evaluate the proposed segmentation strategy, the semi-automatically detected contour is compared to a ground truth generated by a single expert observer. The Dice coefficient, mean absolute distance (MAD) and Hausdorff distance are employed to quantify the differences between the contours (semi-automatic and manual), where a Dice coefficient of 0.97±0.01, a MAD of 2.24±0.85 pixels and a Hausdorff distance of 11.12±6 pixels are respectively obtained.
Fully automated segmentation algorithms are proposed by Cunha et al. [15] and Pauwels et al. [16]. Cunha et al. employed image preprocessing, followed by adjusted and trained active shape models. Pauwels et al. employed a contour detection technique and particle counting for the segmentation of the implants.
The aforementioned state-of-the-art algorithms for the segmentation of dental implants are mainly based on image processing techniques. The semi-automated segmentation algorithm proposed by Morais et al. is based on an active contour protocol. Within the context of fully automated dental implant segmentation, an active shape model and contour detection technique are respectively investigated by Cunha et al. and Pauwels et al.
The semi-automated segmentation strategy proposed in this paper is based on image processing techniques such as thresholding, connected component analysis and morphological post-processing. The fully automated segmentation strategy proposed in this paper is based on a deep learning algorithm. A detailed description of the implementation strategy within the context of the proposed segmentation algorithms is provided in Section 4.2.
Dental implant recognition
The identification of dental implants in X-ray images is often very challenging due to the large number of different implant models. A certain degree of expertise is required to identify and distinguish between the various dental implant types available on the market. The accurate classification of an implant model is very important in selecting a suitable replacement when the existing abutment and/or artificial tooth has been lost or damaged. Different dental implant recognition systems have been investigated.
Morais et al. [14] and Benakatti et al. [17] employed machine learning-based algorithms for the purpose of classifying dental implants in X-ray images. A k-nearest neighbour (KNN) algorithm is proposed by Morais et al., while Benakatti et al. investigated support vector machines (SVMs), as well as XGBoost and logistic regression classifiers, for the purpose of identifying dental implants, achieving an average accuracy of 67%.
Several studies [18–21] have investigated deep convolutional neural networks (DCNNs) for the purpose of classifying dental implants. Lee et al. [18] employed the Neuro-T version 2.0.1 (Neurocle Inc., Seoul, Korea) tool for the purpose of automatically selecting the best performing model with optimal hyper-parameters for implant recognition. An area under the curve (AUC) of 95.4% was achieved.
Hadj et al. [19], Sukegawa et al. [20] and Kim et al. [21] investigated DCNN systems with transfer learning strategies for the purpose of classifying different dental implant models. Sukegawa et al. employed VGG networks, Hadj et al. used the GoogLeNet Inception V3 and Kim et al. employed YOLOv3 for transfer learning and fine-tuning purposes. Sukegawa et al. achieved an accuracy of 92.7%, Hadj et al. achieved an AUC of 93.8% and Kim et al. achieved an accuracy of 96.7%.
In the aforementioned systems the classification of a dental implant is based on the type of dental implant model. The protocol proposed in this study delves deeper by investigating the classification of dental implant connection types. The dental implant connection interface is a key feature to consider when choosing an abutment replacement model. The implant connection interface corresponds to the connection site where the dental implant body connects to the abutment. The implant geometry is therefore vital to the successful outcome of the restoration process, since implant connection type classification failure is strongly related to how the restorative phase is managed. The accurate classification of the implant connection type can improve aesthetics and longevity, and provide for a structurally secure joint. The dental implant connection interface can generally be described as either a conical, internal hexagonal or external hexagonal connection. The geometry of the connection can be further characterised as either a narrow, standard or wide platform. A more detailed description of the connection types investigated in this study is provided in Section 5.2.
In this paper two independent FCN models are proposed: (i) the first model (FCN-1) is proposed for the purpose of automatically classifying the connection type associated with a specific dental implant from an X-ray image, while (ii) the second model (FCN-2) is proposed for the detection of suitable ROIs that contain the dental implants in an actual X-ray image.
Contributions
This paper proposes a novel ensemble of techniques within the context of data generation and dental implant recognition. The feasibility of deep learning techniques for the purpose of automatically assigning a questioned dental implant within an actual X-ray image to a specific connection type is investigated. The key contributions of this paper can be summarised as follows:
A novel framework for generating a large number of simulated X-ray images from 3D surface models for the purpose of training and validating the FCN-1 model within the context of dental implant recognition is proposed.
The simulated and actual X-ray images are rendered more similar by implementing a number of data augmentation strategies within the context of the simulated X-ray images, while performing image preprocessing and normalisation on the actual X-ray images.
A novel ensemble of object detection techniques for the purpose of automatically segmenting dental implants within an actual X-ray image is developed.
Novel semi-automated and fully automated end-to-end deep learning-based systems for dental implant recognition are proposed.
System design
The design of the dental implant recognition system developed in this study is conceptualised in Fig. 1. The proposed system can be divided into three parts, that is (i) the proposed strategy for artificially generating simulated X-ray images of dental implants, (ii) the strategy towards dental implant segmentation, and (iii) dental implant classification/recognition through machine learning.
Generation of simulated X-ray images
In this study, a strategy that generates 2D projections (from a number of angles) of 3D volumetric representations of CAD surface models is proposed. The concept of X-ray computed tomography (CT) for the purpose of reconstructing images from a series of projections [22–24], inspires the X-ray data generation technique proposed in this paper. The forward model in analytic X-ray CT reconstruction is based on the Radon transform (RT), which amounts to assuming a monochromatic Beer-Lambert attenuation law [25]. The RT constitutes the projection of the image intensities along a radial line oriented at a specific angle [26]. In this paper the proposed strategy imitates the X-ray emission protocol in a CT scan by projecting parallel-ray beams modelled by a set of lines across the 3D volumetric image at different angles. It is important to note that, although the proposed projection protocol simulates the acquisition of a CT scan that measures the X-ray attenuation along a line between an X-ray source and an X-ray detector, each voxel within the 3D volumetric representation of a CAD surface model associated with a dental implant has a value of one. It is therefore assumed that the material of the dental implant is homogeneous and that all the attenuation coefficients are the same. The proposed protocol is as follows:
Firstly, the triangulated 3D surface coordinates of a specific dental implant are used to construct a 3D volumetric representation of the model in question. Each voxel in the volumetric representation constitutes a cube with a value of one.
Subsequently, 2D projections of the 3D volumetric representation are calculated from a number of angles. Each projection is obtained by calculating a number of parallel-ray sums of the 3D volumetric representation. Each projection profile constitutes a simulated X-ray image. The proposed data generation strategy is depicted in Fig. 2.
During the X-ray simulation process, each 3D volumetric representation of an implant is rotated out of the image plane through a number of different angles, before its projection is generated. An example of the dental implant C1 with a conical narrow platform, an external diameter of 3.30 mm and a length of 10 mm is represented in Fig. 3 for the purpose of illustrating the proposed out-of-plane rotation strategy for generating simulated training samples.
In addition to this, a number of in-plane rotations are conducted during the data augmentation protocol that forms part of training the FCN-1 model.
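The core of this protocol can be illustrated with a short code sketch. The following Python fragment is a minimal approximation of the projection step; the original pipeline is implemented in MATLAB, so the function name, axis conventions and use of SciPy are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def simulate_xray_projections(volume, angles_deg):
    """Parallel-beam projections of a binary implant volume.

    volume: 3D array of 0/1 voxels (implant voxels have a value of one,
            i.e. homogeneous attenuation), shape (Z, Y, X).
    angles_deg: out-of-plane rotation angles in degrees.
    Returns one 2D projection (simulated X-ray image) per angle.
    """
    projections = []
    for angle in angles_deg:
        # Rotate the volume out of the image plane before projecting.
        rotated = ndimage.rotate(volume, angle, axes=(0, 2),
                                 reshape=False, order=1)
        # Each pixel of the projection is a parallel-ray sum of voxel
        # values along the beam direction (here the X axis).
        projections.append(rotated.sum(axis=2))
    return projections
```

Because every voxel carries the same value, the ray sums are proportional to the implant thickness traversed by each beam, which is what a monochromatic parallel-beam attenuation model would measure for a homogeneous object.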
Semantic segmentation
In this study novel semi-automated and fully automated image segmentation systems are proposed. In the case of the semi-automated system suitable ROIs, which contain the dental implants, are manually specified (selected). Within the context of the fully automated system, suitable ROIs are automatically detected through a deep learning-based technique. In this section, semantic segmentation is performed on the actual X-ray image for the purpose of classifying pixels associated with the dental implants without differentiating implant instances.
Semi-automated detection of the regions of interest
A semi-automated segmentation strategy based on image processing techniques is implemented for the purpose of segmenting an actual X-ray image (see Fig. 4a) into pixels associated with the dental implants (foreground) and those associated with the background. The suitable ROIs that contain the dental implants are manually selected. Polygonal shapes are used to annotate the ROIs within the questioned image (see Fig. 4c). Local adaptive thresholding is applied to the actual X-ray image (depicted in Fig. 4a) for the purpose of converting it from grayscale to binary format (see Fig. 4d). The manually selected ROIs are subsequently employed as a mask image in order to remove the pixels not associated with the dental implants (see Fig. 4e). A set of post-processing techniques including morphological closing, dilation and hole filling are performed to eliminate noise, fill in the holes and enhance the binary mask image (see Fig. 4f).
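A minimal sketch of this pipeline is given below, assuming scikit-image and SciPy implementations of the named operations; the threshold window size and structuring element radii are illustrative assumptions, as the paper does not report them:

```python
import numpy as np
from scipy import ndimage
from skimage import filters, morphology

def semi_automated_segmentation(xray, roi_mask):
    """Segment implants given a grayscale X-ray and manually drawn ROIs.

    xray: 2D grayscale image; roi_mask: boolean mask rasterised from the
    operator-defined polygons.
    """
    # Local adaptive thresholding: implants are radiopaque (bright).
    binary = xray > filters.threshold_local(xray, block_size=51)
    # The manually selected ROIs mask out pixels not associated with implants.
    binary &= roi_mask
    # Morphological post-processing: closing, dilation and hole filling.
    binary = morphology.binary_closing(binary, morphology.disk(3))
    binary = morphology.binary_dilation(binary, morphology.disk(2))
    binary = ndimage.binary_fill_holes(binary)
    return binary
```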
Automated detection of the regions of interest
In this study, the FCN-2 model, which is based on an encoder-decoder architecture, is proposed for the detection of suitable ROIs that contain the dental implants in an actual X-ray image. The input X-ray images are fed through the encoder network so that down-sampled feature maps are generated. The aforementioned encoder network consists of ten convolutional layers, where each of these layers is followed by a rectified linear unit (ReLU), batch normalisation (BN) and a max pooling layer. A dropout rate of 5% is implemented. The decoder network up-samples the feature maps to the same size as the original image; the up-sampling layers are followed by convolutional layers so as to generate dense feature maps, where each convolutional layer is followed by a ReLU and BN. A sigmoid function is applied to the final feature map to compute the probability distribution across the binary classes. The final layer constitutes a classification layer, which also calculates the cross-entropy loss function during training. The network is trained by employing the Adam algorithm.
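A compact Keras sketch of an encoder-decoder of this general shape is shown below. The filter counts, kernel sizes and number of blocks are assumptions for illustration, since the paper specifies only the layer types, the 5% dropout rate, the sigmoid output, the cross-entropy loss and the Adam optimiser:

```python
from tensorflow.keras import layers, models

def build_fcn2(input_shape=(512, 512, 1), num_blocks=5, base_filters=16):
    """Encoder-decoder segmentation network in the spirit of FCN-2."""
    inputs = layers.Input(shape=input_shape)
    x = inputs
    # Encoder: convolution -> ReLU -> BN -> max pooling, with 5% dropout.
    for i in range(num_blocks):
        x = layers.Conv2D(base_filters * 2**i, 3, padding='same',
                          activation='relu')(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(2)(x)
        x = layers.Dropout(0.05)(x)
    # Decoder: up-sampling followed by convolution -> ReLU -> BN.
    for i in reversed(range(num_blocks)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(base_filters * 2**i, 3, padding='same',
                          activation='relu')(x)
        x = layers.BatchNormalization()(x)
    # Sigmoid yields per-pixel foreground probabilities.
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
```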
The final binary masks acquired through the proposed semi-automated segmentation system serve as the ground truth for the purpose of training the proposed FCN-2 model. During training, the actual X-ray images and the corresponding ground truth masks are augmented by applying geometric transformations, such as random translations, rotations, variations in scale, as well as horizontal and vertical flipping. The proposed automated ROI detection protocol (architecture of the FCN-2 model) is depicted in Fig. 5.
Selected results illustrating the proficiency of the proposed FCN-2 model for the purpose of segmenting dental implant images into foreground and background regions are presented in Fig. 6. Figure 6a and b depict the probabilities that the pixels belong to the foreground with a shade of red. After a threshold of 0.5 has been applied to the aforementioned probabilities, the acquired binary images are depicted in Fig. 6c and d respectively. Although it is clear that the respective white regions (detected foreground) within the aforementioned binary images contain the dental implants, these images are still characterised by significant levels of noise, while the boundaries of the foreground regions are irregular. In order to reduce the noise and render the shape of the foreground boundaries similar to that of a chevron pattern, the same processes that were implemented for the previously discussed semi-automated segmentation system (see Section 4.2.1) are followed, while small connected components are also removed (see Fig. 7).
Instance segmentation
In this section instance segmentation is applied to the post-processed mask images acquired through the proposed semi-automated or fully automated algorithms. Each detected dental implant is therefore localised and segmented. The mask image is partitioned into its constituent components through connected component analysis [27]. A two-pass algorithm is employed for detecting the connected components and labelling each connected component within the binary image. A different label is therefore assigned to each dental implant. Each component is delimited by a bounding box which is subsequently used to segment the actual X-ray image into its constituent dental implants for the purpose of dental implant classification. Figure 8 depicts the proposed dental implant localisation and segmentation protocol. The proposed segmentation strategy facilitates the classification of the connection type associated with a specific dental implant.
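The localisation step can be sketched as follows, where SciPy's labelling routine stands in for the two-pass algorithm described above and the function name is illustrative:

```python
from scipy import ndimage

def extract_implants(mask, xray):
    """Crop each implant from the X-ray via connected component analysis.

    mask: post-processed binary mask; xray: corresponding grayscale image.
    """
    # Assign a distinct integer label to every connected component.
    labels, num_implants = ndimage.label(mask)
    # find_objects returns one bounding box (tuple of slices) per label.
    return [xray[bbox] for bbox in ndimage.find_objects(labels)]
```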
Dental implant classification
The proposed FCN-1 model is trained on artificially generated (simulated) X-ray images for the purpose of assigning a questioned dental implant within an actual X-ray image to one of nine different connection types. A detailed description of the proposed FCN-1 model is provided in Experiment 1. The trained model is presented with an actual X-ray image which contains only a single implant, from which features are extracted for classification purposes. Each questioned implant image is normalised in order to ensure scale, translational and rotational invariance. The Hotelling transform [28] is applied to each questioned dental implant image for the purpose of eliminating in-plane rotations. In order to suppress noise, a Gaussian filter [29] is employed to smooth each questioned dental implant image. A suitable grayscale intensity transformation is implemented for the purpose of adjusting the dynamic range of the pixels in such a way that the dark pixels are significantly darkened and the bright pixels are slightly darkened [30]. The aforementioned grayscale intensity transformation and spatial filtering techniques are implemented for the purpose of enhancing contrast and suppressing noise in the questioned image so as to render each actual X-ray image similar to the simulated dental implant images. The proposed dental implant classification protocol is depicted in Fig. 9.
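These normalisation steps can be sketched as follows; the gamma value, filter width and use of a PCA-based estimate of the main axis (the Hotelling transform) are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def principal_axis_angle(mask):
    """In-plane rotation angle of an implant via the Hotelling transform."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([ys, xs]).astype(float)
    coords -= coords.mean(axis=1, keepdims=True)
    # Eigenvectors of the coordinate covariance are the principal axes;
    # eigh returns them in ascending order, so the last is the major axis.
    _, eigvecs = np.linalg.eigh(np.cov(coords))
    major = eigvecs[:, -1]
    return np.degrees(np.arctan2(major[0], major[1]))

def normalise_implant(img, angle, sigma=1.0, gamma=1.5):
    """Rotate, smooth and darken a segmented implant image."""
    img = ndimage.rotate(img.astype(float), -angle, reshape=False)
    img = ndimage.gaussian_filter(img, sigma=sigma)  # suppress noise
    # Power-law transform: gamma > 1 darkens dark pixels significantly
    # while darkening bright pixels only slightly.
    return (img / img.max()) ** gamma
```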
Experiments
Data
In this study, the simulated X-ray dental implant data set is generated from triangulated surface models, which are standard triangle language (STL) files, engineered by MIS (Make It Simple). The connection type and corresponding geometrical features associated with each MIS dental implant are specified in Table 1.
Table 1.
Connection type | Dental implant type | Length (mm)
---|---|---
(1) Conical narrow platform (V3) | **V3**: external diameter 3.30 mm, internal diameter 2.75 mm | 10, 11, 13, 16
(2) Conical narrow platform (C1) | **C1**: external diameter 3.30 mm, internal diameter 2.75 mm | 10, 11, 13, 16
(3) Conical standard platform | **V3**: external diameter 3.90 mm, internal diameter 3.15 mm | 8, 10, 11, 13, 16
 | **V3**: external diameter 4.30 mm, internal diameter 3.15 mm | 8, 10, 11, 13, 16
 | **V3**: external diameter 5.00 mm, internal diameter 3.15 mm | 8, 10, 11, 13, 16
 | **C1**: external diameter 3.75 mm, internal diameter 3.15 mm | 8, 10, 11, 13, 16
 | **C1**: external diameter 4.20 mm, internal diameter 3.15 mm | 8, 10, 11, 13, 16
(4) Conical wide platform | **C1**: external diameter 5.00 mm, internal diameter 4.00 mm | 8, 10, 11, 13, 16
(5) Internal hex narrow platform | **SEVEN**: external diameter 3.30 mm, internal diameter 2.10 - 3.30 mm | 10, 11, 13, 16
 | **M4**: external diameter 3.30 mm, internal diameter 2.10 - 3.30 mm | 10, 11, 13, 16
(6) Internal hex standard platform | **SEVEN**: external diameter 3.75 mm, internal diameter 2.45 - 3.75 mm | 8, 10, 11, 13, 16
 | **SEVEN**: external diameter 4.20 mm, internal diameter 2.45 - 3.75 mm | 6, 8, 10, 11, 13, 16
 | **M4**: external diameter 3.75 mm, internal diameter 2.45 - 3.75 mm | 8, 10, 11, 13, 16
 | **M4**: external diameter 4.20 mm, internal diameter 2.45 - 3.75 mm | 6, 8, 10, 11, 13, 16
(7) Internal hex wide platform | **SEVEN**: external diameter 5.00 mm, internal diameter 2.45 - 4.50 mm | 6, 8, 10, 11, 13, 16
 | **SEVEN**: external diameter 6.00 mm, internal diameter 2.45 - 4.50 mm | 6, 8, 10, 11, 13
 | **M4**: external diameter 5.00 mm, internal diameter 2.45 - 4.50 mm | 6, 8, 10, 11, 13, 16
 | **M4**: external diameter 6.00 mm, internal diameter 2.45 - 4.50 mm | 6, 8, 10, 11, 13
(8) External hex standard platform | **LANCE**: external diameter 3.75 mm, internal diameter 2.70 mm | 10, 11.5, 13, 16
 | **LANCE**: external diameter 4.20 mm, internal diameter 2.70 mm | 8, 10, 11.5, 13, 16
(9) External hex wide platform | **LANCE**: external diameter 4.20 mm | 8, 10, 11.5, 13, 16
The boldfaced phrases are the names of the dental implant models
Within the context of the actual X-ray images, a total of 483 labelled and unlabelled images, which contain implants inserted into either human or pig jaws, are considered (see Fig. 10). The database of X-ray images involving human jaws pertains to anonymous dental patients and was made available to the authors of this paper by Medical Care NV. The database of X-ray images involving pig jaws was generated explicitly for this research by inserting the relevant dental implants into detached pig jaws obtained from butchers, after which the inserted implants were X-rayed with a device similar to the one used for the dental patients.
Within this context, labelled dental implant images refer to the X-ray images that are identified according to the dental implant model or brand, while the unlabelled X-ray images consist of dental implants with unknown dental implant models or brands. The labelled dental implant images comprise four different brands (Anthogyr, Astra, MIS and Nobel Biocare).
The images are captured in grayscale format. Each of these images is resized to 512×512 pixels and saved in JPEG format. The data set (both the labelled and unlabelled X-ray images) is annotated for the purpose of training the proposed FCN-2 model, facilitating the automatic detection of the dental implants. The data set is annotated for the purpose of semantic segmentation, where the binary masks separate the dental implants from the background in a pixel-wise fashion. The constructed data set consists of the X-ray images and a corresponding set of masks that represent the ground truth of the segmentation.
A semi-automated process is employed for the annotation of the ground truth masks (see Fig. 11). The data set is subsequently partitioned into three sets, that is a training, validation, and test set. A description of the data partitioning protocol within this context is provided in Experiment 2.
Within the context of dental implant recognition, only the MIS dental implants are considered for the purpose of classifying the connection type associated with a specific dental implant in an actual X-ray image. The segmented actual X-ray images serve as the test set used to measure the generalisation performance of the proposed FCN-1 model.
Experimental protocol
In this study, three main experiments are conducted for the purpose of investigating the proficiency of the proposed systems. A k-fold cross-validation experimental protocol is conducted on the proposed deep learning-based algorithms. The experimental protocol is categorised as follows:
Experiment 1. This experiment investigates the proficiency of the proposed strategy of artificially generating simulated X-ray images of dental implants.
Experiment 2. This experiment investigates the proficiency of the proposed automated ROI detection algorithm.
Experiment 3. This experiment investigates the proficiency of the proposed network for the purpose of classifying a questioned dental implant within an actual X-ray image. This experiment is further dichotomised into two sub-experiments, that is Experiment 3A and Experiment 3B, which respectively consider the semi-automated and fully automated systems.
Experiment 1 (Simulated X-ray images)
In this experiment, the proposed FCN-1 model is trained on simulated X-ray images for the purpose of assigning a questioned dental implant within an actual X-ray image to one of nine different connection types. The proposed FCN-1 model consists of twelve convolutional layers, where each of these layers is followed by a ReLU and max pooling layer. A dropout layer with a dropout rate of 50% is added before the final layer. The architecture of the proposed FCN-1 model is depicted in Fig. 12.
The data set of simulated X-ray images is augmented by in-plane rotations up to a maximum angle during training (see Fig. 13). The simulated X-ray data is partitioned into training and validation sets for the purpose of assigning a questioned dental implant within an actual X-ray image to one of the nine connection types investigated in this study. Within the context of the current experiment, 80% of the simulated X-ray images are used for training purposes, while the remaining 20% are used for validation purposes. The training set (seen data) is used to learn the model parameters (weights), the validation set is used for avoiding overfitting by enforcing a stopping criterion, and the test set is used to measure the performance of the network.
A k-fold cross-validation protocol is employed during training for data splitting, which implies that the training set is divided into k different folds. One fold is held out as the validation set. The model is trained on the remaining folds and then applied to the validation set, after which the predictive performance is recorded. This process is repeated k times so that each fold has been used as a validation set once. The recorded predictive performances are then averaged. The optimal model parameter is determined as the one associated with the best average predictive performance.
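The cross-validation loop can be expressed concisely with scikit-learn; `build_model` is assumed to return a freshly initialised, compiled Keras-style model, and the epoch count is illustrative:

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(build_model, X, y, k=5, epochs=100):
    """Average validation accuracy across k folds."""
    scores = []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True).split(X):
        model = build_model()  # fresh weights for every fold
        model.fit(X[train_idx], y[train_idx], epochs=epochs,
                  validation_data=(X[val_idx], y[val_idx]))
        # evaluate returns [loss, accuracy] for a model compiled with
        # metrics=['accuracy']; record the held-out fold's accuracy.
        scores.append(model.evaluate(X[val_idx], y[val_idx])[1])
    return float(np.mean(scores))
```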
Experiment 2 (Automated ROI detection)
In this experiment, the data set of X-ray images which consists of both labelled and unlabelled images is first partitioned into two independent sets. In set one, the labelled dental implants in pig jaws are employed for test purposes. In set two, the labelled dental implants in human jaws are employed for test purposes. For set one, the X-ray images in human jaws and the unlabelled X-ray images in pig jaws are used for training and validation purposes respectively. For set two, the X-ray images in pig jaws and the unlabelled X-ray images in human jaws are used for training and validation purposes respectively. The aforementioned data partitioning protocol is depicted in Fig. 14.
Experiment 3 (Dental implant recognition)
In this section, experiments are conducted to investigate the proficiency of the proposed systems for the purpose of classifying a questioned dental implant within an actual X-ray image. In this experiment, the dental implants are extracted from actual X-ray images and presented to the trained model for evaluation purposes. In Experiment 3A the suitable ROIs that contain the dental implants, are manually specified through the proposed semi-automated system. Within the context of Experiment 3B the suitable ROIs are automatically detected through the proposed deep learning-based technique.
Results
In this section, the performance of the proposed systems is reported and a comprehensive analysis of the results is presented. The statistical measures employed in this study are listed in Table 2.
Table 2.
Performance measure | Definition
---|---
Precision (PRE) | TP/(TP+FP)
Recall (REC) | TP/(TP+FN)
Accuracy (ACC) | (TP+TN)/(TP+FN+FP+TN)
F1 score | 2 · PRE · REC/(PRE+REC)
The number of true positives, false positives, true negatives, and false negatives are denoted by TP, FP, TN, and FN, respectively
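For reference, the measures in Table 2 follow directly from the four counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the performance measures of Table 2 from raw counts."""
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    acc = (tp + tn) / (tp + fn + fp + tn)
    f1 = 2 * pre * rec / (pre + rec)
    return {"PRE": pre, "REC": rec, "ACC": acc, "F1": f1}
```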
Training results for simulated X-ray images
The proposed FCN-1 model is trained on simulated data that is augmented by in-plane rotations up to a maximum angle. The training algorithm is run for a maximum of 1000 epochs and validated across a 5-fold cross-validation protocol. The accuracy of the network is measured after each epoch by employing the independent validation set. A validation accuracy of 98% is achieved (see Fig. 15).
Results for dental implant detection in actual X-ray images
In order to conduct a robust analysis, a 5-fold cross-validation procedure is carried out. During network training, at the end of each epoch, the validation sets are used to gauge the proficiency of the model. For sets one and two respectively, accuracies of 97.84% and 97.21% are achieved by analysing the segmentation performance in terms of pixel-wise accuracy during training. In order to evaluate the proficiency of the proposed ROI detection system during the test phase, the evaluation is conducted on the predicted segmentation maps before post-processing is carried out. The precision, recall, accuracy, and F1 score are employed as performance measures for both sets. The results achieved during testing are presented in Table 3.
Table 3.
Performance measure | Set one | Set two
---|---|---
PRE | 74.38% | 68.31%
REC | 90.98% | 78.64%
ACC | 90.43% | 94.06%
F1 score | 80.73% | 84.48%
The results constitute averages from a 5-fold cross-validation protocol
Note that the precision metric is significantly lower than the accuracy and recall metrics. The proposed model therefore incorrectly classifies instances as positive on a number of occasions.
Selected results illustrating the proficiency of the proposed FCN-2 model for the purpose of segmenting dental implants into foreground and background regions are presented in Fig. 16. The true positive, true negative, false positive and false negative pixels are depicted in white, black, green and pink respectively.
Results for dental implant recognition in actual X-ray images
In order to provide a more detailed perspective into the proposed dental implant recognition protocol, confusion matrices are computed for the nine connection types across the five folds. These confusion matrices provide in-depth insight into the classification of each connection type within the actual X-ray images.
Figures 17 and 18 depict the confusion matrices for the proposed semi-automated dental implant classification system when implants inserted into pig jaws and human jaws are respectively considered.
In order to further evaluate the proficiency of the proposed system, the precision, recall, F1 score and accuracy are estimated from the confusion matrices. It is important to note that within the context of dental implant classification, the data employed for dental implant recognition is imbalanced and certain classes are underrepresented. In order to address the aforementioned data imbalance, weighted averages of the precision, recall and F1 score are estimated from the confusion matrices.
The results for Experiment 3A (the semi-automated system) and Experiment 3B (the fully automated system) are presented in Table 4 (within the context of pig jaws) and Table 5 (within the context of human jaws).
Table 4.
Performance measure | Experiment 3A | Experiment 3B
---|---|---
PRE | 73.04% | 73.12%
REC | 74.63% | 71.72%
F1 score | 72.23% | 70.75%
ACC | 74.63% | 71.72%
The results constitute weighted averages across the five folds
Table 5.
Performance measure | Experiment 3A | Experiment 3B
---|---|---
PRE | 70.52% | 70.55%
REC | 69.76% | 68.67%
F1 score | 69.70% | 67.60%
ACC | 69.76% | 68.67%
The results constitute weighted averages across the five folds
Discussion
Within the context of simulated X-ray images, a high accuracy of 98% is achieved during validation, demonstrating that the proposed FCN-1 model (for automated connection type classification) effectively learns the prominent features associated with each artificially generated dental implant. Within this context, the simulated X-ray images are partitioned into a training and validation set. The proposed network uses the training data for learning prominent features, while the network is tested against the validation data after every epoch in order to prevent overfitting. Data augmentation is implemented in order to ensure that the model learns varied samples of the data so as to increase its capability to generalise on unseen data.
The performance of the proposed FCN-2 model (for automated dental implant segmentation) is encouraging. The proposed system is able to classify the pixels associated with the dental implants (foreground) and those associated with the background with accuracies of 90.43% and 94.06% within the context of sets one and two respectively. The proposed model is trained to perform semantic segmentation. Morphological post-processing techniques are applied to the output binary masks in order to remove noise and components not associated with the dental implants.
Within the context of implants inserted into pig jaws, accuracies of 74.63% and 71.72% are achieved for the semi-automated and fully automated systems respectively. Within the context of implants inserted into human jaws, accuracies of 69.76% and 68.67% are achieved for the semi-automated and fully automated systems respectively. Within the context of the semi-automated system the dental implants are accurately segmented from the actual X-ray images. This system therefore also serves as a benchmark in gauging the performance of the fully automated system.
The aforementioned results clearly demonstrate that the proposed protocol is very proficient at generating artificial (simulated) X-ray images that closely resemble actual X-ray images. The ensemble of algorithms proposed in this paper provides valuable insight into artificial data generation and automatic implant segmentation within the context of dental implant recognition.
A number of studies [14, 17–21] have applied machine learning, and especially deep learning algorithms, for the purpose of classifying dental implants, achieving accuracies ranging from 0.63 to 0.96. In the aforementioned systems the classification of a dental implant is based on the type (brand or model) of the dental implant. The protocol proposed in this study delved deeper by investigating the classification of dental implant connection types. The dental implant connection interface is a vital feature to consider when choosing an abutment replacement model. The compatibility of the dental implant connection interface varies depending on the model. A number of implant models are incompatible with those of other brands [31]. It is therefore very important to accurately classify the dental implant connection type. This study aims to be complementary to the existing state-of-the-art systems within the context of dental implant recognition.
Software and hardware employed
The proposed algorithm for generating simulated X-ray images is implemented in MATLAB™. The neural network-based experimental protocol for the purpose of image segmentation and classification is implemented in TensorFlow. The experimental protocol for model training is conducted on an Nvidia Tesla V100 GPU through the Kraken server. Model inspection and evaluation are performed in the Google Colaboratory environment, which offers free GPU usage for interactive sessions in a Jupyter Notebook-like environment.
Conclusion and future work
Conclusion
In this paper a novel algorithm for the generation of simulated X-ray images is proposed. The classification/recognition results achieved are very encouraging.
Within the context of dental implant segmentation, the semi-automated and fully automated systems proposed in this paper employ an ensemble of techniques that has not been employed for the purpose of dental implant detection on any previous occasion and may therefore also be considered novel.
The application of data normalisation techniques, geometric transformations (scaling and translation), spatial filtering and grayscale intensity adjustments to the questioned dental implant images significantly improves the results.
The proficiency of the proposed systems is slightly lower for human implants than is the case for pig implants, which may be attributed to the presence of more significant noise levels. The signal-to-noise ratio (SNR) can be employed to measure the noise (e.g. random quantum mottle) in the actual X-ray images. An average SNR of 2.742 is estimated for the pig data set, while an average SNR of 1.273 is estimated for the human data set. The human data set has a lower SNR which is typically associated with grainy images.
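A simple estimator of this kind is sketched below; the paper does not state which SNR definition is used, so the mean-signal-over-background-deviation form is an assumption:

```python
import numpy as np

def estimate_snr(img, foreground_mask):
    """SNR as mean foreground intensity over background standard deviation."""
    signal = img[foreground_mask].mean()   # mean implant intensity
    noise = img[~foreground_mask].std()    # background variability
    return signal / noise
```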
Future work
Although the research conducted in this study provides valuable insight into numerous aspects relating to deep learning-based dental implant recognition, the following alternative avenues can also be pursued and may represent interesting future work:
- A more in-depth investigation into and the development of a model that is also capable of distinguishing between implants with the same external shape, but with different internal connection types. This may be the case in exceptional scenarios within the context of dental implants from Nobel Replace. Once it is established that the predicted implant type is associated with more than one connection type, the ROI that only contains the connection is submitted to a different model that only differentiates between the connection types in question.
- The proposed strategy of generating 2D projections (from a number of angles) of 3D volumetric representations of CAD surface models is also applicable to a wide range of other objects. The CAD models for a variety of objects such as vehicles, aircraft, and animals are either readily available or relatively easy to create within a short time period. Potential applications for this research include (i) vehicle detection and classification in traffic scenes, (ii) the identification of aircraft, as well as (iii) the categorisation of animals from aerial cameras.
- Within the context of simulated X-ray image generation, an investigation into a more realistic simulated X-ray acquisition process may be conducted. This can be achieved by also considering the physical attenuation process of the X-rays as they propagate through the material.
Acknowledgements
The authors express gratitude to Medical Care NV and Nick Van Dooren for providing the anonymised database of X-ray images and valuable ideas concerning this work.
Biographies
Aviwe Kohlakala
is a Ph.D. candidate at Stellenbosch University. Her research interests include machine learning, deep learning, biometric authentication and pattern recognition.
Johannes Coetzer
is a Senior Lecturer in Applied Mathematics at Stellenbosch University. His research interests include machine learning, biometric authentication and classifier combination.
Jeroen Bertels
is a Ph.D. candidate in the Processing Speech and Image Division at KU Leuven, where he investigates CNN algorithms for infarction predictions following an acute ischemic stroke.
Dirk Vandermeulen
is a Full Professor in the Processing Speech and Image Division at KU Leuven. His research areas involve computer vision, medical image analysis, and biometric authentication.
Funding
This study was funded by the Ball family and the Division for Research Development at Stellenbosch University.
Declarations
Ethics approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committees and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study. All applicable international, national, and institutional guidelines for the care and use of animals were followed.
Contributor Information
Aviwe Kohlakala, Email: avi.kohlakala@gmail.com.
Johannes Coetzer, Email: jcoetzer@sun.ac.za.
Jeroen Bertels, Email: jeroen.bertels@kuleuven.be.
Dirk Vandermeulen, Email: dirk.vandermeulen@kuleuven.be.
References
- 1. Lassau N, Ammari S, Chouzenoux E, Gortais H, Herent P, Devilder M, Soliman S, Meyrignac O, Talabard M-P, Lamarque J-P, et al. Integrating deep learning CT-scan model, biological and clinical variables to predict severity of COVID-19 patients. Nature Communications. 2021;12(1):1–11. doi: 10.1038/s41467-020-20657-4
- 2. Aggarwal R, Sounderajah V, Martin G, Ting DS, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. NPJ Digital Medicine. 2021;4(1):1–23. doi: 10.1038/s41746-020-00373-5
- 3. Ronneberger O, Fischer P, Brox T (2015) U-Net: Convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 234–241
- 4. Milletari F, Navab N, Ahmadi S-A (2016) V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth international conference on 3D vision (3DV). IEEE, pp 565–571
- 5. Havaei M, Davy A, Warde-Farley D, Biard A, Courville A, Bengio Y, Pal C, Jodoin P-M, Larochelle H. Brain tumor segmentation with deep neural networks. Medical Image Analysis. 2017;35:18–31. doi: 10.1016/j.media.2016.05.004
- 6. Fu M, Wu W, Hong X, Liu Q, Jiang J, Ou Y, Zhao Y, Gong X. Hierarchical combinatorial deep learning architecture for pancreas segmentation of medical computed tomography cancer images. BMC Systems Biology. 2018;12(4):119–127. doi: 10.1186/s12918-018-0572-z
- 7. Jasti V, Zamani AS, Arumugam K, Naved M, Pallathadka H, Sammy F, Raghuvanshi A, Kaliyaperumal K (2022) Computational technique based on machine learning and image processing for medical image analysis of breast cancer diagnosis. Secur Commun Netw 2022
- 8. Luz E, Silva P, Silva R, Silva L, Guimarães J, Miozzo G, Moreira G, Menotti D. Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images. Research on Biomedical Engineering. 2022;38(1):149–162. doi: 10.1007/s42600-021-00151-6
- 9. Rozantsev A, Lepetit V, Fua P. On rendering synthetic images for training an object detector. Computer Vision and Image Understanding. 2015;137:24–37. doi: 10.1016/j.cviu.2014.12.006
- 10. Tremblay J, Prakash A, Acuna D, Brophy M, Jampani V, Anil C, To T, Cameracci E, Boochoon S, Birchfield S (2018) Training deep networks with synthetic data: Bridging the reality gap by domain randomization. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops. pp 969–977
- 11. Yu S, Wu Y, Li W, Song Z, Zeng W. A model for fine-grained vehicle classification based on deep learning. Neurocomputing. 2017;257:97–103. doi: 10.1016/j.neucom.2016.09.116
- 12. Teixeira B, Singh V, Chen T, Ma K, Tamersoy B, Wu Y, Balashova E, Comaniciu D (2018) Generating synthetic X-ray images of a person from the surface geometry. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 9059–9067
- 13. Moreira AH, Queirós S, Morais P, Rodrigues NF, Correia AR, Fernandes V, Pinho AC, Fonseca JC, Vilaça JL (2015) Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation. In: Medical imaging 2015: Computer-aided diagnosis, vol 9414. International Society for Optics and Photonics, p 94143
- 14. Morais P, Queirós S, Moreira AH, Ferreira A, Ferreira E, Duque D, Rodrigues NF, Vilaça JL (2015) Computer-aided recognition of dental implants in X-ray images. In: Medical imaging 2015: Computer-aided diagnosis, vol 9414. International Society for Optics and Photonics, p 94142
- 15. Cunha P, Guevara MA, Messias A, Rocha S, Reis R, Nicolau PM. A method for segmentation of dental implants and crestal bone. International Journal of Computer Assisted Radiology and Surgery. 2013;8(5):711–721. doi: 10.1007/s11548-012-0802-6
- 16. Pauwels R, Jacobs R, Bosmans H, Pittayapat P, Kosalagood P, Silkosessak O, Panmekiate S. Automated implant segmentation in cone-beam CT using edge detection and particle counting. International Journal of Computer Assisted Radiology and Surgery. 2014;9(4):733–743. doi: 10.1007/s11548-013-0946-z
- 17. Benakatti VB, Nayakar RP, Anandhalli M, et al. Machine learning for identification of dental implant systems based on shape - a descriptive study. The Journal of Indian Prosthodontic Society. 2021;21(4):405. doi: 10.4103/jips.jips_324_21
- 18. Lee J-H, Kim Y-T, Lee J-B, Jeong S-N. A performance comparison between automated deep learning and dental professionals in classification of dental implant systems from dental imaging: A multi-center study. Diagnostics. 2020;10(11):910. doi: 10.3390/diagnostics10110910
- 19. Hadj Saïd M, Le Roux M-K, Catherine J-H, Lan R (2020) Development of an artificial intelligence model to identify a dental implant from a radiograph. Int J Oral Maxillofac Implants 35(6)
- 20. Sukegawa S, Yoshii K, Hara T, Yamashita K, Nakano K, Yamamoto N, Nagatsuka H, Furuki Y. Deep neural networks for dental implant system classification. Biomolecules. 2020;10(7):984. doi: 10.3390/biom10070984
- 21. Kim H-S, Ha E-G, Kim YH, Jeon KJ, Lee C, Han S-S (2022) Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study. Imaging Sci Dent 52
- 22. Prince JL, Links JM (2006) Medical imaging signals and systems. Pearson Prentice Hall, Upper Saddle River
- 23. Gonzalez RC, Woods RE (2009) Digital image processing
- 24. Carmignato S, Dewulf W, Leach R (2018) Industrial X-ray computed tomography
- 25. Busignies V, Leclerc B, Porion P, Evesque P, Couarraze G, Tchoreloff P. Quantitative measurements of localized density variations in cylindrical tablets using X-ray microtomography. European Journal of Pharmaceutics and Biopharmaceutics. 2006;64(1):38–50. doi: 10.1016/j.ejpb.2006.02.007
- 26. Coetzer J, Herbst BM, du Preez JA. Offline signature verification using the discrete Radon transform and a hidden Markov model. EURASIP Journal on Advances in Signal Processing. 2004;2004(4):1–13. doi: 10.1155/S1110865704309042
- 27. Samet H, Tamminen M. Efficient component labeling of images of arbitrary dimension represented by linear bintrees. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1988;10(4):579–586. doi: 10.1109/34.3918
- 28. Jain AK. A fast Karhunen-Loève transform for a class of random processes. IEEE Transactions on Communications. 1976;24(9):1023–1029. doi: 10.1109/TCOM.1976.1093409
- 29. Deng G, Cahill L (1993) An adaptive Gaussian filter for noise reduction and edge detection. In: 1993 IEEE conference record nuclear science symposium and medical imaging conference. IEEE, pp 1615–1619
- 30. Maini R, Aggarwal H (2010) A comprehensive review of image enhancement techniques. arXiv:1003.4053
- 31. Karl M, Irastorza-Landa A (2018) In vitro characterization of original and nonoriginal implant abutments. Int J Oral Maxillofac Implants 33(6)