ImplantFormer: vision transformer-based implant position regression using dental CBCT data | Neural Computing and Applications

ImplantFormer: vision transformer-based implant position regression using dental CBCT data

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Implant prosthesis is the most appropriate treatment for dentition defects or dentition loss, and it usually involves a surgical guide design process to determine the implant position. However, such design relies heavily on the subjective experience of dentists. In this paper, a transformer-based implant position regression network, ImplantFormer, is proposed to automatically predict the implant position from oral CBCT data. We propose to predict the implant position using the 2D axial view of the tooth crown area and to fit a centerline of the implant to obtain the actual implant position at the tooth root. A convolutional stem and a decoder are designed, respectively, to coarsely extract image features before patch embedding and to integrate multi-level feature maps for robust prediction. Because both long-range relationships and local features are involved, our approach better represents global information and achieves better localization performance. Extensive experiments on a dental implant dataset using fivefold cross-validation demonstrate that the proposed ImplantFormer outperforms existing methods.
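The centerline-fitting step described in the abstract — regressing implant centers on several crown-level axial slices, fitting a line through them, and extrapolating to root depth — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the slice coordinates below are invented for demonstration, and the paper's network-based center prediction is replaced by a fixed array of hypothetical predictions.

```python
import numpy as np

# Hypothetical predicted implant-center coordinates (x, y) on several
# 2D axial slices at crown level; z is the slice depth. These numbers
# are illustrative only, not from the paper's dataset.
crown_preds = np.array([
    # z,    x,     y
    [10.0, 52.1, 40.3],
    [12.0, 52.6, 40.9],
    [14.0, 53.2, 41.4],
    [16.0, 53.7, 42.0],
])

z = crown_preds[:, 0]
xy = crown_preds[:, 1:]

# Model the implant centerline as a 3D line parameterized by depth z,
# i.e. fit x(z) and y(z) by linear least squares.
A = np.stack([z, np.ones_like(z)], axis=1)       # design matrix [z, 1]
coeffs, *_ = np.linalg.lstsq(A, xy, rcond=None)  # 2x2: slopes and intercepts

def centerline_point(z_query: float) -> np.ndarray:
    """Return the (x, y) position of the fitted centerline at depth z."""
    return np.array([z_query, 1.0]) @ coeffs

# Extrapolate the fitted centerline down to a root-level slice to obtain
# the implant position at the tooth root.
root_xy = centerline_point(30.0)
```

With the sample points above, the fit yields slopes of 0.27 and 0.28 per unit depth for x and y, so the extrapolated root-level center is approximately (57.49, 45.91).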

[The full text includes Figs. 1–9 and Algorithms 1–2.]

Data availability

The dental implant dataset is not publicly available due to privacy protection of the research participants.

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 82261138629; Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515010688 and 2021A1515220072; Shenzhen Municipal Science and Technology Innovation Council under Grant JCYJ20220531101412030 and JCYJ20220530155811025.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Linlin Shen.

Ethics declarations

Conflict of interest

We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, X., Li, X., Li, X. et al. ImplantFormer: vision transformer-based implant position regression using dental CBCT data. Neural Comput & Applic 36, 6643–6658 (2024). https://doi.org/10.1007/s00521-023-09411-1

