Abstract
Recent advances in artificial intelligence and deep learning models are contributing to the development of advanced computer-aided diagnosis (CAD) systems. In the context of medical imaging, Optical Coherence Tomography (OCT) is a valuable technique that is able to provide cross-sectional visualisations of the ocular tissue. However, OCT is constrained by a trade-off between the quality of the visualisations it can produce and the overall amount of tissue that can be analysed at once. This trade-off leads to a scarcity of high quality data, a problem that is very prevalent when developing machine learning-based CAD systems for medical imaging. To mitigate this problem, we present a novel methodology for the unpaired conversion of OCT images acquired with a low quality extensive scanning preset into the visual style of those taken with a high quality intensive scan, and vice versa. This is achieved by employing contrastive unpaired translation generative adversarial networks to convert between the visual styles of the different acquisition presets. The results obtained in the validation experiments show that these synthetically generated images can mirror the visual features of the original ones while preserving the natural tissue texture, effectively increasing the total number of available samples that can be used to train robust machine learning-based CAD systems.
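The contrastive unpaired translation approach referenced in the abstract trains the generator with a patchwise contrastive (InfoNCE) objective: a patch of the translated image should be embedded close to the patch of the input image at the same location, and far from patches at other locations. The following is a minimal NumPy sketch of that loss, for illustration only; it is not the authors' implementation, and the function name, the feature shapes, and the temperature value `tau` are assumptions.

```python
import numpy as np

def patch_nce_loss(feat_src, feat_out, tau=0.07):
    """Patchwise InfoNCE loss, as used conceptually in contrastive
    unpaired translation.

    feat_src, feat_out: (N, D) arrays of N patch embeddings taken from
    the source image and its translation. Patch i of the output is the
    positive for patch i of the source; the other N-1 patches act as
    negatives.
    """
    # L2-normalise so dot products become cosine similarities
    s = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    o = feat_out / np.linalg.norm(feat_out, axis=1, keepdims=True)
    logits = s @ o.T / tau  # (N, N) similarity matrix, scaled by temperature
    # Cross-entropy with the diagonal (matching locations) as the target class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

In this sketch the loss is small when corresponding patches are embedded similarly and larger when they are not, which is what encourages the translated image to keep the content and texture of the source while the adversarial loss pushes its style towards the target acquisition preset.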
This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project; Ministerio de Ciencia e Innovación, Government of Spain through the research project with reference PID2019-108435RB-I00; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, Grupos de Referencia Competitiva, grant ref. ED431C 2020/24, predoctoral grant ref. ED481A 2021/161 and postdoctoral grant ref. ED481B 2021/059; Axencia Galega de Innovación (GAIN), Xunta de Galicia, grant ref. IN845D 2020/38; CITIC, Centro de Investigación de Galicia ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Gende, M., de Moura, J., Novo, J., Ortega, M. (2022). High/Low Quality Style Transfer for Mutual Conversion of OCT Images Using Contrastive Unpaired Translation Generative Adversarial Networks. In: Sclaroff, S., Distante, C., Leo, M., Farinella, G.M., Tombari, F. (eds) Image Analysis and Processing – ICIAP 2022. ICIAP 2022. Lecture Notes in Computer Science, vol 13231. Springer, Cham. https://doi.org/10.1007/978-3-031-06427-2_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-06426-5
Online ISBN: 978-3-031-06427-2
eBook Packages: Computer Science, Computer Science (R0)