{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,10,6]],"date-time":"2024-10-06T01:06:00Z","timestamp":1728176760057},"reference-count":41,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,10,30]],"date-time":"2020-10-30T00:00:00Z","timestamp":1604016000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,10,30]],"date-time":"2020-10-30T00:00:00Z","timestamp":1604016000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["npj Digit. Med."],"abstract":"Abstract<\/jats:title>Artificial intelligence (AI) based on deep learning has shown excellent diagnostic performance in detecting various diseases with good-quality clinical images. Recently, AI diagnostic systems developed from ultra-widefield fundus (UWF) images have become popular standard-of-care tools in screening for ocular fundus diseases. However, in real-world settings, these systems must base their diagnoses on images with uncontrolled quality (\u201cpassive feeding\u201d), leading to uncertainty about their performance. Here, using 40,562 UWF images, we develop a deep learning\u2013based image filtering system (DLIFS) for detecting and filtering out poor-quality images in an automated fashion such that only good-quality images are transferred to the subsequent AI diagnostic system (\u201cselective eating\u201d). In three independent datasets from different clinical institutions, the DLIFS performed well with sensitivities of 96.9%, 95.6% and 96.6%, and specificities of 96.6%, 97.9% and 98.8%, respectively. Furthermore, we show that the application of our DLIFS significantly improves the performance of established AI diagnostic systems in real-world settings. 
Our work demonstrates that \u201cselective eating\u201d of real-world data is necessary and needs to be considered in the development of image-based AI systems.<\/jats:p>","DOI":"10.1038\/s41746-020-00350-y","type":"journal-article","created":{"date-parts":[[2020,10,30]],"date-time":"2020-10-30T11:03:14Z","timestamp":1604055794000},"update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":21,"title":["Deep learning from \u201cpassive feeding\u201d to \u201cselective eating\u201d of real-world data"],"prefix":"10.1038","volume":"3","author":[{"given":"Zhongwen","family":"Li","sequence":"first","affiliation":[]},{"given":"Chong","family":"Guo","sequence":"additional","affiliation":[]},{"given":"Danyao","family":"Nie","sequence":"additional","affiliation":[]},{"given":"Duoru","family":"Lin","sequence":"additional","affiliation":[]},{"given":"Yi","family":"Zhu","sequence":"additional","affiliation":[]},{"given":"Chuan","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Lanqin","family":"Zhao","sequence":"additional","affiliation":[]},{"given":"Xiaohang","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Meimei","family":"Dongye","sequence":"additional","affiliation":[]},{"given":"Fabao","family":"Xu","sequence":"additional","affiliation":[]},{"given":"Chenjin","family":"Jin","sequence":"additional","affiliation":[]},{"given":"Ping","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Yu","family":"Han","sequence":"additional","affiliation":[]},{"given":"Pisong","family":"Yan","sequence":"additional","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0003-4672-9721","authenticated-orcid":false,"given":"Haotian","family":"Lin","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,10,30]]},"reference":[{"key":"350_CR1","doi-asserted-by":"publisher","first-page":"955","DOI":"10.1126\/science.aay5189","volume":"366","author":"A Hosny","year":"2019","unstructured":"Hosny, A. & Aerts, H. Artificial intelligence for global health. Science 366, 955\u2013956 (2019).","journal-title":"Science"},{"key":"350_CR2","doi-asserted-by":"publisher","first-page":"509","DOI":"10.1001\/jama.2019.21579","volume":"323","author":"ME Matheny","year":"2019","unstructured":"Matheny, M. E., Whicher, D. & Thadaney, I. S. Artificial intelligence in health care: a report from the national academy of medicine. JAMA 323, 509\u2013510 (2019).","journal-title":"JAMA"},{"key":"350_CR3","doi-asserted-by":"publisher","first-page":"71","DOI":"10.1038\/s41581-019-0243-3","volume":"16","author":"P Rashidi","year":"2019","unstructured":"Rashidi, P. & Bihorac, A. Artificial intelligence approaches to improve kidney care. Nat. Rev. Nephrol. 16, 71\u201372 (2019).","journal-title":"Nat. Rev. Nephrol."},{"key":"350_CR4","doi-asserted-by":"publisher","first-page":"2211","DOI":"10.1001\/jama.2017.18152","volume":"318","author":"D Ting","year":"2017","unstructured":"Ting, D. et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 318, 2211\u20132223 (2017).","journal-title":"JAMA"},{"key":"350_CR5","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1038\/nature21056","volume":"542","author":"A Esteva","year":"2017","unstructured":"Esteva, A. et al. Dermatologist-level classification of skin cancer with deep neural networks. 
Nature 542, 115\u2013118 (2017).","journal-title":"Nature"},{"key":"350_CR6","doi-asserted-by":"publisher","first-page":"2402","DOI":"10.1001\/jama.2016.17216","volume":"316","author":"V Gulshan","year":"2016","unstructured":"Gulshan, V. et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316, 2402\u20132410 (2016).","journal-title":"JAMA"},{"key":"350_CR7","doi-asserted-by":"publisher","first-page":"962","DOI":"10.1016\/j.ophtha.2017.02.008","volume":"124","author":"R Gargeya","year":"2017","unstructured":"Gargeya, R. & Leng, T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology 124, 962\u2013969 (2017).","journal-title":"Ophthalmology"},{"key":"350_CR8","doi-asserted-by":"publisher","first-page":"1199","DOI":"10.1016\/j.ophtha.2018.01.023","volume":"125","author":"Z Li","year":"2018","unstructured":"Li, Z. et al. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology 125, 1199\u20131206 (2018).","journal-title":"Ophthalmology"},{"key":"350_CR9","doi-asserted-by":"publisher","first-page":"1342","DOI":"10.1038\/s41591-018-0107-6","volume":"24","author":"J De Fauw","year":"2018","unstructured":"De Fauw, J. et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med. 24, 1342\u20131350 (2018).","journal-title":"Nat. Med."},{"key":"350_CR10","doi-asserted-by":"publisher","first-page":"1627","DOI":"10.1016\/j.ophtha.2019.07.024","volume":"126","author":"S Phene","year":"2019","unstructured":"Phene, S. et al. Deep learning and glaucoma specialists: The relative importance of optic disc features to predict glaucoma referral in fundus photographs. Ophthalmology 126, 1627\u20131639 (2019).","journal-title":"Ophthalmology"},{"key":"350_CR11","doi-asserted-by":"publisher","first-page":"85","DOI":"10.1016\/j.ophtha.2019.05.029","volume":"127","author":"J Son","year":"2019","unstructured":"Son, J. et al. Development and validation of deep learning models for screening multiple abnormal findings in retinal fundus images. Ophthalmology 127, 85\u201394 (2019).","journal-title":"Ophthalmology"},{"key":"350_CR12","doi-asserted-by":"publisher","first-page":"1645","DOI":"10.1016\/S1470-2045(19)30637-0","volume":"20","author":"H Luo","year":"2019","unstructured":"Luo, H. et al. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: a multicentre, case-control, diagnostic study. Lancet Oncol. 20, 1645\u20131654 (2019).","journal-title":"Lancet Oncol."},{"key":"350_CR13","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1136\/bcr-2013-200734","volume":"2013","author":"S Theodoropoulou","year":"2013","unstructured":"Theodoropoulou, S., Ainsworth, S. & Blaikie, A. Ultra-wide field imaging of retinopathy of prematurity (ROP) using Optomap-200TX. BMJ Case Rep. 2013, 1\u20132 (2013).","journal-title":"BMJ Case Rep."},{"key":"350_CR14","doi-asserted-by":"publisher","first-page":"660","DOI":"10.1097\/IAE.0000000000000937","volume":"36","author":"A Nagiel","year":"2016","unstructured":"Nagiel, A., Lalane, R. A., Sadda, S. R. & Schwartz, S. D. ULTRA-WIDEFIELD FUNDUS IMAGING: a review of clinical applications and future trends. 
Retina 36, 660\u2013678 (2016).","journal-title":"Retina"},{"key":"350_CR15","doi-asserted-by":"publisher","first-page":"229","DOI":"10.1007\/s00417-007-0631-4","volume":"246","author":"AS Neubauer","year":"2008","unstructured":"Neubauer, A. S. et al. Nonmydriatic screening for diabetic retinopathy by ultra-widefield scanning laser ophthalmoscopy (Optomap). Graefes Arch. Clin. Exp. Ophthalmol. 246, 229\u2013235 (2008).","journal-title":"Graefes Arch. Clin. Exp. Ophthalmol."},{"key":"350_CR16","doi-asserted-by":"publisher","first-page":"2459","DOI":"10.2337\/dc12-0346","volume":"35","author":"M Kernt","year":"2012","unstructured":"Kernt, M. et al. Assessment of diabetic retinopathy using nonmydriatic ultra-widefield scanning laser ophthalmoscopy (Optomap) compared with ETDRS 7-field stereo photography. Diabetes Care 35, 2459\u20132463 (2012).","journal-title":"Diabetes Care"},{"key":"350_CR17","doi-asserted-by":"publisher","first-page":"549","DOI":"10.1016\/j.ajo.2012.03.019","volume":"154","author":"PS Silva","year":"2012","unstructured":"Silva, P. S. et al. Nonmydriatic ultrawide field retinal imaging compared with dilated standard 7-field 35-mm photography and retinal specialist examination for evaluation of diabetic retinopathy. Am. J. Ophthalmol. 154, 549\u2013559 (2012).","journal-title":"Am. J. Ophthalmol."},{"key":"350_CR18","doi-asserted-by":"publisher","first-page":"e6900","DOI":"10.7717\/peerj.6900","volume":"7","author":"H Masumoto","year":"2019","unstructured":"Masumoto, H. et al. Accuracy of a deep convolutional neural network in detection of retinitis pigmentosa on ultrawide-field images. PeerJ 7, e6900 (2019).","journal-title":"PeerJ"},{"key":"350_CR19","doi-asserted-by":"publisher","first-page":"618","DOI":"10.21037\/atm.2019.11.28","volume":"7","author":"Z Li","year":"2019","unstructured":"Li, Z. et al. A deep learning system for identifying lattice degeneration and retinal breaks using ultra-widefield fundus images. Ann. Transl. Med. 7, 618 (2019).","journal-title":"Ann. Transl. Med."},{"key":"350_CR20","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-017-09891-x","volume":"7","author":"H Ohsugi","year":"2017","unstructured":"Ohsugi, H., Tabuchi, H., Enno, H. & Ishitobi, N. Accuracy of deep learning, a machine-learning technology, using ultra-wide-field fundus ophthalmoscopy for detecting rhegmatogenous retinal detachment. Sci. Rep. 7, 9425 (2017).","journal-title":"Sci. Rep."},{"key":"350_CR21","doi-asserted-by":"publisher","first-page":"1269","DOI":"10.1007\/s10792-018-0940-0","volume":"39","author":"S Matsuba","year":"2019","unstructured":"Matsuba, S. et al. Accuracy of ultra-wide-field fundus ophthalmoscopy-assisted deep learning, a machine-learning technology, for detecting age-related macular degeneration. Int. Ophthalmol. 39, 1269\u20131275 (2019).","journal-title":"Int. Ophthalmol."},{"key":"350_CR22","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1038\/s42003-019-0730-x","volume":"3","author":"Z Li","year":"2020","unstructured":"Li, Z. et al. Deep learning for detecting retinal detachment and discerning macular status using ultra-widefield fundus images. Commun. Biol. 3, 15 (2020).","journal-title":"Commun. Biol."},{"key":"350_CR23","doi-asserted-by":"publisher","first-page":"2153","DOI":"10.1007\/s10792-019-01074-z","volume":"39","author":"T Nagasawa","year":"2019","unstructured":"Nagasawa, T. et al. Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naive proliferative diabetic retinopathy. Int. 
Ophthalmol. 39, 2153\u20132159 (2019).","journal-title":"Int. Ophthalmol."},{"key":"350_CR24","doi-asserted-by":"publisher","first-page":"e5696","DOI":"10.7717\/peerj.5696","volume":"6","author":"T Nagasawa","year":"2018","unstructured":"Nagasawa, T. et al. Accuracy of deep learning, a machine learning technology, using ultra-wide-field fundus ophthalmoscopy for detecting idiopathic macular holes. PeerJ 6, e5696 (2018).","journal-title":"PeerJ"},{"key":"350_CR25","first-page":"94","volume":"12","author":"D Nagasato","year":"2019","unstructured":"Nagasato, D. et al. Deep-learning classifier with ultrawide-field fundus ophthalmoscopy for detecting branch retinal vein occlusion. Int. J. Ophthalmol. 12, 94\u201399 (2019).","journal-title":"Int. J. Ophthalmol."},{"key":"350_CR26","doi-asserted-by":"publisher","first-page":"1875431","DOI":"10.1155\/2018\/1875431","volume":"2018","author":"D Nagasato","year":"2018","unstructured":"Nagasato, D. et al. Deep neural Network-based method for detecting central retinal vein occlusion using Ultrawide-Field fundus ophthalmoscopy. J. Ophthalmol. 2018, 1875431 (2018).","journal-title":"J. Ophthalmol."},{"key":"350_CR27","doi-asserted-by":"publisher","first-page":"647","DOI":"10.1097\/IJG.0000000000000988","volume":"27","author":"H Masumoto","year":"2018","unstructured":"Masumoto, H. et al. Deep-learning classifier with an ultrawide-field scanning laser ophthalmoscope detects glaucoma visual field severity. J. Glaucoma 27, 647\u2013652 (2018).","journal-title":"J. Glaucoma"},{"key":"350_CR28","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1167\/tvst.9.2.3","volume":"9","author":"Z Li","year":"2020","unstructured":"Li, Z. et al. Development and evaluation of a deep learning system for screening retinal hemorrhage based on Ultra-Widefield fundus images. Transl. Vis. Sci. Technol. 9, 3 (2020).","journal-title":"Transl. Vis. Sci. Technol"},{"key":"350_CR29","doi-asserted-by":"publisher","first-page":"2","DOI":"10.1111\/j.1442-9071.2008.01812.x","volume":"37","author":"TJ Bennett","year":"2009","unstructured":"Bennett, T. J. & Barry, C. J. Ophthalmic imaging today: An ophthalmic photographer\u2019s viewpoint-a review. Clin. Exp. Ophthalmol. 37, 2\u201313 (2009).","journal-title":"Clin. Exp. Ophthalmol."},{"key":"350_CR30","doi-asserted-by":"publisher","first-page":"3546","DOI":"10.1167\/iovs.12-10347","volume":"54","author":"E Trucco","year":"2013","unstructured":"Trucco, E. et al. Validating retinal fundus image analysis algorithms: issues and a proposal. Invest. Ophthalmol. Vis. Sci. 54, 3546\u20133559 (2013).","journal-title":"Invest. Ophthalmol. Vis. Sci."},{"key":"350_CR31","doi-asserted-by":"publisher","first-page":"806","DOI":"10.1109\/ACCESS.2017.2776126","volume":"6","author":"F Shao","year":"2018","unstructured":"Shao, F., Yang, Y., Jiang, Q., Jiang, G. & Ho, Y. Automated quality assessment of fundus images via analysis of illumination, naturalness and structure. IEEE Access. 6, 806\u2013817 (2018).","journal-title":"IEEE Access."},{"key":"350_CR32","first-page":"1224","volume":"2018","author":"AS Coyner","year":"2018","unstructured":"Coyner, A. S. et al. Deep learning for image quality assessment of fundus images in retinopathy of prematurity. AMIA Annu. Symp. Proc. 2018, 1224\u20131232 (2018).","journal-title":"AMIA Annu. Symp. Proc."},{"key":"350_CR33","doi-asserted-by":"publisher","first-page":"64","DOI":"10.1016\/j.compbiomed.2018.10.004","volume":"103","author":"GT Zago","year":"2018","unstructured":"Zago, G. T., Andreao, R. 
V., Dorizzi, B. & Teatini, S. E. Retinal image quality assessment using deep learning. Comput. Biol. Med. 103, 64\u201370 (2018).","journal-title":"Comput. Biol. Med."},{"key":"350_CR34","first-page":"5955","volume":"2011","author":"A Hunter","year":"2011","unstructured":"Hunter, A. et al. An automated retinal image quality grading algorithm. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2011, 5955\u20135958 (2011).","journal-title":"Conf. Proc. IEEE Eng. Med. Biol. Soc."},{"key":"350_CR35","doi-asserted-by":"publisher","first-page":"l886","DOI":"10.1136\/bmj.l886","volume":"364","author":"DS Watson","year":"2019","unstructured":"Watson, D. S. et al. Clinical applications of machine learning algorithms: beyond the black box. BMJ 364, l886 (2019).","journal-title":"BMJ"},{"key":"350_CR36","doi-asserted-by":"publisher","first-page":"14001","DOI":"10.1117\/1.JMI.1.1.014001","volume":"1","author":"D Veiga","year":"2014","unstructured":"Veiga, D., Pereira, C., Ferreira, M., Goncalves, L. & Monteiro, J. Quality evaluation of digital fundus images through combined measures. J. Med. Imaging. 1, 14001 (2014).","journal-title":"J. Med. Imaging."},{"key":"350_CR37","doi-asserted-by":"publisher","first-page":"1009","DOI":"10.1111\/ceo.13575","volume":"47","author":"S Keel","year":"2019","unstructured":"Keel, S. et al. Development and validation of a deep-learning algorithm for the detection of neovascular age-related macular degeneration from colour fundus photographs. Clin. Exp. Ophthalmol. 47, 1009\u20131018 (2019).","journal-title":"Clin. Exp. Ophthalmol."},{"key":"350_CR38","doi-asserted-by":"crossref","unstructured":"Bhatia, Y., Bajpayee, A., Raghuvanshi, D. & Mittal, H. Image Captioning using Google\u2019s Inception-resnet-v2 and Recurrent Neural Network. In 2019 Twelfth International Conference on Contemporary Computing (IC3), IEEE, 1\u20136 (2019).","DOI":"10.1109\/IC3.2019.8844921"},{"key":"350_CR39","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","volume":"115","author":"O Russakovsky","year":"2015","unstructured":"Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vision. 115, 211\u2013252 (2015).","journal-title":"Int. J. Comput. Vision."},{"key":"350_CR40","unstructured":"Diederik P. & Kingma, J. B. Adam: a method for stochastic optimization. Preprint at https:\/\/https:\/\/arxiv.org\/abs\/1412.6980 (2014)."},{"key":"350_CR41","unstructured":"Karen Simonyan, Andrea. V. & Andrew. Z. Deep inside convolutional networks: visualising image classification models and saliency maps. 
Preprint at https:\/\/arxiv.org\/abs\/1312.6034 (2014)."}],"container-title":["npj Digital Medicine"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s41746-020-00350-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-020-00350-y","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-020-00350-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,7]],"date-time":"2022-12-07T01:25:38Z","timestamp":1670376338000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s41746-020-00350-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,30]]},"references-count":41,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2020,12]]}},"alternative-id":["350"],"URL":"https:\/\/doi.org\/10.1038\/s41746-020-00350-y","relation":{},"ISSN":["2398-6352"],"issn-type":[{"value":"2398-6352","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,10,30]]},"assertion":[{"value":"27 May 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 September 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 October 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"143"}}
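The record above is a Crossref REST API "works" response (message-type "work"). As a minimal sketch of how such a record can be consumed, the following Python snippet fetches the same work by its DOI from the public api.crossref.org endpoint and reads a few of the fields shown above. The endpoint and field names follow Crossref's documented message format; note that mutable fields such as is-referenced-by-count will reflect whatever value Crossref holds at the time of the request, not the snapshot shown here.

```python
# Minimal sketch: fetch this Crossref work record and read a few fields.
# Assumes network access to api.crossref.org; uses only the standard library.
import json
import urllib.request

DOI = "10.1038/s41746-020-00350-y"
url = f"https://api.crossref.org/works/{DOI}"

# The response has the same shape as the JSON above:
# {"status": "ok", "message-type": "work", ..., "message": {...}}
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

work = record["message"]
print(work["title"][0])                # article title (Crossref stores titles as a list)
print(work["container-title"][0])      # journal name: "npj Digital Medicine"
print(work["DOI"])                     # "10.1038/s41746-020-00350-y"
print(len(work.get("reference", [])))  # number of deposited references (41 in the snapshot above)
print(work["is-referenced-by-count"])  # citation count at the time of the request
```

For anything beyond a one-off query, Crossref asks clients to identify themselves (for example, a mailto address in the User-Agent header); that etiquette is omitted here for brevity.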