Purpose: To evaluate the performance of a deep learning algorithm in detecting referral-warranted diabetic retinopathy (RDR) in low-resolution fundus images acquired with a smartphone and an indirect ophthalmoscope lens adapter.

Methods: An automated deep learning algorithm trained on 92,364 traditional fundus camera images was tested on a dataset of smartphone fundus images of 103 eyes drawn from two previously published studies. Images were extracted as screenshots from live video clips of fundus examinations filmed at 1080p resolution using a commercially available lens adapter. Each image was graded twice by a board-certified ophthalmologist and compared with the output of the algorithm, which classified each image as having RDR (moderate nonproliferative DR or worse) or no RDR.

Results: Despite multiple artifacts (lens glare, lens particulates/smudging, user hands over the objective lens) and the low resolution of images captured by users with varying levels of medical training, the algorithm achieved an area under the curve of 0.89 (95% confidence interval [CI], 0.83-0.95), with 89% sensitivity (95% CI, 81%-100%) and 83% specificity (95% CI, 77%-89%) for detecting RDR in mobile phone-acquired fundus photographs.

Conclusions: The fully data-driven, artificial intelligence-based grading algorithm described herein can screen fundus photographs taken with mobile devices and identify with high reliability which cases should be referred to an ophthalmologist for further evaluation and treatment.

Translational Relevance: Global implementation of this algorithm could drastically reduce the rate of vision loss attributable to DR.
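For readers unfamiliar with the screening metrics reported above, the following sketch shows how sensitivity, specificity, and a rank-based (Mann-Whitney) area under the ROC curve are computed for a binary RDR-vs-no-RDR classifier. The labels and scores below are purely illustrative toy data, not the study's dataset, and the threshold of 0.5 is an assumed operating point.

```python
# Illustrative computation of the metrics reported for a binary
# RDR screening classifier; data here are toy values, not study data.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """Rank-based AUC: probability that a randomly chosen positive
    case scores higher than a randomly chosen negative case
    (ties count as half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = RDR present, 0 = no RDR.
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # assumed 0.5 threshold

sens, spec = sensitivity_specificity(y_true, y_pred)
roc_auc = auc(y_true, scores)
```

In a referral-screening setting, sensitivity (not missing RDR cases) is typically weighted more heavily than specificity, since a false negative means a patient who needs ophthalmic follow-up is not referred.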
DOI: 10.1167/tvst.9.2.60
PMID: 33294301