Looking for low vision: Predicting visual prognosis by fusing structured and free-text data from electronic health records. International Journal of Medical Informatics. Gui, H., Tseng, B., Hu, W., Wang, S. Y. 2022; 159: 104678

Abstract

INTRODUCTION: Low vision rehabilitation improves quality of life for visually impaired patients, but referral rates fall short of national guidelines. Automatically identifying, from electronic health records (EHR), patients with poor visual prognosis could allow targeted referrals to low vision services. The purpose of this study was to build and evaluate deep learning models that integrate structured and free-text EHR data to predict visual prognosis.

METHODS: We identified 5547 patients with low vision (defined as best documented visual acuity (VA) less than 20/40) on ≥ 1 encounter in the EHR from 2009 to 2018, with ≥ 1 year of follow-up from the earliest date of low vision, who did not improve to better than 20/40 over 1 year. Ophthalmology notes on or prior to the index date were extracted. Structured data available from the EHR included demographics, billing and procedure codes, medications, and exam findings including VA, intraocular pressure, corneal thickness, and refraction. To predict whether low vision patients would still have low vision a year later, we developed and compared deep learning models that used structured inputs and free-text progress notes. We compared three representations of progress notes: 1) previously developed ophthalmology domain-specific word embeddings, and medical concepts extracted from notes as 2) named entities represented by one-hot vectors and 3) named entities represented as embeddings. Standard performance metrics, including area under the receiver operating characteristic curve (AUROC) and F1 score, were evaluated on a held-out test set.

RESULTS: Among the 5547 low vision patients in our cohort, 40.7% (N = 2258) never improved to better than 20/40 over one year of follow-up. Our single-modality deep learning model based on structured inputs predicted low vision prognosis with an AUROC of 80% and an F1 score of 70%.
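The note representations compared in the study can be illustrated with a minimal sketch. The entity vocabulary, token embeddings, and function names below are illustrative assumptions, not the authors' implementation or their NER pipeline:

```python
# Hypothetical entity vocabulary: an NER pipeline maps note text to
# biomedical concepts; these three entities are illustrative only.
ENTITY_VOCAB = {"glaucoma": 0, "cataract": 1, "macular degeneration": 2}

def one_hot_entities(entities):
    """Represent extracted named entities as a multi-hot vector."""
    vec = [0.0] * len(ENTITY_VOCAB)
    for ent in entities:
        if ent in ENTITY_VOCAB:
            vec[ENTITY_VOCAB[ent]] = 1.0
    return vec

def average_embeddings(tokens, embeddings, dim):
    """Represent a note as the average of its domain-specific word vectors."""
    hits = [embeddings[t] for t in tokens if t in embeddings]
    if not hits:
        return [0.0] * dim
    return [sum(vals) / len(hits) for vals in zip(*hits)]
```

For example, with a toy two-dimensional embedding table `{"vision": [1.0, 0.0], "loss": [0.0, 1.0]}`, `average_embeddings(["vision", "loss"], ..., 2)` yields `[0.5, 0.5]`; entity embeddings (representation 3) would apply the same averaging to vectors looked up per extracted entity rather than per token.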
Deep learning models utilizing named entity recognition achieved an AUROC of 79% and an F1 score of 63%. Deep learning models further augmented with free-text inputs using domain-specific word embeddings achieved an AUROC of 82% and an F1 score of 69%, outperforming all single- and multiple-modality models that represented text with biomedical concepts extracted through named entity recognition pipelines.

DISCUSSION: Free-text progress notes within the EHR provide valuable information relevant to predicting patients' visual prognosis. We observed that representing free text using domain-specific word embeddings led to better performance than representing it using extracted named entities. The incorporation of domain-specific embeddings improved performance over structured-only models, suggesting that domain-specific text representations may be especially important to the performance of predictive models in highly subspecialized fields such as ophthalmology.
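The reported metrics are standard and can be sketched in pure Python: AUROC is the probability that a randomly chosen positive receives a higher score than a randomly chosen negative, and F1 is the harmonic mean of precision and recall. The toy labels and scores below are illustrative and unrelated to the study's data:

```python
def auroc(y_true, y_score):
    """AUROC as the positive-vs-negative pairwise ranking probability."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    # Ties between a positive and a negative score count as half a win.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1_score(y_true, y_pred):
    """F1 = 2*TP / (2*TP + FP + FN) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Toy example: 4 patients, predicted probabilities thresholded at 0.5.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
preds = [1 if s >= 0.5 else 0 for s in scores]
```

Here `auroc(labels, scores)` is 0.75 (three of four positive-negative pairs are ranked correctly) and `f1_score(labels, preds)` is 2/3, since thresholding misses one of the two positives.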

DOI: 10.1016/j.ijmedinf.2021.104678

PubMed ID: 34999410