Matches in SemOpenAlex for { <https://semopenalex.org/work/W4200594589> ?p ?o ?g. }
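These rows can be reproduced programmatically. A minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the SPARQLWrapper Python package; the quad pattern `?p ?o ?g` from the header above is expressed with an explicit GRAPH clause so the query is valid standard SPARQL:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed public SemOpenAlex SPARQL endpoint.
ENDPOINT = "https://semopenalex.org/sparql"

# Same pattern as the header: every (predicate, object, graph) match
# for the work <https://semopenalex.org/work/W4200594589>.
QUERY = """
SELECT ?p ?o ?g WHERE {
  GRAPH ?g { <https://semopenalex.org/work/W4200594589> ?p ?o . }
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```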
- W4200594589 endingPage "104678" @default.
- W4200594589 startingPage "104678" @default.
- W4200594589 abstract "Low vision rehabilitation improves quality of life for visually impaired patients, but referral rates fall short of national guidelines. Automatically identifying, from electronic health records (EHR), patients with poor visual prognosis could allow targeted referrals to low vision services. The purpose of this study was to build and evaluate deep learning models that integrate structured and free-text EHR data to predict visual prognosis. We identified 5547 patients with low vision (defined as best documented visual acuity (VA) worse than 20/40) on ≥ 1 encounter in the EHR from 2009 to 2018, with ≥ 1 year of follow-up from the earliest date of low vision; the outcome of interest was whether a patient failed to improve to better than 20/40 within 1 year. Ophthalmology notes on or prior to the index date were extracted. Structured data available from the EHR included demographics, billing and procedure codes, medications, and exam findings including VA, intraocular pressure, corneal thickness, and refraction. To predict whether low vision patients would still have low vision a year later, we developed and compared deep learning models that used structured inputs and free-text progress notes. We compared three representations of progress notes: 1) previously developed ophthalmology domain-specific word embeddings, and medical concepts extracted from notes as named entities, represented either 2) as one-hot vectors or 3) as embeddings. Standard performance metrics, including area under the receiver operating characteristic curve (AUROC) and F1 score, were evaluated on a held-out test set. Among the 5547 low vision patients in our cohort, 40.7% (N = 2258) never improved to better than 20/40 over one year of follow-up. Our single-modality deep learning model based on structured inputs predicted low vision prognosis with an AUROC of 80% and an F1 score of 70%. Deep learning models utilizing named entity recognition achieved an AUROC of 79% and an F1 score of 63%. Deep learning models further augmented with free-text inputs using domain-specific word embeddings achieved an AUROC of 82% and an F1 score of 69%, outperforming all single- and multiple-modality models representing text with biomedical concepts extracted through named entity recognition pipelines. Free-text progress notes within the EHR provide valuable information relevant to predicting patients' visual prognosis. We observed that representing free text using domain-specific word embeddings led to better performance than representing it using extracted named entities. The incorporation of domain-specific embeddings improved performance over structured-data-only models, suggesting that domain-specific text representations may be especially important to the performance of predictive models in highly subspecialized fields such as ophthalmology." @default.
- W4200594589 created "2021-12-31" @default.
- W4200594589 creator A5000683013 @default.
- W4200594589 creator A5031730846 @default.
- W4200594589 creator A5067629425 @default.
- W4200594589 creator A5072238171 @default.
- W4200594589 date "2022-03-01" @default.
- W4200594589 modified "2023-10-16" @default.
- W4200594589 title "Looking for low vision: Predicting visual prognosis by fusing structured and free-text data from electronic health records" @default.
- W4200594589 cites W1512404644 @default.
- W4200594589 cites W1725326238 @default.
- W4200594589 cites W1832693441 @default.
- W4200594589 cites W1869282115 @default.
- W4200594589 cites W2003623267 @default.
- W4200594589 cites W2006617902 @default.
- W4200594589 cites W2024792158 @default.
- W4200594589 cites W2029426756 @default.
- W4200594589 cites W2099119910 @default.
- W4200594589 cites W2169818249 @default.
- W4200594589 cites W2336687435 @default.
- W4200594589 cites W2471580555 @default.
- W4200594589 cites W2739362442 @default.
- W4200594589 cites W2769422756 @default.
- W4200594589 cites W2769851464 @default.
- W4200594589 cites W2898192966 @default.
- W4200594589 cites W2900266642 @default.
- W4200594589 cites W2911489562 @default.
- W4200594589 cites W2930139824 @default.
- W4200594589 cites W2942760134 @default.
- W4200594589 cites W2944218112 @default.
- W4200594589 cites W2955748575 @default.
- W4200594589 cites W2981022715 @default.
- W4200594589 cites W2990377124 @default.
- W4200594589 cites W2997522493 @default.
- W4200594589 cites W3017313856 @default.
- W4200594589 cites W3094263970 @default.
- W4200594589 cites W3094657717 @default.
- W4200594589 cites W3115898000 @default.
- W4200594589 cites W3156316299 @default.
- W4200594589 cites W3157657116 @default.
- W4200594589 doi "https://doi.org/10.1016/j.ijmedinf.2021.104678" @default.
- W4200594589 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/34999410" @default.
- W4200594589 hasPublicationYear "2022" @default.
- W4200594589 type Work @default.
- W4200594589 citedByCount "4" @default.
- W4200594589 countsByYear W42005945892022 @default.
- W4200594589 countsByYear W42005945892023 @default.
- W4200594589 crossrefType "journal-article" @default.
- W4200594589 hasAuthorship W4200594589A5000683013 @default.
- W4200594589 hasAuthorship W4200594589A5031730846 @default.
- W4200594589 hasAuthorship W4200594589A5067629425 @default.
- W4200594589 hasAuthorship W4200594589A5072238171 @default.
- W4200594589 hasBestOaLocation W42005945892 @default.
- W4200594589 hasConcept C118487528 @default.
- W4200594589 hasConcept C119767625 @default.
- W4200594589 hasConcept C119857082 @default.
- W4200594589 hasConcept C126322002 @default.
- W4200594589 hasConcept C126838900 @default.
- W4200594589 hasConcept C144024400 @default.
- W4200594589 hasConcept C149923435 @default.
- W4200594589 hasConcept C151730666 @default.
- W4200594589 hasConcept C154945302 @default.
- W4200594589 hasConcept C160735492 @default.
- W4200594589 hasConcept C162324750 @default.
- W4200594589 hasConcept C195910791 @default.
- W4200594589 hasConcept C2776135927 @default.
- W4200594589 hasConcept C2777267654 @default.
- W4200594589 hasConcept C2778257484 @default.
- W4200594589 hasConcept C2780084366 @default.
- W4200594589 hasConcept C2983447183 @default.
- W4200594589 hasConcept C3019952477 @default.
- W4200594589 hasConcept C3020144179 @default.
- W4200594589 hasConcept C41008148 @default.
- W4200594589 hasConcept C50522688 @default.
- W4200594589 hasConcept C512399662 @default.
- W4200594589 hasConcept C71924100 @default.
- W4200594589 hasConcept C72563966 @default.
- W4200594589 hasConcept C86803240 @default.
- W4200594589 hasConceptScore W4200594589C118487528 @default.
- W4200594589 hasConceptScore W4200594589C119767625 @default.
- W4200594589 hasConceptScore W4200594589C119857082 @default.
- W4200594589 hasConceptScore W4200594589C126322002 @default.
- W4200594589 hasConceptScore W4200594589C126838900 @default.
- W4200594589 hasConceptScore W4200594589C144024400 @default.
- W4200594589 hasConceptScore W4200594589C149923435 @default.
- W4200594589 hasConceptScore W4200594589C151730666 @default.
- W4200594589 hasConceptScore W4200594589C154945302 @default.
- W4200594589 hasConceptScore W4200594589C160735492 @default.
- W4200594589 hasConceptScore W4200594589C162324750 @default.
- W4200594589 hasConceptScore W4200594589C195910791 @default.
- W4200594589 hasConceptScore W4200594589C2776135927 @default.
- W4200594589 hasConceptScore W4200594589C2777267654 @default.
- W4200594589 hasConceptScore W4200594589C2778257484 @default.
- W4200594589 hasConceptScore W4200594589C2780084366 @default.
- W4200594589 hasConceptScore W4200594589C2983447183 @default.
- W4200594589 hasConceptScore W4200594589C3019952477 @default.
- W4200594589 hasConceptScore W4200594589C3020144179 @default.
- W4200594589 hasConceptScore W4200594589C41008148 @default.
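The abstract above reports each model's AUROC and F1 score on a held-out test set. A minimal sketch of how those two metrics are typically computed with scikit-learn; `y_true` and `y_prob` are illustrative stand-ins (not the paper's data) for the binary "still low vision at one year" labels and a model's predicted probabilities:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

# Illustrative labels and scores only; the paper's cohort and models
# are not reproduced here.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                # 1 = still low vision at 1 year
y_prob = np.clip(0.35 * y_true + 0.5 * rng.random(200), 0.0, 1.0)

auroc = roc_auc_score(y_true, y_prob)                # threshold-free ranking metric
f1 = f1_score(y_true, (y_prob >= 0.5).astype(int))   # F1 at a 0.5 decision threshold

print(f"AUROC: {auroc:.2f}  F1: {f1:.2f}")
```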