Matches in SemOpenAlex for { <https://semopenalex.org/work/W3206468898> ?p ?o ?g. }
- W3206468898 endingPage "100069" @default.
- W3206468898 startingPage "100069" @default.
- W3206468898 abstract "Purpose: To evaluate the performance of a federated learning framework for deep neural network-based retinal microvasculature segmentation and referable diabetic retinopathy (RDR) classification using OCT and OCT angiography (OCTA). Design: Retrospective analysis of clinical OCT and OCTA scans of control participants and patients with diabetes. Participants: The 153 OCTA en face images used for microvasculature segmentation were acquired from 4 OCT instruments with fields of view ranging from 2 × 2-mm to 6 × 6-mm. The 700 eyes used for RDR classification consisted of OCTA en face images and structural OCT projections acquired from 2 commercial OCT systems. Methods: OCT angiography images used for microvasculature segmentation were delineated manually and verified by retina experts. Diabetic retinopathy (DR) severity was evaluated by retinal specialists and was condensed into 2 classes: non-RDR and RDR. The federated learning configuration was demonstrated via simulation using 4 clients for microvasculature segmentation and was compared with other collaborative training methods. Subsequently, federated learning was applied over multiple institutions for RDR classification and was compared with models trained and tested on data from the same institution (internal models) and different institutions (external models). Main Outcome Measures: For microvasculature segmentation, we measured the accuracy and Dice similarity coefficient (DSC). For severity classification, we measured accuracy, area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve, balanced accuracy, F1 score, sensitivity, and specificity. Results: For both applications, federated learning achieved similar performance as internal models. Specifically, for microvasculature segmentation, the federated learning model achieved similar performance (mean DSC across all test sets, 0.793) as models trained on a fully centralized dataset (mean DSC, 0.807). For RDR classification, federated learning achieved a mean AUROC of 0.954 and 0.960; the internal models attained a mean AUROC of 0.956 and 0.973. Similar results are reflected in the other calculated evaluation metrics. Conclusions: Federated learning showed similar results to traditional deep learning in both applications of segmentation and classification, while maintaining data privacy. Evaluation metrics highlight the potential of collaborative learning for increasing domain diversity and the generalizability of models used for the classification of OCT data." @default.
- W3206468898 created "2021-10-25" @default.
- W3206468898 creator A5000972836 @default.
- W3206468898 creator A5013180024 @default.
- W3206468898 creator A5020434567 @default.
- W3206468898 creator A5026906414 @default.
- W3206468898 creator A5029283104 @default.
- W3206468898 creator A5044200406 @default.
- W3206468898 creator A5054580904 @default.
- W3206468898 creator A5055560750 @default.
- W3206468898 creator A5063931826 @default.
- W3206468898 creator A5081986793 @default.
- W3206468898 creator A5086060762 @default.
- W3206468898 date "2021-12-01" @default.
- W3206468898 modified "2023-10-17" @default.
- W3206468898 title "Federated Learning for Microvasculature Segmentation and Diabetic Retinopathy Classification of OCT Data" @default.
- W3206468898 cites W1984203772 @default.
- W3206468898 cites W2039314819 @default.
- W3206468898 cites W2093620952 @default.
- W3206468898 cites W2127322315 @default.
- W3206468898 cites W2510096069 @default.
- W3206468898 cites W2579891345 @default.
- W3206468898 cites W2741520251 @default.
- W3206468898 cites W2886801379 @default.
- W3206468898 cites W2898192966 @default.
- W3206468898 cites W2930926105 @default.
- W3206468898 cites W2934399013 @default.
- W3206468898 cites W2953273677 @default.
- W3206468898 cites W2970408908 @default.
- W3206468898 cites W3006354677 @default.
- W3206468898 cites W3012839906 @default.
- W3206468898 cites W3015711121 @default.
- W3206468898 cites W3021288413 @default.
- W3206468898 cites W3040685212 @default.
- W3206468898 cites W3045674654 @default.
- W3206468898 cites W3048241105 @default.
- W3206468898 cites W3086590218 @default.
- W3206468898 cites W3090496569 @default.
- W3206468898 cites W3092462775 @default.
- W3206468898 cites W3118996476 @default.
- W3206468898 cites W3127057363 @default.
- W3206468898 cites W4302760599 @default.
- W3206468898 doi "https://doi.org/10.1016/j.xops.2021.100069" @default.
- W3206468898 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/36246944" @default.
- W3206468898 hasPublicationYear "2021" @default.
- W3206468898 type Work @default.
- W3206468898 sameAs 3206468898 @default.
- W3206468898 citedByCount "28" @default.
- W3206468898 countsByYear W32064688982022 @default.
- W3206468898 countsByYear W32064688982023 @default.
- W3206468898 crossrefType "journal-article" @default.
- W3206468898 hasAuthorship W3206468898A5000972836 @default.
- W3206468898 hasAuthorship W3206468898A5013180024 @default.
- W3206468898 hasAuthorship W3206468898A5020434567 @default.
- W3206468898 hasAuthorship W3206468898A5026906414 @default.
- W3206468898 hasAuthorship W3206468898A5029283104 @default.
- W3206468898 hasAuthorship W3206468898A5044200406 @default.
- W3206468898 hasAuthorship W3206468898A5054580904 @default.
- W3206468898 hasAuthorship W3206468898A5055560750 @default.
- W3206468898 hasAuthorship W3206468898A5063931826 @default.
- W3206468898 hasAuthorship W3206468898A5081986793 @default.
- W3206468898 hasAuthorship W3206468898A5086060762 @default.
- W3206468898 hasBestOaLocation W32064688981 @default.
- W3206468898 hasConcept C108583219 @default.
- W3206468898 hasConcept C118487528 @default.
- W3206468898 hasConcept C119857082 @default.
- W3206468898 hasConcept C134018914 @default.
- W3206468898 hasConcept C148524875 @default.
- W3206468898 hasConcept C153180895 @default.
- W3206468898 hasConcept C154945302 @default.
- W3206468898 hasConcept C2779829184 @default.
- W3206468898 hasConcept C2780827179 @default.
- W3206468898 hasConcept C41008148 @default.
- W3206468898 hasConcept C555293320 @default.
- W3206468898 hasConcept C58471807 @default.
- W3206468898 hasConcept C71924100 @default.
- W3206468898 hasConcept C89600930 @default.
- W3206468898 hasConceptScore W3206468898C108583219 @default.
- W3206468898 hasConceptScore W3206468898C118487528 @default.
- W3206468898 hasConceptScore W3206468898C119857082 @default.
- W3206468898 hasConceptScore W3206468898C134018914 @default.
- W3206468898 hasConceptScore W3206468898C148524875 @default.
- W3206468898 hasConceptScore W3206468898C153180895 @default.
- W3206468898 hasConceptScore W3206468898C154945302 @default.
- W3206468898 hasConceptScore W3206468898C2779829184 @default.
- W3206468898 hasConceptScore W3206468898C2780827179 @default.
- W3206468898 hasConceptScore W3206468898C41008148 @default.
- W3206468898 hasConceptScore W3206468898C555293320 @default.
- W3206468898 hasConceptScore W3206468898C58471807 @default.
- W3206468898 hasConceptScore W3206468898C71924100 @default.
- W3206468898 hasConceptScore W3206468898C89600930 @default.
- W3206468898 hasFunder F4320314000 @default.
- W3206468898 hasFunder F4320319965 @default.
- W3206468898 hasFunder F4320334506 @default.
- W3206468898 hasFunder F4320334593 @default.
- W3206468898 hasIssue "4" @default.
- W3206468898 hasLocation W32064688981 @default.
- W3206468898 hasLocation W32064688982 @default.
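The match pattern in the heading (`{ <https://semopenalex.org/work/W3206468898> ?p ?o ?g. }`) can be reproduced programmatically. A minimal Python sketch, assuming SemOpenAlex's public SPARQL endpoint at `https://semopenalex.org/sparql` and translating the quad pattern into a standard `GRAPH` query (`fetch_triples` and its endpoint URL are illustrative, not part of the listing above):

```python
import json
import urllib.parse
import urllib.request


def build_query(work_iri: str) -> str:
    """Build a SPARQL query listing every predicate/object (and graph)
    attached to a work IRI, mirroring the { <iri> ?p ?o ?g. } pattern."""
    return (
        "SELECT ?p ?o ?g WHERE { "
        f"GRAPH ?g {{ <{work_iri}> ?p ?o . }} "
        "}"
    )


def fetch_triples(work_iri: str,
                  endpoint: str = "https://semopenalex.org/sparql") -> list:
    """Run the query against a SPARQL endpoint (endpoint URL assumed)
    and return the JSON result bindings."""
    params = urllib.parse.urlencode(
        {"query": build_query(work_iri), "format": "json"}
    )
    with urllib.request.urlopen(f"{endpoint}?{params}") as resp:
        return json.load(resp)["results"]["bindings"]
```

Each binding in the returned list would then correspond to one `?p ?o` line of the listing above, e.g. the `citedByCount "28"` triple.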