Matches in SemOpenAlex for { <https://semopenalex.org/work/W3037823790> ?p ?o ?g. }
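The graph pattern above can be run against SemOpenAlex's public SPARQL endpoint to retrieve these matches programmatically. A minimal sketch, using only the Python standard library; the endpoint URL is assumed to be `https://semopenalex.org/sparql`, and `build_query`/`fetch_triples` are illustrative helper names, not part of any SemOpenAlex client:

```python
import json
import urllib.parse
import urllib.request

# Assumed public endpoint; adjust if SemOpenAlex hosts its SPARQL service elsewhere.
ENDPOINT = "https://semopenalex.org/sparql"

def build_query(work_iri: str) -> str:
    """SPARQL SELECT returning every predicate/object pair of the given work."""
    return f"SELECT ?p ?o WHERE {{ <{work_iri}> ?p ?o . }}"

def fetch_triples(work_iri: str):
    """POST the query and return the JSON result bindings (requires network access)."""
    data = urllib.parse.urlencode({"query": build_query(work_iri)}).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=data,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]

# Example (network required):
# for b in fetch_triples("https://semopenalex.org/work/W3037823790"):
#     print(b["p"]["value"], b["o"]["value"])
```

Each binding in the JSON response corresponds to one `?p ?o` row, i.e. one of the predicate/object lines listed below.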
- W3037823790 endingPage "88" @default.
- W3037823790 startingPage "78" @default.
- W3037823790 abstract "Purpose: To illustrate what is inside the so-called black box of deep learning models (DLMs) so that clinicians can have greater confidence in the conclusions of artificial intelligence by evaluating adversarial explanation on its ability to explain the rationale of DLM decisions for glaucoma and glaucoma-related findings. Adversarial explanation generates adversarial examples (AEs), or images that have been changed to gain or lose pathologic characteristic-specific traits, to explain the DLM’s rationale. Design: Evaluation of explanation methods for DLMs. Participants: Health screening participants (n = 1653) at the Seoul National University Hospital Health Promotion Center, Seoul, Republic of Korea. Methods: We trained DLMs for referable glaucoma (RG), increased cup-to-disc ratio (ICDR), disc rim narrowing (DRN), and retinal nerve fiber layer defect (RNFLD) using 6430 retinal fundus images. Surveys consisting of explanations using AEs and gradient-weighted class activation mapping (GradCAM), a conventional heatmap-based explanation method, were generated for 400 pathologic and healthy patient eyes. For each method, board-trained glaucoma specialists rated location explainability (the ability to pinpoint decision-relevant areas in the image) and rationale explainability (the ability to inform the user of the model’s reasoning for the decision based on pathologic features). Scores were compared by paired Wilcoxon signed-rank test. Main Outcome Measures: Area under the receiver operating characteristic curve (AUC), sensitivities, and specificities of DLMs; visualization of clinical pathologic changes of AEs; and survey scores for location and rationale explainability. Results: The AUCs were 0.90, 0.99, 0.95, and 0.79, and sensitivities at 0.90 specificity were 0.79, 1.00, 0.82, and 0.55 for the RG, ICDR, DRN, and RNFLD DLMs, respectively. Generated AEs showed valid clinical feature changes. Survey scores for location explainability were 3.94 ± 1.33 for AEs and 2.55 ± 1.24 for GradCAM, of a possible maximum of 5 points; scores for rationale explainability were 3.97 ± 1.31 for AEs and 2.10 ± 1.25 for GradCAM. Adversarial examples provided significantly better explainability than GradCAM. Conclusions: Adversarial explanation increased explainability over GradCAM, a conventional heatmap-based explanation method. Adversarial explanation may help medical professionals understand more clearly the rationale of DLMs when using them for clinical decisions." @default.
- W3037823790 created "2020-07-02" @default.
- W3037823790 creator A5009393558 @default.
- W3037823790 creator A5009413939 @default.
- W3037823790 creator A5019126213 @default.
- W3037823790 creator A5019832508 @default.
- W3037823790 creator A5028910139 @default.
- W3037823790 creator A5035047849 @default.
- W3037823790 creator A5041387187 @default.
- W3037823790 creator A5051636160 @default.
- W3037823790 creator A5051694555 @default.
- W3037823790 creator A5061810491 @default.
- W3037823790 creator A5075140229 @default.
- W3037823790 creator A5077821420 @default.
- W3037823790 creator A5080962975 @default.
- W3037823790 creator A5087803692 @default.
- W3037823790 creator A5089412914 @default.
- W3037823790 date "2021-01-01" @default.
- W3037823790 modified "2023-09-23" @default.
- W3037823790 title "Explaining the Rationale of Deep Learning Glaucoma Decisions with Adversarial Examples" @default.
- W3037823790 cites W1969176779 @default.
- W3037823790 cites W2078548076 @default.
- W3037823790 cites W2169113983 @default.
- W3037823790 cites W2767236661 @default.
- W3037823790 cites W2772246530 @default.
- W3037823790 cites W2784652774 @default.
- W3037823790 cites W2792026451 @default.
- W3037823790 cites W2799723178 @default.
- W3037823790 cites W2809787027 @default.
- W3037823790 cites W2893356526 @default.
- W3037823790 cites W2898192966 @default.
- W3037823790 cites W2899951262 @default.
- W3037823790 cites W2929375793 @default.
- W3037823790 cites W2946839276 @default.
- W3037823790 cites W2952436003 @default.
- W3037823790 cites W2964693503 @default.
- W3037823790 cites W2976808722 @default.
- W3037823790 cites W2995850447 @default.
- W3037823790 doi "https://doi.org/10.1016/j.ophtha.2020.06.036" @default.
- W3037823790 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/32598951" @default.
- W3037823790 hasPublicationYear "2021" @default.
- W3037823790 type Work @default.
- W3037823790 sameAs 3037823790 @default.
- W3037823790 citedByCount "20" @default.
- W3037823790 countsByYear W30378237902021 @default.
- W3037823790 countsByYear W30378237902022 @default.
- W3037823790 countsByYear W30378237902023 @default.
- W3037823790 crossrefType "journal-article" @default.
- W3037823790 hasAuthorship W3037823790A5009393558 @default.
- W3037823790 hasAuthorship W3037823790A5009413939 @default.
- W3037823790 hasAuthorship W3037823790A5019126213 @default.
- W3037823790 hasAuthorship W3037823790A5019832508 @default.
- W3037823790 hasAuthorship W3037823790A5028910139 @default.
- W3037823790 hasAuthorship W3037823790A5035047849 @default.
- W3037823790 hasAuthorship W3037823790A5041387187 @default.
- W3037823790 hasAuthorship W3037823790A5051636160 @default.
- W3037823790 hasAuthorship W3037823790A5051694555 @default.
- W3037823790 hasAuthorship W3037823790A5061810491 @default.
- W3037823790 hasAuthorship W3037823790A5075140229 @default.
- W3037823790 hasAuthorship W3037823790A5077821420 @default.
- W3037823790 hasAuthorship W3037823790A5080962975 @default.
- W3037823790 hasAuthorship W3037823790A5087803692 @default.
- W3037823790 hasAuthorship W3037823790A5089412914 @default.
- W3037823790 hasBestOaLocation W30378237901 @default.
- W3037823790 hasConcept C108583219 @default.
- W3037823790 hasConcept C118487528 @default.
- W3037823790 hasConcept C119767625 @default.
- W3037823790 hasConcept C126322002 @default.
- W3037823790 hasConcept C154945302 @default.
- W3037823790 hasConcept C2776391266 @default.
- W3037823790 hasConcept C2778527774 @default.
- W3037823790 hasConcept C2780592520 @default.
- W3037823790 hasConcept C37736160 @default.
- W3037823790 hasConcept C41008148 @default.
- W3037823790 hasConcept C58471807 @default.
- W3037823790 hasConcept C71924100 @default.
- W3037823790 hasConceptScore W3037823790C108583219 @default.
- W3037823790 hasConceptScore W3037823790C118487528 @default.
- W3037823790 hasConceptScore W3037823790C119767625 @default.
- W3037823790 hasConceptScore W3037823790C126322002 @default.
- W3037823790 hasConceptScore W3037823790C154945302 @default.
- W3037823790 hasConceptScore W3037823790C2776391266 @default.
- W3037823790 hasConceptScore W3037823790C2778527774 @default.
- W3037823790 hasConceptScore W3037823790C2780592520 @default.
- W3037823790 hasConceptScore W3037823790C37736160 @default.
- W3037823790 hasConceptScore W3037823790C41008148 @default.
- W3037823790 hasConceptScore W3037823790C58471807 @default.
- W3037823790 hasConceptScore W3037823790C71924100 @default.
- W3037823790 hasFunder F4320322557 @default.
- W3037823790 hasIssue "1" @default.
- W3037823790 hasLocation W30378237901 @default.
- W3037823790 hasOpenAccess W3037823790 @default.
- W3037823790 hasPrimaryLocation W30378237901 @default.
- W3037823790 hasRelatedWork W1503211735 @default.
- W3037823790 hasRelatedWork W2012393349 @default.
- W3037823790 hasRelatedWork W2025723352 @default.
- W3037823790 hasRelatedWork W2391291956 @default.
- W3037823790 hasRelatedWork W2396777714 @default.