Matches in SemOpenAlex for { <https://semopenalex.org/work/W4386195382> ?p ?o ?g. }
Showing items 1 to 89 of 89, with 100 items per page.
- W4386195382 abstract "Abstract Multiple attempts at intracranial hemorrhage (ICH) detection using deep-learning techniques have been made but have been plagued by clinical failures. Most studies of ICH detection have insufficient data or weak annotations. We sought to determine whether a deep-learning algorithm for ICH detection trained on a strongly annotated dataset outperforms one trained on a weakly annotated dataset, and whether a weighted ensemble model that integrates separate models trained on datasets with different ICH subtypes is more accurate. We used publicly available brain CT scans from the Radiological Society of North America (27,861 CT scans, 3,528 ICHs) and AI-Hub (53,045 CT scans, 7,013 ICHs) as training datasets. For external testing, 600 CT scans (327 with ICH) from Dongguk University Medical Center and 386 CT scans (160 with ICH) from Qure.ai were used. DenseNet121, InceptionResNetV2, MobileNetV2, and VGG19 were trained on strongly and weakly annotated datasets and compared. We then developed a weighted ensemble model combining separate models trained on all ICH, subdural hemorrhage (SDH), subarachnoid hemorrhage (SAH), and small-lesion ICH cases. The final weighted ensemble model was compared to four well-known deep-learning models. Six neurologists reviewed difficult ICH cases after external testing. The InceptionResNetV2, MobileNetV2, and VGG19 models outperformed their weakly annotated counterparts when trained on strongly annotated datasets. A weighted ensemble combining models trained on SDH, SAH, and small-lesion ICH had a higher AUC than a model trained only on all ICH cases, and it outperformed four well-known deep-learning models in sensitivity, specificity, and AUC. Strongly annotated data are superior to weakly annotated data for training deep-learning algorithms. Since no single model can capture all aspects of a complex task well, we developed a weighted ensemble model for ICH detection after training with large-scale, strongly annotated CT scans. We also showed that a better understanding and management of cases challenging for both AI and humans is required to facilitate clinical use of ICH detection algorithms. Key Points Question Can a weighted ensemble method and strongly annotated training datasets yield a deep-learning model with high accuracy for detecting intracranial hemorrhage? Findings A deep-learning algorithm for detecting ICH trained on a strongly annotated dataset outperformed models trained on a weakly annotated dataset. A weighted ensemble of separate models trained only on SDH, SAH, and small-lesion ICH cases achieved a higher AUC. Meaning This study suggests that to enhance the performance of deep-learning models, researchers should consider the distinct imaging characteristics of each hemorrhage subtype and use strongly annotated training datasets." @default.
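The abstract describes a weighted ensemble that combines probability outputs from separate models, each trained on a different ICH subtype (all ICH, SDH, SAH, small-lesion ICH). The paper does not expose its implementation here, but the core combination step can be sketched as a weighted average of per-model predicted probabilities. All function and variable names below are illustrative assumptions, not the authors' code:

```python
def weighted_ensemble(model_probs, weights):
    """Combine per-model predicted probabilities via a weighted average.

    model_probs: list of probability lists, one list per model,
                 each giving P(ICH) for the same sequence of scans.
    weights:     one non-negative weight per model (hypothetical values;
                 in practice they would be tuned on validation data).
    Returns the weighted-average probability for each scan.
    """
    assert len(model_probs) == len(weights) and model_probs
    total = sum(weights)
    n_scans = len(model_probs[0])
    return [
        sum(w * probs[i] for probs, w in zip(model_probs, weights)) / total
        for i in range(n_scans)
    ]

# Illustrative probabilities for two scans from four subtype models
# (all-ICH, SDH, SAH, small-lesion), with made-up weights.
all_ich = [0.9, 0.2]
sdh     = [0.8, 0.1]
sah     = [0.7, 0.3]
small   = [0.6, 0.4]
combined = weighted_ensemble([all_ich, sdh, sah, small],
                             weights=[0.4, 0.2, 0.2, 0.2])
```

In practice the weights would be chosen to maximize a validation metric such as AUC, which is how a weighted ensemble can exceed any single constituent model.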
- W4386195382 created "2023-08-27" @default.
- W4386195382 creator A5001539526 @default.
- W4386195382 creator A5007339498 @default.
- W4386195382 creator A5014199850 @default.
- W4386195382 creator A5022194885 @default.
- W4386195382 creator A5047005390 @default.
- W4386195382 creator A5050789321 @default.
- W4386195382 creator A5052100533 @default.
- W4386195382 creator A5069866514 @default.
- W4386195382 creator A5075510968 @default.
- W4386195382 creator A5090337019 @default.
- W4386195382 creator A5091036340 @default.
- W4386195382 date "2023-08-25" @default.
- W4386195382 modified "2023-09-29" @default.
- W4386195382 title "Strengthening Deep-learning Models for Intracranial Hemorrhage Detection: Strongly Annotated Computed Tomography Images and Model Ensembles" @default.
- W4386195382 cites W2065565905 @default.
- W4386195382 cites W2088731482 @default.
- W4386195382 cites W2183341477 @default.
- W4386195382 cites W2511730936 @default.
- W4386195382 cites W2537123990 @default.
- W4386195382 cites W2796438033 @default.
- W4386195382 cites W2899381700 @default.
- W4386195382 cites W2971013993 @default.
- W4386195382 cites W2981941315 @default.
- W4386195382 cites W3016970897 @default.
- W4386195382 cites W3023284086 @default.
- W4386195382 cites W3036586801 @default.
- W4386195382 cites W3040525591 @default.
- W4386195382 cites W3171849353 @default.
- W4386195382 cites W3188199774 @default.
- W4386195382 cites W3202023533 @default.
- W4386195382 cites W4210943902 @default.
- W4386195382 cites W4213328656 @default.
- W4386195382 cites W4213348184 @default.
- W4386195382 cites W4221027618 @default.
- W4386195382 cites W4224236454 @default.
- W4386195382 cites W4300939921 @default.
- W4386195382 doi "https://doi.org/10.1101/2023.08.24.23293394" @default.
- W4386195382 hasPublicationYear "2023" @default.
- W4386195382 type Work @default.
- W4386195382 citedByCount "0" @default.
- W4386195382 crossrefType "posted-content" @default.
- W4386195382 hasAuthorship W4386195382A5001539526 @default.
- W4386195382 hasAuthorship W4386195382A5007339498 @default.
- W4386195382 hasAuthorship W4386195382A5014199850 @default.
- W4386195382 hasAuthorship W4386195382A5022194885 @default.
- W4386195382 hasAuthorship W4386195382A5047005390 @default.
- W4386195382 hasAuthorship W4386195382A5050789321 @default.
- W4386195382 hasAuthorship W4386195382A5052100533 @default.
- W4386195382 hasAuthorship W4386195382A5069866514 @default.
- W4386195382 hasAuthorship W4386195382A5075510968 @default.
- W4386195382 hasAuthorship W4386195382A5090337019 @default.
- W4386195382 hasAuthorship W4386195382A5091036340 @default.
- W4386195382 hasBestOaLocation W43861953821 @default.
- W4386195382 hasConcept C108583219 @default.
- W4386195382 hasConcept C119857082 @default.
- W4386195382 hasConcept C126838900 @default.
- W4386195382 hasConcept C153180895 @default.
- W4386195382 hasConcept C154945302 @default.
- W4386195382 hasConcept C41008148 @default.
- W4386195382 hasConcept C45942800 @default.
- W4386195382 hasConcept C544519230 @default.
- W4386195382 hasConcept C71924100 @default.
- W4386195382 hasConceptScore W4386195382C108583219 @default.
- W4386195382 hasConceptScore W4386195382C119857082 @default.
- W4386195382 hasConceptScore W4386195382C126838900 @default.
- W4386195382 hasConceptScore W4386195382C153180895 @default.
- W4386195382 hasConceptScore W4386195382C154945302 @default.
- W4386195382 hasConceptScore W4386195382C41008148 @default.
- W4386195382 hasConceptScore W4386195382C45942800 @default.
- W4386195382 hasConceptScore W4386195382C544519230 @default.
- W4386195382 hasConceptScore W4386195382C71924100 @default.
- W4386195382 hasLocation W43861953821 @default.
- W4386195382 hasOpenAccess W4386195382 @default.
- W4386195382 hasPrimaryLocation W43861953821 @default.
- W4386195382 hasRelatedWork W2810053714 @default.
- W4386195382 hasRelatedWork W3124943098 @default.
- W4386195382 hasRelatedWork W3136979370 @default.
- W4386195382 hasRelatedWork W4223943233 @default.
- W4386195382 hasRelatedWork W4308112567 @default.
- W4386195382 hasRelatedWork W4312200629 @default.
- W4386195382 hasRelatedWork W4360585206 @default.
- W4386195382 hasRelatedWork W4364306694 @default.
- W4386195382 hasRelatedWork W4380075502 @default.
- W4386195382 hasRelatedWork W4380086463 @default.
- W4386195382 isParatext "false" @default.
- W4386195382 isRetracted "false" @default.
- W4386195382 workType "article" @default.