Matches in SemOpenAlex for { <https://semopenalex.org/work/W4385603850> ?p ?o ?g. }
- W4385603850 endingPage "104" @default.
- W4385603850 startingPage "104" @default.
- W4385603850 abstract "Epicardial adipose tissue (EAT) is located between the visceral pericardium and myocardium, and EAT volume is correlated with cardiovascular risk. Nowadays, many deep learning-based automated EAT segmentation and quantification methods in the U-net family have been developed to reduce the workload for radiologists. The automatic assessment of EAT on non-contrast low-dose CT calcium score images poses a greater challenge compared to the automatic assessment on coronary CT angiography, which requires a higher radiation dose to capture the intricate details of the coronary arteries. This study comprehensively examined and evaluated state-of-the-art segmentation methods while outlining future research directions. Our dataset consisted of 154 non-contrast low-dose CT scans from the ROBINSCA study, with two types of labels: (a) region inside the pericardium and (b) pixel-wise EAT labels. We selected four advanced methods from the U-net family: 3D U-net, 3D attention U-net, an extended 3D attention U-net, and U-net++. For evaluation, we performed both four-fold cross-validation and hold-out tests. Agreement between the automatic segmentation/quantification and the manual quantification was evaluated with the Pearson correlation and the Bland–Altman analysis. Generally, the models trained with label type (a) showed better performance compared to models trained with label type (b). The U-net++ model trained with label type (a) showed the best performance for segmentation and quantification. The U-net++ model trained with label type (a) efficiently provided better EAT segmentation results (hold-out test: DCS = 80.18±0.20%, mIoU = 67.13±0.39%, sensitivity = 81.47±0.43%, specificity = 99.64±0.00%, Pearson correlation = 0.9405) and EAT volume compared to the other U-net-based networks and the recent EAT segmentation method. Interestingly, our findings indicate that 3D convolutional neural networks do not consistently outperform 2D networks in EAT segmentation and quantification. Moreover, utilizing labels representing the region inside the pericardium proved advantageous in training more accurate EAT segmentation models. These insights highlight the potential of deep learning-based methods for achieving robust EAT segmentation and quantification outcomes." @default.
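The abstract above reports segmentation agreement using the Dice similarity coefficient and mIoU. As a minimal illustrative sketch (not the paper's actual code; the helper names and toy masks below are assumptions for demonstration), these per-mask metrics can be computed from flattened binary masks as follows:

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two flat binary masks (0/1 values)."""
    intersection = sum(p and t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    # Both masks empty: define DSC as 1.0 (perfect agreement on "no EAT present").
    return 2.0 * intersection / denom if denom else 1.0

def iou(pred, target):
    """Intersection over union (Jaccard index) between two flat binary masks."""
    intersection = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return intersection / union if union else 1.0

# Hypothetical toy masks: 4 predicted positives vs. 3 reference positives, 3 overlapping.
pred   = [1, 1, 1, 1, 0, 0, 0, 0]
target = [1, 1, 1, 0, 0, 0, 0, 0]
print(round(dice_coefficient(pred, target), 4))  # 2*3/(4+3) ≈ 0.8571
print(round(iou(pred, target), 4))               # 3/4 = 0.75
```

In practice a 3D CT mask would be flattened (e.g. `mask.ravel()` with NumPy) before applying such functions; the mIoU figure in the abstract averages IoU over classes/cases.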
- W4385603850 created "2023-08-05" @default.
- W4385603850 creator A5005026444 @default.
- W4385603850 creator A5008554158 @default.
- W4385603850 creator A5012401657 @default.
- W4385603850 creator A5024082845 @default.
- W4385603850 creator A5031138348 @default.
- W4385603850 creator A5078725649 @default.
- W4385603850 creator A5084733299 @default.
- W4385603850 date "2023-08-05" @default.
- W4385603850 modified "2023-10-14" @default.
- W4385603850 title "The U-Net Family for Epicardial Adipose Tissue Segmentation and Quantification in Low-Dose CT" @default.
- W4385603850 cites W161249732 @default.
- W4385603850 cites W1856792630 @default.
- W4385603850 cites W1894830265 @default.
- W4385603850 cites W1901129140 @default.
- W4385603850 cites W1903029394 @default.
- W4385603850 cites W2026616100 @default.
- W4385603850 cites W2036175283 @default.
- W4385603850 cites W2038279598 @default.
- W4385603850 cites W2058902680 @default.
- W4385603850 cites W2071450001 @default.
- W4385603850 cites W2103906689 @default.
- W4385603850 cites W2114484288 @default.
- W4385603850 cites W2464708700 @default.
- W4385603850 cites W2519781522 @default.
- W4385603850 cites W2749253269 @default.
- W4385603850 cites W2756462150 @default.
- W4385603850 cites W2773154842 @default.
- W4385603850 cites W2790564346 @default.
- W4385603850 cites W2884436604 @default.
- W4385603850 cites W2921486645 @default.
- W4385603850 cites W2962914239 @default.
- W4385603850 cites W2989991226 @default.
- W4385603850 cites W2991912488 @default.
- W4385603850 cites W3009869033 @default.
- W4385603850 cites W3011442757 @default.
- W4385603850 cites W3011818728 @default.
- W4385603850 cites W3013414605 @default.
- W4385603850 cites W3040786357 @default.
- W4385603850 cites W3041026522 @default.
- W4385603850 cites W3101612813 @default.
- W4385603850 cites W3158629041 @default.
- W4385603850 cites W3164821526 @default.
- W4385603850 cites W3181524052 @default.
- W4385603850 cites W3192018998 @default.
- W4385603850 cites W4211137352 @default.
- W4385603850 cites W4283267290 @default.
- W4385603850 cites W4319460018 @default.
- W4385603850 cites W4379140988 @default.
- W4385603850 doi "https://doi.org/10.3390/technologies11040104" @default.
- W4385603850 hasPublicationYear "2023" @default.
- W4385603850 type Work @default.
- W4385603850 citedByCount "0" @default.
- W4385603850 crossrefType "journal-article" @default.
- W4385603850 hasAuthorship W4385603850A5005026444 @default.
- W4385603850 hasAuthorship W4385603850A5008554158 @default.
- W4385603850 hasAuthorship W4385603850A5012401657 @default.
- W4385603850 hasAuthorship W4385603850A5024082845 @default.
- W4385603850 hasAuthorship W4385603850A5031138348 @default.
- W4385603850 hasAuthorship W4385603850A5078725649 @default.
- W4385603850 hasAuthorship W4385603850A5084733299 @default.
- W4385603850 hasBestOaLocation W43856038501 @default.
- W4385603850 hasConcept C126322002 @default.
- W4385603850 hasConcept C154945302 @default.
- W4385603850 hasConcept C171089720 @default.
- W4385603850 hasConcept C2776820930 @default.
- W4385603850 hasConcept C2778742706 @default.
- W4385603850 hasConcept C2908987861 @default.
- W4385603850 hasConcept C2989005 @default.
- W4385603850 hasConcept C41008148 @default.
- W4385603850 hasConcept C71924100 @default.
- W4385603850 hasConcept C89600930 @default.
- W4385603850 hasConceptScore W4385603850C126322002 @default.
- W4385603850 hasConceptScore W4385603850C154945302 @default.
- W4385603850 hasConceptScore W4385603850C171089720 @default.
- W4385603850 hasConceptScore W4385603850C2776820930 @default.
- W4385603850 hasConceptScore W4385603850C2778742706 @default.
- W4385603850 hasConceptScore W4385603850C2908987861 @default.
- W4385603850 hasConceptScore W4385603850C2989005 @default.
- W4385603850 hasConceptScore W4385603850C41008148 @default.
- W4385603850 hasConceptScore W4385603850C71924100 @default.
- W4385603850 hasConceptScore W4385603850C89600930 @default.
- W4385603850 hasIssue "4" @default.
- W4385603850 hasLocation W43856038501 @default.
- W4385603850 hasOpenAccess W4385603850 @default.
- W4385603850 hasPrimaryLocation W43856038501 @default.
- W4385603850 hasRelatedWork W1950019171 @default.
- W4385603850 hasRelatedWork W2005437358 @default.
- W4385603850 hasRelatedWork W2071870700 @default.
- W4385603850 hasRelatedWork W2162728371 @default.
- W4385603850 hasRelatedWork W2418429318 @default.
- W4385603850 hasRelatedWork W2931966515 @default.
- W4385603850 hasRelatedWork W2966739491 @default.
- W4385603850 hasRelatedWork W3013121937 @default.
- W4385603850 hasRelatedWork W4200395100 @default.
- W4385603850 hasRelatedWork W4317545854 @default.
- W4385603850 hasVolume "11" @default.