Matches in SemOpenAlex for { <https://semopenalex.org/work/W4293207029> ?p ?o ?g. }
- W4293207029 endingPage "242" @default.
- W4293207029 startingPage "217" @default.
- W4293207029 abstract "Automatic analysis of medical images, and endoscopic images in particular, has been an attractive research topic in recent years. To achieve this goal, several tasks must be performed, such as lesion detection, segmentation, and classification. However, existing methods for these problems still face challenges due to the spreading characteristic of the lesions as well as artifacts caused by motion, specularities, low contrast, bubbles, debris, bodily fluid, and blood. As a consequence, segmentation or detection alone cannot address these issues. Moreover, although deep learning has achieved impressive results in many tasks, the lack of a large annotated dataset can lead to overfitting. In this paper, we tackle these issues by taking into account the particular characteristics of the lesions of interest and the advantages of deep segmentation models. We propose a dual-path framework (namely, DCS-UNet) that combines both segmentation and classification. We first segment lesions from the image using the U-Net architecture with different encoder backbones (ResNet-50, VGG-16, DenseNet-201). The segmented regions are then refined in the second path, where we classify every patch in the whole image or in the Inner and Outer regions extended from the contours given by the segmentation results. For the refinement scheme, following doctors' advice, we utilize color-dependent and texture features and deploy a support vector machine (SVM) to classify each patch as a disease region or not. Extensive experiments were conducted on three datasets of gastroesophageal reflux disease (GERD) endoscopic images (GradeM, GradeA, and GradeB), defined according to the modified Los Angeles classification. The experimental results show that the proposed method improves over a single U-Net and over a segmentation scheme based on hand-designed features. The proposed method improves mDice and mIoU by 0.5% and 0.36% on the GradeA dataset, the most challenging dataset due to ambiguous separations between GERD and normal regions, and by 0.82% and 0.81% on the GradeM dataset, in which GERD regions are easier to segment. This framework shows potential for developing a diagnosis assistance system to reduce endoscopists' burden in examining GERD in the future." @default.
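The refinement path described in the abstract (patch-wise color/texture features fed to an SVM) can be sketched roughly as follows. This is a minimal illustration only: the patch size, the per-channel mean/std features, and the 50%-coverage labeling rule are assumptions for demonstration, not the paper's exact design.

```python
import numpy as np

def extract_patch_features(image, mask, patch=16):
    """Split an H x W x 3 image into non-overlapping patches and compute
    a simple color/texture descriptor per patch.

    Assumptions (not from the paper): per-channel mean as the color
    feature, per-channel standard deviation as a crude texture feature,
    and a patch labeled diseased if >50% of its pixels are in the mask.
    In the paper, the features would instead feed an SVM classifier.
    """
    h, w, _ = image.shape
    feats, labels = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch]
            m = mask[y:y + patch, x:x + patch]
            # 3 color values (means) + 3 texture values (stds) = 6-dim feature
            feats.append(np.concatenate([p.mean(axis=(0, 1)),
                                         p.std(axis=(0, 1))]))
            labels.append(int(m.mean() > 0.5))
    return np.array(feats), np.array(labels)
```

The resulting feature vectors and labels would then train a standard SVM (e.g. `sklearn.svm.SVC`), which at inference time refines the U-Net contours by accepting or rejecting each patch.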
- W4293207029 created "2022-08-27" @default.
- W4293207029 creator A5034422819 @default.
- W4293207029 creator A5040649737 @default.
- W4293207029 creator A5047803845 @default.
- W4293207029 creator A5049948188 @default.
- W4293207029 creator A5050418192 @default.
- W4293207029 creator A5070928351 @default.
- W4293207029 creator A5078527316 @default.
- W4293207029 creator A5081402863 @default.
- W4293207029 creator A5081957303 @default.
- W4293207029 creator A5086352731 @default.
- W4293207029 date "2022-10-14" @default.
- W4293207029 modified "2023-09-27" @default.
- W4293207029 title "DCS-UNet: Dual-Path Framework for Segmentation of Reflux Esophagitis Lesions from Endoscopic Images with U-Net-Based Segmentation and Color/Texture Analysis" @default.
- W4293207029 cites W1523612714 @default.
- W4293207029 cites W1901129140 @default.
- W4293207029 cites W1903029394 @default.
- W4293207029 cites W2046788624 @default.
- W4293207029 cites W2051209726 @default.
- W4293207029 cites W2052815754 @default.
- W4293207029 cites W2077474654 @default.
- W4293207029 cites W2090516711 @default.
- W4293207029 cites W2100037927 @default.
- W4293207029 cites W2106279103 @default.
- W4293207029 cites W2117539524 @default.
- W4293207029 cites W2119222965 @default.
- W4293207029 cites W2133715195 @default.
- W4293207029 cites W2147141800 @default.
- W4293207029 cites W2194775991 @default.
- W4293207029 cites W2291020065 @default.
- W4293207029 cites W2518605621 @default.
- W4293207029 cites W2541669745 @default.
- W4293207029 cites W2607941059 @default.
- W4293207029 cites W2735728519 @default.
- W4293207029 cites W2963446712 @default.
- W4293207029 cites W2989440555 @default.
- W4293207029 cites W2996290406 @default.
- W4293207029 cites W2999580839 @default.
- W4293207029 cites W3013772992 @default.
- W4293207029 cites W3028954669 @default.
- W4293207029 cites W3032158266 @default.
- W4293207029 cites W3048853714 @default.
- W4293207029 cites W3096947210 @default.
- W4293207029 cites W3099888165 @default.
- W4293207029 cites W3133432362 @default.
- W4293207029 cites W3210227331 @default.
- W4293207029 doi "https://doi.org/10.1142/s2196888822500385" @default.
- W4293207029 hasPublicationYear "2022" @default.
- W4293207029 type Work @default.
- W4293207029 citedByCount "0" @default.
- W4293207029 crossrefType "journal-article" @default.
- W4293207029 hasAuthorship W4293207029A5034422819 @default.
- W4293207029 hasAuthorship W4293207029A5040649737 @default.
- W4293207029 hasAuthorship W4293207029A5047803845 @default.
- W4293207029 hasAuthorship W4293207029A5049948188 @default.
- W4293207029 hasAuthorship W4293207029A5050418192 @default.
- W4293207029 hasAuthorship W4293207029A5070928351 @default.
- W4293207029 hasAuthorship W4293207029A5078527316 @default.
- W4293207029 hasAuthorship W4293207029A5081402863 @default.
- W4293207029 hasAuthorship W4293207029A5081957303 @default.
- W4293207029 hasAuthorship W4293207029A5086352731 @default.
- W4293207029 hasBestOaLocation W42932070291 @default.
- W4293207029 hasConcept C108583219 @default.
- W4293207029 hasConcept C12267149 @default.
- W4293207029 hasConcept C124504099 @default.
- W4293207029 hasConcept C153180895 @default.
- W4293207029 hasConcept C154945302 @default.
- W4293207029 hasConcept C199360897 @default.
- W4293207029 hasConcept C22019652 @default.
- W4293207029 hasConcept C2777735758 @default.
- W4293207029 hasConcept C31972630 @default.
- W4293207029 hasConcept C41008148 @default.
- W4293207029 hasConcept C50644808 @default.
- W4293207029 hasConcept C89600930 @default.
- W4293207029 hasConceptScore W4293207029C108583219 @default.
- W4293207029 hasConceptScore W4293207029C12267149 @default.
- W4293207029 hasConceptScore W4293207029C124504099 @default.
- W4293207029 hasConceptScore W4293207029C153180895 @default.
- W4293207029 hasConceptScore W4293207029C154945302 @default.
- W4293207029 hasConceptScore W4293207029C199360897 @default.
- W4293207029 hasConceptScore W4293207029C22019652 @default.
- W4293207029 hasConceptScore W4293207029C2777735758 @default.
- W4293207029 hasConceptScore W4293207029C31972630 @default.
- W4293207029 hasConceptScore W4293207029C41008148 @default.
- W4293207029 hasConceptScore W4293207029C50644808 @default.
- W4293207029 hasConceptScore W4293207029C89600930 @default.
- W4293207029 hasIssue "02" @default.
- W4293207029 hasLocation W42932070291 @default.
- W4293207029 hasOpenAccess W4293207029 @default.
- W4293207029 hasPrimaryLocation W42932070291 @default.
- W4293207029 hasRelatedWork W1669643531 @default.
- W4293207029 hasRelatedWork W2005437358 @default.
- W4293207029 hasRelatedWork W2008656436 @default.
- W4293207029 hasRelatedWork W2023558673 @default.
- W4293207029 hasRelatedWork W2134924024 @default.
- W4293207029 hasRelatedWork W2517104666 @default.
- W4293207029 hasRelatedWork W2790662084 @default.