Matches in SemOpenAlex for { <https://semopenalex.org/work/W3207972043> ?p ?o ?g. }
- W3207972043 endingPage "7772" @default.
- W3207972043 startingPage "7757" @default.
- W3207972043 abstract "To investigate multiple deep learning methods for automated segmentation (auto-segmentation) of the parotid glands, submandibular glands, and level II and level III lymph nodes on magnetic resonance imaging (MRI). Outlining radiosensitive organs on images used to assist radiation therapy (radiotherapy) of patients with head and neck cancer (HNC) is a time-consuming task, in which variability between observers may directly impact patient treatment outcomes. Auto-segmentation on computed tomography imaging has been shown to result in significant time reductions and more consistent outlines of the organs at risk. Three convolutional neural network (CNN)-based auto-segmentation architectures were developed using manual segmentations and T2-weighted MRI images provided by the American Association of Physicists in Medicine (AAPM) radiotherapy MRI auto-contouring (RT-MAC) challenge dataset (n = 31). Auto-segmentation performance was evaluated with segmentation similarity and surface distance metrics on the RT-MAC dataset with institutional manual segmentations (n = 10). The generalizability of the auto-segmentation methods was assessed on an institutional MRI dataset (n = 10). Auto-segmentation performance on the RT-MAC images with institutional segmentations was higher than previously reported MRI methods for the parotid glands (Dice: 0.860 ± 0.067, mean surface distance [MSD]: 1.33 ± 0.40 mm), and this work provides the first report of MRI performance for the submandibular glands (Dice: 0.830 ± 0.032, MSD: 1.16 ± 0.47 mm). We demonstrate that high-resolution auto-segmentations with improved geometric accuracy can be generated for the parotid and submandibular glands by cascading a localizer CNN and a cropped high-resolution CNN. Improved MSDs were observed between automatic and manual segmentations of the submandibular glands when a low-resolution auto-segmentation was used as prior knowledge in the second-stage CNN. Reduced auto-segmentation performance was observed on our institutional MRI dataset when trained on external RT-MAC images; only the parotid gland auto-segmentations were considered clinically feasible for manual correction (Dice: 0.775 ± 0.105, MSD: 1.20 ± 0.60 mm). This work demonstrates that CNNs are a suitable method to auto-segment the parotid and submandibular glands on MRI images of patients with HNC, and that cascaded CNNs can generate high-resolution segmentations with improved geometric accuracy. Deep learning methods may be suitable for auto-segmentation of the parotid glands on T2-weighted MRI images from different scanners, but further work is required to improve the performance and generalizability of these methods for auto-segmentation of the submandibular glands and lymph nodes." @default.
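The cascade the abstract describes (a coarse localizer followed by a high-resolution model on a cropped region) can be sketched in plain NumPy. This is an illustrative sketch only: the `coarse_localizer` and `highres_segmenter` functions below are hypothetical threshold-based stand-ins for the paper's two CNN stages, and the downsampling factor and margin are assumed values, not parameters from the study.

```python
import numpy as np

def coarse_localizer(img_lowres):
    # Stand-in for the first-stage CNN: a simple threshold on the
    # downsampled image plays the role of the coarse localizer.
    return img_lowres > img_lowres.mean()

def highres_segmenter(patch):
    # Stand-in for the second-stage CNN that segments the cropped,
    # full-resolution patch.
    return patch > patch.mean()

def cascaded_segment(img, factor=4, margin=8):
    """Two-stage cascade: localize on a downsampled copy, then
    segment at full resolution inside the cropped region."""
    low = img[::factor, ::factor]          # cheap strided downsampling
    coarse = coarse_localizer(low)
    ys, xs = np.nonzero(coarse)
    if ys.size == 0:                       # nothing localized
        return np.zeros(img.shape, dtype=bool)
    # Bounding box mapped back to full-res coordinates, with a margin
    y0 = max(ys.min() * factor - margin, 0)
    y1 = min((ys.max() + 1) * factor + margin, img.shape[0])
    x0 = max(xs.min() * factor - margin, 0)
    x1 = min((xs.max() + 1) * factor + margin, img.shape[1])
    fine = highres_segmenter(img[y0:y1, x0:x1])
    out = np.zeros(img.shape, dtype=bool)
    out[y0:y1, x0:x1] = fine               # paste patch result back
    return out

# Toy "scan": a bright square organ on a dark background
img = np.zeros((128, 128))
img[40:80, 50:90] = 1.0
mask = cascaded_segment(img)
```

The design point the abstract makes is that the second stage only ever sees a small crop around the organ, so it can operate at full resolution without the memory cost of segmenting the whole volume at that resolution.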
- W3207972043 created "2021-10-25" @default.
- W3207972043 creator A5008361353 @default.
- W3207972043 creator A5020408563 @default.
- W3207972043 creator A5034271080 @default.
- W3207972043 creator A5052290457 @default.
- W3207972043 creator A5056333681 @default.
- W3207972043 creator A5070501683 @default.
- W3207972043 date "2021-11-01" @default.
- W3207972043 modified "2023-10-18" @default.
- W3207972043 title "Cascaded deep learning‐based auto‐segmentation for head and neck cancer patients: Organs at risk on T2‐weighted magnetic resonance imaging" @default.
- W3207972043 cites W1696976950 @default.
- W3207972043 cites W1852200068 @default.
- W3207972043 cites W1909740415 @default.
- W3207972043 cites W1966764112 @default.
- W3207972043 cites W1967172105 @default.
- W3207972043 cites W1977484942 @default.
- W3207972043 cites W1986209497 @default.
- W3207972043 cites W1987869189 @default.
- W3207972043 cites W2005887998 @default.
- W3207972043 cites W2021414173 @default.
- W3207972043 cites W2028911404 @default.
- W3207972043 cites W2040357326 @default.
- W3207972043 cites W2070825176 @default.
- W3207972043 cites W2074271088 @default.
- W3207972043 cites W2082425609 @default.
- W3207972043 cites W2103683686 @default.
- W3207972043 cites W2107595635 @default.
- W3207972043 cites W2117340355 @default.
- W3207972043 cites W2137013440 @default.
- W3207972043 cites W2140110477 @default.
- W3207972043 cites W2144771422 @default.
- W3207972043 cites W2145029612 @default.
- W3207972043 cites W2148754315 @default.
- W3207972043 cites W2160754664 @default.
- W3207972043 cites W2161501889 @default.
- W3207972043 cites W2292862470 @default.
- W3207972043 cites W2465575813 @default.
- W3207972043 cites W2560725027 @default.
- W3207972043 cites W2585890928 @default.
- W3207972043 cites W2804460770 @default.
- W3207972043 cites W2807227924 @default.
- W3207972043 cites W2808302424 @default.
- W3207972043 cites W2809628938 @default.
- W3207972043 cites W2889646458 @default.
- W3207972043 cites W2900237898 @default.
- W3207972043 cites W2921073881 @default.
- W3207972043 cites W2922812404 @default.
- W3207972043 cites W2930910485 @default.
- W3207972043 cites W2943976517 @default.
- W3207972043 cites W2962914239 @default.
- W3207972043 cites W2967248603 @default.
- W3207972043 cites W3024167052 @default.
- W3207972043 cites W3041894125 @default.
- W3207972043 cites W3046004442 @default.
- W3207972043 cites W3083705298 @default.
- W3207972043 cites W3103145119 @default.
- W3207972043 cites W3109886399 @default.
- W3207972043 cites W3116861932 @default.
- W3207972043 doi "https://doi.org/10.1002/mp.15290" @default.
- W3207972043 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/34676555" @default.
- W3207972043 hasPublicationYear "2021" @default.
- W3207972043 type Work @default.
- W3207972043 sameAs 3207972043 @default.
- W3207972043 citedByCount "12" @default.
- W3207972043 countsByYear W32079720432022 @default.
- W3207972043 countsByYear W32079720432023 @default.
- W3207972043 crossrefType "journal-article" @default.
- W3207972043 hasAuthorship W3207972043A5008361353 @default.
- W3207972043 hasAuthorship W3207972043A5020408563 @default.
- W3207972043 hasAuthorship W3207972043A5034271080 @default.
- W3207972043 hasAuthorship W3207972043A5052290457 @default.
- W3207972043 hasAuthorship W3207972043A5056333681 @default.
- W3207972043 hasAuthorship W3207972043A5070501683 @default.
- W3207972043 hasBestOaLocation W32079720432 @default.
- W3207972043 hasConcept C108583219 @default.
- W3207972043 hasConcept C121684516 @default.
- W3207972043 hasConcept C124504099 @default.
- W3207972043 hasConcept C126838900 @default.
- W3207972043 hasConcept C143409427 @default.
- W3207972043 hasConcept C154945302 @default.
- W3207972043 hasConcept C2779104521 @default.
- W3207972043 hasConcept C2989005 @default.
- W3207972043 hasConcept C31601959 @default.
- W3207972043 hasConcept C41008148 @default.
- W3207972043 hasConcept C509974204 @default.
- W3207972043 hasConcept C71924100 @default.
- W3207972043 hasConcept C81363708 @default.
- W3207972043 hasConcept C89600930 @default.
- W3207972043 hasConceptScore W3207972043C108583219 @default.
- W3207972043 hasConceptScore W3207972043C121684516 @default.
- W3207972043 hasConceptScore W3207972043C124504099 @default.
- W3207972043 hasConceptScore W3207972043C126838900 @default.
- W3207972043 hasConceptScore W3207972043C143409427 @default.
- W3207972043 hasConceptScore W3207972043C154945302 @default.
- W3207972043 hasConceptScore W3207972043C2779104521 @default.
- W3207972043 hasConceptScore W3207972043C2989005 @default.
- W3207972043 hasConceptScore W3207972043C31601959 @default.