Matches in SemOpenAlex for { <https://semopenalex.org/work/W2560725027> ?p ?o ?g. }
- W2560725027 endingPage "557" @default.
- W2560725027 startingPage "547" @default.
- W2560725027 abstract "Purpose Accurate segmentation of organs-at-risk (OARs) is the key step for efficient planning of radiation therapy for head and neck (HaN) cancer treatment. In this work, we proposed the first deep learning-based algorithm for segmentation of OARs in HaN CT images, and compared its performance against state-of-the-art automated segmentation algorithms, commercial software, and interobserver variability. Methods Convolutional neural networks (CNNs)—a concept from the field of deep learning—were used to study consistent intensity patterns of OARs from training CT images and to segment the OAR in a previously unseen test CT image. For CNN training, we extracted a representative number of positive intensity patches around voxels that belong to the OAR of interest in training CT images, and negative intensity patches around voxels that belong to the surrounding structures. These patches then passed through a sequence of CNN layers that captured local image features such as corners, end-points, and edges, and combined them into more complex high-order features that can efficiently describe the OAR. The trained network was applied to classify voxels in a region of interest in the test image where the corresponding OAR is expected to be located. We then smoothed the obtained classification results by using a Markov random fields algorithm. We finally extracted the largest connected component of the smoothed voxels classified as the OAR by the CNN, and performed dilate–erode operations to remove cavities of the component, which resulted in segmentation of the OAR in the test image. Results The performance of CNNs was validated on segmentation of the spinal cord, mandible, parotid glands, submandibular glands, larynx, pharynx, eye globes, optic nerves, and optic chiasm using 50 CT images. The obtained segmentation results varied from 37.4% Dice coefficient (DSC) for the chiasm to 89.5% DSC for the mandible.
We also analyzed the performance of state-of-the-art algorithms and commercial software reported in the literature, and observed that CNNs demonstrate similar or superior performance on segmentation of the spinal cord, mandible, parotid glands, larynx, pharynx, eye globes, and optic nerves, but inferior performance on segmentation of the submandibular glands and optic chiasm. Conclusion We concluded that convolutional neural networks can accurately segment most of the OARs using a representative database of 50 HaN CT images. At the same time, inclusion of additional information, for example MR images, may be beneficial for some OARs with poorly visible boundaries." @default.
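The abstract's post-processing pipeline (threshold the voxelwise CNN output, keep the largest connected component, then apply dilate–erode operations to remove cavities) can be sketched as follows. This is a minimal illustration using SciPy's morphology routines, not the authors' implementation: the MRF smoothing step is omitted, and the probability map, threshold, and toy volume are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def postprocess_oar_mask(prob_map, threshold=0.5):
    """Turn voxelwise CNN probabilities into a single OAR mask:
    threshold, keep the largest connected component, then close
    internal cavities with a dilate-erode (binary closing) pass."""
    mask = prob_map >= threshold
    # Label connected components and keep only the largest one.
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    # Dilate then erode (morphological closing) to fill cavities.
    return ndimage.binary_closing(largest)

# Toy 9x9x9 "probability map": a 5x5x5 blob with an internal
# cavity, plus one spurious high-probability voxel elsewhere.
prob = np.zeros((9, 9, 9))
prob[2:7, 2:7, 2:7] = 0.9   # main component
prob[4, 4, 4] = 0.1         # internal cavity
prob[0, 0, 0] = 0.9         # spurious isolated voxel
mask = postprocess_oar_mask(prob)
```

After post-processing, the cavity inside the blob is filled and the isolated spurious voxel is discarded, leaving one solid component.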
- W2560725027 created "2016-12-16" @default.
- W2560725027 creator A5025751238 @default.
- W2560725027 creator A5070871008 @default.
- W2560725027 date "2017-02-01" @default.
- W2560725027 modified "2023-10-14" @default.
- W2560725027 title "Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks" @default.
- W2560725027 cites W1509957145 @default.
- W2560725027 cites W1852200068 @default.
- W2560725027 cites W1963688560 @default.
- W2560725027 cites W1964491208 @default.
- W2560725027 cites W1965160322 @default.
- W2560725027 cites W1966764112 @default.
- W2560725027 cites W1974237660 @default.
- W2560725027 cites W1977484942 @default.
- W2560725027 cites W1982447105 @default.
- W2560725027 cites W1982668309 @default.
- W2560725027 cites W1988832131 @default.
- W2560725027 cites W1992585621 @default.
- W2560725027 cites W1992722904 @default.
- W2560725027 cites W1995003188 @default.
- W2560725027 cites W1998496242 @default.
- W2560725027 cites W2002489794 @default.
- W2560725027 cites W2004326789 @default.
- W2560725027 cites W2009752869 @default.
- W2560725027 cites W2011996578 @default.
- W2560725027 cites W2015189457 @default.
- W2560725027 cites W2015406587 @default.
- W2560725027 cites W2025240023 @default.
- W2560725027 cites W2025556895 @default.
- W2560725027 cites W2028129509 @default.
- W2560725027 cites W2029119154 @default.
- W2560725027 cites W2037071753 @default.
- W2560725027 cites W2048076171 @default.
- W2560725027 cites W2060901727 @default.
- W2560725027 cites W2060903331 @default.
- W2560725027 cites W2066208236 @default.
- W2560725027 cites W2066511532 @default.
- W2560725027 cites W2067522317 @default.
- W2560725027 cites W2070825176 @default.
- W2560725027 cites W2072042721 @default.
- W2560725027 cites W2074271088 @default.
- W2560725027 cites W2079487963 @default.
- W2560725027 cites W2097580797 @default.
- W2560725027 cites W2103683686 @default.
- W2560725027 cites W2106467008 @default.
- W2560725027 cites W2107595635 @default.
- W2560725027 cites W2108057069 @default.
- W2560725027 cites W2108174317 @default.
- W2560725027 cites W2119459894 @default.
- W2560725027 cites W2119867409 @default.
- W2560725027 cites W2123312079 @default.
- W2560725027 cites W2137013440 @default.
- W2560725027 cites W2139792111 @default.
- W2560725027 cites W2141997640 @default.
- W2560725027 cites W2144771422 @default.
- W2560725027 cites W2145029612 @default.
- W2560725027 cites W2153700064 @default.
- W2560725027 cites W2158173863 @default.
- W2560725027 cites W2168650498 @default.
- W2560725027 cites W2185444432 @default.
- W2560725027 cites W2222318341 @default.
- W2560725027 cites W2275865840 @default.
- W2560725027 cites W2917837889 @default.
- W2560725027 cites W2919115771 @default.
- W2560725027 doi "https://doi.org/10.1002/mp.12045" @default.
- W2560725027 hasPubMedCentralId "https://www.ncbi.nlm.nih.gov/pmc/articles/5383420" @default.
- W2560725027 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/28205307" @default.
- W2560725027 hasPublicationYear "2017" @default.
- W2560725027 type Work @default.
- W2560725027 sameAs 2560725027 @default.
- W2560725027 citedByCount "397" @default.
- W2560725027 countsByYear W25607250272017 @default.
- W2560725027 countsByYear W25607250272018 @default.
- W2560725027 countsByYear W25607250272019 @default.
- W2560725027 countsByYear W25607250272020 @default.
- W2560725027 countsByYear W25607250272021 @default.
- W2560725027 countsByYear W25607250272022 @default.
- W2560725027 countsByYear W25607250272023 @default.
- W2560725027 crossrefType "journal-article" @default.
- W2560725027 hasAuthorship W2560725027A5025751238 @default.
- W2560725027 hasAuthorship W2560725027A5070871008 @default.
- W2560725027 hasBestOaLocation W25607250272 @default.
- W2560725027 hasConcept C108583219 @default.
- W2560725027 hasConcept C124504099 @default.
- W2560725027 hasConcept C153180895 @default.
- W2560725027 hasConcept C154945302 @default.
- W2560725027 hasConcept C31972630 @default.
- W2560725027 hasConcept C41008148 @default.
- W2560725027 hasConcept C54170458 @default.
- W2560725027 hasConcept C81363708 @default.
- W2560725027 hasConcept C89600930 @default.
- W2560725027 hasConceptScore W2560725027C108583219 @default.
- W2560725027 hasConceptScore W2560725027C124504099 @default.
- W2560725027 hasConceptScore W2560725027C153180895 @default.
- W2560725027 hasConceptScore W2560725027C154945302 @default.
- W2560725027 hasConceptScore W2560725027C31972630 @default.
- W2560725027 hasConceptScore W2560725027C41008148 @default.