Matches in SemOpenAlex for { <https://semopenalex.org/work/W2885477007> ?p ?o ?g. }
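The triple pattern above can be issued programmatically against a SPARQL endpoint. A minimal sketch follows; the endpoint URL is an assumption (SemOpenAlex publishes one, but it is not stated in this listing), and `build_query` is a hypothetical helper:

```python
import urllib.parse

# Assumed endpoint URL; not part of this listing.
SEMOPENALEX_ENDPOINT = "https://semopenalex.org/sparql"

def build_query(work_uri: str) -> str:
    # Same pattern as the header above: every predicate/object pair
    # (and the named graph) recorded for one work.
    return (
        "SELECT ?p ?o ?g WHERE { GRAPH ?g { <%s> ?p ?o . } }" % work_uri
    )

query = build_query("https://semopenalex.org/work/W2885477007")
# URL-encode the query for an HTTP GET request to the endpoint.
request_url = SEMOPENALEX_ENDPOINT + "?query=" + urllib.parse.quote(query)
```

Sending `request_url` with an `Accept: application/sparql-results+json` header would return the rows listed below.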
- W2885477007 abstract "Purpose Radiation therapy (RT) is a common treatment for head and neck (HaN) cancer, in which therapists are often required to manually delineate the boundaries of the organs-at-risk (OARs). Radiation therapy planning is time-consuming because each computed tomography (CT) volumetric data set typically consists of hundreds to thousands of slices and must be individually inspected. Automated head and neck anatomical segmentation provides a way to speed up and improve the reproducibility of radiation therapy planning. Previous work on anatomical segmentation is primarily based on atlas registrations, which take hours per patient and require sophisticated atlas creation. In this work, we propose AnatomyNet, an end-to-end, atlas-free three-dimensional squeeze-and-excitation U-Net (3D SE U-Net), for fast and fully automated whole-volume HaN anatomical segmentation. Methods There are two main challenges in fully automated HaN OARs segmentation: 1) segmenting small anatomies (e.g., the optic chiasm and optic nerves) that occupy only a few slices, and 2) training models on inconsistent data annotations, with ground truth missing for some anatomical structures because of differing RT planning. To alleviate these challenges, we propose AnatomyNet, which has a single down-sampling layer to trade off GPU memory against feature representation capacity, and 3D SE residual blocks for effective feature learning. Moreover, we design a hybrid loss function combining the Dice loss and the focal loss. The Dice loss is a class-level distribution loss that depends less on the number of voxels in the anatomy, and the focal loss is designed to handle highly unbalanced segmentation. For missing annotations, we propose a masked loss and a weighted loss for accurate and balanced weight updates when training AnatomyNet. 
Results We collect 261 HaN CT images to train AnatomyNet, and use the MICCAI Head and Neck Auto Segmentation Challenge 2015 as the benchmark dataset to evaluate its performance. The objective is to segment nine anatomies: brain stem, chiasm, mandible, optic nerve left, optic nerve right, parotid gland left, parotid gland right, submandibular gland left, and submandibular gland right. Compared to previous state-of-the-art methods for each anatomy from the MICCAI 2015 competition, AnatomyNet increases the Dice similarity coefficient (DSC) by 3.3% on average. The proposed AnatomyNet takes only 0.12 seconds on average to segment a whole-volume HaN CT image with an average dimension of 178 × 302 × 225. All the data and code will be available. Conclusion We propose an end-to-end, fast, and fully automated deep convolutional network, AnatomyNet, for accurate whole-volume HaN anatomical segmentation. The proposed AnatomyNet outperforms previous state-of-the-art methods on the benchmark dataset. Extensive experiments demonstrate the effectiveness and good generalization ability of the components of AnatomyNet." @default.
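The hybrid loss described in the abstract (Dice loss plus focal loss) could be sketched roughly as follows. This is an illustrative assumption, not the paper's exact formulation: the weighting `lam`, the focusing parameter `gamma`, and the function names are hypothetical, and the masked/weighted variants for missing annotations are not shown.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Class-level distribution loss: depends on per-class overlap,
    # not on the absolute number of voxels in each anatomy.
    # pred, target: (C, D, H, W); pred holds per-class probabilities,
    # target is one-hot.
    axes = (1, 2, 3)
    inter = (pred * target).sum(axis=axes)
    union = pred.sum(axis=axes) + target.sum(axis=axes)
    return 1.0 - np.mean((2.0 * inter + eps) / (union + eps))

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    # Down-weights easy, well-classified voxels (pt close to 1) so the
    # highly unbalanced background does not dominate training.
    pt = np.clip((pred * target).sum(axis=0), eps, 1.0)
    return np.mean(-((1.0 - pt) ** gamma) * np.log(pt))

def hybrid_loss(pred, target, lam=0.5):
    # lam is a hypothetical trade-off weight between the two terms.
    return dice_loss(pred, target) + lam * focal_loss(pred, target)
```

A perfect one-hot prediction drives both terms toward zero, while a completely wrong prediction is penalized by both the overlap term and the focal term.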
- W2885477007 created "2018-08-22" @default.
- W2885477007 creator A5024350063 @default.
- W2885477007 creator A5031854562 @default.
- W2885477007 creator A5038836690 @default.
- W2885477007 creator A5045954034 @default.
- W2885477007 creator A5059941852 @default.
- W2885477007 creator A5081735500 @default.
- W2885477007 creator A5084618257 @default.
- W2885477007 date "2018-08-16" @default.
- W2885477007 modified "2023-09-26" @default.
- W2885477007 title "AnatomyNet: Deep 3D Squeeze-and-excitation U-Nets for fast and fully automated whole-volume anatomical segmentation" @default.
- W2885477007 cites W1509957145 @default.
- W2885477007 cites W1852200068 @default.
- W2885477007 cites W1901129140 @default.
- W2885477007 cites W1964491208 @default.
- W2885477007 cites W1977484942 @default.
- W2885477007 cites W1982668309 @default.
- W2885477007 cites W1988832131 @default.
- W2885477007 cites W2002489794 @default.
- W2885477007 cites W2004326789 @default.
- W2885477007 cites W2009752869 @default.
- W2885477007 cites W2048076171 @default.
- W2885477007 cites W2066511532 @default.
- W2885477007 cites W2074271088 @default.
- W2885477007 cites W2083927153 @default.
- W2885477007 cites W2097580797 @default.
- W2885477007 cites W2101608218 @default.
- W2885477007 cites W2106467008 @default.
- W2885477007 cites W2108174317 @default.
- W2885477007 cites W2108598243 @default.
- W2885477007 cites W2185444432 @default.
- W2885477007 cites W2194775991 @default.
- W2885477007 cites W2526019331 @default.
- W2885477007 cites W2560725027 @default.
- W2885477007 cites W2593013519 @default.
- W2885477007 cites W2734349601 @default.
- W2885477007 cites W2752782242 @default.
- W2885477007 cites W2766518925 @default.
- W2885477007 cites W2792155504 @default.
- W2885477007 cites W2891155035 @default.
- W2885477007 cites W2917837889 @default.
- W2885477007 cites W2950550088 @default.
- W2885477007 cites W2962914239 @default.
- W2885477007 cites W2963150920 @default.
- W2885477007 cites W2963351448 @default.
- W2885477007 cites W2964098128 @default.
- W2885477007 cites W2964242896 @default.
- W2885477007 cites W2964275459 @default.
- W2885477007 doi "https://doi.org/10.1101/392969" @default.
- W2885477007 hasPublicationYear "2018" @default.
- W2885477007 type Work @default.
- W2885477007 sameAs 2885477007 @default.
- W2885477007 citedByCount "23" @default.
- W2885477007 countsByYear W28854770072019 @default.
- W2885477007 countsByYear W28854770072020 @default.
- W2885477007 countsByYear W28854770072021 @default.
- W2885477007 countsByYear W28854770072022 @default.
- W2885477007 countsByYear W28854770072023 @default.
- W2885477007 crossrefType "posted-content" @default.
- W2885477007 hasAuthorship W2885477007A5024350063 @default.
- W2885477007 hasAuthorship W2885477007A5031854562 @default.
- W2885477007 hasAuthorship W2885477007A5038836690 @default.
- W2885477007 hasAuthorship W2885477007A5045954034 @default.
- W2885477007 hasAuthorship W2885477007A5059941852 @default.
- W2885477007 hasAuthorship W2885477007A5081735500 @default.
- W2885477007 hasAuthorship W2885477007A5084618257 @default.
- W2885477007 hasBestOaLocation W28854770071 @default.
- W2885477007 hasConcept C105702510 @default.
- W2885477007 hasConcept C108583219 @default.
- W2885477007 hasConcept C138885662 @default.
- W2885477007 hasConcept C146849305 @default.
- W2885477007 hasConcept C153180895 @default.
- W2885477007 hasConcept C154945302 @default.
- W2885477007 hasConcept C22029948 @default.
- W2885477007 hasConcept C2524010 @default.
- W2885477007 hasConcept C2776401178 @default.
- W2885477007 hasConcept C2776673561 @default.
- W2885477007 hasConcept C2779797997 @default.
- W2885477007 hasConcept C2780837183 @default.
- W2885477007 hasConcept C31972630 @default.
- W2885477007 hasConcept C33923547 @default.
- W2885477007 hasConcept C41008148 @default.
- W2885477007 hasConcept C41895202 @default.
- W2885477007 hasConcept C54170458 @default.
- W2885477007 hasConcept C58489278 @default.
- W2885477007 hasConcept C71924100 @default.
- W2885477007 hasConcept C89600930 @default.
- W2885477007 hasConceptScore W2885477007C105702510 @default.
- W2885477007 hasConceptScore W2885477007C108583219 @default.
- W2885477007 hasConceptScore W2885477007C138885662 @default.
- W2885477007 hasConceptScore W2885477007C146849305 @default.
- W2885477007 hasConceptScore W2885477007C153180895 @default.
- W2885477007 hasConceptScore W2885477007C154945302 @default.
- W2885477007 hasConceptScore W2885477007C22029948 @default.
- W2885477007 hasConceptScore W2885477007C2524010 @default.
- W2885477007 hasConceptScore W2885477007C2776401178 @default.
- W2885477007 hasConceptScore W2885477007C2776673561 @default.
- W2885477007 hasConceptScore W2885477007C2779797997 @default.
- W2885477007 hasConceptScore W2885477007C2780837183 @default.