Matches in SemOpenAlex for { <https://semopenalex.org/work/W3161635503> ?p ?o ?g. }
Showing items 1 to 68 of 68, with 100 items per page.
- W3161635503 endingPage "1423" @default.
- W3161635503 startingPage "1423" @default.
- W3161635503 abstract "Objectives: Quantitative SPECT has been shown to provide improved lesion quantification compared to planar scintigraphy. Quantifying lesion uptake and metabolic tumor burden in SPECT relies heavily on accurate segmentation of bone and lesion structures. However, developing automated segmentation methods for SPECT images is challenging due to their limited spatial resolution and noise properties. Thus, manual segmentation remains the most common method used clinically for SPECT imaging, but it is time-consuming and can suffer from intra- and inter-observer variability. In this work, we present an automated segmentation algorithm based on convolutional neural networks (ConvNets). The proposed loss function effectively uses both the intensity and shape information in a given image. We show that the proposed model, trained using simulation images, can produce accurate segmentation of previously unseen patient images. Methods: This work introduces a novel semi-supervised loss function based on the classical Fuzzy C-means (FCM) algorithm. The proposed loss function incorporates the fundamental ideas of FCM, taking intensity information (the mean intensity of each class) into consideration and allowing control of the fuzzy overlap between segmentation classes through a user-controlled hyperparameter. The loss function is written as L = L_FCM + α·L_FCM-label, where L_FCM = Σ_j Σ_k f_jk^q(y; θ) ‖y_j − v_k‖² is an unsupervised term that does not depend on segmentation labels, and L_FCM-label = Σ_j Σ_k f_jk^q(y; θ) ‖g_jk − 1‖² is a supervised term. Here, f_jk(y; θ) is the ConvNet's segmentation output for voxel j and class k; y and g denote the input SPECT image and the segmentation label, respectively; v_k is the mean intensity of class k; q controls the fuzzy overlap (q = 2 used here); and α controls the weight between the unsupervised and supervised terms. 
This loss function forces the ConvNet to leverage both the intensity distributions of the images and the available ground-truth labels. The ConvNet architecture was a standard U-net, implemented in Keras/TensorFlow on an NVIDIA Titan RTX GPU. The model was trained using 2D slices from 9 SPECT simulations (with different anatomical variations) to segment bone, lesion, and background, and was tested on 12 clinical Tc-99m bone SPECT images. Dice coefficient (DSC) and surface DSC were used as evaluation criteria. Results: We compared the proposed method with ConvNets trained using conventional Dice and cross-entropy loss functions. Quantitatively, the proposed method outperformed the others by a significant margin, as shown in the table below. Qualitatively, the conventional supervised loss functions failed to yield usable segmentation results, which we believe is because the training images (SPECT simulations) did not capture all the lesion shape variations. In comparison, the proposed model produced reasonable segmentation results thanks to its ability to consider both shape information and intensity distributions within an image. Conclusion: We developed a semi-supervised loss function for SPECT segmentation using convolutional neural networks. The results demonstrated that our model, trained on a dataset of simulated images, provided fast and robust segmentation of clinical SPECT images." @default.
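The loss described in the abstract combines an unsupervised FCM-style term over voxel intensities with a supervised term over the labels. The following is a minimal NumPy sketch of that combination, not the authors' implementation: the function name `fcm_losses` and the membership-weighted centroid update for v_k are illustrative assumptions (the abstract states only that the class mean intensities are used, not how they are computed), and the real model evaluates these terms on ConvNet softmax outputs inside a Keras/TensorFlow training loop.

```python
import numpy as np

def fcm_losses(f, y, g, q=2.0, alpha=1.0):
    """Sketch of the semi-supervised FCM-style loss from the abstract.

    f     : (J, K) memberships f_jk(y; theta), e.g. ConvNet softmax outputs
    y     : (J,)   voxel intensities of the input image
    g     : (J, K) one-hot segmentation labels g_jk
    q     : fuzziness exponent (q = 2 in the abstract)
    alpha : weight on the supervised term
    """
    fq = f ** q
    # Class centroids v_k: membership-weighted mean intensity per class
    # (standard FCM centroid update; an assumption, see lead-in).
    v = (fq * y[:, None]).sum(axis=0) / (fq.sum(axis=0) + 1e-12)
    # Unsupervised term: sum_j sum_k f_jk^q * ||y_j - v_k||^2
    l_fcm = (fq * (y[:, None] - v[None, :]) ** 2).sum()
    # Supervised term: sum_j sum_k f_jk^q * ||g_jk - 1||^2
    # (zero where membership agrees with the label, so it penalizes
    # membership mass placed on the wrong classes).
    l_label = (fq * (g - 1.0) ** 2).sum()
    return l_fcm + alpha * l_label, l_fcm, l_label
```

When the memberships exactly match the one-hot labels, the supervised term vanishes, while the unsupervised term is minimized whenever voxels within each class cluster tightly around that class's mean intensity.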
- W3161635503 created "2021-05-24" @default.
- W3161635503 creator A5013973657 @default.
- W3161635503 creator A5015961162 @default.
- W3161635503 creator A5018889205 @default.
- W3161635503 creator A5067187971 @default.
- W3161635503 creator A5075316302 @default.
- W3161635503 creator A5089692710 @default.
- W3161635503 date "2021-05-01" @default.
- W3161635503 modified "2023-09-23" @default.
- W3161635503 title "Semi-supervised SPECT segmentation using convolutional neural networks" @default.
- W3161635503 hasPublicationYear "2021" @default.
- W3161635503 type Work @default.
- W3161635503 sameAs 3161635503 @default.
- W3161635503 citedByCount "0" @default.
- W3161635503 crossrefType "journal-article" @default.
- W3161635503 hasAuthorship W3161635503A5013973657 @default.
- W3161635503 hasAuthorship W3161635503A5015961162 @default.
- W3161635503 hasAuthorship W3161635503A5018889205 @default.
- W3161635503 hasAuthorship W3161635503A5067187971 @default.
- W3161635503 hasAuthorship W3161635503A5075316302 @default.
- W3161635503 hasAuthorship W3161635503A5089692710 @default.
- W3161635503 hasConcept C124504099 @default.
- W3161635503 hasConcept C153180895 @default.
- W3161635503 hasConcept C154945302 @default.
- W3161635503 hasConcept C31972630 @default.
- W3161635503 hasConcept C41008148 @default.
- W3161635503 hasConcept C54170458 @default.
- W3161635503 hasConcept C65885262 @default.
- W3161635503 hasConcept C81363708 @default.
- W3161635503 hasConcept C89600930 @default.
- W3161635503 hasConceptScore W3161635503C124504099 @default.
- W3161635503 hasConceptScore W3161635503C153180895 @default.
- W3161635503 hasConceptScore W3161635503C154945302 @default.
- W3161635503 hasConceptScore W3161635503C31972630 @default.
- W3161635503 hasConceptScore W3161635503C41008148 @default.
- W3161635503 hasConceptScore W3161635503C54170458 @default.
- W3161635503 hasConceptScore W3161635503C65885262 @default.
- W3161635503 hasConceptScore W3161635503C81363708 @default.
- W3161635503 hasConceptScore W3161635503C89600930 @default.
- W3161635503 hasOpenAccess W3161635503 @default.
- W3161635503 hasRelatedWork W176441129 @default.
- W3161635503 hasRelatedWork W1984649514 @default.
- W3161635503 hasRelatedWork W2135420923 @default.
- W3161635503 hasRelatedWork W2140100839 @default.
- W3161635503 hasRelatedWork W2146691973 @default.
- W3161635503 hasRelatedWork W2591213449 @default.
- W3161635503 hasRelatedWork W2611826638 @default.
- W3161635503 hasRelatedWork W2912848025 @default.
- W3161635503 hasRelatedWork W2921331879 @default.
- W3161635503 hasRelatedWork W2922180175 @default.
- W3161635503 hasRelatedWork W2961062143 @default.
- W3161635503 hasRelatedWork W2965270294 @default.
- W3161635503 hasRelatedWork W2979436723 @default.
- W3161635503 hasRelatedWork W2995760636 @default.
- W3161635503 hasRelatedWork W3007148159 @default.
- W3161635503 hasRelatedWork W3011844840 @default.
- W3161635503 hasRelatedWork W3104785286 @default.
- W3161635503 hasRelatedWork W3135408919 @default.
- W3161635503 hasRelatedWork W3157139923 @default.
- W3161635503 hasRelatedWork W3176384940 @default.
- W3161635503 hasVolume "62" @default.
- W3161635503 isParatext "false" @default.
- W3161635503 isRetracted "false" @default.
- W3161635503 magId "3161635503" @default.
- W3161635503 workType "article" @default.