Matches in SemOpenAlex for { <https://semopenalex.org/work/W2887301360> ?p ?o ?g. }
Showing items 1 to 78 of 78, with 100 items per page.
- W2887301360 abstract "Multi-modality medical imaging is increasingly used for comprehensive assessment of complex diseases, in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time and scanner unavailability. In addition, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. Moreover, regardless of how good an imaging system is, the performance of the imaging equipment is ultimately limited by its physical components. Additional interferences, particularly in medical imaging systems, such as limited acquisition times, sophisticated and costly equipment, and patients with severe medical conditions, also cause image degradation. The acquisitions can therefore be considered degraded versions of the original high-quality images. In this dissertation, we explore the problems of image super-resolution and cross-modality synthesis for one Magnetic Resonance Imaging (MRI) modality from an image of another MRI modality of the same subject, using an image synthesis framework to reconstruct the missing/complex modality data. We develop models and techniques that allow us to connect the domain of source modality data and the domain of target modality data, enabling transformation between elements of the two domains. In particular, we first introduce models that project both source modality data and target modality data into a common multi-modality feature space in a supervised setting. This common space then allows us to connect cross-modality features that depict a relationship with each other, and we can impose the learned association function to synthesize any target modality image. Moreover, we develop a weakly-supervised method that takes a few registered multi-modality image pairs as training data and generates the desired modality data without being constrained by a large collection of well-processed (e.g., skull-stripped and strictly registered) multi-modality brain data. Finally, we propose an approach that provides a generic way of learning a dual mapping between source and target domains while considering both visually high-fidelity synthesis and task-practicability. We demonstrate that this model can take any arbitrary modality and efficiently synthesize the desired modality data in an unsupervised manner. We show that these proposed models advance the state-of-the-art on image super-resolution and cross-modality synthesis tasks that require joint processing of multi-modality images, and that we can design the algorithms to generate data that is practically beneficial to medical image analysis." @default.
- W2887301360 created "2018-08-22" @default.
- W2887301360 creator A5018584786 @default.
- W2887301360 date "2018-08-01" @default.
- W2887301360 modified "2023-09-27" @default.
- W2887301360 title "Cross-modality feature learning for three-dimensional brain image synthesis" @default.
- W2887301360 cites W2008771257 @default.
- W2887301360 cites W2065494927 @default.
- W2887301360 cites W2097622337 @default.
- W2887301360 cites W2139852318 @default.
- W2887301360 cites W2140245639 @default.
- W2887301360 cites W2172275395 @default.
- W2887301360 cites W2519536754 @default.
- W2887301360 cites W2558764425 @default.
- W2887301360 cites W2560982456 @default.
- W2887301360 hasPublicationYear "2018" @default.
- W2887301360 type Work @default.
- W2887301360 sameAs 2887301360 @default.
- W2887301360 citedByCount "0" @default.
- W2887301360 crossrefType "dissertation" @default.
- W2887301360 hasAuthorship W2887301360A5018584786 @default.
- W2887301360 hasConcept C126838900 @default.
- W2887301360 hasConcept C138885662 @default.
- W2887301360 hasConcept C143409427 @default.
- W2887301360 hasConcept C144024400 @default.
- W2887301360 hasConcept C153180895 @default.
- W2887301360 hasConcept C154945302 @default.
- W2887301360 hasConcept C2776401178 @default.
- W2887301360 hasConcept C2779903281 @default.
- W2887301360 hasConcept C2780226545 @default.
- W2887301360 hasConcept C31601959 @default.
- W2887301360 hasConcept C31972630 @default.
- W2887301360 hasConcept C36289849 @default.
- W2887301360 hasConcept C41008148 @default.
- W2887301360 hasConcept C41895202 @default.
- W2887301360 hasConcept C71924100 @default.
- W2887301360 hasConceptScore W2887301360C126838900 @default.
- W2887301360 hasConceptScore W2887301360C138885662 @default.
- W2887301360 hasConceptScore W2887301360C143409427 @default.
- W2887301360 hasConceptScore W2887301360C144024400 @default.
- W2887301360 hasConceptScore W2887301360C153180895 @default.
- W2887301360 hasConceptScore W2887301360C154945302 @default.
- W2887301360 hasConceptScore W2887301360C2776401178 @default.
- W2887301360 hasConceptScore W2887301360C2779903281 @default.
- W2887301360 hasConceptScore W2887301360C2780226545 @default.
- W2887301360 hasConceptScore W2887301360C31601959 @default.
- W2887301360 hasConceptScore W2887301360C31972630 @default.
- W2887301360 hasConceptScore W2887301360C36289849 @default.
- W2887301360 hasConceptScore W2887301360C41008148 @default.
- W2887301360 hasConceptScore W2887301360C41895202 @default.
- W2887301360 hasConceptScore W2887301360C71924100 @default.
- W2887301360 hasLocation W28873013601 @default.
- W2887301360 hasOpenAccess W2887301360 @default.
- W2887301360 hasPrimaryLocation W28873013601 @default.
- W2887301360 hasRelatedWork W2102990282 @default.
- W2887301360 hasRelatedWork W2121919147 @default.
- W2887301360 hasRelatedWork W2297381019 @default.
- W2887301360 hasRelatedWork W2463908129 @default.
- W2887301360 hasRelatedWork W2791623035 @default.
- W2887301360 hasRelatedWork W2890890029 @default.
- W2887301360 hasRelatedWork W2985506298 @default.
- W2887301360 hasRelatedWork W3007486523 @default.
- W2887301360 hasRelatedWork W3091640241 @default.
- W2887301360 hasRelatedWork W3121491376 @default.
- W2887301360 hasRelatedWork W3161765756 @default.
- W2887301360 hasRelatedWork W3162011436 @default.
- W2887301360 hasRelatedWork W3173452286 @default.
- W2887301360 hasRelatedWork W3174392853 @default.
- W2887301360 hasRelatedWork W3181439599 @default.
- W2887301360 hasRelatedWork W3197740889 @default.
- W2887301360 hasRelatedWork W3213940327 @default.
- W2887301360 hasRelatedWork W2188091348 @default.
- W2887301360 hasRelatedWork W2295071629 @default.
- W2887301360 hasRelatedWork W3099296515 @default.
- W2887301360 isParatext "false" @default.
- W2887301360 isRetracted "false" @default.
- W2887301360 magId "2887301360" @default.
- W2887301360 workType "dissertation" @default.
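Each line of the listing above follows the same shape: `- <subject> <predicate> <object> @default.`, so a record like this can be regrouped by predicate with a short script. A minimal Python sketch, assuming that line format (the helper names `parse_triple_line` and `group_by_predicate` are illustrative, not part of any SemOpenAlex tooling):

```python
from collections import defaultdict


def parse_triple_line(line):
    """Split one listing line into a (subject, predicate, object) triple."""
    body = line.strip()
    if body.startswith("- "):
        body = body[2:]                      # drop the bullet marker
    if body.endswith("@default."):
        body = body[:-len("@default.")].strip()  # drop the graph suffix
    subject, predicate, obj = body.split(" ", 2)
    return subject, predicate, obj


def group_by_predicate(lines):
    """Collect all object values of a record, keyed by predicate."""
    record = defaultdict(list)
    for line in lines:
        _, predicate, obj = parse_triple_line(line)
        record[predicate].append(obj)
    return record


# A few lines copied from the listing above:
sample = [
    '- W2887301360 hasPublicationYear "2018" @default.',
    "- W2887301360 cites W2008771257 @default.",
    "- W2887301360 cites W2065494927 @default.",
]
record = group_by_predicate(sample)
```

Grouping by predicate makes multi-valued properties such as `cites`, `hasConcept`, and `hasRelatedWork` easy to inspect as lists rather than scanning 78 individual rows.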