Matches in SemOpenAlex for { <https://semopenalex.org/work/W4322723714> ?p ?o ?g. }
- W4322723714 endingPage "102788" @default.
- W4322723714 startingPage "102788" @default.
- W4322723714 abstract "Diffusion magnetic resonance imaging (dMRI) is an important tool for characterizing tissue microstructure based on biophysical models, which are typically multi-compartmental models with mathematically complex and highly non-linear forms. Resolving microstructure from these models with conventional optimization techniques is prone to estimation errors and requires dense sampling in q-space with a long scan time. Deep learning-based approaches have been proposed to overcome these limitations. Motivated by the Transformer's superior feature-extraction performance over convolutional structures, in this work we present a Transformer-based learning framework, namely the Microstructure Estimation Transformer with Sparse Coding (METSC), for dMRI-based microstructural parameter estimation. To take advantage of the Transformer while addressing its requirement for large training data, we explicitly introduce an inductive bias, i.e., model bias, into the Transformer using a sparse coding technique to facilitate training. METSC is thus composed of three stages: an embedding stage, a sparse representation stage, and a mapping stage. The embedding stage is a Transformer-based structure that encodes the signal in a high-level space to ensure the core voxel of a patch is represented effectively. In the sparse representation stage, a dictionary is constructed by solving a sparse reconstruction problem that unfolds the Iterative Hard Thresholding (IHT) process. The mapping stage is essentially a decoder that computes the microstructural parameters from the output of the second stage as a weighted sum of normalized dictionary coefficients, where the weights are also learned. We tested our framework on two dMRI models with downsampled q-space data, namely the intravoxel incoherent motion (IVIM) model and the neurite orientation dispersion and density imaging (NODDI) model. The proposed method achieved up to an 11.25-fold acceleration while retaining high fitting accuracy for NODDI fitting, reducing the mean squared error (MSE) by up to 70% compared with the previous q-space learning approach. METSC outperformed other state-of-the-art learning-based methods, including model-free and model-based methods. The network also showed robustness against noise and generalizability across different datasets. The superior performance of METSC indicates its potential to improve dMRI acquisition and model fitting in clinical applications." @default.
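The abstract's sparse representation stage unfolds the Iterative Hard Thresholding (IHT) process into network layers. As a point of reference, the classical (non-learned) IHT iteration it unfolds can be sketched as follows; the function name, step-size choice, and iteration count here are illustrative assumptions, not details from the paper:

```python
import numpy as np

def iterative_hard_thresholding(D, y, k, n_iters=50, step=None):
    """Recover a k-sparse code x with y ~= D @ x via classical IHT.

    Illustrative sketch only: each iteration takes a gradient step on
    the least-squares data term, then hard-thresholds to the k
    largest-magnitude entries. An unfolded network replaces the fixed
    step size and threshold with learned parameters per layer.
    """
    m, n = D.shape
    if step is None:
        # A step size of 1 / ||D||_2^2 is a standard safe choice.
        step = 1.0 / (np.linalg.norm(D, 2) ** 2)
    x = np.zeros(n)
    for _ in range(n_iters):
        # Gradient step on 0.5 * ||y - D x||^2.
        x = x + step * (D.T @ (y - D @ x))
        # Hard threshold: zero all but the k largest-magnitude entries.
        smallest = np.argsort(np.abs(x))[:-k]
        x[smallest] = 0.0
    return x
```

With an orthonormal dictionary the iteration recovers an exactly k-sparse code in one step; for general dictionaries it converges under standard restricted-isometry conditions.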
- W4322723714 created "2023-03-03" @default.
- W4322723714 creator A5002386229 @default.
- W4322723714 creator A5025724253 @default.
- W4322723714 creator A5035557835 @default.
- W4322723714 creator A5040190038 @default.
- W4322723714 creator A5044544424 @default.
- W4322723714 creator A5049554657 @default.
- W4322723714 creator A5058150411 @default.
- W4322723714 creator A5059703639 @default.
- W4322723714 date "2023-05-01" @default.
- W4322723714 modified "2023-10-16" @default.
- W4322723714 title "A microstructure estimation Transformer inspired by sparse representation for diffusion MRI" @default.
- W4322723714 cites W2024729467 @default.
- W4322723714 cites W2031345090 @default.
- W4322723714 cites W2032254014 @default.
- W4322723714 cites W2040812980 @default.
- W4322723714 cites W2077559791 @default.
- W4322723714 cites W2118879132 @default.
- W4322723714 cites W2120259577 @default.
- W4322723714 cites W2122662954 @default.
- W4322723714 cites W2139794744 @default.
- W4322723714 cites W2156295356 @default.
- W4322723714 cites W2158995840 @default.
- W4322723714 cites W2194775991 @default.
- W4322723714 cites W2213972894 @default.
- W4322723714 cites W2328247767 @default.
- W4322723714 cites W2625047734 @default.
- W4322723714 cites W2745541172 @default.
- W4322723714 cites W2752441243 @default.
- W4322723714 cites W2778713466 @default.
- W4322723714 cites W2887887303 @default.
- W4322723714 cites W2901826627 @default.
- W4322723714 cites W2902719825 @default.
- W4322723714 cites W2938120698 @default.
- W4322723714 cites W2963091558 @default.
- W4322723714 cites W2963322354 @default.
- W4322723714 cites W2964012060 @default.
- W4322723714 cites W2966242695 @default.
- W4322723714 cites W3001258698 @default.
- W4322723714 cites W3015618497 @default.
- W4322723714 cites W3163993681 @default.
- W4322723714 cites W3196886293 @default.
- W4322723714 doi "https://doi.org/10.1016/j.media.2023.102788" @default.
- W4322723714 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/36921485" @default.
- W4322723714 hasPublicationYear "2023" @default.
- W4322723714 type Work @default.
- W4322723714 citedByCount "2" @default.
- W4322723714 countsByYear W43227237142023 @default.
- W4322723714 crossrefType "journal-article" @default.
- W4322723714 hasAuthorship W4322723714A5002386229 @default.
- W4322723714 hasAuthorship W4322723714A5025724253 @default.
- W4322723714 hasAuthorship W4322723714A5035557835 @default.
- W4322723714 hasAuthorship W4322723714A5040190038 @default.
- W4322723714 hasAuthorship W4322723714A5044544424 @default.
- W4322723714 hasAuthorship W4322723714A5049554657 @default.
- W4322723714 hasAuthorship W4322723714A5058150411 @default.
- W4322723714 hasAuthorship W4322723714A5059703639 @default.
- W4322723714 hasBestOaLocation W43227237142 @default.
- W4322723714 hasConcept C11413529 @default.
- W4322723714 hasConcept C121332964 @default.
- W4322723714 hasConcept C124066611 @default.
- W4322723714 hasConcept C153180895 @default.
- W4322723714 hasConcept C154945302 @default.
- W4322723714 hasConcept C162324750 @default.
- W4322723714 hasConcept C165801399 @default.
- W4322723714 hasConcept C187736073 @default.
- W4322723714 hasConcept C197352929 @default.
- W4322723714 hasConcept C2780451532 @default.
- W4322723714 hasConcept C28006648 @default.
- W4322723714 hasConcept C41008148 @default.
- W4322723714 hasConcept C41608201 @default.
- W4322723714 hasConcept C62520636 @default.
- W4322723714 hasConcept C66322947 @default.
- W4322723714 hasConcept C77637269 @default.
- W4322723714 hasConceptScore W4322723714C11413529 @default.
- W4322723714 hasConceptScore W4322723714C121332964 @default.
- W4322723714 hasConceptScore W4322723714C124066611 @default.
- W4322723714 hasConceptScore W4322723714C153180895 @default.
- W4322723714 hasConceptScore W4322723714C154945302 @default.
- W4322723714 hasConceptScore W4322723714C162324750 @default.
- W4322723714 hasConceptScore W4322723714C165801399 @default.
- W4322723714 hasConceptScore W4322723714C187736073 @default.
- W4322723714 hasConceptScore W4322723714C197352929 @default.
- W4322723714 hasConceptScore W4322723714C2780451532 @default.
- W4322723714 hasConceptScore W4322723714C28006648 @default.
- W4322723714 hasConceptScore W4322723714C41008148 @default.
- W4322723714 hasConceptScore W4322723714C41608201 @default.
- W4322723714 hasConceptScore W4322723714C62520636 @default.
- W4322723714 hasConceptScore W4322723714C66322947 @default.
- W4322723714 hasConceptScore W4322723714C77637269 @default.
- W4322723714 hasFunder F4320321001 @default.
- W4322723714 hasFunder F4320321540 @default.
- W4322723714 hasFunder F4320338110 @default.
- W4322723714 hasLocation W43227237141 @default.
- W4322723714 hasLocation W43227237142 @default.
- W4322723714 hasLocation W43227237143 @default.
- W4322723714 hasOpenAccess W4322723714 @default.