Matches in SemOpenAlex for { <https://semopenalex.org/work/W4313238380> ?p ?o ?g. }
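The listing below can in principle be reproduced by querying the SemOpenAlex SPARQL endpoint for all predicate/object pairs of the work. A minimal sketch, assuming the public endpoint at https://semopenalex.org/sparql and the SPARQLWrapper Python package (both are assumptions, not stated in the listing itself):

```python
# Minimal sketch: fetch all property/value pairs for work W4313238380.
# Assumption: the public SemOpenAlex SPARQL endpoint is at the URL below.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery("""
    SELECT ?p ?o
    WHERE { <https://semopenalex.org/work/W4313238380> ?p ?o . }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    # Each row is one predicate/object pair, corresponding to one
    # line of the listing below.
    print(row["p"]["value"], row["o"]["value"])
```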
- W4313238380 abstract "Abstract Introduction Automatic whole brain and lesion segmentation at 7T presents challenges, primarily due to bias fields and susceptibility artifacts. Recent advances in segmentation methods, namely atlas-free and multi-contrast approaches (for example, using T1-weighted, T2-weighted, and fluid-attenuated inversion recovery or FLAIR images), can enhance segmentation performance; however, perfect registration at high fields remains a challenge, primarily due to distortion effects. We sought to use deep-learning (D/L) algorithms to perform both skull stripping and whole brain segmentation on multiple imaging contrasts generated in a single Magnetization Prepared 2 Rapid Acquisition Gradient Echoes (MP2RAGE) acquisition on participants clinically diagnosed with multiple sclerosis (MS). The segmentation results were compared with those from 3T images acquired in the same participants and with commonly available software packages. Finally, we explored ways to boost the performance of the D/L by using pseudo-labels generated from training on the 3T data (transfer learning). Methods 3T and 7T MRI scans acquired within 9 months of each other from 25 study participants clinically diagnosed with multiple sclerosis (mean age 51 years, SD 16 years, 18 women) were retrospectively analyzed with commonly used software packages (such as FreeSurfer), Classification using Derivative-based Features (C-DEF), nnU-Net (the “no-new-Net” version of the U-Net algorithm), and a novel 3T-to-7T transfer learning method, Pseudo-Label Assisted nnU-Net (PLAn). These segmentation results were then rated visually by trained experts and quantitatively in comparison with 3T label masks. Results Of the previously published methods considered, nnU-Net produced the best skull stripping at 7T in both the qualitative and quantitative ratings, followed by C-DEF 7T and FreeSurfer 7T. A similar trend was observed for tissue segmentation, as nnU-Net was again the best method at 7T for all tissue classes. Dice Similarity Coefficients (DSC) for lesions segmented with nnU-Net were 1.5 times higher than those from FreeSurfer at 7T. Relative to analysis with C-DEF segmentation on 3T scans, nnU-Net 7T had lower lesion volumes, with a correlation slope of just 0.68. PLAn 7T produced equivalent results to nnU-Net 7T in terms of skull stripping and most tissue classes, but it boosted lesion sensitivity by 15% relative to 3T, increasing the correlation slope to 0.90. This resulted in significantly better lesion segmentations as measured by expert rating (4% increase) and Dice coefficient (6% increase). Conclusion Deep learning methods can produce fast and reliable whole brain segmentations, including skull stripping and lesion detection, using data from a single 7T MRI sequence. While nnU-Net segmentations at 7T are superior to the other methods considered, the limited availability of labeled 7T data makes transfer learning an attractive option. In this case, pre-training an nnU-Net model using readily obtained 3T pseudo-labels was shown to boost lesion detection capabilities at 7T. This approach, which we call PLAn, is robust and readily adaptable due to its use of a single commonly gathered MRI sequence." @default. (see the DSC sketch after this listing)
- W4313238380 created "2023-01-06" @default.
- W4313238380 creator A5015714366 @default.
- W4313238380 creator A5024904743 @default.
- W4313238380 creator A5051620091 @default.
- W4313238380 creator A5052713371 @default.
- W4313238380 creator A5063726982 @default.
- W4313238380 creator A5071109533 @default.
- W4313238380 creator A5088696166 @default.
- W4313238380 date "2022-12-26" @default.
- W4313238380 modified "2023-10-16" @default.
- W4313238380 title "Pseudo-Label Assisted nnU-Net (PLAn) Enables Automatic Segmentation of 7T MRI From a Single Acquisition" @default.
- W4313238380 cites W1572639874 @default.
- W4313238380 cites W1993947467 @default.
- W4313238380 cites W2083839350 @default.
- W4313238380 cites W2105552075 @default.
- W4313238380 cites W2117140276 @default.
- W4313238380 cites W2117340355 @default.
- W4313238380 cites W2133127325 @default.
- W4313238380 cites W2162041043 @default.
- W4313238380 cites W2257766930 @default.
- W4313238380 cites W2777074421 @default.
- W4313238380 cites W2790874154 @default.
- W4313238380 cites W2806074443 @default.
- W4313238380 cites W2809191209 @default.
- W4313238380 cites W2908331852 @default.
- W4313238380 cites W2911708449 @default.
- W4313238380 cites W2914540786 @default.
- W4313238380 cites W2916497546 @default.
- W4313238380 cites W2920825015 @default.
- W4313238380 cites W2942752384 @default.
- W4313238380 cites W2951680115 @default.
- W4313238380 cites W2977883299 @default.
- W4313238380 cites W2980236807 @default.
- W4313238380 cites W3036388453 @default.
- W4313238380 cites W3045740144 @default.
- W4313238380 cites W3096874987 @default.
- W4313238380 cites W3112701542 @default.
- W4313238380 cites W3126990165 @default.
- W4313238380 cites W3130592898 @default.
- W4313238380 cites W3152394047 @default.
- W4313238380 cites W3157690401 @default.
- W4313238380 cites W3162731900 @default.
- W4313238380 cites W3186852274 @default.
- W4313238380 cites W3187472674 @default.
- W4313238380 cites W3204474939 @default.
- W4313238380 cites W4205882540 @default.
- W4313238380 cites W4232960070 @default.
- W4313238380 cites W4283729341 @default.
- W4313238380 doi "https://doi.org/10.1101/2022.12.22.22283866" @default.
- W4313238380 hasPublicationYear "2022" @default.
- W4313238380 type Work @default.
- W4313238380 citedByCount "1" @default.
- W4313238380 crossrefType "posted-content" @default.
- W4313238380 hasAuthorship W4313238380A5015714366 @default.
- W4313238380 hasAuthorship W4313238380A5024904743 @default.
- W4313238380 hasAuthorship W4313238380A5051620091 @default.
- W4313238380 hasAuthorship W4313238380A5052713371 @default.
- W4313238380 hasAuthorship W4313238380A5063726982 @default.
- W4313238380 hasAuthorship W4313238380A5071109533 @default.
- W4313238380 hasAuthorship W4313238380A5088696166 @default.
- W4313238380 hasBestOaLocation W43132383801 @default.
- W4313238380 hasConcept C101070640 @default.
- W4313238380 hasConcept C108583219 @default.
- W4313238380 hasConcept C118552586 @default.
- W4313238380 hasConcept C126838900 @default.
- W4313238380 hasConcept C143409427 @default.
- W4313238380 hasConcept C150899416 @default.
- W4313238380 hasConcept C153180895 @default.
- W4313238380 hasConcept C154945302 @default.
- W4313238380 hasConcept C41008148 @default.
- W4313238380 hasConcept C58693492 @default.
- W4313238380 hasConcept C71924100 @default.
- W4313238380 hasConcept C89600930 @default.
- W4313238380 hasConceptScore W4313238380C101070640 @default.
- W4313238380 hasConceptScore W4313238380C108583219 @default.
- W4313238380 hasConceptScore W4313238380C118552586 @default.
- W4313238380 hasConceptScore W4313238380C126838900 @default.
- W4313238380 hasConceptScore W4313238380C143409427 @default.
- W4313238380 hasConceptScore W4313238380C150899416 @default.
- W4313238380 hasConceptScore W4313238380C153180895 @default.
- W4313238380 hasConceptScore W4313238380C154945302 @default.
- W4313238380 hasConceptScore W4313238380C41008148 @default.
- W4313238380 hasConceptScore W4313238380C58693492 @default.
- W4313238380 hasConceptScore W4313238380C71924100 @default.
- W4313238380 hasConceptScore W4313238380C89600930 @default.
- W4313238380 hasLocation W43132383801 @default.
- W4313238380 hasOpenAccess W4313238380 @default.
- W4313238380 hasPrimaryLocation W43132383801 @default.
- W4313238380 hasRelatedWork W2551012455 @default.
- W4313238380 hasRelatedWork W2790662084 @default.
- W4313238380 hasRelatedWork W2889705046 @default.
- W4313238380 hasRelatedWork W2943474764 @default.
- W4313238380 hasRelatedWork W2964629181 @default.
- W4313238380 hasRelatedWork W3002526821 @default.
- W4313238380 hasRelatedWork W3018421652 @default.
- W4313238380 hasRelatedWork W3091976719 @default.
- W4313238380 hasRelatedWork W3192840557 @default.
- W4313238380 hasRelatedWork W4213299466 @default.
- W4313238380 isParatext "false" @default.
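The abstract above compares lesion segmentations using the Dice Similarity Coefficient (DSC). As a point of reference, a minimal sketch of the standard DSC computation for two binary masks, DSC = 2|A ∩ B| / (|A| + |B|); the function name and toy masks are illustrative, not taken from the paper:

```python
# Minimal sketch of the Dice Similarity Coefficient (DSC) for two binary masks.
# The function and example masks are illustrative, not from the paper.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Compute DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks of equal shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * intersection / total

# Example: two toy lesion masks with partial overlap.
pred = np.array([[0, 1, 1], [0, 1, 0]])
ref  = np.array([[0, 1, 0], [0, 1, 1]])
print(f"DSC = {dice_coefficient(pred, ref):.2f}")  # 0.67 for this toy pair
```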