Matches in SemOpenAlex for { <https://semopenalex.org/work/W4368359265> ?p ?o ?g. }
Showing items 1 to 86 of 86, with 100 items per page.
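The quad pattern above (`?p ?o ?g`) can be reproduced programmatically. The sketch below is a minimal example of querying a SPARQL endpoint for every predicate/object pair of this work; it assumes SemOpenAlex exposes its endpoint at `https://semopenalex.org/sparql` (an assumption — verify the URL before relying on it) and uses a plain triple pattern rather than the page's quad pattern with the graph variable.

```python
# Hedged sketch: fetch all triples about W4368359265 from SemOpenAlex.
# The endpoint URL is an assumption based on the SemOpenAlex project pages.
import json
import urllib.parse
import urllib.request

WORK_IRI = "https://semopenalex.org/work/W4368359265"

def build_query(work_iri: str) -> str:
    """Return a SPARQL query listing every predicate/object pair for the work."""
    return f"SELECT ?p ?o WHERE {{ <{work_iri}> ?p ?o . }}"

def fetch_triples(work_iri: str,
                  endpoint: str = "https://semopenalex.org/sparql") -> dict:
    """Send the query and return the decoded SPARQL JSON results (needs network)."""
    params = urllib.parse.urlencode({"query": build_query(work_iri)})
    req = urllib.request.Request(
        f"{endpoint}?{params}",
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Print the query; call fetch_triples(WORK_IRI) when network access is available.
    print(build_query(WORK_IRI))
```

The `Accept: application/sparql-results+json` header requests the standard SPARQL JSON results format, so each row in `results["results"]["bindings"]` carries one predicate/object pair like the lines listed below.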
- W4368359265 endingPage "225" @default.
- W4368359265 startingPage "217" @default.
- W4368359265 abstract "Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disease. Neuroimaging based on magnetic resonance imaging (MRI) is one of the most intuitive and reliable methods to perform AD screening and diagnosis. Clinical head MRI detection generates multimodal image data, and to solve the problem of multimodal MRI processing and information fusion, this paper proposes a structural and functional MRI feature extraction and fusion method based on generalized convolutional neural networks (gCNN). The method includes a three-dimensional residual U-shaped network based on a hybrid attention mechanism (3D HA-ResUNet) for feature representation and classification of structural MRI, and a U-shaped graph convolutional neural network (U-GCN) for node feature representation and classification of brain functional networks derived from functional MRI. Based on the fusion of the two types of image features, the optimal feature subset is selected by discrete binary particle swarm optimization, and the prediction results are output by a machine learning classifier. Validation results on a multimodal dataset from the AD Neuroimaging Initiative (ADNI) open-source database show that the proposed models have superior performance in their respective data domains. The gCNN framework combines the advantages of these two models and further improves on the performance of the single-modal MRI methods, increasing classification accuracy and sensitivity by 5.56% and 11.11%, respectively. In conclusion, the gCNN-based multimodal MRI classification method proposed in this paper can provide a technical basis for the auxiliary diagnosis of Alzheimer's disease." @default.
- W4368359265 created "2023-05-05" @default.
- W4368359265 creator A5011639580 @default.
- W4368359265 creator A5029598460 @default.
- W4368359265 creator A5030846584 @default.
- W4368359265 creator A5085346031 @default.
- W4368359265 date "2023-04-25" @default.
- W4368359265 modified "2023-09-30" @default.
- W4368359265 title "[Research on classification method of multimodal magnetic resonance images of Alzheimer's disease based on generalized convolutional neural networks]." @default.
- W4368359265 cites W1820340588 @default.
- W4368359265 cites W2009494091 @default.
- W4368359265 cites W2044770804 @default.
- W4368359265 cites W2167822639 @default.
- W4368359265 cites W2625749968 @default.
- W4368359265 cites W2798438349 @default.
- W4368359265 cites W2907101105 @default.
- W4368359265 cites W2912944512 @default.
- W4368359265 cites W2963420686 @default.
- W4368359265 cites W2995495466 @default.
- W4368359265 cites W2995864059 @default.
- W4368359265 cites W3009435139 @default.
- W4368359265 cites W3089341061 @default.
- W4368359265 cites W3095479837 @default.
- W4368359265 cites W3171990888 @default.
- W4368359265 doi "https://doi.org/10.7507/1001-5515.202212046" @default.
- W4368359265 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/37139751" @default.
- W4368359265 hasPublicationYear "2023" @default.
- W4368359265 type Work @default.
- W4368359265 citedByCount "0" @default.
- W4368359265 crossrefType "journal-article" @default.
- W4368359265 hasAuthorship W4368359265A5011639580 @default.
- W4368359265 hasAuthorship W4368359265A5029598460 @default.
- W4368359265 hasAuthorship W4368359265A5030846584 @default.
- W4368359265 hasAuthorship W4368359265A5085346031 @default.
- W4368359265 hasConcept C119857082 @default.
- W4368359265 hasConcept C126838900 @default.
- W4368359265 hasConcept C138885662 @default.
- W4368359265 hasConcept C143409427 @default.
- W4368359265 hasConcept C153180895 @default.
- W4368359265 hasConcept C154945302 @default.
- W4368359265 hasConcept C15744967 @default.
- W4368359265 hasConcept C169760540 @default.
- W4368359265 hasConcept C2776401178 @default.
- W4368359265 hasConcept C2779226451 @default.
- W4368359265 hasConcept C41008148 @default.
- W4368359265 hasConcept C41895202 @default.
- W4368359265 hasConcept C52622490 @default.
- W4368359265 hasConcept C58693492 @default.
- W4368359265 hasConcept C71924100 @default.
- W4368359265 hasConcept C81363708 @default.
- W4368359265 hasConceptScore W4368359265C119857082 @default.
- W4368359265 hasConceptScore W4368359265C126838900 @default.
- W4368359265 hasConceptScore W4368359265C138885662 @default.
- W4368359265 hasConceptScore W4368359265C143409427 @default.
- W4368359265 hasConceptScore W4368359265C153180895 @default.
- W4368359265 hasConceptScore W4368359265C154945302 @default.
- W4368359265 hasConceptScore W4368359265C15744967 @default.
- W4368359265 hasConceptScore W4368359265C169760540 @default.
- W4368359265 hasConceptScore W4368359265C2776401178 @default.
- W4368359265 hasConceptScore W4368359265C2779226451 @default.
- W4368359265 hasConceptScore W4368359265C41008148 @default.
- W4368359265 hasConceptScore W4368359265C41895202 @default.
- W4368359265 hasConceptScore W4368359265C52622490 @default.
- W4368359265 hasConceptScore W4368359265C58693492 @default.
- W4368359265 hasConceptScore W4368359265C71924100 @default.
- W4368359265 hasConceptScore W4368359265C81363708 @default.
- W4368359265 hasIssue "2" @default.
- W4368359265 hasLocation W43683592651 @default.
- W4368359265 hasOpenAccess W4368359265 @default.
- W4368359265 hasPrimaryLocation W43683592651 @default.
- W4368359265 hasRelatedWork W2059299633 @default.
- W4368359265 hasRelatedWork W2546942002 @default.
- W4368359265 hasRelatedWork W2732542196 @default.
- W4368359265 hasRelatedWork W2760085659 @default.
- W4368359265 hasRelatedWork W2767651786 @default.
- W4368359265 hasRelatedWork W2940977206 @default.
- W4368359265 hasRelatedWork W2969680539 @default.
- W4368359265 hasRelatedWork W2977314777 @default.
- W4368359265 hasRelatedWork W2995914718 @default.
- W4368359265 hasRelatedWork W3156786002 @default.
- W4368359265 hasVolume "40" @default.
- W4368359265 isParatext "false" @default.
- W4368359265 isRetracted "false" @default.
- W4368359265 workType "article" @default.