Matches in SemOpenAlex for { <https://semopenalex.org/work/W2969245029> ?p ?o ?g. }
Showing items 1 to 98 of 98, with 100 items per page.
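The quad pattern in the header can be reproduced with standard SPARQL 1.1, where a GRAPH clause plays the role of the ?g variable. A minimal sketch, assuming the public SemOpenAlex endpoint at https://semopenalex.org/sparql is available; LIMIT 100 mirrors the page size reported above:

```sparql
# Minimal sketch of the query behind this listing.
# Assumption: the public SemOpenAlex SPARQL endpoint is reachable
# at https://semopenalex.org/sparql.
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W2969245029> ?p ?o .
  }
}
LIMIT 100
```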
- W2969245029 endingPage "272" @default.
- W2969245029 startingPage "256" @default.
- W2969245029 abstract "fMRI word decoding refers to decoding what the human brain is thinking by interpreting functional Magnetic Resonance Imaging (fMRI) scans from people watching or listening to words, representing a sort of mind-reading technology. Existing works decoding words from imaging data have been largely limited to concrete nouns from a relatively small number of semantic categories. Moreover, such studies use different word-stimulus presentation paradigms and different computational models, lacking a comprehensive understanding of the influence of different factors on fMRI word decoding. In this paper, we present a large-scale evaluation of eight word embedding models and their combinations for decoding fine-grained fMRI data associated with three classes of words recorded from three stimulus-presentation paradigms. Specifically, we investigate the following research questions: (1) How does the brain-image decoder perform on different classes of words? (2) How does the brain-image decoder perform in different stimulus-presentation paradigms? (3) How well does each word embedding model allow us to decode neural activation patterns in the human brain? Furthermore, we analyze the most informative voxels associated with different classes of words, stimulus-presentation paradigms and word embedding models to explore their neural basis. The results show the following: (1) Different word classes can be decoded most effectively with different word embedding models. Concrete nouns and verbs are more easily distinguished than abstract nouns and verbs. (2) Among the three stimulus-presentation paradigms (picture, sentence and word cloud), the picture paradigm achieves the highest decoding accuracy, followed by the sentence paradigm. (3) Among the eight word embedding models, the model that encodes visual information obtains the best performance, followed by models that encode textual and contextual information. (4) Compared to concrete nouns, which activate mostly vision-related brain regions, abstract nouns activate broader brain regions such as the visual, language and default-mode networks. Moreover, both the picture paradigm and the model that encodes visual information have stronger associations with vision-related brain regions than other paradigms and word embedding models, respectively." @default.
- W2969245029 created "2019-08-29" @default.
- W2969245029 creator A5001541516 @default.
- W2969245029 creator A5015785439 @default.
- W2969245029 creator A5016087392 @default.
- W2969245029 creator A5053473846 @default.
- W2969245029 creator A5062722477 @default.
- W2969245029 date "2020-01-01" @default.
- W2969245029 modified "2023-09-24" @default.
- W2969245029 title "Fine-grained neural decoding with distributed word representations" @default.
- W2969245029 cites W1976193721 @default.
- W2969245029 cites W1980592753 @default.
- W2969245029 cites W1992570774 @default.
- W2969245029 cites W2007226897 @default.
- W2969245029 cites W2019411442 @default.
- W2969245029 cites W2040036684 @default.
- W2969245029 cites W2042684628 @default.
- W2969245029 cites W2060824369 @default.
- W2969245029 cites W2063951486 @default.
- W2969245029 cites W2087473386 @default.
- W2969245029 cites W2105251485 @default.
- W2969245029 cites W2110259656 @default.
- W2969245029 cites W2112180451 @default.
- W2969245029 cites W2123819943 @default.
- W2969245029 cites W2126810579 @default.
- W2969245029 cites W2130095305 @default.
- W2969245029 cites W2130167591 @default.
- W2969245029 cites W2145887413 @default.
- W2969245029 cites W2165588015 @default.
- W2969245029 cites W2168217710 @default.
- W2969245029 cites W2169964599 @default.
- W2969245029 cites W2344527693 @default.
- W2969245029 cites W2344975321 @default.
- W2969245029 cites W2493916176 @default.
- W2969245029 cites W2592280765 @default.
- W2969245029 cites W2733865234 @default.
- W2969245029 cites W2782213998 @default.
- W2969245029 cites W2800082313 @default.
- W2969245029 cites W2800311957 @default.
- W2969245029 cites W2940585064 @default.
- W2969245029 cites W2943083682 @default.
- W2969245029 cites W4234698323 @default.
- W2969245029 doi "https://doi.org/10.1016/j.ins.2019.08.043" @default.
- W2969245029 hasPublicationYear "2020" @default.
- W2969245029 type Work @default.
- W2969245029 sameAs 2969245029 @default.
- W2969245029 citedByCount "15" @default.
- W2969245029 countsByYear W29692450292020 @default.
- W2969245029 countsByYear W29692450292021 @default.
- W2969245029 countsByYear W29692450292022 @default.
- W2969245029 countsByYear W29692450292023 @default.
- W2969245029 crossrefType "journal-article" @default.
- W2969245029 hasAuthorship W2969245029A5001541516 @default.
- W2969245029 hasAuthorship W2969245029A5015785439 @default.
- W2969245029 hasAuthorship W2969245029A5016087392 @default.
- W2969245029 hasAuthorship W2969245029A5053473846 @default.
- W2969245029 hasAuthorship W2969245029A5062722477 @default.
- W2969245029 hasConcept C11413529 @default.
- W2969245029 hasConcept C138885662 @default.
- W2969245029 hasConcept C154945302 @default.
- W2969245029 hasConcept C204321447 @default.
- W2969245029 hasConcept C28490314 @default.
- W2969245029 hasConcept C40743351 @default.
- W2969245029 hasConcept C41008148 @default.
- W2969245029 hasConcept C41895202 @default.
- W2969245029 hasConcept C57273362 @default.
- W2969245029 hasConcept C90805587 @default.
- W2969245029 hasConceptScore W2969245029C11413529 @default.
- W2969245029 hasConceptScore W2969245029C138885662 @default.
- W2969245029 hasConceptScore W2969245029C154945302 @default.
- W2969245029 hasConceptScore W2969245029C204321447 @default.
- W2969245029 hasConceptScore W2969245029C28490314 @default.
- W2969245029 hasConceptScore W2969245029C40743351 @default.
- W2969245029 hasConceptScore W2969245029C41008148 @default.
- W2969245029 hasConceptScore W2969245029C41895202 @default.
- W2969245029 hasConceptScore W2969245029C57273362 @default.
- W2969245029 hasConceptScore W2969245029C90805587 @default.
- W2969245029 hasFunder F4320325902 @default.
- W2969245029 hasLocation W29692450291 @default.
- W2969245029 hasOpenAccess W2969245029 @default.
- W2969245029 hasPrimaryLocation W29692450291 @default.
- W2969245029 hasRelatedWork W1508636238 @default.
- W2969245029 hasRelatedWork W2030492936 @default.
- W2969245029 hasRelatedWork W2351992004 @default.
- W2969245029 hasRelatedWork W2358034992 @default.
- W2969245029 hasRelatedWork W2360025963 @default.
- W2969245029 hasRelatedWork W2367936931 @default.
- W2969245029 hasRelatedWork W2532361892 @default.
- W2969245029 hasRelatedWork W2946095416 @default.
- W2969245029 hasRelatedWork W2969245029 @default.
- W2969245029 hasRelatedWork W3067752700 @default.
- W2969245029 hasVolume "507" @default.
- W2969245029 isParatext "false" @default.
- W2969245029 isRetracted "false" @default.
- W2969245029 magId "2969245029" @default.
- W2969245029 workType "article" @default.
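Since many predicates in the listing repeat (cites, hasConcept, hasRelatedWork, and so on), a grouped query gives a quick profile of the record without paging through every triple. A sketch under the same endpoint assumption as above; the predicate is left as a variable, so no ontology URIs need to be known in advance:

```sparql
# Count this work's triples per predicate to profile the record
# (same endpoint assumption as the sketch above).
SELECT ?p (COUNT(?o) AS ?n)
WHERE {
  <https://semopenalex.org/work/W2969245029> ?p ?o .
}
GROUP BY ?p
ORDER BY DESC(?n)
```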