Matches in SemOpenAlex for { <https://semopenalex.org/work/W3197347857> ?p ?o ?g. }
Showing items 1 to 80 of 80, with 100 items per page.
- W3197347857 endingPage "2633" @default.
- W3197347857 startingPage "2633" @default.
- W3197347857 abstract "A detailed understanding of visual object representations in brain and behavior is fundamentally limited by the number of stimuli that can be presented in any one experiment. Ideally, the space of objects should be sampled in a representative manner, with (1) maximal breadth of the stimulus material and (2) minimal bias in the object categories. Such a dataset would allow the detailed study of object representations and provide a basis for testing and comparing computational models of vision and semantics. Towards this end, we recently developed the large-scale object image database THINGS of more than 26,000 images of 1,854 object concepts sampled representatively from the American English language (Hebart et al., 2019). Here we introduce THINGS-fMRI and THINGS-MEG, two large-scale brain imaging datasets using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). Over the course of 12 scanning sessions, 7 participants (fMRI: n = 3, MEG: n = 4) were presented with images from the THINGS database (fMRI: 8,740 images of 720 concepts, MEG: 22,448 images of 1,854 concepts) while they carried out an oddball detection task. To reduce noise, participants’ heads were stabilized and repositioned between sessions using custom head casts. To facilitate use by other researchers, the data were converted to the Brain Imaging Data Structure format (BIDS; Gorgolewski et al., 2016) and preprocessed with fMRIPrep (Esteban et al., 2018). Estimates of the noise ceiling and general quality control demonstrate overall high data quality, with only small overall displacement between sessions. By carrying out a broad and representative multimodal sampling of object representations in humans, we hope this dataset will be of use for visual neuroscience and computational vision research alike." @default.
- W3197347857 created "2021-09-13" @default.
- W3197347857 creator A5007550136 @default.
- W3197347857 creator A5013008075 @default.
- W3197347857 creator A5015223191 @default.
- W3197347857 creator A5038125176 @default.
- W3197347857 creator A5038264046 @default.
- W3197347857 creator A5043035732 @default.
- W3197347857 creator A5074594419 @default.
- W3197347857 creator A5079406989 @default.
- W3197347857 creator A5086192287 @default.
- W3197347857 date "2021-09-27" @default.
- W3197347857 modified "2023-09-26" @default.
- W3197347857 title "THINGS-fMRI/MEG: A large-scale multimodal neuroimaging dataset of responses to natural object images" @default.
- W3197347857 doi "https://doi.org/10.1167/jov.21.9.2633" @default.
- W3197347857 hasPublicationYear "2021" @default.
- W3197347857 type Work @default.
- W3197347857 sameAs 3197347857 @default.
- W3197347857 citedByCount "1" @default.
- W3197347857 countsByYear W31973478572022 @default.
- W3197347857 crossrefType "journal-article" @default.
- W3197347857 hasAuthorship W3197347857A5007550136 @default.
- W3197347857 hasAuthorship W3197347857A5013008075 @default.
- W3197347857 hasAuthorship W3197347857A5015223191 @default.
- W3197347857 hasAuthorship W3197347857A5038125176 @default.
- W3197347857 hasAuthorship W3197347857A5038264046 @default.
- W3197347857 hasAuthorship W3197347857A5043035732 @default.
- W3197347857 hasAuthorship W3197347857A5074594419 @default.
- W3197347857 hasAuthorship W3197347857A5079406989 @default.
- W3197347857 hasAuthorship W3197347857A5086192287 @default.
- W3197347857 hasBestOaLocation W31973478571 @default.
- W3197347857 hasConcept C115961682 @default.
- W3197347857 hasConcept C120843803 @default.
- W3197347857 hasConcept C153180895 @default.
- W3197347857 hasConcept C154945302 @default.
- W3197347857 hasConcept C15744967 @default.
- W3197347857 hasConcept C169760540 @default.
- W3197347857 hasConcept C2779226451 @default.
- W3197347857 hasConcept C2781238097 @default.
- W3197347857 hasConcept C31972630 @default.
- W3197347857 hasConcept C41008148 @default.
- W3197347857 hasConcept C522805319 @default.
- W3197347857 hasConcept C55020928 @default.
- W3197347857 hasConcept C556910895 @default.
- W3197347857 hasConcept C58693492 @default.
- W3197347857 hasConceptScore W3197347857C115961682 @default.
- W3197347857 hasConceptScore W3197347857C120843803 @default.
- W3197347857 hasConceptScore W3197347857C153180895 @default.
- W3197347857 hasConceptScore W3197347857C154945302 @default.
- W3197347857 hasConceptScore W3197347857C15744967 @default.
- W3197347857 hasConceptScore W3197347857C169760540 @default.
- W3197347857 hasConceptScore W3197347857C2779226451 @default.
- W3197347857 hasConceptScore W3197347857C2781238097 @default.
- W3197347857 hasConceptScore W3197347857C31972630 @default.
- W3197347857 hasConceptScore W3197347857C41008148 @default.
- W3197347857 hasConceptScore W3197347857C522805319 @default.
- W3197347857 hasConceptScore W3197347857C55020928 @default.
- W3197347857 hasConceptScore W3197347857C556910895 @default.
- W3197347857 hasConceptScore W3197347857C58693492 @default.
- W3197347857 hasIssue "9" @default.
- W3197347857 hasLocation W31973478571 @default.
- W3197347857 hasOpenAccess W3197347857 @default.
- W3197347857 hasPrimaryLocation W31973478571 @default.
- W3197347857 hasRelatedWork W1622545542 @default.
- W3197347857 hasRelatedWork W1922604896 @default.
- W3197347857 hasRelatedWork W1997891553 @default.
- W3197347857 hasRelatedWork W2029657812 @default.
- W3197347857 hasRelatedWork W2055921173 @default.
- W3197347857 hasRelatedWork W2122418437 @default.
- W3197347857 hasRelatedWork W2978817795 @default.
- W3197347857 hasRelatedWork W4308414267 @default.
- W3197347857 hasRelatedWork W4312038223 @default.
- W3197347857 hasRelatedWork W4313156204 @default.
- W3197347857 hasVolume "21" @default.
- W3197347857 isParatext "false" @default.
- W3197347857 isRetracted "false" @default.
- W3197347857 magId "3197347857" @default.
- W3197347857 workType "article" @default.
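The quad pattern shown at the top of this listing, `{ <https://semopenalex.org/work/W3197347857> ?p ?o ?g. }`, can be issued programmatically against a SPARQL endpoint. The sketch below only builds a URL-encoded GET request for that pattern; the endpoint URL `https://semopenalex.org/sparql` and the JSON `format` parameter are assumptions about the service, not part of this record.

```python
from urllib.parse import urlencode

# Assumed endpoint URL; SemOpenAlex may expose it elsewhere or with other parameters.
ENDPOINT = "https://semopenalex.org/sparql"

def build_sparql_request(work_iri: str, endpoint: str = ENDPOINT) -> str:
    """Build a GET request URL for the quad pattern { <work> ?p ?o ?g . }."""
    query = f"SELECT ?p ?o ?g WHERE {{ GRAPH ?g {{ <{work_iri}> ?p ?o . }} }}"
    # urlencode percent-escapes the query so it is safe in a URL query string.
    return endpoint + "?" + urlencode({"query": query, "format": "json"})

url = build_sparql_request("https://semopenalex.org/work/W3197347857")
print(url)
```

Sending the resulting URL with any HTTP client would return the 80 property/value pairs listed above, one row per triple, together with the named graph each triple lives in.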