Matches in SemOpenAlex for { <https://semopenalex.org/work/W3048897280> ?p ?o ?g. }
- W3048897280 abstract "ABSTRACT Any given visual object input is characterized by multiple visual features, such as identity, position and size. Despite the usefulness of identity and nonidentity features in vision and their joint coding throughout the primate ventral visual processing pathway, they have so far been studied relatively independently. Here we document the relative coding strength of object identity and nonidentity features in a brain region and how this may change across the human ventral visual pathway. We examined a total of four nonidentity features, including two Euclidean features (position and size) and two non-Euclidean features (image statistics and spatial frequency content of an image). Overall, identity representation increased and nonidentity feature representation decreased along the ventral visual pathway, with identity outweighing the non-Euclidean features, but not the Euclidean ones, at higher levels of visual processing. A similar analysis was performed in 14 convolutional neural networks (CNNs) pretrained to perform object categorization, varying in architecture, depth, and the presence of recurrent processing. While the relative coding strength of object identity and nonidentity features in lower CNN layers matched well with that in early human visual areas, the match between higher CNN layers and higher human visual regions was limited. Similar results were obtained regardless of whether a CNN was trained with real-world or stylized object images that emphasized shape representation. Together, by measuring the relative coding strength of object identity and nonidentity features, our approach provides a new tool to characterize feature coding in the human brain and the correspondence between the brain and CNNs. 
SIGNIFICANCE STATEMENT This study documented the relative coding strength of object identity compared to four types of nonidentity features along the human ventral visual processing pathway and compared brain responses with those of 14 CNNs pretrained to perform object categorization. Overall, identity representation increased and nonidentity feature representation decreased along the ventral visual pathway, with the coding strength of the different nonidentity features differing at higher levels of visual processing. While feature coding in lower CNN layers matched well with that of early human visual areas, the match between higher CNN layers and higher human visual regions was limited. Our approach provides a new tool to characterize feature coding in the human brain and the correspondence between the brain and CNNs." @default.
- W3048897280 created "2020-08-18" @default.
- W3048897280 creator A5038125176 @default.
- W3048897280 creator A5071841129 @default.
- W3048897280 date "2020-08-12" @default.
- W3048897280 modified "2023-09-23" @default.
- W3048897280 title "The relative coding strength of object identity and nonidentity features in human occipito-temporal cortex and convolutional neural networks" @default.
- W3048897280 cites W1535606938 @default.
- W3048897280 cites W1969196674 @default.
- W3048897280 cites W1971017968 @default.
- W3048897280 cites W1982336813 @default.
- W3048897280 cites W2000089809 @default.
- W3048897280 cites W2013137720 @default.
- W3048897280 cites W2040036684 @default.
- W3048897280 cites W2051697612 @default.
- W3048897280 cites W2058616551 @default.
- W3048897280 cites W2077496595 @default.
- W3048897280 cites W2080514369 @default.
- W3048897280 cites W2086220442 @default.
- W3048897280 cites W2089016247 @default.
- W3048897280 cites W2108598243 @default.
- W3048897280 cites W2113624200 @default.
- W3048897280 cites W2117539524 @default.
- W3048897280 cites W2121538974 @default.
- W3048897280 cites W2131354767 @default.
- W3048897280 cites W2139721641 @default.
- W3048897280 cites W2145458017 @default.
- W3048897280 cites W2145668416 @default.
- W3048897280 cites W2151721316 @default.
- W3048897280 cites W2153633111 @default.
- W3048897280 cites W2155993650 @default.
- W3048897280 cites W2156128050 @default.
- W3048897280 cites W2162950292 @default.
- W3048897280 cites W2166130385 @default.
- W3048897280 cites W2166206801 @default.
- W3048897280 cites W2176287621 @default.
- W3048897280 cites W2201865119 @default.
- W3048897280 cites W2274405424 @default.
- W3048897280 cites W2280426979 @default.
- W3048897280 cites W2286279415 @default.
- W3048897280 cites W2343204383 @default.
- W3048897280 cites W2412479940 @default.
- W3048897280 cites W2412480261 @default.
- W3048897280 cites W2505114805 @default.
- W3048897280 cites W2537084945 @default.
- W3048897280 cites W2755036008 @default.
- W3048897280 cites W2766369197 @default.
- W3048897280 cites W2797254707 @default.
- W3048897280 cites W2892247889 @default.
- W3048897280 cites W2902537539 @default.
- W3048897280 cites W2903867357 @default.
- W3048897280 cites W2915130814 @default.
- W3048897280 cites W2919115771 @default.
- W3048897280 cites W2949512190 @default.
- W3048897280 cites W2951506741 @default.
- W3048897280 cites W2987648659 @default.
- W3048897280 cites W3011123116 @default.
- W3048897280 cites W3029307787 @default.
- W3048897280 cites W3098596645 @default.
- W3048897280 cites W4235868404 @default.
- W3048897280 doi "https://doi.org/10.1101/2020.08.11.246967" @default.
- W3048897280 hasPublicationYear "2020" @default.
- W3048897280 type Work @default.
- W3048897280 sameAs 3048897280 @default.
- W3048897280 citedByCount "1" @default.
- W3048897280 countsByYear W30488972802021 @default.
- W3048897280 crossrefType "posted-content" @default.
- W3048897280 hasAuthorship W3048897280A5038125176 @default.
- W3048897280 hasAuthorship W3048897280A5071841129 @default.
- W3048897280 hasBestOaLocation W30488972801 @default.
- W3048897280 hasConcept C105795698 @default.
- W3048897280 hasConcept C121332964 @default.
- W3048897280 hasConcept C138885662 @default.
- W3048897280 hasConcept C153180895 @default.
- W3048897280 hasConcept C154945302 @default.
- W3048897280 hasConcept C15744967 @default.
- W3048897280 hasConcept C169760540 @default.
- W3048897280 hasConcept C179518139 @default.
- W3048897280 hasConcept C24890656 @default.
- W3048897280 hasConcept C26760741 @default.
- W3048897280 hasConcept C2776401178 @default.
- W3048897280 hasConcept C2778251979 @default.
- W3048897280 hasConcept C2778355321 @default.
- W3048897280 hasConcept C33923547 @default.
- W3048897280 hasConcept C41008148 @default.
- W3048897280 hasConcept C41895202 @default.
- W3048897280 hasConcept C46312422 @default.
- W3048897280 hasConcept C81363708 @default.
- W3048897280 hasConcept C94124525 @default.
- W3048897280 hasConceptScore W3048897280C105795698 @default.
- W3048897280 hasConceptScore W3048897280C121332964 @default.
- W3048897280 hasConceptScore W3048897280C138885662 @default.
- W3048897280 hasConceptScore W3048897280C153180895 @default.
- W3048897280 hasConceptScore W3048897280C154945302 @default.
- W3048897280 hasConceptScore W3048897280C15744967 @default.
- W3048897280 hasConceptScore W3048897280C169760540 @default.
- W3048897280 hasConceptScore W3048897280C179518139 @default.
- W3048897280 hasConceptScore W3048897280C24890656 @default.
- W3048897280 hasConceptScore W3048897280C26760741 @default.
- W3048897280 hasConceptScore W3048897280C2776401178 @default.
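The listing above is the result of the quad pattern `{ <https://semopenalex.org/work/W3048897280> ?p ?o ?g. }`. As a minimal sketch, the same lookup can be reproduced in Python against the SemOpenAlex SPARQL endpoint; note that the endpoint URL and the JSON result format used here are assumptions, not confirmed by this dump, so check the current SemOpenAlex documentation before relying on them.

```python
import json
import urllib.parse
import urllib.request

# Assumed public SPARQL endpoint for SemOpenAlex (verify before use).
SEMOPENALEX_ENDPOINT = "https://semopenalex.org/sparql"


def build_work_query(work_id: str) -> str:
    """Build the quad pattern shown in the listing header for one work ID."""
    return (
        "SELECT ?p ?o ?g WHERE { GRAPH ?g { "
        f"<https://semopenalex.org/work/{work_id}> ?p ?o . "
        "} }"
    )


def fetch_triples(work_id: str) -> dict:
    """Send the query and parse SPARQL JSON results (requires network access)."""
    query = build_work_query(work_id)
    url = SEMOPENALEX_ENDPOINT + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the query for the work in this listing; fetch_triples() would
    # return bindings for ?p, ?o, ?g like the predicate/object pairs above.
    print(build_work_query("W3048897280"))
```

Each result binding corresponds to one line of the listing: `?p` is the predicate (e.g. `cites`, `hasConcept`), `?o` the object, and `?g` the named graph (`@default` here).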