Matches in SemOpenAlex for { <https://semopenalex.org/work/W2963102759> ?p ?o ?g. }
Showing items 1 to 71 of 71, with 100 items per page.
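The listing below can be reproduced with a query along the following lines against the SemOpenAlex SPARQL endpoint. This is a minimal sketch: the endpoint URL (https://semopenalex.org/sparql) and the omission of the graph variable are assumptions on my part, since every triple shown here is reported against the default graph.

```sparql
# Minimal sketch of the query behind this listing
# (assumed endpoint: https://semopenalex.org/sparql).
# It returns every predicate/object pair for the work,
# corresponding to the 71 items listed below.
SELECT ?p ?o
WHERE {
  <https://semopenalex.org/work/W2963102759> ?p ?o .
}
LIMIT 100
```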
- W2963102759 abstract "Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences." @default.
- W2963102759 created "2019-07-30" @default.
- W2963102759 creator A5001833084 @default.
- W2963102759 creator A5071433151 @default.
- W2963102759 creator A5079424258 @default.
- W2963102759 date "2017-10-01" @default.
- W2963102759 modified "2023-09-28" @default.
- W2963102759 title "Invariant recognition drives neural representations of action sequences" @default.
- W2963102759 hasPublicationYear "2017" @default.
- W2963102759 type Work @default.
- W2963102759 sameAs 2963102759 @default.
- W2963102759 citedByCount "0" @default.
- W2963102759 crossrefType "book-chapter" @default.
- W2963102759 hasAuthorship W2963102759A5001833084 @default.
- W2963102759 hasAuthorship W2963102759A5071433151 @default.
- W2963102759 hasAuthorship W2963102759A5079424258 @default.
- W2963102759 hasConcept C153180895 @default.
- W2963102759 hasConcept C154945302 @default.
- W2963102759 hasConcept C15744967 @default.
- W2963102759 hasConcept C169760540 @default.
- W2963102759 hasConcept C190470478 @default.
- W2963102759 hasConcept C26760741 @default.
- W2963102759 hasConcept C2780103172 @default.
- W2963102759 hasConcept C2781238097 @default.
- W2963102759 hasConcept C33923547 @default.
- W2963102759 hasConcept C37914503 @default.
- W2963102759 hasConcept C41008148 @default.
- W2963102759 hasConcept C64876066 @default.
- W2963102759 hasConcept C81363708 @default.
- W2963102759 hasConcept C94124525 @default.
- W2963102759 hasConceptScore W2963102759C153180895 @default.
- W2963102759 hasConceptScore W2963102759C154945302 @default.
- W2963102759 hasConceptScore W2963102759C15744967 @default.
- W2963102759 hasConceptScore W2963102759C169760540 @default.
- W2963102759 hasConceptScore W2963102759C190470478 @default.
- W2963102759 hasConceptScore W2963102759C26760741 @default.
- W2963102759 hasConceptScore W2963102759C2780103172 @default.
- W2963102759 hasConceptScore W2963102759C2781238097 @default.
- W2963102759 hasConceptScore W2963102759C33923547 @default.
- W2963102759 hasConceptScore W2963102759C37914503 @default.
- W2963102759 hasConceptScore W2963102759C41008148 @default.
- W2963102759 hasConceptScore W2963102759C64876066 @default.
- W2963102759 hasConceptScore W2963102759C81363708 @default.
- W2963102759 hasConceptScore W2963102759C94124525 @default.
- W2963102759 hasLocation W29631027591 @default.
- W2963102759 hasOpenAccess W2963102759 @default.
- W2963102759 hasPrimaryLocation W29631027591 @default.
- W2963102759 hasRelatedWork W2128280551 @default.
- W2963102759 hasRelatedWork W2139788083 @default.
- W2963102759 hasRelatedWork W2204570207 @default.
- W2963102759 hasRelatedWork W2295078983 @default.
- W2963102759 hasRelatedWork W2343204383 @default.
- W2963102759 hasRelatedWork W2492109573 @default.
- W2963102759 hasRelatedWork W2504442950 @default.
- W2963102759 hasRelatedWork W2563716168 @default.
- W2963102759 hasRelatedWork W2621975776 @default.
- W2963102759 hasRelatedWork W2768180730 @default.
- W2963102759 hasRelatedWork W2795373741 @default.
- W2963102759 hasRelatedWork W2929989234 @default.
- W2963102759 hasRelatedWork W2967777122 @default.
- W2963102759 hasRelatedWork W3023072174 @default.
- W2963102759 hasRelatedWork W3087598430 @default.
- W2963102759 hasRelatedWork W3106444986 @default.
- W2963102759 hasRelatedWork W3108490213 @default.
- W2963102759 hasRelatedWork W3135395501 @default.
- W2963102759 hasRelatedWork W3153140985 @default.
- W2963102759 hasRelatedWork W3207371441 @default.
- W2963102759 isParatext "false" @default.
- W2963102759 isRetracted "false" @default.
- W2963102759 magId "2963102759" @default.
- W2963102759 workType "book-chapter" @default.