Matches in SemOpenAlex for { <https://semopenalex.org/work/W3046170022> ?p ?o ?g. }
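The triple pattern above can be issued as a SPARQL SELECT against the public SemOpenAlex endpoint. A minimal sketch, assuming the endpoint URL `https://semopenalex.org/sparql` and standard SPARQL 1.1 JSON results (verify both against the SemOpenAlex documentation before use):

```python
import json
import urllib.parse
import urllib.request

# Assumed public SemOpenAlex SPARQL endpoint; confirm in the project docs.
ENDPOINT = "https://semopenalex.org/sparql"


def build_query(work_iri: str) -> str:
    """Return a SPARQL SELECT listing all predicate/object pairs for a work,
    mirroring the { <work> ?p ?o } pattern shown above."""
    return f"SELECT ?p ?o WHERE {{ <{work_iri}> ?p ?o . }}"


def fetch_triples(work_iri: str):
    """Run the query over HTTP and yield (predicate, object) pairs.
    Requires network access; uses the SPARQL 1.1 Protocol via GET."""
    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": build_query(work_iri)})
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    for binding in data["results"]["bindings"]:
        yield binding["p"]["value"], binding["o"]["value"]
```

Example usage: `for p, o in fetch_triples("https://semopenalex.org/work/W3046170022"): print(p, o)` should list the 76 predicate/object pairs enumerated below.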
Showing items 1 to 76 of 76, with 100 items per page.
- W3046170022 endingPage "114" @default.
- W3046170022 startingPage "103" @default.
- W3046170022 abstract "The current state-of-the-art object recognition algorithms, deep convolutional neural networks (DCNNs), are inspired by the architecture of the mammalian visual system and are capable of human-level performance on many tasks. When trained for object recognition, DCNNs develop hidden representations that resemble those observed in the mammalian visual system (Khaligh-Razavi and Kriegeskorte, 2014; Yamins and DiCarlo, 2016; Güçlü and van Gerven, 2015; McClure and Kriegeskorte, 2016). Moreover, DCNNs trained on object recognition tasks are currently among the best models we have of the mammalian visual system. This led us to hypothesize that teaching DCNNs to achieve even more brain-like representations could improve their performance. To test this, we trained DCNNs on a composite task, wherein networks were trained to (a) classify images of objects while (b) maintaining intermediate representations that resemble those observed in neural recordings from monkey visual cortex. Compared with DCNNs trained purely for object categorization, DCNNs trained on the composite task had better object recognition performance and were more robust to label corruption. Interestingly, neural data were not required for this effect: randomized data with the same statistical properties as the neural data also boosted performance. While the performance gains we observed when training on the composite task versus the pure object recognition task were modest, they were remarkably robust. Notably, we observed these gains across all network variations we studied, including: smaller (CORnet-Z) vs. larger (VGG-16) architectures; variations in optimizer (Adam vs. gradient descent); variations in activation function (ReLU vs. ELU); and variations in network initialization.
Our results demonstrate the potential utility of a new approach to training object recognition networks, in which the brain, or at least the statistical properties of its activation patterns, serves as a teacher signal for training DCNNs." @default.
- W3046170022 created "2020-08-03" @default.
- W3046170022 creator A5005460746 @default.
- W3046170022 creator A5026419951 @default.
- W3046170022 creator A5028769863 @default.
- W3046170022 creator A5085831385 @default.
- W3046170022 date "2020-11-01" @default.
- W3046170022 modified "2023-09-24" @default.
- W3046170022 title "Improved object recognition using neural networks trained to mimic the brain’s statistical properties" @default.
- W3046170022 cites W2024938108 @default.
- W3046170022 cites W2117731089 @default.
- W3046170022 cites W2121008432 @default.
- W3046170022 cites W2160654481 @default.
- W3046170022 cites W2215103083 @default.
- W3046170022 cites W2274405424 @default.
- W3046170022 cites W2763767712 @default.
- W3046170022 cites W2919115771 @default.
- W3046170022 cites W2951506741 @default.
- W3046170022 cites W2955863859 @default.
- W3046170022 cites W2963138386 @default.
- W3046170022 cites W2964017885 @default.
- W3046170022 cites W2979357328 @default.
- W3046170022 doi "https://doi.org/10.1016/j.neunet.2020.07.013" @default.
- W3046170022 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/32771841" @default.
- W3046170022 hasPublicationYear "2020" @default.
- W3046170022 type Work @default.
- W3046170022 sameAs 3046170022 @default.
- W3046170022 citedByCount "14" @default.
- W3046170022 countsByYear W30461700222020 @default.
- W3046170022 countsByYear W30461700222021 @default.
- W3046170022 countsByYear W30461700222022 @default.
- W3046170022 countsByYear W30461700222023 @default.
- W3046170022 crossrefType "journal-article" @default.
- W3046170022 hasAuthorship W3046170022A5005460746 @default.
- W3046170022 hasAuthorship W3046170022A5026419951 @default.
- W3046170022 hasAuthorship W3046170022A5028769863 @default.
- W3046170022 hasAuthorship W3046170022A5085831385 @default.
- W3046170022 hasBestOaLocation W30461700222 @default.
- W3046170022 hasConcept C153180895 @default.
- W3046170022 hasConcept C154945302 @default.
- W3046170022 hasConcept C2781238097 @default.
- W3046170022 hasConcept C31972630 @default.
- W3046170022 hasConcept C41008148 @default.
- W3046170022 hasConcept C50644808 @default.
- W3046170022 hasConcept C64876066 @default.
- W3046170022 hasConceptScore W3046170022C153180895 @default.
- W3046170022 hasConceptScore W3046170022C154945302 @default.
- W3046170022 hasConceptScore W3046170022C2781238097 @default.
- W3046170022 hasConceptScore W3046170022C31972630 @default.
- W3046170022 hasConceptScore W3046170022C41008148 @default.
- W3046170022 hasConceptScore W3046170022C50644808 @default.
- W3046170022 hasConceptScore W3046170022C64876066 @default.
- W3046170022 hasFunder F4320309949 @default.
- W3046170022 hasFunder F4320320994 @default.
- W3046170022 hasFunder F4320334593 @default.
- W3046170022 hasLocation W30461700221 @default.
- W3046170022 hasLocation W30461700222 @default.
- W3046170022 hasOpenAccess W3046170022 @default.
- W3046170022 hasPrimaryLocation W30461700221 @default.
- W3046170022 hasRelatedWork W1528044252 @default.
- W3046170022 hasRelatedWork W1531683208 @default.
- W3046170022 hasRelatedWork W1912506516 @default.
- W3046170022 hasRelatedWork W2009052148 @default.
- W3046170022 hasRelatedWork W2200925278 @default.
- W3046170022 hasRelatedWork W2328068029 @default.
- W3046170022 hasRelatedWork W2330829846 @default.
- W3046170022 hasRelatedWork W2350353705 @default.
- W3046170022 hasRelatedWork W2363840281 @default.
- W3046170022 hasRelatedWork W2372904789 @default.
- W3046170022 hasVolume "131" @default.
- W3046170022 isParatext "false" @default.
- W3046170022 isRetracted "false" @default.
- W3046170022 magId "3046170022" @default.
- W3046170022 workType "article" @default.