Matches in SemOpenAlex for { <https://semopenalex.org/work/W2765480867> ?p ?o ?g. }
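The listing below corresponds to a basic graph pattern query against SemOpenAlex. A minimal sketch of retrieving the same property rows programmatically is shown here, assuming the public SPARQL endpoint at https://semopenalex.org/sparql; the endpoint URL, function name, and result handling are illustrative rather than taken from this page.

```python
import requests

# Basic graph pattern from the listing: all (predicate, object) pairs
# for the work <https://semopenalex.org/work/W2765480867>.
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2765480867> ?p ?o .
}
"""

# Assumed public SPARQL endpoint for SemOpenAlex (not stated on this page).
ENDPOINT = "https://semopenalex.org/sparql"

def fetch_work_properties():
    """Run the SELECT query and return a list of (predicate, object) value strings."""
    resp = requests.get(
        ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bindings = resp.json()["results"]["bindings"]
    return [(b["p"]["value"], b["o"]["value"]) for b in bindings]

if __name__ == "__main__":
    for predicate, obj in fetch_work_properties():
        print(predicate, obj)
```

The response follows the standard W3C SPARQL 1.1 JSON results encoding, so each binding carries one matched predicate and object, which is how the property rows listed below would be enumerated.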
Showing items 1 to 59 of 59, with 100 items per page.
- W2765480867 abstract "Workshop on Deep Learning and the Brain Andrew Michael Saxe (asaxe@stanford.edu) Center for Mind, Brain, and Computation, Department of Electrical Engineering, Stanford University 316 Jordan Hall, Stanford, CA 94305 USA Keywords: Deep learning; Neural networks; Sensory process- ing ences, makes this workshop both timely and important for the cognitive science community. Introduction Goals and scope Deep learning methods rely on many layers of processing to perform sensory processing tasks like visual object recog- nition, speech recognition, and natural language processing (Bengio & LeCun, 2007). By learning simpler features in lower layers, and composing these into more complex fea- tures in higher layers, deep learning systems take advantage of the compositional nature of many real world tasks. To recognize cars, for instance, a deep system might first build wheel detectors, window detectors, etc, in lower layers, be- fore combining these into a car detector at a higher layer. Deep learning has emerged as a central tool in the engineer- ing disciplines due to its impressive performance in a range of applications, from visual object classification (Krizhevsky, Sutskever, & Hinton, 2012; Ciresan, Meier, & Schmidhu- ber, 2012) to speech recognition (Mohamed, Dahl, & Hinton, 2012) and natural language processing (Collobert & Weston, Parts of the brain (and in particular the visual system) ap- pear to share some of these features. Anatomically, they con- sist of a series of processing layers than can be arranged hi- erarchically (Felleman & Van Essen, 1991). And function- ally, neural responses show a progression of complexity from lower to higher levels (Quiroga, Reddy, Kreiman, Koch, & Fried, 2005), and these representations change with experi- ence. In light of these similarities, this workshop will explore the implications of deep learning for our understanding of the brain and mind. To what degree can the brain be con- sidered “deep”? How central is depth to its function? What insights from machine learning can inform work in cognitive science, and visa versa? How does depth impact both the dy- namics of learning in a neural network, and the content of what is learned? How might deep learning models illuminate phenomena of interest to cognitive scientists such as percep- tual learning, language acquisition, cognitive development, and category formation? The participants in this workshop have been chosen to present a broad range of perspectives on deep learning in the cognitive sciences. They span computational and empirical approaches, and allow for critical contact with other theoret- ical perspectives. The recent rapid progress on deep learn- ing within the machine learning community, and the growing number of deep learning-based models in the cognitive sci- The goal of the workshop is to explore the relevance of re- cent deep learning advances to cognition, to bring together cognitive science-oriented deep learning researchers, and to facilitate exchanges between the machine learning and cog- nitive science communities. While deep learning has been a persistent thread of re- search in the cognitive sciences from the very beginning, a goal of the workshop is to provide a focal point for this com- munity and a forum for important discussions and collabora- tions that can span methodological approaches. 
Because of the domain general nature of deep learning methods, these approaches can serve to unite a diverse set of researchers fo- cusing on a variety of phenomena. In addition, the workshop will demonstrate the ability of deep learning models to address phenomena at a variety of different scales and levels of detail, with talks covering ma- terial from receptive field models in retina and early visual cortices, through mid-level vision and object recognition, to semantic cognition. Workshop organization The main feature of the workshop will be a series of invited talks meant to span a broad range of perspectives on deep learning and the brain, and concentrated mostly on visual processing. Visual object recognition is the area most stud- ied in prior deep learning work both in machine learning and cognitive science, and hence makes a natural first focus for a workshop. Although the talks will address recent research, by their diverse perspectives they will also constitute a good introduction to the field for those who have not engaged with deep learning before. The workshop is planned as a full day workshop, and each speaker will have approximately 30 min- utes, to leave time for questions and discussion following the talks. Depending on time considerations, the workshop will close with a panel discussion to allow the audience further in- teraction with the speakers, and to permit speakers from dif- ferent backgrounds to engage each other on themes that have emerged during the day. The workshop will also accept submissions of abstracts for posters to be presented during lunch and coffee breaks. Ac- cepted poster submissions will be made available from the workshop website. The aim of the poster sessions is to show- case the much broader range of issues relevant to cognitive" @default.
- W2765480867 created "2017-11-10" @default.
- W2765480867 creator A5011428379 @default.
- W2765480867 date "2014-01-01" @default.
- W2765480867 modified "2023-09-27" @default.
- W2765480867 title "Workshop on Deep Learning and the Brain" @default.
- W2765480867 cites W2098580305 @default.
- W2765480867 cites W2101295242 @default.
- W2765480867 cites W2117130368 @default.
- W2765480867 cites W2125930537 @default.
- W2765480867 cites W2613634265 @default.
- W2765480867 hasPublicationYear "2014" @default.
- W2765480867 type Work @default.
- W2765480867 sameAs 2765480867 @default.
- W2765480867 citedByCount "1" @default.
- W2765480867 countsByYear W27654808672017 @default.
- W2765480867 crossrefType "journal-article" @default.
- W2765480867 hasAuthorship W2765480867A5011428379 @default.
- W2765480867 hasConcept C108583219 @default.
- W2765480867 hasConcept C154945302 @default.
- W2765480867 hasConcept C15744967 @default.
- W2765480867 hasConcept C188147891 @default.
- W2765480867 hasConcept C2781238097 @default.
- W2765480867 hasConcept C41008148 @default.
- W2765480867 hasConceptScore W2765480867C108583219 @default.
- W2765480867 hasConceptScore W2765480867C154945302 @default.
- W2765480867 hasConceptScore W2765480867C15744967 @default.
- W2765480867 hasConceptScore W2765480867C188147891 @default.
- W2765480867 hasConceptScore W2765480867C2781238097 @default.
- W2765480867 hasConceptScore W2765480867C41008148 @default.
- W2765480867 hasIssue "36" @default.
- W2765480867 hasLocation W27654808671 @default.
- W2765480867 hasOpenAccess W2765480867 @default.
- W2765480867 hasPrimaryLocation W27654808671 @default.
- W2765480867 hasRelatedWork W108625169 @default.
- W2765480867 hasRelatedWork W1499962209 @default.
- W2765480867 hasRelatedWork W1969999572 @default.
- W2765480867 hasRelatedWork W2009515272 @default.
- W2765480867 hasRelatedWork W2011320859 @default.
- W2765480867 hasRelatedWork W2113790011 @default.
- W2765480867 hasRelatedWork W2276995168 @default.
- W2765480867 hasRelatedWork W2341406001 @default.
- W2765480867 hasRelatedWork W2414005067 @default.
- W2765480867 hasRelatedWork W2587399391 @default.
- W2765480867 hasRelatedWork W2897415940 @default.
- W2765480867 hasRelatedWork W2911885499 @default.
- W2765480867 hasRelatedWork W2913961112 @default.
- W2765480867 hasRelatedWork W2916322827 @default.
- W2765480867 hasRelatedWork W2917875268 @default.
- W2765480867 hasRelatedWork W2952651718 @default.
- W2765480867 hasRelatedWork W3023403177 @default.
- W2765480867 hasRelatedWork W3156419612 @default.
- W2765480867 hasRelatedWork W3190334427 @default.
- W2765480867 hasRelatedWork W2615983441 @default.
- W2765480867 hasVolume "36" @default.
- W2765480867 isParatext "false" @default.
- W2765480867 isRetracted "false" @default.
- W2765480867 magId "2765480867" @default.
- W2765480867 workType "article" @default.