Matches in SemOpenAlex for { <https://semopenalex.org/work/W2082732955> ?p ?o ?g. }
Showing items 1 to 77 of 77, with 100 items per page.
- W2082732955 endingPage "315" @default.
- W2082732955 startingPage "314" @default.
- W2082732955 abstract "Many objects in the real world have multiple sensory attributes—for example, an object may both reflect light and emit sound. This leads to the percept that both the sound and the light originate from the same object, even though the neural processing of spatial information by the visual and auditory systems is very different. In the visual system, space is encoded at the level of the retina based on the position of the activated photoreceptors. Thus, visual space must initially be represented in an eye-centered reference frame. In the auditory system, spatial information must be computed based on differences in intensity and timing of the stimulus at the two ears and on spectral cues resulting from reflections of the stimulus by the torso, head, and pinnae. Thus, since the ears are fixed to the head, auditory spatial information should be represented in a head-centered reference frame. At some point in the nervous system, these two reference frames must somehow align in order to create the unified percept of a single object. A central question is what reference frame(s) the nervous system uses to encode the spatial attributes of single- and multimodal stimuli and how these reference frames could be used to generate unified percepts. Several studies have investigated how the position of an auditory stimulus relative to the head and to the eyes modulates neuronal responses in multimodal regions of the brain (see Andersen, 1997, Phil. Trans. R. Soc. Lond. B 352, 1421-1428). For example, in the parietal lobe (Stricanne et al., 1996, J. Neurophysiol. 76, 2071-2076) and the superior colliculus (Jay and Sparks, 1987, J. Neurophysiol. 57, 35-55), the responses of neurons to auditory stimuli are modulated by eye position, so most of these neurons do not represent space in a purely head-centered reference frame. What has not been carefully examined is how early in the processing pathway the reference frame of auditory neurons can be modulated. The results of experiments tackling this issue are reported by Groh et al. (2001, Neuron 29, 509-518) in this issue of Neuron. They measured the responses of single neurons in the inferior colliculus to noise stimuli while monkeys were looking to the left, to the right, or straight ahead (see Figure 1, left column). The inferior colliculus has traditionally been considered a “relay” nucleus, in which inputs from the auditory brainstem converge and are then relayed to the thalamus. One would therefore expect these neurons to encode acoustic space in a head-centered reference frame (see Figure 1, middle column), not an eye-centered reference frame (see Figure 1, right column). Groh et al. (2001) found that approximately one-third of the neurons encountered showed neither a head-centered nor an eye-centered reference frame, but something in between. This result indicates that the visual system can influence auditory spatial processing at a very early level, and it raises several interesting questions. How is eye position information incorporated into the responses of inferior collicular neurons? How does eye position influence spatial processing in other auditory areas such as the thalamus and cortex? Is it the strategy of the nervous system to encode spatial information across all sensory modalities in a similar reference frame as soon as possible? Investigating these questions should provide key insights into general mechanisms of sensory representation and perception." @default.
- W2082732955 created "2016-06-24" @default.
- W2082732955 creator A5018925807 @default.
- W2082732955 date "2001-02-01" @default.
- W2082732955 modified "2023-10-18" @default.
- W2082732955 title "Hearing and Looking" @default.
- W2082732955 cites W1237830754 @default.
- W2082732955 cites W2007482827 @default.
- W2082732955 cites W2113507419 @default.
- W2082732955 cites W2147503922 @default.
- W2082732955 doi "https://doi.org/10.1016/s0896-6273(01)00205-7" @default.
- W2082732955 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/11239422" @default.
- W2082732955 hasPublicationYear "2001" @default.
- W2082732955 type Work @default.
- W2082732955 sameAs 2082732955 @default.
- W2082732955 citedByCount "0" @default.
- W2082732955 crossrefType "journal-article" @default.
- W2082732955 hasAuthorship W2082732955A5018925807 @default.
- W2082732955 hasBestOaLocation W20827329551 @default.
- W2082732955 hasConcept C121332964 @default.
- W2082732955 hasConcept C126042441 @default.
- W2082732955 hasConcept C154945302 @default.
- W2082732955 hasConcept C15744967 @default.
- W2082732955 hasConcept C169760540 @default.
- W2082732955 hasConcept C172849965 @default.
- W2082732955 hasConcept C180747234 @default.
- W2082732955 hasConcept C26760741 @default.
- W2082732955 hasConcept C2777443451 @default.
- W2082732955 hasConcept C2779687425 @default.
- W2082732955 hasConcept C2779918689 @default.
- W2082732955 hasConcept C31972630 @default.
- W2082732955 hasConcept C41008148 @default.
- W2082732955 hasConcept C46312422 @default.
- W2082732955 hasConcept C62520636 @default.
- W2082732955 hasConcept C74992021 @default.
- W2082732955 hasConcept C76155785 @default.
- W2082732955 hasConcept C94487597 @default.
- W2082732955 hasConceptScore W2082732955C121332964 @default.
- W2082732955 hasConceptScore W2082732955C126042441 @default.
- W2082732955 hasConceptScore W2082732955C154945302 @default.
- W2082732955 hasConceptScore W2082732955C15744967 @default.
- W2082732955 hasConceptScore W2082732955C169760540 @default.
- W2082732955 hasConceptScore W2082732955C172849965 @default.
- W2082732955 hasConceptScore W2082732955C180747234 @default.
- W2082732955 hasConceptScore W2082732955C26760741 @default.
- W2082732955 hasConceptScore W2082732955C2777443451 @default.
- W2082732955 hasConceptScore W2082732955C2779687425 @default.
- W2082732955 hasConceptScore W2082732955C2779918689 @default.
- W2082732955 hasConceptScore W2082732955C31972630 @default.
- W2082732955 hasConceptScore W2082732955C41008148 @default.
- W2082732955 hasConceptScore W2082732955C46312422 @default.
- W2082732955 hasConceptScore W2082732955C62520636 @default.
- W2082732955 hasConceptScore W2082732955C74992021 @default.
- W2082732955 hasConceptScore W2082732955C76155785 @default.
- W2082732955 hasConceptScore W2082732955C94487597 @default.
- W2082732955 hasIssue "2" @default.
- W2082732955 hasLocation W20827329551 @default.
- W2082732955 hasLocation W20827329552 @default.
- W2082732955 hasOpenAccess W2082732955 @default.
- W2082732955 hasPrimaryLocation W20827329551 @default.
- W2082732955 hasRelatedWork W2028172455 @default.
- W2082732955 hasRelatedWork W2058593756 @default.
- W2082732955 hasRelatedWork W2082732955 @default.
- W2082732955 hasRelatedWork W2087032085 @default.
- W2082732955 hasRelatedWork W2090974681 @default.
- W2082732955 hasRelatedWork W2327429639 @default.
- W2082732955 hasRelatedWork W286672843 @default.
- W2082732955 hasRelatedWork W3013530455 @default.
- W2082732955 hasRelatedWork W3117721362 @default.
- W2082732955 hasRelatedWork W4243402053 @default.
- W2082732955 hasVolume "29" @default.
- W2082732955 isParatext "false" @default.
- W2082732955 isRetracted "false" @default.
- W2082732955 magId "2082732955" @default.
- W2082732955 workType "article" @default.
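The listing above comes from the triple pattern shown in the first line. A minimal Python sketch of how such a query could be issued programmatically, using only the standard library; the endpoint URL `https://semopenalex.org/sparql` and the JSON results format are assumptions about the SemOpenAlex service, not confirmed by this record:

```python
# Sketch: fetch all predicate/object pairs for one SemOpenAlex work
# via a SPARQL endpoint. Only stdlib is used; the ENDPOINT value is
# an assumption and should be checked against the service docs.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL


def build_query(work_iri: str) -> str:
    """Build the same triple pattern this listing was produced from."""
    return f"SELECT ?p ?o WHERE {{ <{work_iri}> ?p ?o . }}"


def fetch_triples(work_iri: str):
    """Run the query and return (predicate, object) value pairs."""
    url = ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": build_query(work_iri)})
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [(b["p"]["value"], b["o"]["value"])
            for b in data["results"]["bindings"]]


if __name__ == "__main__":
    for p, o in fetch_triples("https://semopenalex.org/work/W2082732955"):
        print(p, o)
```

With the work IRI from this record, `fetch_triples` would return the 77 predicate/object pairs shown above, one tuple per triple.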