Matches in SemOpenAlex for { <https://semopenalex.org/work/W2891005457> ?p ?o ?g. }
- W2891005457 endingPage "2528" @default.
- W2891005457 startingPage "2521" @default.
- W2891005457 abstract "•Most V1 L2/3 neurons show task-related activity after learning a rewarded task
•A subset of neurons became responsive to an expected reward location
•Without visual cues, behavioral and neuronal responses rely on self-motion signals
•With visual cues, behavioral and neuronal responses rely on visual information

The integration of visual stimuli and motor feedback is critical for successful visually guided navigation. These signals have been shown to shape neuronal activity in the primary visual cortex (V1) in an experience-dependent manner. Here, we examined whether visual, reward, and self-motion-related inputs are integrated to encode behaviorally relevant locations in V1 neurons. Using a behavioral task in a virtual environment, we monitored layer 2/3 neuronal activity as mice learned to locate a reward along a linear corridor. With learning, a subset of neurons became responsive to the expected reward location. Without a visual cue to the reward location, both behavioral and neuronal responses relied on self-motion-derived estimations. However, when visual cues were available, both neuronal and behavioral responses were driven by visual information. Therefore, a population of V1 neurons encodes behaviorally relevant spatial locations, based either on visual cues or on self-motion feedback when visual cues are absent.

The ability to identify behaviorally relevant locations is critical for successful navigation through the environment and, ultimately, survival. This ability requires an estimation of location that can rely on positional cues, such as visual features of the environment, or on internal representations based on the speed and direction of movement (Chen et al., 2013; Etienne and Jeffery, 2004; Tcheang et al., 2011; Tennant et al., 2018; Campbell et al., 2018). While it is well known that physical features of the visual world are represented by neuronal activity in the primary visual cortex (V1), recent studies have shown that self-motion-related information is also represented in V1 and can directly modulate visual responses (Erisken et al., 2014; Keller et al., 2012; Niell and Stryker, 2010; Pakan et al., 2016; Saleem et al., 2013). These results suggest that the visual cortex may combine motor-related and visual information to encode signals related to the spatial position of visual stimuli. Consistent with this hypothesis, a subset of V1 neurons has been shown to respond specifically to a given visual stimulus placed at one location along a virtual corridor and less to the same stimulus at another location (Fiser et al., 2016). A representation of the spatial location of a visual cue in V1 (i.e., at an early stage of sensory information processing) may facilitate the perception of stimuli associated with danger or reward at specific locations. However, it remains unknown whether V1 neurons represent spatial locations that are relevant for a behavioral task, such as a location associated with a reward, and whether spatial expectations rely exclusively on visual cues or may also be triggered by self-motion signals alone. Previous studies have used visual discrimination tasks, in which mice learn to discriminate a rewarded visual stimulus from a non-rewarded one, to show that the representations of behaviorally relevant visual stimuli in V1 are enhanced with experience (Jurjut et al., 2017; Keller et al., 2017; Pakan et al., 2018; Poort et al., 2015). These results suggest that feedforward visual inputs are integrated with reward-related signals that have been shown to be present in V1 (Chubykin et al., 2013; Shuler and Bear, 2006). However, it is unclear whether visual, reward, and self-motion-related signals combine to activate V1 neurons in response to relevant spatial locations, such as a location associated with a reward.

In this study, we used two-photon calcium imaging in head-fixed mice placed in a virtual environment to monitor the activity of V1 neurons before, during, and after mice learned to locate a reward on a virtual linear corridor. Mice had to lick at a given spatial location, demarcated by a visual cue, in order to receive a reward. We found that V1 neuronal activity correlated with behavioral responses: with training, most neurons became specifically responsive to the reward zone region of the virtual corridor. When the visual cue was removed but the reward remained at the same spatial location, we found that the expected reward location was represented by a subset of V1 neurons. We then manipulated the gain between treadmill rotation and the virtual environment to decouple visual information from self-motion feedback. Our results show that, in the absence of a visual cue, animal behavior and neuronal responses both rely on self-motion cues; however, in the presence of a visual cue indicating the reward location, visual input dominates self-motion cues.

We trained head-fixed mice to perform a visually guided task and used two-photon calcium imaging to assess changes in neuronal activity in V1 during learning (Figure 1). Seven mice were trained daily to perform a rewarded task in a virtual environment (Figures 1A and 1B) while we imaged the same population of layer 2/3 neurons, which expressed the genetically encoded calcium indicator GCaMP6f (Chen et al., 2013) (e.g., Figure 1C).
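The activity measures reported below (e.g., mean ΔF/F0) derive from these GCaMP6f fluorescence traces. As a minimal sketch of the standard preprocessing, not of this paper's specific pipeline (its baseline estimation is described in its Experimental Procedures), ΔF/F0 can be computed with a rolling-percentile baseline; the frame rate, window length, and percentile below are illustrative assumptions.

    import numpy as np

    def delta_f_over_f(trace, fs=30.0, win_s=60.0, pct=8):
        """Convert a raw fluorescence trace to dF/F0.

        trace : 1D array of raw fluorescence samples
        fs    : imaging frame rate in Hz (assumed value)
        win_s : rolling baseline window in seconds (assumed value)
        pct   : percentile used as the baseline F0 (assumed value)
        """
        half = int(win_s * fs / 2)
        f0 = np.empty_like(trace, dtype=float)
        for i in range(len(trace)):
            lo, hi = max(0, i - half), min(len(trace), i + half)
            # slowly varying baseline tracks bleaching and drift
            f0[i] = np.percentile(trace[lo:hi], pct)
        return (trace - f0) / f0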
The task required water-deprived mice to lick a spout for a water reward at a specific location along a virtual corridor (80 cm from the beginning of the corridor), indicated by a change in visual stimulus from an oriented grating pattern to black walls, referred to as the reward zone (Figure 1A). Once the animal entered the reward zone, it could lick for a water droplet within the first 20 cm (80–100 cm; early reward, Figure 1A); this was considered a successful trial. To facilitate learning on missed trials, in which a reward was not triggered by the mouse, animals were given a water droplet at a default location 20 cm beyond the reward zone onset (default reward, 100 cm, Figure 1A). In the first training sessions, mice licked randomly along the length of the corridor but quickly learned to target their licking behavior to the reward zone region: they were considered “expert” at the task when they achieved a success rate of >75% early rewarded trials (e.g., Figure 1D). This criterion was achieved after an average of five sessions (range, 4–6 days) and was maintained through the remaining training days (Figure 1E). In this paradigm, animals could in principle lick constantly along the length of the corridor and still maintain a high success rate based on the percentage of early rewarded trials. To account for this, we calculated a spatial modulation index (SMI) (see Experimental Procedures; see also the sketch below), which significantly increased from 0.68 ± 0.16 on the novice day to 1.76 ± 0.14 by the end of the training sessions (Figure 1E, lower panel; p = 0.002, n = 7; Kruskal-Wallis test), indicating that mice learned to associate a water reward with the visually cued location and consequently produced spatially confined licking behavior.

On the first day of training (novice), the maximal responses of neurons ranged across all locations along the corridor; however, by the expert day (success rate >75%), a large proportion of peak responses were centered around the reward zone transition (Figures 2A, 2B, and S2B). We identified task-related neurons as those with a significant change in response before (Rpre) compared to after (Rpost) the reward zone onset (Rpre versus Rpost: p < 0.001, Wilcoxon signed-rank test; Figure S1A). We found that, with training, most neurons became specifically responsive to the reward zone transition (percentage of task-related cells: 40% ± 12% novice, 88% ± 3% end of training; p = 0.010, n = 7; Kruskal-Wallis test; Figures 2A–2C; see also Figure S2B). Consequently, when we used a template-matching decoder (Montijn et al., 2014; see Supplemental Experimental Procedures) on neuronal population activity to predict behavioral outcome by differentiating successful (early rewarded) from missed (default rewarded) trials, decoder accuracy significantly increased from novice to expert days (decoder accuracy: 55% ± 5% novice, 77% ± 6% end of training; p = 0.015, n = 7; Wilcoxon signed rank; Figure 2D). Accordingly, the proportion of task-related neurons correlated with the behavioral success rate (quantified by the SMI; Figure 2E). As the mice learned the task, they became faster at performing it and more consistent in their execution (Figure S2A).
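Two quantities in the passage above can be sketched briefly. The SMI is defined in the paper's Experimental Procedures, which are not part of this record, so the first function is only a hypothetical placeholder (lick density inside the reward zone relative to elsewhere, with an assumed corridor length); the second follows the stated task-related test, a Wilcoxon signed-rank comparison of each neuron's per-trial responses before (Rpre) versus after (Rpost) the reward zone onset.

    import numpy as np
    from scipy.stats import wilcoxon

    def spatial_modulation_index(lick_positions_cm, zone=(80.0, 100.0),
                                 corridor_len=120.0):
        # Hypothetical SMI: lick rate per cm inside the reward zone divided
        # by lick rate per cm outside it; uniform licking gives SMI = 1.
        # Zone bounds come from the task; corridor_len is an assumption.
        licks = np.asarray(lick_positions_cm)
        in_zone = np.sum((licks >= zone[0]) & (licks < zone[1]))
        out_zone = licks.size - in_zone
        zone_len = zone[1] - zone[0]
        return (in_zone / zone_len) / (out_zone / (corridor_len - zone_len))

    def is_task_related(r_pre, r_post, alpha=0.001):
        # r_pre, r_post: per-trial mean responses of one neuron before and
        # after the reward zone onset (paired across trials).
        stat, p = wilcoxon(r_pre, r_post)
        return p < alpha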
We thus tested whether the task-related responses observed on expert days were due to an entrainment effect of a stereotypic trial time. We found consistent responses at the reward zone onset even for the slowest and fastest trial times, which could differ from each other by more than an order of magnitude (Figure S2B). The task-related responses were thus more consistent across distance than time and did not reflect stimulus entrainment (Figures S2D–S2F). The large proportion of V1 task-related neurons on the expert day included a variety of responses, with neurons either decreasing or increasing their activity at the reward zone (Figures S1A and S1B). Neurons decreasing their activity included neurons that were responsive to the oriented grating along the corridor and decreased their activity at the reward zone onset (transition to black walls; corridor responsive; 39%), as well as neurons that decreased their activity with lower running speed (locomotion responsive; 12%; Figures S1B and S1C). Neurons increasing their activity at the reward zone onset included a small proportion of neurons responding to licking independently of the reward (lick responsive; 5%) and reward zone-related neurons (21%; Figures S1B and S1C).

We then tested the relative contributions of the visual cue (black walls) and self-motion-related cues to the reward zone-related responses. After reaching the expert day, all seven mice were tested on an additional corridor configuration (phase 2) in which the reward zone remained at the same distance along the virtual corridor but was no longer “cued” by a visual landmark (i.e., the black corridor walls demarcating the reward zone were removed; see Figure 3A). In these uncued trials, animals still had to lick at the same physical location along the corridor for the trial to be rewarded early and counted as successful. As before, if they did not lick successfully, they received a later reward at the default location (see Figure 1A). On the first day without a visual cue, the success rate was 44% ± 4% on uncued trials; after an average of six sessions (range, four to seven), mice reached the 75% success rate criterion (75% ± 4%) to be considered expert (Figure 3B). From the population responses in V1 layer 2/3, we identified neurons responding at the reward location in both visually cued and uncued trials (Rpost > Rpre: p < 0.001, Wilcoxon signed-rank test, in both conditions). An example neuron is shown in Figure 3A (see also Figure S2C). We excluded neurons that specifically responded to the grating offset (off response). On the novice day without a visual cue, 7% of neurons responded specifically to the reward zone in both cued and uncued trials; by the expert day, this proportion had doubled (15%). On the first uncued day, neurons showed distinct responses to successful cued and uncued trials, whereas by the expert day, responses to visually cued and uncued trials were similar (Figure 3C, upper panel). When we used a template-matching decoder to predict from all neuronal responses whether a given successful trial was visually cued or uncued, decoder accuracy significantly decreased from the novice to the expert day (Figure 3D; decoder accuracy: novice, 90% ± 4%; expert, 73% ± 5%; p = 0.015, n = 7; Wilcoxon signed rank). This result was consistent with the increased proportion of neurons showing corresponding responses to cued and uncued trials, making these conditions less distinguishable.
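The decoder used in both analyses above is attributed to Montijn et al. (2014). A minimal sketch in the spirit of that template-matching approach, assuming correlation-based matching of single trials to leave-one-out class templates (the paper's actual decoder is specified in its Supplemental Experimental Procedures), might look like this:

    import numpy as np

    def template_match_decode(X, labels):
        """Leave-one-out template-matching decoder.

        X      : trials x neurons matrix of per-trial population responses
        labels : per-trial class (e.g., 0 = early rewarded, 1 = default
                 rewarded, or cued vs. uncued)
        Returns the fraction of held-out trials decoded correctly.
        """
        X = np.asarray(X, dtype=float)
        labels = np.asarray(labels)
        classes = np.unique(labels)
        correct = 0
        for i in range(len(labels)):
            keep = np.arange(len(labels)) != i  # hold out trial i
            # Class templates: mean population vector over remaining trials
            templates = [X[keep & (labels == c)].mean(axis=0)
                         for c in classes]
            # Assign the held-out trial to the most correlated template
            r = [np.corrcoef(X[i], t)[0, 1] for t in templates]
            correct += classes[int(np.argmax(r))] == labels[i]
        return correct / len(labels)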
Reward zone responses in the absence of the visual landmark may result from multiple variables: licking behavior, the time from trial onset (through an entrainment effect), reward consumption, the spatial location of the reward, or a combination of these signals. We tested the response of this neuronal population to licking behavior by analyzing licks that occurred along the virtual corridor (outside the reward zone): the activity of the neurons during licking was not significantly different from non-licking periods (mean ΔF/F0: licking, 0.32 ± 0.08; non-licking, 0.31 ± 0.07; p = 0.535; Wilcoxon signed rank). We then assessed the contribution of time (Figure S2C) and found that neuronal responses in uncued trials were more consistent across distance than time and did not reflect stimulus entrainment (Figures S2D and S2F). Next, we tested the response to reward consumption. The population of neurons that developed reward zone-specific responses by the expert day showed a peak response at the reward event for both successful (early reward) and missed (default reward) trials, in which the reward occurred at different spatial locations (Figure 3C), indicating that this neuronal population was responsive to the reward. This suggests that individual neuronal responses could reflect either the reward event itself or a reward associated with a specific spatial location.

To further investigate whether responses in V1 could specifically represent an expected reward location, we altered the gain relating the rotation of the cylindrical treadmill to progression along the virtual corridor. In this last phase of the experiment, we used three expert trained mice and reduced the gain from 1 to 0.75 in a subset of trials. In this condition, the expected (i.e., trained) reward location was at 80 cm of distance traveled by the mice on the treadmill; however, this physical distance now corresponded to only 60 cm along the virtual corridor (Figure 3E). If the mice were relying on motor-derived self-motion cues alone, they would lick at 80 cm of physical distance traveled on the treadmill (corresponding to 60 cm on the virtual corridor). If the mice were relying on the virtual corridor cues (such as the number of stripes), they would lick at 80 cm in virtual space (corresponding to 107 cm of physical distance traveled). In these trials, the reward was given at 80 cm in virtual space along the virtual corridor, therefore after the expected reward location based on physical distance along the treadmill (see also Supplemental Experimental Procedures; a worked version of this geometry follows below). We found a subset of neurons that showed significant gain-modulated responses on the uncued trials (Figure 3E; gain-modulated cells, 10% of the population). On average, these neurons had a peak response approximately midway between the expected reward onset and the actual reward onset (Figure 3F). We assessed the contribution of time to these neuronal responses and found that responses in gain-modulated trials were less variable across distance than time (Figures S2D and S2E). In most of these uncued gain-modulated trials, the mice also licked at the expected reward location (Figure 3G). When the mice did not lick at the expected reward location, the response amplitude of these neurons (between the expected reward onset and the actual reward onset) was decreased by two-thirds, without any clear peak.
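The geometry of the gain manipulation can be stated compactly: with gain g = 0.75, virtual distance = g × physical distance. A self-motion strategy therefore predicts licking at 80 cm physical (60 cm virtual), whereas a strategy based on virtual corridor cues predicts licking at 80 cm virtual (≈107 cm physical). A short sketch of the conversion, using only quantities given in the text:

    GAIN = 0.75              # virtual cm advanced per physical cm run
    TRAINED_REWARD_CM = 80.0 # trained reward distance

    def virtual_from_physical(physical_cm, gain=GAIN):
        return gain * physical_cm

    def physical_from_virtual(virtual_cm, gain=GAIN):
        return virtual_cm / gain

    # Self-motion strategy: lick after 80 cm of running -> 60 cm virtual
    print(virtual_from_physical(TRAINED_REWARD_CM))   # 60.0
    # Virtual-cue strategy: lick at 80 cm virtual -> ~106.7 cm of running
    print(physical_from_virtual(TRAINED_REWARD_CM))   # 106.67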
These results indicate that the gain-modulated neuronal responses correlate with the behavioral expectation of a reward at this specific location (see also Figure S2F). Therefore, in the absence of a visual cue (black walls), mice determined the reward location based on self-motion-related information. In the gain-modulated visually cued trials, the visual cue was visible ahead of the mouse when it reached the expected reward location based on physical distance traveled. Interestingly, in these trials, the gain-modulated neurons showed no significant response near the expected reward onset. Instead, these neurons responded at the actual reward location, which was demarcated by the visual cue (Figures 3E and 3F), indicating that in these trials visual inputs dominated the responses of these neurons. Correspondingly, mice also licked at the actual reward location indicated by the landmark (Figure 3G). These results indicate that, in the presence of the visual cue, mice primarily relied on visual information to identify the reward location. Similarly, visual inputs related to the landmark dominated the responses of V1 neurons.

Our results demonstrate a recruitment of the majority of V1 layer 2/3 neurons to task-relevant activity while animals learned to locate a reward in a virtual environment. We show that a subset of neurons responded to the specific spatial location associated with an expected reward. In the absence of a visual cue, this neuronal representation of reward location relied on self-motion-related inputs and correlated with behavioral outcome. However, when visual cues were available, both neuronal and behavioral responses were driven by visual information. Importantly, these responses were specific to a rewarded spatial location (i.e., a behaviorally relevant location) and appeared after learning: they thus correspond to an expectation of a reward at a given location. This differs from a cognitive map, or a comprehensive spatial mapping of the environment, as described for CA1 place cells: in our experimental conditions, we did not observe place cell-like mapping of spatial locations all along the virtual corridor. In the absence of visual landmarks, mice can use different strategies to determine the reward location. One such strategy would be to estimate the distance traveled based on optic flow information provided by the pattern of the virtual corridor. However, when we changed the gain between physical and virtual space, mice licked at the expected location based on the physical distance they had run on the treadmill, rather than using optic flow information. The evaluation of the distance to the reward location was thus based on locomotor-related feedback. Our results are consistent with the hypothesis that, in the absence of visual cues, mice are able to estimate the distance toward a reward based on self-motion feedback. This result is in line with previous studies showing that mice can use path integration mechanisms to estimate location (Van Cauter et al., 2013; Etienne and Jeffery, 2004; Tennant et al., 2018; Campbell et al., 2018).

While the encoding of spatial information has been extensively characterized in the hippocampal formation (Dombeck et al., 2010; Hartley et al., 2013), our results show that a subset of V1 neurons receives inputs related to spatial location. This signal could originate from a number of sources. It could be conveyed by top-down cortico-cortical inputs. For example, neurons in the retrosplenial cortex have been shown to encode spatial and navigational signals (Mao et al., 2017). Since the retrosplenial cortex is one of the major sources of input to V1 (Leinweber et al., 2017), it is possible that spatial representations present in the retrosplenial cortex are transmitted to a subset of V1 neurons. Another potential source of self-motion-related inputs is the anterior cingulate cortex and premotor areas (A24b/M2). These areas have been shown to convey motor-related excitatory inputs to V1 neurons and are thought to carry a prediction of visual flow based on self-motion information (Leinweber et al., 2017). Spatial signals could also be conveyed to V1 through subcortical inputs. For example, the lateral posterior nucleus of the thalamus has been shown to convey locomotion-related and contextual signals to V1 neurons (Roth et al., 2016). The encoding of behaviorally important spatial locations could either occur in the aforementioned cortical and subcortical areas and be transmitted to V1, or it could occur in V1 itself, since previous studies have shown neuronal responses to running speed (Erisken et al., 2014; Keller et al., 2012; Pakan et al., 2016; Saleem et al., 2013) as well as to reward timing in mouse V1 (Chubykin et al., 2013). Together, these recent studies and our current results indicate that information about reward anticipation and motor feedback cues is available directly to V1 and may be used by this primary sensory area to facilitate visual identification of behaviorally relevant environmental cues, with direct implications for navigation and, more generally, for visual perception.

Our results further show that visual input overrides self-motion-derived estimates of location in V1 neurons. Potential underlying mechanisms may include visual excitatory inputs that dominate self-motion ones, or visual inputs that inhibit spatially related information. This process may occur either within V1 or in other brain areas. For instance, it has been shown that the majority of place cells in the hippocampus require visual input to display spatially localized firing within a visual virtual environment (Chen et al., 2013). It was suggested that visual inputs may be conveyed to place cells through neurons found in the su" @default.
- W2891005457 created "2018-09-27" @default.
- W2891005457 creator A5037861106 @default.
- W2891005457 creator A5044336570 @default.
- W2891005457 creator A5073156282 @default.
- W2891005457 creator A5079717198 @default.
- W2891005457 date "2018-09-01" @default.
- W2891005457 modified "2023-10-17" @default.
- W2891005457 title "The Impact of Visual Cues, Reward, and Motor Feedback on the Representation of Behaviorally Relevant Spatial Locations in Primary Visual Cortex" @default.
- W2891005457 cites W1538429890 @default.
- W2891005457 cites W1972869628 @default.
- W2891005457 cites W1976485053 @default.
- W2891005457 cites W1982918139 @default.
- W2891005457 cites W1993355218 @default.
- W2891005457 cites W2040165511 @default.
- W2891005457 cites W2063459859 @default.
- W2891005457 cites W2073472689 @default.
- W2891005457 cites W2074967846 @default.
- W2891005457 cites W2075312701 @default.
- W2891005457 cites W2077869506 @default.
- W2891005457 cites W2103822311 @default.
- W2891005457 cites W2104069712 @default.
- W2891005457 cites W2114828226 @default.
- W2891005457 cites W2128520379 @default.
- W2891005457 cites W2135065060 @default.
- W2891005457 cites W2145780837 @default.
- W2891005457 cites W2146561158 @default.
- W2891005457 cites W2147027939 @default.
- W2891005457 cites W2151990639 @default.
- W2891005457 cites W2209706276 @default.
- W2891005457 cites W2515318112 @default.
- W2891005457 cites W2520592730 @default.
- W2891005457 cites W2581032931 @default.
- W2891005457 cites W2743651283 @default.
- W2891005457 cites W2755486058 @default.
- W2891005457 cites W2783790305 @default.
- W2891005457 cites W2794648976 @default.
- W2891005457 cites W2799340525 @default.
- W2891005457 cites W2883168056 @default.
- W2891005457 cites W2949985012 @default.
- W2891005457 doi "https://doi.org/10.1016/j.celrep.2018.08.010" @default.
- W2891005457 hasPubMedCentralId "https://www.ncbi.nlm.nih.gov/pmc/articles/6137817" @default.
- W2891005457 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/30184487" @default.
- W2891005457 hasPublicationYear "2018" @default.
- W2891005457 type Work @default.
- W2891005457 sameAs 2891005457 @default.
- W2891005457 citedByCount "62" @default.
- W2891005457 countsByYear W28910054572018 @default.
- W2891005457 countsByYear W28910054572019 @default.
- W2891005457 countsByYear W28910054572020 @default.
- W2891005457 countsByYear W28910054572021 @default.
- W2891005457 countsByYear W28910054572022 @default.
- W2891005457 countsByYear W28910054572023 @default.
- W2891005457 crossrefType "journal-article" @default.
- W2891005457 hasAuthorship W2891005457A5037861106 @default.
- W2891005457 hasAuthorship W2891005457A5044336570 @default.
- W2891005457 hasAuthorship W2891005457A5073156282 @default.
- W2891005457 hasAuthorship W2891005457A5079717198 @default.
- W2891005457 hasBestOaLocation W28910054571 @default.
- W2891005457 hasConcept C111370547 @default.
- W2891005457 hasConcept C154945302 @default.
- W2891005457 hasConcept C15744967 @default.
- W2891005457 hasConcept C169760540 @default.
- W2891005457 hasConcept C17744445 @default.
- W2891005457 hasConcept C180747234 @default.
- W2891005457 hasConcept C199539241 @default.
- W2891005457 hasConcept C24998067 @default.
- W2891005457 hasConcept C2776359362 @default.
- W2891005457 hasConcept C2778373776 @default.
- W2891005457 hasConcept C2779345533 @default.
- W2891005457 hasConcept C2780307956 @default.
- W2891005457 hasConcept C3020716817 @default.
- W2891005457 hasConcept C41008148 @default.
- W2891005457 hasConcept C94625758 @default.
- W2891005457 hasConceptScore W2891005457C111370547 @default.
- W2891005457 hasConceptScore W2891005457C154945302 @default.
- W2891005457 hasConceptScore W2891005457C15744967 @default.
- W2891005457 hasConceptScore W2891005457C169760540 @default.
- W2891005457 hasConceptScore W2891005457C17744445 @default.
- W2891005457 hasConceptScore W2891005457C180747234 @default.
- W2891005457 hasConceptScore W2891005457C199539241 @default.
- W2891005457 hasConceptScore W2891005457C24998067 @default.
- W2891005457 hasConceptScore W2891005457C2776359362 @default.
- W2891005457 hasConceptScore W2891005457C2778373776 @default.
- W2891005457 hasConceptScore W2891005457C2779345533 @default.
- W2891005457 hasConceptScore W2891005457C2780307956 @default.
- W2891005457 hasConceptScore W2891005457C3020716817 @default.
- W2891005457 hasConceptScore W2891005457C41008148 @default.
- W2891005457 hasConceptScore W2891005457C94625758 @default.
- W2891005457 hasFunder F4320311904 @default.
- W2891005457 hasFunder F4320320006 @default.
- W2891005457 hasFunder F4320335322 @default.
- W2891005457 hasIssue "10" @default.
- W2891005457 hasLocation W28910054571 @default.
- W2891005457 hasLocation W28910054572 @default.
- W2891005457 hasLocation W28910054573 @default.
- W2891005457 hasLocation W28910054574 @default.
- W2891005457 hasLocation W28910054575 @default.