Matches in SemOpenAlex for { <https://semopenalex.org/work/W2067682418> ?p ?o ?g. }
- W2067682418 endingPage "741" @default.
- W2067682418 startingPage "731" @default.
- W2067682418 abstract "Knowledge about hypothetical outcomes from unchosen actions is beneficial only when such outcomes can be correctly attributed to specific actions. Here we show that during a simulated rock-paper-scissors game, rhesus monkeys can adjust their choice behaviors according to both actual and hypothetical outcomes from their chosen and unchosen actions, respectively. In addition, neurons in both dorsolateral prefrontal cortex and orbitofrontal cortex encoded the signals related to actual and hypothetical outcomes immediately after they were revealed to the animal. Moreover, compared to the neurons in the orbitofrontal cortex, those in the dorsolateral prefrontal cortex were more likely to change their activity according to the hypothetical outcomes from specific actions. Conjunctive and parallel coding of multiple actions and their outcomes in the prefrontal cortex might enhance the efficiency of reinforcement learning and also contribute to their context-dependent memory. Knowledge about hypothetical outcomes from unchosen actions is beneficial only when such outcomes can be correctly attributed to specific actions. Here we show that during a simulated rock-paper-scissors game, rhesus monkeys can adjust their choice behaviors according to both actual and hypothetical outcomes from their chosen and unchosen actions, respectively. In addition, neurons in both dorsolateral prefrontal cortex and orbitofrontal cortex encoded the signals related to actual and hypothetical outcomes immediately after they were revealed to the animal. Moreover, compared to the neurons in the orbitofrontal cortex, those in the dorsolateral prefrontal cortex were more likely to change their activity according to the hypothetical outcomes from specific actions. Conjunctive and parallel coding of multiple actions and their outcomes in the prefrontal cortex might enhance the efficiency of reinforcement learning and also contribute to their context-dependent memory. Monkeys can learn from hypothetical outcomes of unchosen actions Neurons in the prefrontal cortex encode hypothetical outcomes from specific actions Orbitofrontal cortex encodes hypothetical outcomes from multiple actions similarly Activity related to actual and hypothetical outcomes shows a similar time course Human and animals can change their behaviors not only based on the rewarding and aversive consequences of their actions (Thorndike, 1911Thorndike E.L. Animal Intelligence: Experimental Studies. Macmillan, New York1911Crossref Google Scholar), but also by simulating the hypothetical outcomes that could have resulted from alternative unchosen actions (Kahneman and Miller, 1986Kahneman D. Miller D.T. Norm theory: comparing reality to its alternatives.Psychol. Rev. 1986; 93: 136-153Crossref Scopus (1758) Google Scholar, Lee et al., 2005Lee D. McGreevy B.P. Barraclough D.J. Learning and decision making in monkeys during a rock-paper-scissors game.Brain Res. Cogn. Brain Res. 2005; 25: 416-430Crossref PubMed Scopus (79) Google Scholar, Hayden et al., 2009Hayden B.Y. Pearson J.M. Platt M.L. Fictive reward signals in the anterior cingulate cortex.Science. 2009; 324: 948-950Crossref PubMed Scopus (157) Google Scholar). The internal models about the animal's environment necessary for this mental simulation can be acquired without reinforcement (Tolman, 1948Tolman E.C. Cognitive maps in rats and men.Psychol. Rev. 1948; 55: 189-208Crossref PubMed Scopus (3241) Google Scholar, Fiser and Aslin, 2001Fiser J. Aslin R.N. 
Unsupervised statistical learning of higher-order spatial structures from visual scenes.Psychol. Sci. 2001; 12: 499-504Crossref PubMed Scopus (444) Google Scholar). In particular, the ability to incorporate simultaneously actual and hypothetical outcomes expected from chosen and unchosen actions can facilitate the process of finding optimal strategies during social interactions (Camerer, 2003Camerer C.F. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton Univ. Press, Princeton, NJ2003Google Scholar, Gallagher and Frith, 2003Gallagher H.L. Frith C.D. Functional imaging of ‘theory of mind’.Trends Cogn. Sci. 2003; 7: 77-83Abstract Full Text Full Text PDF PubMed Scopus (1499) Google Scholar, Lee, 2008Lee D. Game theory and neural basis of social decision making.Nat. Neurosci. 2008; 11: 404-409Crossref PubMed Scopus (173) Google Scholar, Behrens et al., 2009Behrens T.E.J. Hunt L.T. Rushworth M.F.S. The computation of social behavior.Science. 2009; 324: 1160-1164Crossref PubMed Scopus (283) Google Scholar), since observed behaviors of other decision makers can provide the information about the hypothetical outcomes from multiple actions. However, learning from both real and hypothetical outcomes is not trivial, because these two different types of information need to be linked to different actions correctly. For example, attributing the hypothetical outcomes from unchosen actions incorrectly to the chosen action would interfere with adaptive behaviors (Walton et al., 2010Walton M.E. Behrens T.E.J. Buckley M.J. Rudebeck P.H. Rushworth M.F.S. Separable learning systems in the macaque brain and the role of orbitofrontal cortex in contingent learning.Neuron. 2010; 65: 927-939Abstract Full Text Full Text PDF PubMed Scopus (255) Google Scholar). Although previous studies have identified neural signals related to hypothetical outcomes in multiple brain areas (Camille et al., 2004Camille N. Coricelli G. Sallet J. Pradat-Diehl P. Duhamel J.R. Sirigu A. The involvement of the orbitofrontal cortex in the experience of regret.Science. 2004; 304: 1167-1170Crossref PubMed Scopus (447) Google Scholar, Coricelli et al., 2005Coricelli G. Critchley H.D. Joffily M. O'Doherty J.P. Sirigu A. Dolan R.J. Regret and its avoidance: a neuroimaging study of choice behavior.Nat. Neurosci. 2005; 8: 1255-1262Crossref PubMed Scopus (417) Google Scholar, Lohrenz et al., 2007Lohrenz T. McCabe K. Camerer C.F. Montague P.R. Neural signature of fictive learning signals in a sequential investment task.Proc. Natl. Acad. Sci. USA. 2007; 104: 9493-9498Crossref PubMed Scopus (199) Google Scholar, Chandrasekhar et al., 2008Chandrasekhar P.V.S. Capra C.M. Moore S. Noussair C. Berns G.S. Neurobiological regret and rejoice functions for aversive outcomes.Neuroimage. 2008; 39: 1472-1484Crossref PubMed Scopus (72) Google Scholar, Fujiwara et al., 2009Fujiwara J. Tobler P.N. Taira M. Iijima T. Tsutsui K. A parametric relief signal in human ventrolateral prefrontal cortex.Neuroimage. 2009; 44: 1163-1170Crossref PubMed Scopus (26) Google Scholar, Hayden et al., 2009Hayden B.Y. Pearson J.M. Platt M.L. Fictive reward signals in the anterior cingulate cortex.Science. 2009; 324: 948-950Crossref PubMed Scopus (157) Google Scholar), they have not revealed signals encoding hypothetical outcomes associated with specific actions. Therefore, the neural substrates necessary for learning from hypothetical outcomes remain unknown. 
In the present study, we tested whether the information about the actual and hypothetical outcomes from chosen and unchosen actions is properly integrated in the primate prefrontal cortex. In particular, the dorsolateral prefrontal cortex (DLPFC) is integral to appropriately binding sensory inputs across multiple modalities (Prabhakaran et al., 2000), including the contextual information essential for episodic memory (Baddeley, 2000; Mitchell and Johnson, 2009). DLPFC has also been implicated in processing hypothetical outcomes (Coricelli et al., 2005; Fujiwara et al., 2009) and in model-based reinforcement learning (Gläscher et al., 2010). Moreover, DLPFC neurons often change their activity according to the outcomes expected or obtained from specific actions (Watanabe, 1996; Leon and Shadlen, 1999; Matsumoto et al., 2003; Barraclough et al., 2004; Seo and Lee, 2009). Therefore, we hypothesized that individual neurons in the DLPFC might encode both actual and hypothetical outcomes resulting from the same actions, and thereby provide a substrate for learning the values of both chosen and unchosen actions. The orbitofrontal cortex (OFC) might also be crucial for behavioral adjustment guided by hypothetical outcomes (Camille et al., 2004; Coricelli et al., 2005). However, whether and how the OFC contributes to associating actual and hypothetical outcomes with their corresponding actions remains unclear (Tremblay and Schultz, 1999; Wallis and Miller, 2003; Padoa-Schioppa and Assad, 2006; Kennerley and Wallis, 2009; Tsujimoto et al., 2009; Walton et al., 2010). In the present study, we found that signals related to actual and hypothetical outcomes resulting from specific actions are encoded in both DLPFC and OFC, although OFC neurons tended to encode such outcomes regardless of the animal's actions more often than DLPFC neurons did.

Three monkeys were trained to perform a computer-simulated rock-paper-scissors task (Figure 1A). In each trial, the animal was required to shift its gaze from a central fixation target to one of three green peripheral targets. After the animal fixated its chosen target for 0.5 s, the colors of all three targets changed simultaneously, indicating both the actual outcome of the animal's choice and the hypothetical outcomes that the animal could have received from the two unchosen targets. These outcomes were determined by the payoff matrix of a biased rock-paper-scissors game (Figure 1B). For example, the animal would receive three drops of juice when it beat the computer opponent by choosing the "paper" target (indicated by the red feedback stimulus in Figure 1A, top). The computer opponent simulated a competitive player trying to minimize the animal's expected payoff by exploiting statistical biases in the animal's choice and outcome sequences (see Experimental Procedures). The optimal (Nash equilibrium) strategy for this game (Nash, 1950) is for the animal to choose "rock" with probability 0.5 and each of the remaining targets with probability 0.25 (see Supplemental Experimental Procedures available online).
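Against an opponent that minimizes the animal's expected payoff, the equilibrium strategy above is the maximin mixture of the payoff matrix, which can be recovered by linear programming. The following is a minimal sketch, not the authors' code: the text confirms only that winning with "paper" pays 3 drops and that the equilibrium is (rock, paper, scissors) = (0.5, 0.25, 0.25); the remaining payoff values below (tie = 1, loss = 0, rock win = 2, scissors win = 4) are assumptions chosen to be consistent with that stated equilibrium, not necessarily the matrix in Figure 1B.

```python
import numpy as np
from scipy.optimize import linprog

# Rows: monkey's choice (R, P, S); columns: computer's choice (R, P, S).
# Placeholder payoffs (drops of juice), assumed, not taken from Figure 1B.
A = np.array([[1.0, 0.0, 2.0],   # rock:     tie, lose, win
              [3.0, 1.0, 0.0],   # paper:    win, tie, lose
              [0.0, 4.0, 1.0]])  # scissors: lose, win, tie

# Variables [p_R, p_P, p_S, v]: maximize the guaranteed payoff v
# subject to (A^T p)_j >= v for every computer response and sum(p) = 1.
c = np.array([0.0, 0.0, 0.0, -1.0])        # linprog minimizes, so use -v
A_ub = np.hstack([-A.T, np.ones((3, 1))])  # v - (A^T p)_j <= 0
b_ub = np.zeros(3)
A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])    # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * 3 + [(None, None)]  # p >= 0, v unbounded

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:3])  # -> approximately [0.5, 0.25, 0.25]
print(res.x[3])   # -> game value: 1.25 drops/trial under these payoffs
```

Under these placeholder payoffs, the equilibrium mixture yields the same expected payoff (1.25 drops) against every computer response, which is what makes it unexploitable.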
In this study, the positions of the targets corresponding to rock, paper, and scissors were fixed within a block of trials and changed unpredictably across blocks (Figure S1). The animals' choice behaviors gradually approached the optimal strategy after each block transition, indicating that the animals adjusted their behaviors flexibly (Figure S2A). Theoretically, learning during an iterative game can rely on two different types of feedback. First, decision makers can adjust their choices entirely based on the actual outcomes of their previous choices. Learning algorithms relying exclusively on experienced outcomes are referred to as simple or model-free reinforcement learning (RL) models (Sutton and Barto, 1998). Second, behavioral changes can also be driven by the simulated or hypothetical outcomes that could have resulted from unchosen actions. For example, during social interactions, hypothetical outcomes can be inferred from the choices of other players; in game theory, this is referred to as belief learning (BL; Camerer, 2003; Gallagher and Frith, 2003; Lee et al., 2005). More generally, learning algorithms relying on simulated outcomes predicted by the decision maker's internal model of the environment are referred to as model-based reinforcement learning (Sutton and Barto, 1998). Consistent with the predictions of both models, all the animals tested in our study were more likely to choose the same target again after winning than after losing or tying in the previous trial (paired t test, p < 10⁻¹³ for all sessions in each animal; Figure 2A). Moreover, as predicted by the BL model but not by the simple RL model, when the animals lost or tied in a given trial, they were more likely to choose in the next trial what would have been the winning target rather than the other unchosen target (p < 10⁻⁷ for all sessions in each animal; Figure 2B), indicating that the animals' choices were also influenced by the hypothetical outcomes from unchosen actions. To quantify the cumulative effects of hypothetical outcomes on the animals' choices, we estimated learning rates for the actual (αA) and hypothetical (αH) outcomes from chosen and unchosen actions separately, using a hybrid learning model that combines features of both RL and BL (see Experimental Procedures). For all three animals, the learning rates for hypothetical outcomes were significantly greater than zero (two-tailed t test, p < 10⁻²⁷ for all sessions in each animal), although they were significantly smaller than the learning rates for actual outcomes (paired t test, p < 10⁻⁴⁸; see Table S1). According to the Bayesian information criterion (BIC), the hybrid learning model and the BL model outperformed the RL model in more than 95% of the sessions for each animal.
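The core of such a hybrid model is that the chosen target's value is updated from the actual payoff with rate αA, while each unchosen target's value is updated from its revealed hypothetical payoff with rate αH. The sketch below illustrates this logic under stated assumptions; the exact update and choice rules live in the paper's Experimental Procedures, so details such as softmax action selection, its temperature parameter, and updating both unchosen targets are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(v, beta=1.0):
    # numerically stable softmax over value estimates
    z = np.exp(beta * (v - v.max()))
    return z / z.sum()

def hybrid_update(V, chosen, payoffs, alpha_A=0.6, alpha_H=0.2):
    """Update value estimates V (length 3, one per target).

    payoffs[k] holds the actual payoff for the chosen target and the
    revealed hypothetical payoff for each unchosen target.
    """
    for k in range(3):
        alpha = alpha_A if k == chosen else alpha_H
        V[k] += alpha * (payoffs[k] - V[k])  # prediction-error update
    return V

# One simulated trial: choose via softmax, then learn from the actual
# outcome and the hypothetical outcomes shown at feedback.
rng = np.random.default_rng(0)
V = np.zeros(3)
choice = rng.choice(3, p=softmax(V, beta=2.0))
payoffs = np.array([3.0, 0.0, 1.0])  # example feedback (win/lose/tie)
V = hybrid_update(V, choice, payoffs)
```

For the BIC comparison mentioned above, each model is scored as BIC = k ln n − 2 ln L̂, where k is the number of free parameters, n the number of trials, and L̂ the maximized likelihood; lower values indicate a better fit after the complexity penalty.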
Therefore, the animals' behavior was influenced by hypothetical outcomes, albeit less strongly than by actual outcomes. It should be noted that, due to the competitive interaction with the computer opponent, the animals did not increase their reward rate by relying on such learning algorithms. In fact, for two monkeys (Q and S), the average payoff decreased significantly as they were more strongly influenced by the actual outcomes of their previous choices (see Figure S2B and Supplemental Experimental Procedures). The average payoff was not significantly related to the learning rates for hypothetical outcomes (Figure S2C). To test whether and how neurons in different regions of the prefrontal cortex modulate their activity according to the hypothetical outcomes from unchosen actions, we recorded the activity of 308 and 201 neurons in the DLPFC and OFC, respectively, during the computer-simulated rock-paper-scissors game. For each neuron, activity during the 0.5 s feedback period was analyzed with a series of nested regression models that included, as independent variables, the animal's choice, the actual payoff from the chosen target, and the hypothetical payoff from the unchosen winning target in loss or tie trials (see Experimental Procedures). The effects of actual and hypothetical payoffs were examined separately according to whether or not they were specific to particular actions, by testing whether regressors corresponding to the actual or hypothetical outcomes from specific actions improved the model fit. In the present study, hypothetical outcomes varied only for the winning targets during tie or loss trials. Therefore, to avoid confounding activity related to actual and hypothetical outcomes from different actions, their effects on neural activity were quantified as activity changes related to the actual and hypothetical payoffs from winning targets only. Overall, 127 (41.2%) and 91 (45.3%) neurons in DLPFC and OFC, respectively, encoded the actual payoffs received by the animal (partial F-test, M3 versus M1, p < 0.05; see Experimental Procedures; Figure S3). In addition, 63 (20.5%) and 33 (16.4%) neurons in DLPFC and OFC changed their activity related to actual outcomes differently according to the animal's chosen action (M3 versus M2). Thus, the proportion of neurons encoding actual outcomes did not differ significantly between DLPFC and OFC, regardless of whether activity related to outcomes from specific choices was considered separately (χ² test, p > 0.25). Hypothetical payoffs from the winning targets during tie or loss trials were significantly encoded by 66 (21.4%) and 34 (16.9%) neurons in the DLPFC and OFC, respectively (M5 versus M3; see Experimental Procedures). The proportion of neurons encoding hypothetical outcomes did not differ significantly between the two areas (χ² test, p = 0.21). On the other hand, the proportion of neurons significantly changing their activity related to hypothetical outcomes according to the position of the winning target was significantly higher in DLPFC (n = 53, 17.2%) than in OFC (n = 16, 8.0%; χ² test, p < 0.005). For example, the DLPFC neuron illustrated in Figure 3A increased its activity during the feedback period according to the hypothetical payoff from the upper winning target (partial F-test, p < 0.05).
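The nested-model comparisons above (e.g., M3 versus M1) rest on the standard partial F-test: fit a reduced and a full model and test whether the drop in residual error justifies the extra regressors. A minimal sketch follows; the exact regressor sets for M1 through M6 are defined in the paper's Experimental Procedures and are not reproduced here, so the design matrices below are placeholders for illustration only.

```python
import numpy as np
from scipy import stats

def partial_f_test(y, X_reduced, X_full):
    """Test whether the extra regressors in X_full improve the fit."""
    def sse(X):
        X1 = np.column_stack([np.ones(len(y)), X])  # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return np.sum((y - X1 @ beta) ** 2), X1.shape[1]
    sse_r, k_r = sse(X_reduced)
    sse_f, k_f = sse(X_full)
    df1, df2 = k_f - k_r, len(y) - k_f
    F = ((sse_r - sse_f) / df1) / (sse_f / df2)
    return F, stats.f.sf(F, df1, df2)  # F statistic and p value

# Example: does adding an actual-payoff regressor (M3-like) improve on
# a choice-only model (M1-like) for one neuron's feedback-period rates?
rng = np.random.default_rng(1)
n = 200
choice = rng.integers(0, 3, n)                  # chosen target per trial
payoff = rng.choice([0.0, 1.0, 2.0, 3.0, 4.0], n)
rate = 5 + 0.8 * payoff + rng.normal(0, 1, n)   # synthetic spike rate
X1 = np.column_stack([(choice == k).astype(float) for k in (1, 2)])
X3 = np.column_stack([X1, payoff])
F, p = partial_f_test(rate, X1, X3)
print(F, p)
```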
This activity change was observed within sets of trials in which the animal's choice of a particular target led to a loss or tie (Figure 3A, middle and bottom panels in the first column, respectively), and therefore was not due to the animal's choice of a particular action or its actual outcome. The OFC neuron illustrated in Figure 3B also changed its activity significantly according to the hypothetical winning payoffs, and this effect was significantly more pronounced when the winning target was presented on the left (partial F-test, p < 0.05). Nevertheless, the activity related to the hypothetical outcome was qualitatively similar for all three positions of the winning target. The proportion of neurons with significant activity related to hypothetical outcomes was little affected when we controlled for several potential confounding factors, such as the winning payoff expected from the chosen target, the position of the target chosen by the animal in the next trial, and the parameters of saccades during the feedback period of loss trials (Table S2). The results were also largely unaffected when the data were analyzed after removing the first ten trials after each block transition, suggesting that the activity related to hypothetical outcomes was not due to unexpected changes in the payoffs from different target locations. In addition, there was no evidence for anatomical clustering of neurons showing significant effects of actual or hypothetical outcomes (MANOVA, p > 0.05; Figure 4; Figure S4). [Figure 4: Anatomical Locations of Neurons with Outcome Effects. (A) Locations of DLPFC neurons that showed significant changes in activity related to actual and hypothetical outcomes, irrespective of whether they were linked to specific choices. (B) Locations of OFC neurons. The positions of the recorded neurons were estimated and plotted on a horizontal plane. The neurons shown medially from the MOS were not on the ventral surface but in the fundus of the MOS. MOS, medial orbital sulcus; LOS, lateral orbital sulcus; TOS, transverse orbital sulcus. The number of neurons recorded from each area is shown. See also Figure S4.] To compare the effect sizes of neural activity related to actual and hypothetical outcomes, the proportion of variance in spike counts attributable to different outcomes was computed using the coefficient of partial determination (CPD; see Supplemental Experimental Procedures). The effect size of activity related to the actual or hypothetical outcome was significantly larger in OFC than in DLPFC when the effects of outcomes from different targets were combined (two-tailed t test, p < 0.01; Figure 5A, AON and HON). By contrast, the effect size of activity related to actual or hypothetical outcomes from specific choices did not differ significantly between the two areas (p > 0.6; Figure 5A, AOC and HOC).
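The CPD for a regressor set X compares the residual sum of squares (SSE) of the full regression model with that of a reduced model omitting X. A standard definition, which we assume matches the one in the Supplemental Experimental Procedures, is

\[
\mathrm{CPD}_{X} \;=\; \frac{\mathrm{SSE}(\text{model without } X) - \mathrm{SSE}(\text{model with } X)}{\mathrm{SSE}(\text{model without } X)},
\]

i.e., the fraction of otherwise-unexplained variance in the spike counts that is accounted for by X.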
For each area, we also examined whether neural activity was more strongly related to a given type of outcome (actual or hypothetical) when associated with specific actions, using the difference between the CPD computed for all actions and the CPD computed for specific actions. OFC neurons tended to encode actual outcomes similarly for all actions more than DLPFC neurons did (Figure 5B, AOC−AON; p < 0.01), whereas DLPFC neurons tended to encode hypothetical outcomes from specific actions more than OFC neurons did (Figure 5B, HOC−HON; p < 0.01). This difference between DLPFC and OFC was statistically significant for both actual and hypothetical outcomes (two-way ANOVA, area × choice-specificity interaction, p < 0.05). Taken together, these results suggest that both DLPFC and OFC play important roles in monitoring actual and hypothetical outcomes from multiple actions, although OFC neurons tend to encode actual and hypothetical outcomes from multiple actions more similarly than DLPFC neurons do. To test whether prefrontal neurons tend to encode actual and hypothetical outcomes from the same action similarly, we estimated the effects of different outcomes separately for individual targets (924 and 603 neuron-target pairs, or cases, in DLPFC and OFC, respectively; see Experimental Procedures). Overall, 96 (10.4%) and 99 (16.4%) cases in the DLPFC and OFC, respectively, showed significant effects of actual outcomes, whereas significant effects of hypothetical outcomes were found in 116 (12.6%) and 66 (11.0%) cases in the DLPFC and OFC. Activity increasing with actual winning payoffs was more common in both areas (63 and 69 cases in DLPFC and OFC, corresponding to 65.6% and 69.7%, respectively; binomial test, p < 0.005), whereas similar trends for hypothetical outcomes (68 and 38 cases in DLPFC and OFC, corresponding to 58.6% and 57.6%) were not statistically significant. The effect size (standardized regression coefficient, M6; see Experimental Procedures) of the actual payoff was larger for neurons increasing their activity with the winning payoff in both DLPFC (0.361 ± 0.010 versus 0.349 ± 0.011) and OFC (0.425 ± 0.016 versus 0.328 ± 0.017), but this difference was statistically significant only in OFC (two-tailed t test, p < 10⁻³). The effect size of activity related to the hypothetical outcome was also larger for neurons increasing their activity with the hypothetical winning payoff in both DLPFC (0.282 ± 0.009 versus 0.253 ± 0.009) and OFC (0.283 ± 0.018 versus 0.248 ± 0.009), but this difference was significant only in DLPFC (p < 0.05). In addition, neurons in both DLPFC and OFC were significantly more likely to increase their activity with the actual outcomes from multiple targets than would be expected if the outcomes from individual targets affected the activity of a given neuron independently (binomial test, p < 0.05; Table 1).
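The logic of that binomial test, here and in the following sentence, is that under the null hypothesis of independent effects, the probability of significant positive effects at both targets of a pair is the product of the marginal rates. A minimal sketch, using the DLPFC marginal rate from the text (63 positive cases out of 924); the joint count and the use of all pairs as independent trials are illustrative assumptions.

```python
from scipy.stats import binomtest

n_pairs = 924          # neuron-target pairs in one area (e.g., DLPFC)
p_pos = 63 / 924       # marginal rate of significant positive AO effects
observed_joint = 9     # pairs positive at both targets (illustrative)
expected_p = p_pos ** 2  # joint rate expected under independence

result = binomtest(observed_joint, n=n_pairs, p=expected_p,
                   alternative='greater')
print(result.pvalue)   # small p-value -> more joint coding than chance
```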
OFC neurons also tended to increase their activity with the hypothetical outcomes from multiple targets (p < 10⁻⁶; Table 1), whereas this tendency was not significant for DLPFC.

Table 1. Number of Neuron-Target Pairs Showing Significant Effects of Actual (AO) and Hypothetical (HO) Outcomes from Different Targets

                    DLPFC                  OFC
AO versus AO     AO+   AO−    NS        AO+   AO−    NS
  AO+              9     1   107         31     1    75
  AO−              –     8    49          –     9    41
  NS               –     –   750          –     –   446

HO versus HO     HO+   HO−    NS        HO+   HO−    NS
  HO+              7     3   119         14     1    47
  HO−              –     4    85          –     6    43
  NS               –     –   706          –     –   492

AO versus HO     HO+   HO−    NS        HO+   HO−    NS
  AO+              8     6   112         21     6   111
  AO−             10     3    53          3     5    52
  NS             118    87  1451         52    45   911

For either AO or HO, the total number of cases is 3N, where N is the number of neurons, whereas for AO versus HO it is 6N, since the effects of AO and HO estimated for two different targets are not symmetric (see also Table S3).

Neural activity driving changes in value functions should change similarly according to the actual and hypothetical outcomes from the same action. Indeed, neurons in both DLPFC and OFC were significantly more likely to increase their activity with both the actual and the hypothetical outcomes from the same target than would be expected if the effects of actual and hypothetical outcomes combined independently (χ² test, p < 10⁻³; Table S3). Similarly, the standardized regression coefficients related to the actual and hypothetical outcomes estimated separately for the same target were significantly correlated for neurons in both areas that showed significant choice-dependent effects of hypothetical outcomes (r = 0.307 and 0.318 for DLPFC and OFC, respectively; p < 0.05). These neurons also tended to change their activity according to the hypothetical outcomes from a given target similarly, regardless of the target chosen by the animal, when tested using the standardized regression coefficients for the hypothetical outcome estimated separately for the two remaining choices (r = 0.381 and 0.770 for DLPFC and OFC; p < 0.001; Figure S5). For neurons encoding hypothetical outcomes from specific actions, we also estimated the effects of the hypothetical outcomes from two different targets using the set of trials in which the animal chose the same target (see Figure S5). For DLPFC, the correlation coefficient for these two regression coefficients was not significant (r = −0.042, p = 0.64) and significantly lower than the correlation coefficient computed for the effect" @default.
- W2067682418 created "2016-06-24" @default.
- W2067682418 creator A5051230082 @default.
- W2067682418 creator A5088256925 @default.
- W2067682418 date "2011-05-01" @default.
- W2067682418 modified "2023-10-14" @default.
- W2067682418 title "Distributed Coding of Actual and Hypothetical Outcomes in the Orbital and Dorsolateral Prefrontal Cortex" @default.
- W2067682418 cites W1543279335 @default.
- W2067682418 cites W1562703792 @default.
- W2067682418 cites W1604916109 @default.
- W2067682418 cites W1969827567 @default.
- W2067682418 cites W1976275046 @default.
- W2067682418 cites W1981725931 @default.
- W2067682418 cites W1984871604 @default.
- W2067682418 cites W1991097067 @default.
- W2067682418 cites W2000214310 @default.
- W2067682418 cites W2001488773 @default.
- W2067682418 cites W2003397476 @default.
- W2067682418 cites W2004782727 @default.
- W2067682418 cites W2006800762 @default.
- W2067682418 cites W2010057415 @default.
- W2067682418 cites W2011561395 @default.
- W2067682418 cites W2011971614 @default.
- W2067682418 cites W2024384693 @default.
- W2067682418 cites W2025931876 @default.
- W2067682418 cites W2033172242 @default.
- W2067682418 cites W2037444091 @default.
- W2067682418 cites W2039491568 @default.
- W2067682418 cites W2051215923 @default.
- W2067682418 cites W2055514102 @default.
- W2067682418 cites W2060106129 @default.
- W2067682418 cites W2060723376 @default.
- W2067682418 cites W2061405393 @default.
- W2067682418 cites W2065406768 @default.
- W2067682418 cites W2067050450 @default.
- W2067682418 cites W2068781732 @default.
- W2067682418 cites W2073158860 @default.
- W2067682418 cites W2073665562 @default.
- W2067682418 cites W2073698878 @default.
- W2067682418 cites W2078024799 @default.
- W2067682418 cites W2078352770 @default.
- W2067682418 cites W2084913591 @default.
- W2067682418 cites W2102068574 @default.
- W2067682418 cites W2102803424 @default.
- W2067682418 cites W2117726420 @default.
- W2067682418 cites W2118596861 @default.
- W2067682418 cites W2118691301 @default.
- W2067682418 cites W2119155724 @default.
- W2067682418 cites W2120424352 @default.
- W2067682418 cites W2121370521 @default.
- W2067682418 cites W2125216599 @default.
- W2067682418 cites W2126908031 @default.
- W2067682418 cites W2128413758 @default.
- W2067682418 cites W2129478155 @default.
- W2067682418 cites W2132346372 @default.
- W2067682418 cites W2135116467 @default.
- W2067682418 cites W2136162329 @default.
- W2067682418 cites W2137145004 @default.
- W2067682418 cites W2138221880 @default.
- W2067682418 cites W2145183094 @default.
- W2067682418 cites W2149565728 @default.
- W2067682418 cites W2151137320 @default.
- W2067682418 cites W2163340182 @default.
- W2067682418 cites W2163929762 @default.
- W2067682418 cites W2165440754 @default.
- W2067682418 cites W2165517929 @default.
- W2067682418 cites W2167245828 @default.
- W2067682418 cites W2167362547 @default.
- W2067682418 cites W2170633032 @default.
- W2067682418 doi "https://doi.org/10.1016/j.neuron.2011.03.026" @default.
- W2067682418 hasPubMedCentralId "https://www.ncbi.nlm.nih.gov/pmc/articles/3104017" @default.
- W2067682418 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/21609828" @default.
- W2067682418 hasPublicationYear "2011" @default.
- W2067682418 type Work @default.
- W2067682418 sameAs 2067682418 @default.
- W2067682418 citedByCount "153" @default.
- W2067682418 countsByYear W20676824182012 @default.
- W2067682418 countsByYear W20676824182013 @default.
- W2067682418 countsByYear W20676824182014 @default.
- W2067682418 countsByYear W20676824182015 @default.
- W2067682418 countsByYear W20676824182016 @default.
- W2067682418 countsByYear W20676824182017 @default.
- W2067682418 countsByYear W20676824182018 @default.
- W2067682418 countsByYear W20676824182019 @default.
- W2067682418 countsByYear W20676824182020 @default.
- W2067682418 countsByYear W20676824182021 @default.
- W2067682418 countsByYear W20676824182022 @default.
- W2067682418 countsByYear W20676824182023 @default.
- W2067682418 crossrefType "journal-article" @default.
- W2067682418 hasAuthorship W2067682418A5051230082 @default.
- W2067682418 hasAuthorship W2067682418A5088256925 @default.
- W2067682418 hasBestOaLocation W20676824181 @default.
- W2067682418 hasConcept C105795698 @default.
- W2067682418 hasConcept C15744967 @default.
- W2067682418 hasConcept C169760540 @default.
- W2067682418 hasConcept C169900460 @default.
- W2067682418 hasConcept C179518139 @default.
- W2067682418 hasConcept C180747234 @default.