Matches in SemOpenAlex for { <https://semopenalex.org/work/W4281767460> ?p ?o ?g. }
- W4281767460 endingPage "621" @default.
- W4281767460 startingPage "607" @default.
- W4281767460 abstract "Human belief updating is pervaded by distortions, such as positivity and confirmation bias. Experimental evidence from a variety of tasks, collected in different mammal species, suggests that these biases also exist in simple reinforcement learning (RL) contexts. Confirmatory RL generates over-optimistic reward expectations and aberrant preferred response rates. Counter-intuitively, confirmatory RL exhibits statistical advantages over unbiased RL in a variety of learning contexts. Confirmatory RL may contribute to diverse and apparently unrelated behavioral phenomena, such as stickiness to the status quo, overconfidence, and the persistence of (pathological) gambling. Humans do not integrate new information objectively: outcomes carrying a positive affective value and evidence confirming one’s own prior belief are overweighted. Until recently, theoretical and empirical accounts of the positivity and confirmation biases assumed them to be specific to ‘high-level’ belief updates. We present evidence against this account. Learning rates in reinforcement learning (RL) tasks, estimated across different contexts and species, generally present the same characteristic asymmetry, suggesting that belief and value updating processes share key computational principles and distortions. This bias generates over-optimistic expectations about the probability of making the right choices and, consequently, over-optimistic reward expectations. We discuss the normative and neurobiological roots of these RL biases and their position within the greater picture of behavioral decision-making theories.
Glossary:
Belief-confirmation bias: the tendency to overweight or selectively sample information that confirms our own beliefs (‘what I believe is true’). Also referred to as prior-biased updating, belief perseverance, or conservatism, among other nomenclatures.
Bias: a feature of a cognitive process that introduces systematic deviations between the state of the world and its internal representation.
Choice-confirmation bias: the tendency to overweight information that confirms our own choice (‘what I did was right’).
Learning rate: a model parameter that traditionally indexes the extent to which prediction errors affect future expectations.
Model comparison: a collection of methods for determining which model best accounts for a given dataset, combining model fitting and model simulations to assess, respectively, the falsifiability of the rejected models and the parsimony of the accepted one.
Model fitting: a statistical method for estimating the values of model parameters that maximize the likelihood of observing the empirical data. Model fitting is not to be confused with model comparison (see above).
Positivity bias: the tendency to overweight events with a positive affective valence. In the specific context of RL, it consists of overweighting positive prediction errors (regardless of whether they are associated with the chosen or the forgone option). Positivity bias is also sometimes referred to as the good news-bad news effect or preference-biased updating.
Prediction error: the discrepancy between an expectation and reality. In the context of RL, prediction errors are defined as the difference between an expected and an obtained outcome; they therefore have a valence: positive when the outcome is better than expected, negative when it is worse." @default.
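The abstract and glossary describe valence-dependent learning rates (a positive prediction error updated with a larger rate than a negative one) and maximum-likelihood model fitting. The sketch below, which is not from the paper, illustrates both under illustrative assumptions: a two-armed bandit, a softmax choice rule with a fixed inverse temperature, and a simple grid search in place of a proper optimizer. All names and parameter values are hypothetical.

```python
# Minimal sketch (not the authors' code): a bandit learner with
# valence-dependent learning rates, plus maximum-likelihood recovery
# of those rates from the simulated choices.
import math
import random

def softmax_p_a(qa, qb, beta):
    """Probability of choosing option A under a softmax rule."""
    return 1.0 / (1.0 + math.exp(-beta * (qa - qb)))

def update(q, outcome, alpha_pos, alpha_neg):
    """One RL update: the prediction error (obtained minus expected
    outcome) selects the learning rate by its valence."""
    delta = outcome - q
    return q + (alpha_pos if delta > 0 else alpha_neg) * delta

def simulate(alpha_pos, alpha_neg, beta=5.0, n_trials=2000, seed=1):
    """Generate choices and outcomes; only the chosen option's value
    is updated (the positivity-bias component of confirmatory RL)."""
    rng = random.Random(seed)
    p_reward = (0.6, 0.4)          # illustrative true reward probabilities
    q, data = [0.5, 0.5], []
    for _ in range(n_trials):
        choice = 0 if rng.random() < softmax_p_a(q[0], q[1], beta) else 1
        outcome = 1.0 if rng.random() < p_reward[choice] else 0.0
        data.append((choice, outcome))
        q[choice] = update(q[choice], outcome, alpha_pos, alpha_neg)
    return data

def neg_log_lik(alpha_pos, alpha_neg, data, beta=5.0):
    """Negative log-likelihood of the observed choices under candidate
    learning rates: the quantity minimized in model fitting."""
    q, nll = [0.5, 0.5], 0.0
    for choice, outcome in data:
        p_a = softmax_p_a(q[0], q[1], beta)
        p_choice = p_a if choice == 0 else 1.0 - p_a
        nll -= math.log(max(p_choice, 1e-12))
        q[choice] = update(q[choice], outcome, alpha_pos, alpha_neg)
    return nll

if __name__ == "__main__":
    data = simulate(alpha_pos=0.4, alpha_neg=0.1)  # biased: alpha+ > alpha-
    grid = [i / 20 for i in range(1, 11)]          # candidate rates 0.05..0.50
    best = min(((ap, an) for ap in grid for an in grid),
               key=lambda ab: neg_log_lik(ab[0], ab[1], data))
    print("recovered (alpha+, alpha-):", best)     # asymmetry should reappear
```

Such an asymmetry also reproduces the over-optimism the abstract describes: at the update's fixed point, a 50%-rewarded option tracked with alpha+ = 0.4 and alpha- = 0.1 settles near a value of 0.8 rather than the objective 0.5.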
- W4281767460 created "2022-06-13" @default.
- W4281767460 creator A5028199267 @default.
- W4281767460 creator A5085381543 @default.
- W4281767460 date "2022-07-01" @default.
- W4281767460 modified "2023-10-16" @default.
- W4281767460 title "The computational roots of positivity and confirmation biases in reinforcement learning" @default.
- W4281767460 cites W1526704262 @default.
- W4281767460 cites W1977343123 @default.
- W4281767460 cites W1982670892 @default.
- W4281767460 cites W1991650519 @default.
- W4281767460 cites W1992373090 @default.
- W4281767460 cites W2002453029 @default.
- W4281767460 cites W2004650335 @default.
- W4281767460 cites W2006414608 @default.
- W4281767460 cites W2008565771 @default.
- W4281767460 cites W2013293748 @default.
- W4281767460 cites W2014979870 @default.
- W4281767460 cites W2027916800 @default.
- W4281767460 cites W2030612334 @default.
- W4281767460 cites W2033271727 @default.
- W4281767460 cites W2039051284 @default.
- W4281767460 cites W2039909349 @default.
- W4281767460 cites W2046713808 @default.
- W4281767460 cites W2049856429 @default.
- W4281767460 cites W2054327074 @default.
- W4281767460 cites W2075585362 @default.
- W4281767460 cites W2079840564 @default.
- W4281767460 cites W2099360932 @default.
- W4281767460 cites W2107411295 @default.
- W4281767460 cites W2113338864 @default.
- W4281767460 cites W2116450435 @default.
- W4281767460 cites W2117726420 @default.
- W4281767460 cites W2119170562 @default.
- W4281767460 cites W2120492617 @default.
- W4281767460 cites W2123429050 @default.
- W4281767460 cites W2125735154 @default.
- W4281767460 cites W2163647009 @default.
- W4281767460 cites W2165893637 @default.
- W4281767460 cites W2167693152 @default.
- W4281767460 cites W2170641282 @default.
- W4281767460 cites W2202219582 @default.
- W4281767460 cites W2291814219 @default.
- W4281767460 cites W2494559574 @default.
- W4281767460 cites W2551330732 @default.
- W4281767460 cites W2558439382 @default.
- W4281767460 cites W2576829257 @default.
- W4281767460 cites W2589342340 @default.
- W4281767460 cites W2595891121 @default.
- W4281767460 cites W2606776585 @default.
- W4281767460 cites W2610253745 @default.
- W4281767460 cites W2617744060 @default.
- W4281767460 cites W2675909287 @default.
- W4281767460 cites W2738724892 @default.
- W4281767460 cites W2765918627 @default.
- W4281767460 cites W2789461561 @default.
- W4281767460 cites W2800484587 @default.
- W4281767460 cites W2801070731 @default.
- W4281767460 cites W2886683543 @default.
- W4281767460 cites W2891654265 @default.
- W4281767460 cites W2891808685 @default.
- W4281767460 cites W2892136809 @default.
- W4281767460 cites W2895764062 @default.
- W4281767460 cites W2903344941 @default.
- W4281767460 cites W2938321354 @default.
- W4281767460 cites W2949467728 @default.
- W4281767460 cites W2949941589 @default.
- W4281767460 cites W2973034539 @default.
- W4281767460 cites W2980127121 @default.
- W4281767460 cites W2980127542 @default.
- W4281767460 cites W2987180182 @default.
- W4281767460 cites W2987909474 @default.
- W4281767460 cites W2989963295 @default.
- W4281767460 cites W3001032801 @default.
- W4281767460 cites W3011865677 @default.
- W4281767460 cites W3024532045 @default.
- W4281767460 cites W3029911820 @default.
- W4281767460 cites W3032576810 @default.
- W4281767460 cites W3041007979 @default.
- W4281767460 cites W3042711997 @default.
- W4281767460 cites W3046554509 @default.
- W4281767460 cites W3080172918 @default.
- W4281767460 cites W3088808494 @default.
- W4281767460 cites W3116031588 @default.
- W4281767460 cites W3119029557 @default.
- W4281767460 cites W3121217189 @default.
- W4281767460 cites W3125613397 @default.
- W4281767460 cites W3127546008 @default.
- W4281767460 cites W3132802715 @default.
- W4281767460 cites W3135269564 @default.
- W4281767460 cites W3150230610 @default.
- W4281767460 cites W3168279857 @default.
- W4281767460 cites W3170119827 @default.
- W4281767460 cites W3174736670 @default.
- W4281767460 cites W3180691962 @default.
- W4281767460 cites W3196803013 @default.
- W4281767460 cites W3213180026 @default.
- W4281767460 cites W4210266128 @default.