Matches in SemOpenAlex for { <https://semopenalex.org/work/W2400331045> ?p ?o ?g. }
- W2400331045 abstract "Going to Extremes: The influence of unsupervised categories on the mental caricaturization of faces and asymmetries in perceptual discrimination Andrew T. Hendrickson; Paulo F. Carvalho; Robert L. Goldstone ({athendri, pcarvalh, rgoldsto} @indiana.edu) Department of Psychological and Brain Sciences, Indiana University 1101 East Tenth Street, Bloomington, IN 47405 USA Abstract Category labels and CP. An alternative framework suggests that the presence of category labels, and not perceptual changes, are responsible for CP effects (Pisoni & Tash, 1974). In this view the category label can be seen as an additional feature: entities in different categories have different labels thus having an additional feature unique for each category. This causes similarity to decrease and discrimination accuracy to rise. Items in the same category have the same label and thus either their similarity increases or remains constant leading to discrimination accuracy that does not increase. Hanley and Roberson (2011) point out that the accuracy in assigning category labels is not constant across distance to the category boundary. Items farther away from the boundary are more likely to be categorized correctly than items closer to the category boundary. This viewpoint is consistent with many models of category learning that do not incorporate perceptual learning, including decision boundaries (Ashby & Maddox, 1990) and many exemplar-based (Nosofsky, 1986) models of category learning. Recent re-analysis of traditional Categorical Perception (CP) effects show that the advantage for between category judgments may be due to asymmetries of within-category judgments (Hanley & Roberson, 2011). This has led to the hypothesis that labels cause CP effects via these asymmetries due to category label uncertainty near the category boundary. In Experiment 1 we demonstrate that these “within-category” asymmetries exist before category training begins. Category learning does increase the within-category asymmetry on a category relevant dimension but equally on an irrelevant dimension. Experiment 2 replicates the asymmetry found in Experiment 1 without training and shows that it does not increase with additional exposure in the absence of category training. We conclude that the within-category asymmetry may be a result of unsupervised learning of stimulus clusters that emphasize extreme instances and that category training increases this caricaturization of stimulus representations. Keywords: Categorical Perception, Category Labels, Perceptual Learning, Category Learning, and Language Introduction Categorical perception. Our perceptual systems fail overwhelmingly to be precise replicators of reality in the way a camera or a microphone is, because these systems have not evolved to create a veridical representation of reality. Though constrained by overall neural architecture and the inertia of representations in primary sensory areas (Petrov et al., 2005), our perceptual systems consistently learn to create useful, but potentially distorted, representations of reality (Landy & Goldstone, 2005). Often, this perceptual learning produces experiences that do not reflect the continuous variation of reality. Instead they warp that variability into discrete groupings such that entities that fall within a group are less discriminable than physically equally spaced entities that fall in different groups, a process known as categorical perception (CP; Harnad, 1987). 
While some of the focus in CP research has been on assessing if particular categories are innate through cross- cultural studies (Kay & Reiger, 2003; Roberson & Davidoff, 2000; Sauter et al., 2011), early studies of CP focused on phonemes (Liberman et al., 1957) which show systematically different category boundaries based on an individual’s native language (Logan et al., 1991). Learned CP has been shown in the visual modality across a variety of dimensions including hue and saturation (Goldstone, 1994), line drawings (Livingston et al., 1998), and morphs between arbitrarily paired faces (Kikutani et al., Within-category discrimination asymmetries. In perceptual discrimination testing in which a target object (X) must be held in memory and compared to itself and a foil object (A and B, respectively), if A is more likely to be assigned the same category label as X than B, then the probability of selecting A as the answer should increase relative to if A and B are equally likely to be assigned to categories. Therefore, when the target object is farther away from the category boundary than the foil and thus more consistently labeled in the category, accuracy will increase because the target object is more likely to be selected. Similarly, when the foil object is farther away, accuracy will decrease because the foil object will be selected more frequently (compared in both cases to cases in which no labeling asymmetry exists). Hanley and Roberson (2011; see also Roberson et al., 2007) find this asymmetric within-category advantage for more perceptually extreme targets across a wide array of stimuli for which CP effects have been shown, including color across cultures (Roberson & Davidoff, 2000; Roberson et al., 2000; Roberson et al., 2005), facial emotions (Roberson et al., 2007), morphed celebrity faces and morphed unfamiliar but trained faces (Kikutani et al., 2008; 2010). They failed to find an advantage for more extreme faces among morphed unfamiliar and either untrained (Kikutani et al., 2008) or covertly exposed (Kikutani et al.," @default.
- W2400331045 created "2016-06-24" @default.
- W2400331045 creator A5026875699 @default.
- W2400331045 creator A5029222386 @default.
- W2400331045 creator A5091642863 @default.
- W2400331045 date "2012-01-01" @default.
- W2400331045 modified "2023-09-26" @default.
- W2400331045 title "Going to Extremes: The influence of unsupervised categories on the mental caricaturization of faces and asymmetries in perceptual discrimination" @default.
- W2400331045 cites W1553212268 @default.
- W2400331045 cites W1989469074 @default.
- W2400331045 cites W1999575787 @default.
- W2400331045 cites W2008252115 @default.
- W2400331045 cites W2037249514 @default.
- W2400331045 cites W2039396020 @default.
- W2400331045 cites W2041453687 @default.
- W2400331045 cites W2049507677 @default.
- W2400331045 cites W2051275520 @default.
- W2400331045 cites W2052765867 @default.
- W2400331045 cites W2084543000 @default.
- W2400331045 cites W2095287180 @default.
- W2400331045 cites W2097871580 @default.
- W2400331045 cites W2111376597 @default.
- W2400331045 cites W2115059673 @default.
- W2400331045 cites W2115256808 @default.
- W2400331045 cites W2117714737 @default.
- W2400331045 cites W2122914621 @default.
- W2400331045 cites W2132089731 @default.
- W2400331045 cites W2136705347 @default.
- W2400331045 cites W2141117074 @default.
- W2400331045 cites W2156975004 @default.
- W2400331045 cites W2170014483 @default.
- W2400331045 cites W2171807643 @default.
- W2400331045 cites W367043469 @default.
- W2400331045 cites W371153924 @default.
- W2400331045 hasPublicationYear "2012" @default.
- W2400331045 type Work @default.
- W2400331045 sameAs 2400331045 @default.
- W2400331045 citedByCount "2" @default.
- W2400331045 countsByYear W24003310452014 @default.
- W2400331045 countsByYear W24003310452019 @default.
- W2400331045 crossrefType "journal-article" @default.
- W2400331045 hasAuthorship W2400331045A5026875699 @default.
- W2400331045 hasAuthorship W2400331045A5029222386 @default.
- W2400331045 hasAuthorship W2400331045A5091642863 @default.
- W2400331045 hasConcept C103278499 @default.
- W2400331045 hasConcept C105795698 @default.
- W2400331045 hasConcept C115961682 @default.
- W2400331045 hasConcept C134306372 @default.
- W2400331045 hasConcept C138885662 @default.
- W2400331045 hasConcept C154945302 @default.
- W2400331045 hasConcept C15744967 @default.
- W2400331045 hasConcept C169760540 @default.
- W2400331045 hasConcept C180747234 @default.
- W2400331045 hasConcept C26760741 @default.
- W2400331045 hasConcept C2776401178 @default.
- W2400331045 hasConcept C33640556 @default.
- W2400331045 hasConcept C33923547 @default.
- W2400331045 hasConcept C41008148 @default.
- W2400331045 hasConcept C41895202 @default.
- W2400331045 hasConcept C5274069 @default.
- W2400331045 hasConcept C62354387 @default.
- W2400331045 hasConcept C94124525 @default.
- W2400331045 hasConcept C99209842 @default.
- W2400331045 hasConceptScore W2400331045C103278499 @default.
- W2400331045 hasConceptScore W2400331045C105795698 @default.
- W2400331045 hasConceptScore W2400331045C115961682 @default.
- W2400331045 hasConceptScore W2400331045C134306372 @default.
- W2400331045 hasConceptScore W2400331045C138885662 @default.
- W2400331045 hasConceptScore W2400331045C154945302 @default.
- W2400331045 hasConceptScore W2400331045C15744967 @default.
- W2400331045 hasConceptScore W2400331045C169760540 @default.
- W2400331045 hasConceptScore W2400331045C180747234 @default.
- W2400331045 hasConceptScore W2400331045C26760741 @default.
- W2400331045 hasConceptScore W2400331045C2776401178 @default.
- W2400331045 hasConceptScore W2400331045C33640556 @default.
- W2400331045 hasConceptScore W2400331045C33923547 @default.
- W2400331045 hasConceptScore W2400331045C41008148 @default.
- W2400331045 hasConceptScore W2400331045C41895202 @default.
- W2400331045 hasConceptScore W2400331045C5274069 @default.
- W2400331045 hasConceptScore W2400331045C62354387 @default.
- W2400331045 hasConceptScore W2400331045C94124525 @default.
- W2400331045 hasConceptScore W2400331045C99209842 @default.
- W2400331045 hasIssue "34" @default.
- W2400331045 hasLocation W24003310451 @default.
- W2400331045 hasOpenAccess W2400331045 @default.
- W2400331045 hasPrimaryLocation W24003310451 @default.
- W2400331045 hasRelatedWork W1843118354 @default.
- W2400331045 hasRelatedWork W1970133251 @default.
- W2400331045 hasRelatedWork W2011331029 @default.
- W2400331045 hasRelatedWork W2018853507 @default.
- W2400331045 hasRelatedWork W2022644699 @default.
- W2400331045 hasRelatedWork W2025617260 @default.
- W2400331045 hasRelatedWork W2029721857 @default.
- W2400331045 hasRelatedWork W2046969032 @default.
- W2400331045 hasRelatedWork W2049507677 @default.
- W2400331045 hasRelatedWork W2059344929 @default.
- W2400331045 hasRelatedWork W2107249503 @default.
- W2400331045 hasRelatedWork W2157827855 @default.
- W2400331045 hasRelatedWork W2395532903 @default.
- W2400331045 hasRelatedWork W2419907448 @default.
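
The listing above can be reproduced programmatically by re-running the header's triple pattern against SemOpenAlex's public SPARQL endpoint. Below is a minimal Python sketch using SPARQLWrapper; note that the endpoint URL (https://semopenalex.org/sparql) is an assumption, since the dump itself does not name it, and the graph variable ?g from the header pattern is dropped because named-graph layout varies between stores.

```python
# Minimal sketch, assuming the public SemOpenAlex SPARQL endpoint
# is at https://semopenalex.org/sparql (not stated in this dump).
# Requires: pip install SPARQLWrapper
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL

# Same subject as the header pattern; the graph variable ?g is
# omitted here because named-graph layout differs between stores.
QUERY = """
SELECT ?p ?o
WHERE { <https://semopenalex.org/work/W2400331045> ?p ?o . }
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Print each predicate/object pair, mirroring the listing above.
for binding in results["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```

Each printed row should correspond to one of the property lines listed above (e.g., cites, hasConcept, citedByCount).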