Matches in SemOpenAlex for { <https://semopenalex.org/work/W2768164026> ?p ?o ?g. }
Showing items 1 to 95 of 95, with 100 items per page.
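The header above is the quad pattern behind this listing. As a minimal sketch, the same rows could be fetched with a SPARQL query along the following lines; the public endpoint URL (https://semopenalex.org/sparql) and the LIMIT value are assumptions made for illustration, not taken from this page.

```sparql
# Fetch every predicate ?p, object ?o, and named graph ?g recorded for this work.
# Endpoint assumed to be https://semopenalex.org/sparql; adjust as needed.
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W2768164026> ?p ?o .
  }
}
LIMIT 100
```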
- W2768164026 abstract "Spatial Constraints on Visual Statistical Learning of Multi-Element Scenes Christopher M. Conway (cmconway@indiana.edu) Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN 47405 USA Robert L. Goldstone (rgoldsto@indiana.edu) Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN 47405 USA Morten H. Christiansen (mhc27@cornell.edu) Department of Psychology, Cornell University, Ithaca, NY 14853 USA Abstract distributed across a set of exemplars in time and/or space, typically without conscious awareness of what regularities are being learned. SL has been demonstrated across a number of sense modalities and input domains, including speech-like stimuli (Saffran et al., 1996), visual scenes (Fiser & Aslin, 2001), and tactile patterns (Conway & Christiansen, 2005). Because SL appears to make contact with many aspects of perceptual and cognitive processing, understanding the underlying cognitive mechanisms, limitations, and constraints affecting SL is an important research goal. Initial work in SL emphasized its unconstrained, associative nature (e.g., see Frensch, 1998; Olson & Chun, 2002, for discussion). That is, a common assumption has been that statistical relations can be learned between any two or more stimuli regardless of their perceptual characteristics or identity; under this view, there is no reason to believe that learning a pattern involving items A, B, and C should be any easier or harder than learning the relations among A, D, and E. However, recent research has shown that this kind of unconstrained, unselective associative learning process may not be the best characterization of SL (Bonatti, Pena, Nespor, & Mehler, 2005; Conway & Christiansen, 2005; Saffran, 2002; Turk- Browne, Junge, & Scholl, 2005). Instead, factors related to how the sensory and perceptual systems engage SL processes appear to provide important constraints on the learning of environmental structure. In this paper we examine a largely unexplored constraint on visual statistical learning (VSL): the relative spatial arrangement of objects. If VSL operates via unconstrained associative learning mechanisms, we ought to expect that it is the co-occurrence of two objects that is important, not the relative spatial arrangement of those objects. However, another possibility is that VSL is akin to perceptual learning, in which two frequently co-occurring objects can form a new perceptual “unit” (Goldstone, 1998). Such unitization would be highly specific to not only the individual items but to their relative spatial arrangement as well. Before describing the empirical study in full, we first briefly review other work that points toward spatial constraints affecting visual processing. Visual statistical learning allows observers to extract high-level structure from visual scenes (Fiser & Aslin, 2001). Previous work has explored the types of statistical computations afforded but has not addressed to what extent learning results in unbound versus spatially bound representations of element co- occurrences. We explored these two possibilities using an unsupervised learning task with adult participants who observed complex multi-element scenes embedded with consistently paired elements. If learning is mediated by unconstrained associative learning mechanisms, then learning the element pairings may depend only on the co-occurrence of the elements in the scenes, without regard to their specific spatial arrangements. 
If learning is perceptually constrained, co- occurring elements ought to form perceptual units specific to their observed spatial arrangements. Results showed that participants learned the statistical structure of element co- occurrences in a spatial-specific manner, showing that visual statistical learning is perceptually constrained by spatial grouping principles. Keywords: Visual Statistical Learning, Associative Learning, Perceptual Learning, Spatial Constraints. Introduction Structure abounds in the environment. The sounds, objects, and events that we perceive are not random in nature but rather are coherent and regular. Consider spoken language: phonemes, syllables, and words adhere to a semi-regular structure that can be defined in terms of statistical or probabilistic relationships. The same holds true for almost all aspects of our interaction with the world, whether it be speaking, listening to music, learning a tennis swing, or perceiving complex scenes. How the mind, brain, and body encode and use structure that exists in time and space remains one of the deep mysteries of cognitive science. This issue has begun to be elucidated through the study of “implicit” or “statistical” learning 1 (Cleeremans, Destrebecqz, & Boyer, 1998; Conway & Christiansen, 2006; Reber, 1993; Perruchet & Pacton, 2006; Saffran, Aslin, & Newport, 1996). Statistical learning (SL) involves relatively automatic learning mechanisms that are used to extract regularities and patterns We consider implicit and statistical learning to refer to the same learning ability, which we hereafter refer to simply as statistical learning." @default.
- W2768164026 created "2017-12-04" @default.
- W2768164026 creator A5018719343 @default.
- W2768164026 creator A5029222386 @default.
- W2768164026 creator A5044118756 @default.
- W2768164026 date "2007-01-01" @default.
- W2768164026 modified "2023-09-23" @default.
- W2768164026 title "Spatial Constraints on Visual Statistical Learning of Multi-Element Scenes" @default.
- W2768164026 cites W1578992551 @default.
- W2768164026 cites W1963985178 @default.
- W2768164026 cites W1980862600 @default.
- W2768164026 cites W1982513219 @default.
- W2768164026 cites W1986345213 @default.
- W2768164026 cites W1997313875 @default.
- W2768164026 cites W2001164490 @default.
- W2768164026 cites W2002233694 @default.
- W2768164026 cites W2033537108 @default.
- W2768164026 cites W2051215923 @default.
- W2768164026 cites W2058184541 @default.
- W2768164026 cites W2061445098 @default.
- W2768164026 cites W2072357349 @default.
- W2768164026 cites W2072464426 @default.
- W2768164026 cites W2078451190 @default.
- W2768164026 cites W2080080866 @default.
- W2768164026 cites W2083270362 @default.
- W2768164026 cites W2086378945 @default.
- W2768164026 cites W2111194592 @default.
- W2768164026 cites W2116945498 @default.
- W2768164026 cites W2117542730 @default.
- W2768164026 cites W2125250156 @default.
- W2768164026 cites W2141474346 @default.
- W2768164026 cites W2147832298 @default.
- W2768164026 cites W2153564293 @default.
- W2768164026 cites W2157204889 @default.
- W2768164026 cites W2157394445 @default.
- W2768164026 cites W2557170750 @default.
- W2768164026 cites W2906903179 @default.
- W2768164026 hasPublicationYear "2007" @default.
- W2768164026 type Work @default.
- W2768164026 sameAs 2768164026 @default.
- W2768164026 citedByCount "4" @default.
- W2768164026 countsByYear W27681640262013 @default.
- W2768164026 crossrefType "journal-article" @default.
- W2768164026 hasAuthorship W2768164026A5018719343 @default.
- W2768164026 hasAuthorship W2768164026A5029222386 @default.
- W2768164026 hasAuthorship W2768164026A5044118756 @default.
- W2768164026 hasConcept C15744967 @default.
- W2768164026 hasConcept C159423971 @default.
- W2768164026 hasConcept C169760540 @default.
- W2768164026 hasConcept C169900460 @default.
- W2768164026 hasConcept C180747234 @default.
- W2768164026 hasConcept C188147891 @default.
- W2768164026 hasConcept C202444582 @default.
- W2768164026 hasConcept C26760741 @default.
- W2768164026 hasConcept C2983526489 @default.
- W2768164026 hasConcept C33923547 @default.
- W2768164026 hasConceptScore W2768164026C15744967 @default.
- W2768164026 hasConceptScore W2768164026C159423971 @default.
- W2768164026 hasConceptScore W2768164026C169760540 @default.
- W2768164026 hasConceptScore W2768164026C169900460 @default.
- W2768164026 hasConceptScore W2768164026C180747234 @default.
- W2768164026 hasConceptScore W2768164026C188147891 @default.
- W2768164026 hasConceptScore W2768164026C202444582 @default.
- W2768164026 hasConceptScore W2768164026C26760741 @default.
- W2768164026 hasConceptScore W2768164026C2983526489 @default.
- W2768164026 hasConceptScore W2768164026C33923547 @default.
- W2768164026 hasIssue "29" @default.
- W2768164026 hasLocation W27681640261 @default.
- W2768164026 hasOpenAccess W2768164026 @default.
- W2768164026 hasPrimaryLocation W27681640261 @default.
- W2768164026 hasRelatedWork W101801193 @default.
- W2768164026 hasRelatedWork W107279349 @default.
- W2768164026 hasRelatedWork W1595719361 @default.
- W2768164026 hasRelatedWork W1980862600 @default.
- W2768164026 hasRelatedWork W2013567601 @default.
- W2768164026 hasRelatedWork W2051215923 @default.
- W2768164026 hasRelatedWork W2081965739 @default.
- W2768164026 hasRelatedWork W2132573777 @default.
- W2768164026 hasRelatedWork W2147832298 @default.
- W2768164026 hasRelatedWork W2151178030 @default.
- W2768164026 hasRelatedWork W2155713811 @default.
- W2768164026 hasRelatedWork W2157394445 @default.
- W2768164026 hasRelatedWork W2400015302 @default.
- W2768164026 hasRelatedWork W2582635330 @default.
- W2768164026 hasRelatedWork W2767255620 @default.
- W2768164026 hasRelatedWork W2767306852 @default.
- W2768164026 hasRelatedWork W2767360438 @default.
- W2768164026 hasRelatedWork W2767639433 @default.
- W2768164026 hasRelatedWork W2769700691 @default.
- W2768164026 hasRelatedWork W3021194562 @default.
- W2768164026 hasVolume "29" @default.
- W2768164026 isParatext "false" @default.
- W2768164026 isRetracted "false" @default.
- W2768164026 magId "2768164026" @default.
- W2768164026 workType "article" @default.
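Most of the rows above are plain work-to-work or work-to-entity links (cites, hasAuthorship, hasRelatedWork). As a sketch of extracting one of these link types without committing to exact predicate IRIs, which this page does not spell out, the query below matches any predicate whose IRI ends in "cites"; the endpoint is the same assumed one as above.

```sparql
# List the works cited by W2768164026.
# The predicate namespace is not shown on this page, so match on the IRI suffix.
SELECT ?cited
WHERE {
  <https://semopenalex.org/work/W2768164026> ?p ?cited .
  FILTER(STRENDS(STR(?p), "cites"))
}
```

The same pattern applies to the other link types in this listing by swapping the suffix, for example "hasRelatedWork" or "hasAuthorship".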