Matches in SemOpenAlex for { <https://semopenalex.org/work/W1559894929> ?p ?o ?g. }
Showing items 1 to 91 of 91, with 100 items per page.
- W1559894929 abstract "A computational theory of human perceptual mapping. W. K. Yeap (wai.yeap@aut.ac.nz), Centre for Artificial Intelligence Research, Auckland University of Technology, New Zealand. Abstract: This paper presents a new computational theory of how humans integrate successive views to form a perceptual map. Traditionally, this problem has been thought of as a straightforward integration problem whereby the position of objects in one view is transformed to the next and combined. However, this step creates a paradoxical situation in human perceptual mapping. On the one hand, the method requires errors to be corrected and the map to be constantly updated; on the other hand, human perception and memory show a high tolerance for errors and little integration of successive views. A new theory is presented which argues that our perceptual map is computed by combining views only at their limiting points. To do so, one must be able to recognize and track familiar objects across views. The theory has been tested successfully on mobile robots and the lessons learned are discussed. Keywords: perceptual map; cognitive map; spatial layout; spatial cognition. Introduction: How do humans integrate successive views to form a perceptual map? The latter is a representation of the spatial layout of surfaces/objects perceived in one’s immediate surroundings. That we have such a map is evident in that we do not immediately forget what is out of sight when we turn or move forward (see Glennerster, Hansard & Fitzgibbon, 2009, p. 205, for a similar argument). However, researchers studying this problem from four different perspectives, namely how we represent our environmental knowledge (i.e. a cognitive map (Tolman, 1948; O’Keefe & Nadel, 1978)), what frames of reference we use, how we see our world, and how robots create a map of their own world, have offered solutions which, taken together, create a paradoxical situation. Because the problem lends itself to a straightforward mathematical solution whereby information in one view is transformed to its respective position in the next view, much current work implicitly or explicitly assumes that a solution would involve such a step. This step is problematic when used to explain how humans integrate their views, and the lack of an alternative method has hampered progress. In this paper, a new computational theory of human perceptual mapping is presented. It abandons the idea of integrating successive views to form a perceptual map. Instead, it argues that what is afforded in a view is an adequate description of the current local spatial environment and hence need not be updated until one moves out of it. Only then is another view added to the map. As a result, the map is composed of views selected at different times during one’s exploration of the environment. However, these views need to be organized into a coherent global map, and a method has been suggested: it requires recognizing objects found in the selected views in all the in-between views that have not been selected. These objects allow one to triangulate one’s position in the map and add new views to the map in their appropriate positions. The theory has been tested successfully with different implementations on mobile robots, and the resulting maps were found to exhibit several interesting characteristics of a human perceptual map. A Perceptual Paradox?: Researchers who have investigated how spatial memories are organised often suggest the existence of a two-system model: an egocentric model and an allocentric model (Mou, McNamara, Valiquette & Rump, 2004; Burgess, 2006; Rump & McNamara, 2007). These two models are very different implementations of the same basic mathematical model described above and therefore have different costs associated with their use. In particular, the former keeps track of the relationship between the self and all objects perceived; as one moves, one must constantly update the position of every object in memory with respect to the viewer’s new position. The latter creates a global map of all objects perceived using a frame of reference independent of the viewer’s position. These researchers claimed that the former is best suited for organising information in a perceptual map while the latter is best for a cognitive map. However, little is said about how information encoded in an egocentric perceptual map is transferred into an allocentric cognitive map. If this is achieved by switching frames of reference, then the process is straightforward and, from a mathematical standpoint, the two representations are equivalent. In this case, a perceptual map is a subset of a cognitive map and holds only the most recently perceived information. Researchers who have investigated the nature of cognitive maps by studying residents’ memory of their environment (both adults and children) often emphasize that the map is fragmented, incomplete and imprecise (e.g. Lynch, 1960; Downs & Stea, 1973; Evans, 1980). This does not mean that the map is devoid of metric information; rather, one’s memory of such information is often found to be distorted systematically as a result of applying cognitive organizing principles (Tversky, 1992). Some well-known examples of these distortions include the regularization of turns and angles (Byrne, 1979), and over- and under-estimation of distances due to factors such as direction of travel (Lee, 1970), presence of barriers (Cohen & Weatherford, 1981), and others. More recent studies have also shown that metric" @default. (The view-update and triangulation steps described in this abstract are sketched in code after the property list below.)
- W1559894929 created "2016-06-24" @default.
- W1559894929 creator A5062468162 @default.
- W1559894929 date "2011-01-01" @default.
- W1559894929 modified "2023-09-23" @default.
- W1559894929 title "A computational theory of human perceptual mapping" @default.
- W1559894929 cites W1490092700 @default.
- W1559894929 cites W1512876644 @default.
- W1559894929 cites W1525921012 @default.
- W1559894929 cites W1529253181 @default.
- W1559894929 cites W1600594204 @default.
- W1559894929 cites W1983565847 @default.
- W1559894929 cites W1986216856 @default.
- W1559894929 cites W2000214310 @default.
- W1559894929 cites W2011766759 @default.
- W1559894929 cites W2019131167 @default.
- W1559894929 cites W2020977440 @default.
- W1559894929 cites W2023526870 @default.
- W1559894929 cites W2023889744 @default.
- W1559894929 cites W2024054134 @default.
- W1559894929 cites W2027504643 @default.
- W1559894929 cites W2039759576 @default.
- W1559894929 cites W2045921696 @default.
- W1559894929 cites W2049294681 @default.
- W1559894929 cites W2051437476 @default.
- W1559894929 cites W2068466362 @default.
- W1559894929 cites W2070484900 @default.
- W1559894929 cites W2075218762 @default.
- W1559894929 cites W2075288995 @default.
- W1559894929 cites W2077550329 @default.
- W1559894929 cites W2081559874 @default.
- W1559894929 cites W2103692957 @default.
- W1559894929 cites W2106187452 @default.
- W1559894929 cites W2119650349 @default.
- W1559894929 cites W2137989587 @default.
- W1559894929 cites W2150142674 @default.
- W1559894929 cites W2165506449 @default.
- W1559894929 cites W2170789267 @default.
- W1559894929 cites W2625175141 @default.
- W1559894929 hasPublicationYear "2011" @default.
- W1559894929 type Work @default.
- W1559894929 sameAs 1559894929 @default.
- W1559894929 citedByCount "2" @default.
- W1559894929 countsByYear W15598949292013 @default.
- W1559894929 crossrefType "journal-article" @default.
- W1559894929 hasAuthorship W1559894929A5062468162 @default.
- W1559894929 hasConcept C10138342 @default.
- W1559894929 hasConcept C154945302 @default.
- W1559894929 hasConcept C15744967 @default.
- W1559894929 hasConcept C162324750 @default.
- W1559894929 hasConcept C169760540 @default.
- W1559894929 hasConcept C198082294 @default.
- W1559894929 hasConcept C26760741 @default.
- W1559894929 hasConcept C41008148 @default.
- W1559894929 hasConceptScore W1559894929C10138342 @default.
- W1559894929 hasConceptScore W1559894929C154945302 @default.
- W1559894929 hasConceptScore W1559894929C15744967 @default.
- W1559894929 hasConceptScore W1559894929C162324750 @default.
- W1559894929 hasConceptScore W1559894929C169760540 @default.
- W1559894929 hasConceptScore W1559894929C198082294 @default.
- W1559894929 hasConceptScore W1559894929C26760741 @default.
- W1559894929 hasConceptScore W1559894929C41008148 @default.
- W1559894929 hasIssue "33" @default.
- W1559894929 hasLocation W15598949291 @default.
- W1559894929 hasOpenAccess W1559894929 @default.
- W1559894929 hasPrimaryLocation W15598949291 @default.
- W1559894929 hasRelatedWork W1783546280 @default.
- W1559894929 hasRelatedWork W1831404116 @default.
- W1559894929 hasRelatedWork W1975325591 @default.
- W1559894929 hasRelatedWork W2023526870 @default.
- W1559894929 hasRelatedWork W2028372239 @default.
- W1559894929 hasRelatedWork W2039005177 @default.
- W1559894929 hasRelatedWork W2111365701 @default.
- W1559894929 hasRelatedWork W2184846536 @default.
- W1559894929 hasRelatedWork W2201285591 @default.
- W1559894929 hasRelatedWork W2281766186 @default.
- W1559894929 hasRelatedWork W2314568509 @default.
- W1559894929 hasRelatedWork W2580162401 @default.
- W1559894929 hasRelatedWork W2587168202 @default.
- W1559894929 hasRelatedWork W2620874033 @default.
- W1559894929 hasRelatedWork W2765144001 @default.
- W1559894929 hasRelatedWork W2767645018 @default.
- W1559894929 hasRelatedWork W2773219922 @default.
- W1559894929 hasRelatedWork W2966310708 @default.
- W1559894929 hasRelatedWork W53055460 @default.
- W1559894929 hasRelatedWork W6519056 @default.
- W1559894929 hasVolume "33" @default.
- W1559894929 isParatext "false" @default.
- W1559894929 isRetracted "false" @default.
- W1559894929 magId "1559894929" @default.
- W1559894929 workType "article" @default.
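The abstract above describes the "straightforward mathematical solution" the theory argues against: after every movement, each remembered object position is transformed into the viewer's new egocentric frame. A minimal sketch of that update step, assuming a simple 2-D convention (x to the viewer's right, y straight ahead; all names here are hypothetical, not from the paper), might look like this:

```python
import math

# Hypothetical helper, not the paper's code: the per-movement egocentric
# update that the abstract says must run for every remembered object.
def egocentric_update(objects, dx, dy, dtheta):
    """Re-express remembered object positions in the viewer's new frame.

    objects: dict mapping names to (x, y) positions relative to the viewer.
    dx, dy: viewer translation in the old frame; dtheta: CCW turn (radians).
    """
    cos_t, sin_t = math.cos(dtheta), math.sin(dtheta)
    updated = {}
    for name, (x, y) in objects.items():
        # Undo the viewer's translation...
        tx, ty = x - dx, y - dy
        # ...then rotate by the inverse of the viewer's rotation.
        updated[name] = (cos_t * tx + sin_t * ty,
                         -sin_t * tx + cos_t * ty)
    return updated

# A door 3 m straight ahead; the viewer steps 1 m forward and turns 90° left.
objects = {"door": (0.0, 3.0)}
print(egocentric_update(objects, dx=0.0, dy=1.0, dtheta=math.pi / 2))
# ≈ {'door': (2.0, 0.0)} -- the door is now 2 m away, to the viewer's right.
```

Running this for every object on every movement, with noisy dx, dy, dtheta, is exactly the constant-update, error-accumulating cost that the paper's theory avoids by combining views only at their limiting points.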
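The abstract also notes that recognizing familiar objects lets one "triangulate one's position in the map". One standard way to do this from distance measurements to two known landmarks is circle intersection; the sketch below is illustrative only (not the paper's implementation) and returns the two candidate positions, which a third cue would disambiguate.

```python
import math

# Illustrative helper, not the paper's code: localize the viewer from
# distances to two landmarks with known map coordinates.
def trilaterate(p1, r1, p2, r2):
    """Candidate viewer positions given two landmarks and a distance to each."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    # No solution if the circles are disjoint or one contains the other.
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from p1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-length of the chord
    mx = x1 + a * (x2 - x1) / d            # foot of the chord on the baseline
    my = y1 + a * (y2 - y1) / d
    ox = h * (y2 - y1) / d                 # perpendicular offset
    oy = h * (x2 - x1) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

# Landmarks at (0, 0) and (4, 0); the viewer measures 2.5 m to each.
print(trilaterate((0.0, 0.0), 2.5, (4.0, 0.0), 2.5))
# -> [(2.0, -1.5), (2.0, 1.5)]
```

Anchoring a newly selected view this way would place it in the global map without transforming every object in every intervening view.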