Matches in SemOpenAlex for { <https://semopenalex.org/work/W2398728119> ?p ?o ?g. }
Showing items 1 to 80 of 80, with 100 items per page.
- W2398728119 abstract "Producing gestures facilitates encoding of spatial relation
  Amy Chong (S1155009099@Mailserv.Cuhk.Edu.Hk), Ben Choi (S1155016253@Mailserv.Cuhk.Edu.Hk), Elena Kwong (S0960192@Mailserv.Cuhk.Edu.Hk) — Department of Psychology, The Chinese University of Hong Kong, Hong Kong
  Jennifer Chan (S1155004719@Mailserv.Cuhk.Edu.Hk), Irina Chong (S1155004024@Mailserv.Cuhk.Edu.Hk), Mavis Ip (S1155003387@Mailserv.Cuhk.Edu.Hk) — Department of Educational Psychology, The Chinese University of Hong Kong, Hong Kong
  Christopher Yeung (S1155003299@Mailserv.Cuhk.Edu.Hk) — Department of Psychology, The Chinese University of Hong Kong, Hong Kong
  Wing Chee So (WINGCHEE@Cuhk.Edu.Hk) — Department of Educational Psychology, The Chinese University of Hong Kong, Hong Kong

  Abstract
  This paper examines whether producing gestures would facilitate encoding of spatial relation in a navigation task. In this experiment, we focused on gestures produced without accompanying speech. Adult participants were asked to study spatial sequence of routes shown in four diagrams, one at a time. Participants rehearsed the routes with gestures, actual hand movements (actually drew the routes on papers), or mental simulation. They then were asked to reconstruct the routes with sticks. Participants who moved their hands (either in the form of gestures or actual drawing) recalled better than those who mentally simulated the routes and those who did not rehearse, suggesting that hand movements produced during rehearsal facilitate encoding of spatial relation. Interestingly, participants who gestured the routes in the air recalled better than those who drew them on papers, suggesting that gesture, as a kind of representational action, exerts more powerful influence on spatial relation encoding.

  Keywords: Gesture; Spatial Cognition; Action; Encoding; Embodied Cognition.

  Introduction
  Spatial knowledge consists of three major skills, including spatial visualization, spatial relation, and spatial orientation (Lohman, 1979). The present study focuses on spatial relation. Understanding relational information enables us to form a spatial representation regarding relation between locations, objects, and paths. Such understanding is particularly useful when we are processing spatial information of how starting points and destinations are considered in relation to one another. Therefore, developing techniques to facilitate encoding of spatial relation has received increasing attention from cognitive and educational psychologists all over the world. In the present study, we examine whether embodied movements like gestures might be effective in encoding spatial relation. Previous research has shown that producing gestures is directly involved in encoding new information, but those studies focused on the mathematics domain. Children who were told to gesture when explaining their solutions to a math problem benefited more from the subsequent math lesson, compared to children who were told not to gesture (Broaders, Cook, Mitchell, & Goldin-Meadow, 2007). Children who were instructed to reproduce teacher's gestures while acquiring new mathematics concepts learnt and memorized mathematics knowledge better than did those who were instructed to reproduce teacher's verbal instructions only (Cook, Mitchell, & Goldin-Meadow, 2008). However, no experimental work has examined whether gestures strengthen spatial relation encoding. Gestures are spontaneous hand movements. They are produced in space, and thus are inherently spatial (McNeill, 1992; 2005). Therefore, learners can exploit the spatial properties of gestures to encode spatial relation between the starting point and destination.
  For example, when encoding spatial sequence of a route, learners may trace the steps with an index finger in the air by moving it to the right, upwards, and to the right again. In fact, gestures and spatial relation are tightly linked. Previous studies have shown that speakers produce co-speech gestures (gestures that are co-occurring with speech) when they convey spatial relation to listeners in speech. For example, they use co-speech gestures to depict spatial layout of an area (Emmorey, Tversky & Taylor, 2000) and spatial arrangement of objects (Sauter, Uttal, Alman, Goldin-Meadow, & Levine, 2012). In addition, previous studies have reported that speakers produce co-speech gestures frequently when they are identifying spatial relation between two characters in narratives (So, Coppola, Liccidarello, & Goldin-Meadow, 2005; So, Kita, & Goldin-Meadow, 2009). They also increase gesture production" @default.
- W2398728119 created "2016-06-24" @default.
- W2398728119 creator A5007926839 @default.
- W2398728119 creator A5017242038 @default.
- W2398728119 creator A5027383703 @default.
- W2398728119 creator A5041614674 @default.
- W2398728119 creator A5054254115 @default.
- W2398728119 creator A5055096066 @default.
- W2398728119 creator A5072007761 @default.
- W2398728119 creator A5088125611 @default.
- W2398728119 date "2013-01-01" @default.
- W2398728119 modified "2023-09-28" @default.
- W2398728119 title "Producing gestures facilitates encoding of spatial relation." @default.
- W2398728119 cites W1546175221 @default.
- W2398728119 cites W1554891029 @default.
- W2398728119 cites W1708911126 @default.
- W2398728119 cites W1967490595 @default.
- W2398728119 cites W1981519548 @default.
- W2398728119 cites W1988585419 @default.
- W2398728119 cites W1991568221 @default.
- W2398728119 cites W2023015865 @default.
- W2398728119 cites W2025668003 @default.
- W2398728119 cites W2035125704 @default.
- W2398728119 cites W2057024948 @default.
- W2398728119 cites W2062758338 @default.
- W2398728119 cites W2063784304 @default.
- W2398728119 cites W2090278863 @default.
- W2398728119 cites W2106128184 @default.
- W2398728119 cites W2110358380 @default.
- W2398728119 cites W2113928197 @default.
- W2398728119 cites W2118565653 @default.
- W2398728119 cites W2126379829 @default.
- W2398728119 cites W2127508752 @default.
- W2398728119 cites W2134774969 @default.
- W2398728119 cites W2135000845 @default.
- W2398728119 cites W2143740992 @default.
- W2398728119 cites W2150375089 @default.
- W2398728119 cites W2157405771 @default.
- W2398728119 cites W2319623371 @default.
- W2398728119 cites W3021182681 @default.
- W2398728119 hasPublicationYear "2013" @default.
- W2398728119 type Work @default.
- W2398728119 sameAs 2398728119 @default.
- W2398728119 citedByCount "0" @default.
- W2398728119 crossrefType "journal-article" @default.
- W2398728119 hasAuthorship W2398728119A5007926839 @default.
- W2398728119 hasAuthorship W2398728119A5017242038 @default.
- W2398728119 hasAuthorship W2398728119A5027383703 @default.
- W2398728119 hasAuthorship W2398728119A5041614674 @default.
- W2398728119 hasAuthorship W2398728119A5054254115 @default.
- W2398728119 hasAuthorship W2398728119A5055096066 @default.
- W2398728119 hasAuthorship W2398728119A5072007761 @default.
- W2398728119 hasAuthorship W2398728119A5088125611 @default.
- W2398728119 hasConcept C125411270 @default.
- W2398728119 hasConcept C138885662 @default.
- W2398728119 hasConcept C15744967 @default.
- W2398728119 hasConcept C180747234 @default.
- W2398728119 hasConcept C207347870 @default.
- W2398728119 hasConcept C25343380 @default.
- W2398728119 hasConcept C41008148 @default.
- W2398728119 hasConcept C41895202 @default.
- W2398728119 hasConcept C77088390 @default.
- W2398728119 hasConceptScore W2398728119C125411270 @default.
- W2398728119 hasConceptScore W2398728119C138885662 @default.
- W2398728119 hasConceptScore W2398728119C15744967 @default.
- W2398728119 hasConceptScore W2398728119C180747234 @default.
- W2398728119 hasConceptScore W2398728119C207347870 @default.
- W2398728119 hasConceptScore W2398728119C25343380 @default.
- W2398728119 hasConceptScore W2398728119C41008148 @default.
- W2398728119 hasConceptScore W2398728119C41895202 @default.
- W2398728119 hasConceptScore W2398728119C77088390 @default.
- W2398728119 hasIssue "35" @default.
- W2398728119 hasLocation W23987281191 @default.
- W2398728119 hasOpenAccess W2398728119 @default.
- W2398728119 hasPrimaryLocation W23987281191 @default.
- W2398728119 hasVolume "35" @default.
- W2398728119 isParatext "false" @default.
- W2398728119 isRetracted "false" @default.
- W2398728119 magId "2398728119" @default.
- W2398728119 workType "article" @default.
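The listing above corresponds to the triple pattern in the page header. A minimal sketch of reproducing it programmatically, assuming SemOpenAlex exposes a public SPARQL endpoint at `https://semopenalex.org/sparql` (the exact URL should be checked against the current site; the `?g` graph variable from the header is dropped here for simplicity):

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint URL; verify against the SemOpenAlex documentation.
ENDPOINT = "https://semopenalex.org/sparql"


def build_query(work_iri: str) -> str:
    """Build the predicate/object listing query shown in the page header."""
    return f"SELECT ?p ?o WHERE {{ <{work_iri}> ?p ?o . }}"


def fetch_triples(work_iri: str):
    """Send the query over HTTP GET and return the JSON result bindings.

    Requires network access; results arrive in the standard
    application/sparql-results+json format.
    """
    params = urllib.parse.urlencode({
        "query": build_query(work_iri),
        "format": "application/sparql-results+json",
    })
    with urllib.request.urlopen(f"{ENDPOINT}?{params}") as resp:
        return json.load(resp)["results"]["bindings"]


if __name__ == "__main__":
    # Print the query for the work shown on this page.
    print(build_query("https://semopenalex.org/work/W2398728119"))
```

Each returned binding would carry a `p` and an `o` entry, mirroring the predicate/object pairs listed above (abstract, creator, cites, hasConcept, and so on).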