Matches in SemOpenAlex for { <https://semopenalex.org/work/W2765889110> ?p ?o ?g. }
Showing items 1 to 97 of 97, with 100 items per page.
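The listing below was produced by the basic graph pattern shown in the heading. As a minimal sketch (assuming the public SemOpenAlex SPARQL endpoint is reachable at https://semopenalex.org/sparql, which this page does not state), the same rows can be retrieved programmatically over the standard SPARQL Protocol:

```python
# Sketch: fetch every predicate/object/graph binding for work W2765889110
# from SemOpenAlex via the SPARQL Protocol (HTTP GET, JSON results).
# The endpoint URL is an assumption, not taken from this page.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint
QUERY = """
SELECT ?p ?o ?g WHERE {
  GRAPH ?g { <https://semopenalex.org/work/W2765889110> ?p ?o . }
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Each binding corresponds to one row of the listing below.
for row in response.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```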
- W2765889110 abstract "Agents and Affordances: Listeners Look for What They Don’t Hear.
Caitlin M. Fausey (cmfausey@psych.stanford.edu), Department of Psychology, Stanford University, Stanford, CA 94305 USA. Teenie Matlock (tmatlock@ucmerced.edu), Cognitive Science Program, University of California, Merced, Merced, CA 95344 USA. Daniel C. Richardson (dcr@ucsc.edu), Department of Psychology, University of California, Santa Cruz, Santa Cruz, CA 95064 USA.
Abstract: How do implicit aspects of language guide overt perceptual behavior? In this eyetracking study, we examined whether different ways of describing objects and actions would influence the visual processing of objects with affordances. Specifically, we were interested in the effect of different information about the agent of an action. English-speaking adults viewed objects with interactive regions, such as handles, knobs or buttons. Participants viewed each object after listening to a sentence with or without information about an agent. Participants were faster to fixate the interactive region of objects after hearing non-agentive language than after hearing agentive language, as if they were searching to fill an “agent information gap”. These results may inform theories about how global knowledge and local linguistic information mutually determine visual inspection of objects.
Keywords: Affordances; Language-mediated eye movements.
Introduction: Much of our everyday understanding of physical objects is grounded in affordances. This includes tacit knowledge about how objects are canonically oriented, what they are used for and, critically, how we interact with them. We know, for instance, that pitchers have handles for pouring, cars have steering wheels for driving and guns have triggers for shooting. The current study examines object affordances at the interface of language and visual processing. Do different linguistic environments change how people visually inspect objects that afford human action? Specifically, how might language that differentially codes for agency guide attention to interactive regions of these objects?
The notion that visual, motor and linguistic representations are tightly linked has received empirical support in recent years (e.g., Barsalou, 1999; Glenberg, 1997; Pecher & Zwaan, 2005). For example, Tucker and Ellis (1998) found that people were faster to judge whether a cup was right side up or upside down when the cup handle was on the same side of the screen as the hand with which they made their response than when the handle was on the opposite side of the response hand. Glenberg and colleagues have observed similar “action compatibility effects” in language comprehension and judgment tasks. For example, when discriminating sensible from nonsensical sentences, participants answered fastest when the location of the response was consistent with the movement described by the sensible sentence, as in pressing a button close to the body after reading “open the drawer” (Glenberg & Kaschak, 2002). In a part-judgment task, participants were faster to verify parts toward the upper half of objects when they made responses requiring upward movement, and lower parts with downward movement (Borghi, Glenberg & Kaschak, 2004).
Evidence for tight links between semantic and motor representations of object affordances has been found when movements themselves are the dependent measure. Creem and Proffitt (2001) found different grasping behavior when participants either did or did not concurrently perform a semantic task while grasping. Without an additional task (or with an unrelated spatial task), participants grasped objects such as combs, spatulas and paintbrushes by their handles. When completing a concurrent semantic task, this normal grasping behavior was disrupted. This effect suggests that normal object-directed movement relies on semantic knowledge about object affordances. Even when no overt response is required of experimental participants, representations of object affordances may still be active. One source of evidence in support of this claim is the finding that neural circuits that are activated during grasping are also activated when people simply view manipulable objects (Chao & Martin, 2000). Additional evidence that suggests an automatic activation of knowledge about object affordances comes from eyetracking studies.
Affordances and eyetracking: Eyetracking provides one measure of how people integrate background knowledge, language and visual information in real time. Researchers have studied the interaction of eye movements and linguistic processing in various ways: many studies have examined the contribution of eye movements to resolving ambiguities in sentence understanding (e.g., Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy, 1995), while others have reversed the question and examined the influence of language itself on visual processing (e.g., Richardson & Matlock, 2007)." @default.
- W2765889110 created "2017-11-10" @default.
- W2765889110 creator A5007536904 @default.
- W2765889110 creator A5020019379 @default.
- W2765889110 creator A5045768804 @default.
- W2765889110 date "2007-01-01" @default.
- W2765889110 modified "2023-09-26" @default.
- W2765889110 title "Agents and Affordances: Listeners Look for What They Don't Hear" @default.
- W2765889110 cites W1543588706 @default.
- W2765889110 cites W1966612647 @default.
- W2765889110 cites W1973362993 @default.
- W2765889110 cites W2020755048 @default.
- W2765889110 cites W2052229204 @default.
- W2765889110 cites W2053617512 @default.
- W2765889110 cites W2055574465 @default.
- W2765889110 cites W2059731297 @default.
- W2765889110 cites W2062489145 @default.
- W2765889110 cites W2064290905 @default.
- W2765889110 cites W2077116199 @default.
- W2765889110 cites W2143740992 @default.
- W2765889110 cites W2149496207 @default.
- W2765889110 cites W2150375089 @default.
- W2765889110 cites W2167293745 @default.
- W2765889110 cites W2170080641 @default.
- W2765889110 cites W2184811281 @default.
- W2765889110 cites W2767792555 @default.
- W2765889110 hasPublicationYear "2007" @default.
- W2765889110 type Work @default.
- W2765889110 sameAs 2765889110 @default.
- W2765889110 citedByCount "0" @default.
- W2765889110 crossrefType "journal-article" @default.
- W2765889110 hasAuthorship W2765889110A5007536904 @default.
- W2765889110 hasAuthorship W2765889110A5020019379 @default.
- W2765889110 hasAuthorship W2765889110A5045768804 @default.
- W2765889110 hasConcept C107038049 @default.
- W2765889110 hasConcept C127413603 @default.
- W2765889110 hasConcept C138885662 @default.
- W2765889110 hasConcept C154945302 @default.
- W2765889110 hasConcept C15744967 @default.
- W2765889110 hasConcept C180747234 @default.
- W2765889110 hasConcept C188147891 @default.
- W2765889110 hasConcept C194995250 @default.
- W2765889110 hasConcept C199360897 @default.
- W2765889110 hasConcept C201995342 @default.
- W2765889110 hasConcept C2777530160 @default.
- W2765889110 hasConcept C2780226923 @default.
- W2765889110 hasConcept C2780451532 @default.
- W2765889110 hasConcept C2781238097 @default.
- W2765889110 hasConcept C41008148 @default.
- W2765889110 hasConcept C46312422 @default.
- W2765889110 hasConcept C511192102 @default.
- W2765889110 hasConceptScore W2765889110C107038049 @default.
- W2765889110 hasConceptScore W2765889110C127413603 @default.
- W2765889110 hasConceptScore W2765889110C138885662 @default.
- W2765889110 hasConceptScore W2765889110C154945302 @default.
- W2765889110 hasConceptScore W2765889110C15744967 @default.
- W2765889110 hasConceptScore W2765889110C180747234 @default.
- W2765889110 hasConceptScore W2765889110C188147891 @default.
- W2765889110 hasConceptScore W2765889110C194995250 @default.
- W2765889110 hasConceptScore W2765889110C199360897 @default.
- W2765889110 hasConceptScore W2765889110C201995342 @default.
- W2765889110 hasConceptScore W2765889110C2777530160 @default.
- W2765889110 hasConceptScore W2765889110C2780226923 @default.
- W2765889110 hasConceptScore W2765889110C2780451532 @default.
- W2765889110 hasConceptScore W2765889110C2781238097 @default.
- W2765889110 hasConceptScore W2765889110C41008148 @default.
- W2765889110 hasConceptScore W2765889110C46312422 @default.
- W2765889110 hasConceptScore W2765889110C511192102 @default.
- W2765889110 hasIssue "29" @default.
- W2765889110 hasLocation W27658891101 @default.
- W2765889110 hasOpenAccess W2765889110 @default.
- W2765889110 hasPrimaryLocation W27658891101 @default.
- W2765889110 hasRelatedWork W116591313 @default.
- W2765889110 hasRelatedWork W2011348940 @default.
- W2765889110 hasRelatedWork W2036130142 @default.
- W2765889110 hasRelatedWork W205311872 @default.
- W2765889110 hasRelatedWork W2066289073 @default.
- W2765889110 hasRelatedWork W2191940983 @default.
- W2765889110 hasRelatedWork W2337307553 @default.
- W2765889110 hasRelatedWork W2748027738 @default.
- W2765889110 hasRelatedWork W2758009964 @default.
- W2765889110 hasRelatedWork W2899332334 @default.
- W2765889110 hasRelatedWork W2899686780 @default.
- W2765889110 hasRelatedWork W2950582775 @default.
- W2765889110 hasRelatedWork W3124132502 @default.
- W2765889110 hasRelatedWork W3185801809 @default.
- W2765889110 hasRelatedWork W3186521937 @default.
- W2765889110 hasRelatedWork W3192452456 @default.
- W2765889110 hasRelatedWork W3211737272 @default.
- W2765889110 hasRelatedWork W606357357 @default.
- W2765889110 hasRelatedWork W76692423 @default.
- W2765889110 hasRelatedWork W2303000835 @default.
- W2765889110 hasVolume "29" @default.
- W2765889110 isParatext "false" @default.
- W2765889110 isRetracted "false" @default.
- W2765889110 magId "2765889110" @default.
- W2765889110 workType "article" @default.
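The predicate local names in the listing (for example cites, hasRelatedWork, hasConcept, hasAuthorship) make it easy to regroup the raw bindings by relation. A small post-processing sketch building on the result of the query snippet above; the helpers local_name and group_bindings are illustrative, and grouping by the last segment of the predicate IRI is an assumption, since the full IRIs are not shown on this page:

```python
# Sketch: group the ?p/?o bindings from the query above by the predicate's
# local name (the part after the last '/' or '#'), e.g. to collect every
# work that W2765889110 cites. Grouping by local name is an assumption.
from collections import defaultdict
import re

def local_name(iri: str) -> str:
    """Return the IRI fragment after the last '/' or '#'."""
    return re.split(r"[/#]", iri)[-1]

def group_bindings(bindings):
    """Map each predicate's local name to the list of its object values."""
    grouped = defaultdict(list)
    for row in bindings:
        grouped[local_name(row["p"]["value"])].append(row["o"]["value"])
    return grouped

# Example usage with the `response` object from the previous snippet:
# grouped = group_bindings(response.json()["results"]["bindings"])
# print(grouped["cites"])           # the 18 cited works listed above
# print(grouped["hasRelatedWork"])  # the 20 related works listed above
# print(grouped["title"])           # ["Agents and Affordances: ..."]
```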