Matches in SemOpenAlex for { <https://semopenalex.org/work/W2783001876> ?p ?o ?g. }
Showing items 1 to 69 of 69, with 100 items per page.
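The listing above is the result set of the quad pattern `{ <https://semopenalex.org/work/W2783001876> ?p ?o ?g. }`. A minimal sketch of how one might reproduce it programmatically, assuming SemOpenAlex serves a public SPARQL endpoint at `https://semopenalex.org/sparql` (an assumption here; verify against the service documentation), using only the Python standard library:

```python
# Sketch: rebuilding the quad pattern shown above as a SPARQL query.
# The endpoint URL below is an assumption, not confirmed by this page.
from urllib.parse import urlencode

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint location

def build_work_query(work_id: str) -> str:
    """Build a SELECT query for all (predicate, object, graph) of one work IRI."""
    iri = f"https://semopenalex.org/work/{work_id}"
    return (
        "SELECT ?p ?o ?g WHERE { "
        f"GRAPH ?g {{ <{iri}> ?p ?o . }} "
        "}"
    )

query = build_work_query("W2783001876")
request_url = ENDPOINT + "?" + urlencode({"query": query})
# To actually execute it (network access required), one would typically send
# the request with an "Accept: application/sparql-results+json" header:
#   import urllib.request, json
#   req = urllib.request.Request(
#       request_url, headers={"Accept": "application/sparql-results+json"})
#   with urllib.request.urlopen(req) as resp:
#       bindings = json.load(resp)["results"]["bindings"]
```

Querying `GRAPH ?g` matches the `?g` variable in the pattern above; the `@default` suffix on each row indicates the objects here all reside in the default graph.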
- W2783001876 abstract "Perceptually Grounded Word Meaning Acquisition: A Computational Model Claudius Gläser (claudius.glaeser@honda-ri.de) Honda Research Institute Europe Carl-Legien-Strasse 30, 63073 Offenbach, Germany Frank Joublin (frank.joublin@honda-ri.de) Honda Research Institute Europe Carl-Legien-Strasse 30, 63073 Offenbach, Germany Abstract We present a computational model for the incremental acquisition of word meanings. Inspired by Complementary Learning Systems theory, the model comprises different components which are specifically tailored to satisfy the contradictory needs of (1) rapid memorization of word-scene associations and (2) statistical feature extraction to reveal word meanings. Both components are recurrently coupled to achieve a memory consolidation. This process reflects itself in a gradual transfer of the knowledge about a word's meaning into the extracted features. Thereby, the internal representation of a word becomes more efficient and robust. We present simulation results for a visual scene description task in which words describing the relations between objects have been trained. This includes relations in size, color, and position. The results demonstrate our model's capability to acquire word meanings from few training exemplars. We further show that the model correctly extracts word meaning-relevant features and therefore perceptually grounds the words. Keywords: Word Learning; Computational Model; Complementary Learning Systems; Categorization Introduction When hearing a novel word, a language learner has to associate the word with its meaning. Establishing such word-meaning mappings is an inherently difficult task as the learner initially cannot know to what the word refers. Quine (1960) illustrated this problem with the example of a stranger who hears a native saying "gavagai" after seeing a rabbit. How can the stranger determine the meaning of "gavagai"?
It may refer to the rabbit, a part of the rabbit, its color, any fast moving animal, or even that a rabbit is tasty. This problem, usually referred to as referential uncertainty, cannot be solved from a single word-scene pairing. Rather, the use of the word in different contexts enables the learner to extract its meaning. Nevertheless, children learn the meaning of words from few exposures to them. They rapidly construct hypotheses about word meanings, which may initially be linked to specific contexts in which the words occurred. Over time, however, children generalize among different observations, even though this may result in an overextension of a word's use (MacWhinney, 1998). This remarkable ability of children has been subject to many studies and resulted in numerous theories on early word learning. In this paper we present a computational model for the incremental acquisition of word meanings which is inspired by the learning capabilities of children. More precisely, the system has been designed to rapidly build internal representations of words from few training samples. The thus acquired knowledge can be used to generalize to previously unseen scenes. Moreover, the framework is endowed with a learning mechanism that extracts features which are relevant to the core meaning of a word. This is done by exploiting the statistical evidence which arises from a word's use in different contexts. Our model tightly couples the rapid memorization of word-scene associations with the statistical feature extraction. This results in learning dynamics which resemble a gradual knowledge transfer and consolidation. We will present experimental results which validate the model. Therefore, the model has been applied in a simulated visual scene description task where words for the relations between pairs of geometric objects have been trained. This includes relations in position, color, and size.
The results from this experiment illustrate that our model rapidly acquires word meanings from few training exemplars and further extracts word meaning-relevant features. The remainder of this paper is organized as follows. Next, we will review existing approaches for word meaning acquisition and relate our model to them. Afterwards, we will state contradictory needs that computational models have to satisfy. We proceed with the presentation of our computational model and subsequently show experimental results for it. Finally, we give a summary and outline our future work. Related Work Existing computational models address different levels of referential uncertainty. Firstly, there are approaches which consider the problem of how a learner establishes a mapping between words and a set of pre-defined meanings (e.g. Siskind, 1996; K. Smith, Smith, Blythe, & Vogt, 2006; Fontanari, Tikhanoff, Cangelosi, Ilin, & Perlovsky, 2009). In these models the first occurrence of a word typically induces multiple hypotheses about its meaning. These hypotheses become subsequently pruned either by incorporating learning constraints (Markman, 1990) or via cross-situational learning (L. Smith & Yu, 2008) - a technique making use of the statistical evidence across many individually ambiguous word-scene pairings. However, these models disregard the fact that learners can seldom rely on a set of pre-established concepts. Word meanings rather become flexibly constructed and shaped through language use (Boroditsky, 2001)." @default.
- W2783001876 created "2018-01-26" @default.
- W2783001876 creator A5042767682 @default.
- W2783001876 creator A5046877330 @default.
- W2783001876 date "2010-01-01" @default.
- W2783001876 modified "2023-09-23" @default.
- W2783001876 title "Perceptually Grounded Word Meaning Acquisition: A Computational Model - eScholarship" @default.
- W2783001876 hasPublicationYear "2010" @default.
- W2783001876 type Work @default.
- W2783001876 sameAs 2783001876 @default.
- W2783001876 citedByCount "0" @default.
- W2783001876 crossrefType "journal-article" @default.
- W2783001876 hasAuthorship W2783001876A5042767682 @default.
- W2783001876 hasAuthorship W2783001876A5046877330 @default.
- W2783001876 hasConcept C138885662 @default.
- W2783001876 hasConcept C154945302 @default.
- W2783001876 hasConcept C15744967 @default.
- W2783001876 hasConcept C162324750 @default.
- W2783001876 hasConcept C187736073 @default.
- W2783001876 hasConcept C204321447 @default.
- W2783001876 hasConcept C2780451532 @default.
- W2783001876 hasConcept C2780876879 @default.
- W2783001876 hasConcept C41008148 @default.
- W2783001876 hasConcept C41895202 @default.
- W2783001876 hasConcept C542102704 @default.
- W2783001876 hasConcept C90805587 @default.
- W2783001876 hasConcept C94124525 @default.
- W2783001876 hasConceptScore W2783001876C138885662 @default.
- W2783001876 hasConceptScore W2783001876C154945302 @default.
- W2783001876 hasConceptScore W2783001876C15744967 @default.
- W2783001876 hasConceptScore W2783001876C162324750 @default.
- W2783001876 hasConceptScore W2783001876C187736073 @default.
- W2783001876 hasConceptScore W2783001876C204321447 @default.
- W2783001876 hasConceptScore W2783001876C2780451532 @default.
- W2783001876 hasConceptScore W2783001876C2780876879 @default.
- W2783001876 hasConceptScore W2783001876C41008148 @default.
- W2783001876 hasConceptScore W2783001876C41895202 @default.
- W2783001876 hasConceptScore W2783001876C542102704 @default.
- W2783001876 hasConceptScore W2783001876C90805587 @default.
- W2783001876 hasConceptScore W2783001876C94124525 @default.
- W2783001876 hasIssue "32" @default.
- W2783001876 hasLocation W27830018761 @default.
- W2783001876 hasOpenAccess W2783001876 @default.
- W2783001876 hasPrimaryLocation W27830018761 @default.
- W2783001876 hasRelatedWork W144381363 @default.
- W2783001876 hasRelatedWork W1488195538 @default.
- W2783001876 hasRelatedWork W1499050190 @default.
- W2783001876 hasRelatedWork W1605995731 @default.
- W2783001876 hasRelatedWork W165330127 @default.
- W2783001876 hasRelatedWork W207795388 @default.
- W2783001876 hasRelatedWork W2082821553 @default.
- W2783001876 hasRelatedWork W2097173997 @default.
- W2783001876 hasRelatedWork W2113135711 @default.
- W2783001876 hasRelatedWork W2251130882 @default.
- W2783001876 hasRelatedWork W2401823607 @default.
- W2783001876 hasRelatedWork W2403975755 @default.
- W2783001876 hasRelatedWork W2405727694 @default.
- W2783001876 hasRelatedWork W2408454585 @default.
- W2783001876 hasRelatedWork W2506338715 @default.
- W2783001876 hasRelatedWork W2615352406 @default.
- W2783001876 hasRelatedWork W2773044646 @default.
- W2783001876 hasRelatedWork W2945944040 @default.
- W2783001876 hasRelatedWork W3753517 @default.
- W2783001876 hasRelatedWork W58377777 @default.
- W2783001876 hasVolume "32" @default.
- W2783001876 isParatext "false" @default.
- W2783001876 isRetracted "false" @default.
- W2783001876 magId "2783001876" @default.
- W2783001876 workType "article" @default.
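The listing rows all follow the same shape: a `- ` marker, the subject ID, a predicate, an object, and the `@default.` graph suffix. A minimal sketch of turning such lines into a predicate-to-objects mapping (the line format is inferred from this dump; note that multi-line literals like the abstract above would need to be joined into one line first):

```python
# Sketch: parse listing lines of the form
#   "- W2783001876 <predicate> <object> @default."
# into a predicate -> list-of-objects mapping. Format inferred from this dump;
# multi-line literal values (e.g. the abstract) must be pre-joined.
from collections import defaultdict

def parse_listing(lines):
    triples = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line.startswith("- ") or not line.endswith("@default."):
            continue  # skip header lines and other residue
        # drop the leading "- " marker and the trailing graph suffix
        body = line[2:-len("@default.")].strip()
        subject, predicate, obj = body.split(" ", 2)
        triples[predicate].append(obj)
    return triples

sample = [
    '- W2783001876 creator A5042767682 @default.',
    '- W2783001876 creator A5046877330 @default.',
    '- W2783001876 citedByCount "0" @default.',
]
parsed = parse_listing(sample)
# parsed["creator"] -> ["A5042767682", "A5046877330"]
```

Grouping by predicate makes the repeated-predicate rows (the twenty `hasRelatedWork` entries, the thirteen `hasConcept` entries) easy to collect into lists.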