Matches in SemOpenAlex for { <https://semopenalex.org/work/W58377777> ?p ?o ?g. }
Showing items 1 to 96 of 96, with 100 items per page.
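The pattern in the header is the browser's quad form of the lookup. A minimal SPARQL sketch that should return the same matches, assuming the public endpoint at https://semopenalex.org/sparql and a named-graph (GRAPH) layout (both are assumptions, not shown in this listing):

    # Sketch only: endpoint and GRAPH form are assumed, not confirmed by this page.
    SELECT ?p ?o ?g
    WHERE {
      GRAPH ?g {
        <https://semopenalex.org/work/W58377777> ?p ?o .
      }
    }
    LIMIT 100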
- W58377777 abstract "Informative Communication in Word Production and Word Learning Michael C. Frank, Noah D. Goodman, Peter Lai, and Joshua B. Tenenbaum {mcfrank, ndg, peterlai, jbt}@mit.edu Department of Brain and Cognitive Sciences Massachusetts Institute of Technology Abstract Language does not directly code facts about the world. In- stead, speakers and listeners rely on shared assumptions to al- low them to communicate more efficiently. Writers like Grice and Sperber & Wilson have proposed that communication is assumed to be “informative” or “relevant,” but the predictions of these accounts are often informal or post-hoc. Here we pro- pose a formal analogue to these accounts: that communicators choose what they want to say by how informative it would be about their intended meaning. We derive quantitative predic- tions about how this assumption would be used in language production and learning and test these predictions via two ex- periments. This work takes a first step towards formalizing the pragmatic assumptions necessary for effective communication in under-constrained, real-world situations. Keywords: Language acquisition; Bayesian modeling; Com- munication Introduction How does language work to communicate information from one person to another? Perhaps language is simply a code for facts about the world. On this kind of coding view of communication, all the information necessary to understand an utterance is contained within it. Speakers utter linguistic expressions equivalent to their intended meanings and listen- ers simply decode these expressions to recover their content. There are a profusion of examples of language use, however, which can be natural and easy to understand but are not easily explained by a naive coding model: (1) The statement “I ate some of the cookies.” (Intended meaning: I ate some and not all of the cookies). (2) The declaration “No.” (Intended meaning: I can tell you want to pinch him, but don’t do it). (3) The contextual introduction of a new word “Can I have the glorzit?” (Intended meaning: pass me that thing, which happens to be called a “glorzit”). Philosophers and linguists interested in this problem have suggested that language relies on shared assumptions about the nature of the communicative task. Grice (1975) proposed that speakers follow (and are assumed by comprehenders to follow) a set of maxims, such as “be relevant”, or “make your contribution to the conversation as informative as neces- sary.” Sperber & Wilson (1986) have suggested that there is a shared “Principle of Relevance” which underlies communica- tion. Clark (1996) has argued that communication proceeds by reference to a shared “common ground.” Though these proposals differ in their details, they share a basic assumption that communicators are not simply cod- ing and decoding meanings. Instead, listeners are making in- ferences about speakers’ intentions, taking into account the words they utter and the context of their utterances. This kind of intentional inference framework for language seems much more promising for explaining phenomena like (1-3). But although these ideas seem intuitively correct, the difficulty of formalizing notions like “relevance” has largely kept them from making contact with computational theories of language use and acquisition. The goal of this paper is to begin to address this issue by proposing a computational framework for intentional infer- ence. 
This framework relies on a shared assumption that com- munications are informative given the context. Although the basis of our framework is general, making predictions within it requires a model of the space of possible meanings and how they map to natural language expressions. Thus, in or- der to make a first test of our framework, we study simple games that are similar to the “language games” proposed by Wittgenstein (1953). In the language games we study, the shared task of commu- nicators is to identify an object from a set using one or a few words. This very restricted task allows us to define the possi- ble meanings that communicators entertain. We then use our framework to make predictions about the meaning and use of single words. This move allows us to define an intuitive mapping between words and meanings: that a word stands for the subset of the context it picks out (its extension). Al- though these two simplifications do bring our tasks further away from natural language use, they also allow us to derive strong quantitative predictions from our framework. The outline of the paper is as follows. We first use our framework to derive predictions for speakers and language learners who assume informative communication in an infer- ential framework. We then test our framework as an account of two different kinds of tasks. Experiment 1 examines, in a simple survey task, whether learners who are inferring the meaning of a novel word assume that speakers are being in- formative in choosing the word they produce. Experiment 2 tests whether, in a more naturalistic production task, speak- ers’ word choice is in fact related to the informativeness of the word they pick. Modeling Informative Communication Consider the context in Figure 1, representing the context in a language game. Imagine an English speaker in this game who is told to use a single word to point out the red circle." @default.
- W58377777 created "2016-06-24" @default.
- W58377777 creator A5001961716 @default.
- W58377777 creator A5057019106 @default.
- W58377777 creator A5071093940 @default.
- W58377777 creator A5083910193 @default.
- W58377777 date "2009-01-01" @default.
- W58377777 modified "2023-09-23" @default.
- W58377777 title "Informative communication in word production and word learning" @default.
- W58377777 cites W1571929606 @default.
- W58377777 cites W1969787028 @default.
- W58377777 cites W2016429292 @default.
- W58377777 cites W2027796863 @default.
- W58377777 cites W2099111195 @default.
- W58377777 cites W2134145060 @default.
- W58377777 cites W2141038596 @default.
- W58377777 cites W2264742718 @default.
- W58377777 hasPublicationYear "2009" @default.
- W58377777 type Work @default.
- W58377777 sameAs 58377777 @default.
- W58377777 citedByCount "18" @default.
- W58377777 countsByYear W583777772012 @default.
- W58377777 countsByYear W583777772014 @default.
- W58377777 countsByYear W583777772016 @default.
- W58377777 countsByYear W583777772017 @default.
- W58377777 countsByYear W583777772019 @default.
- W58377777 countsByYear W583777772020 @default.
- W58377777 countsByYear W583777772021 @default.
- W58377777 crossrefType "journal-article" @default.
- W58377777 hasAuthorship W58377777A5001961716 @default.
- W58377777 hasAuthorship W58377777A5057019106 @default.
- W58377777 hasAuthorship W58377777A5071093940 @default.
- W58377777 hasAuthorship W58377777A5083910193 @default.
- W58377777 hasConcept C11693617 @default.
- W58377777 hasConcept C138885662 @default.
- W58377777 hasConcept C154945302 @default.
- W58377777 hasConcept C15744967 @default.
- W58377777 hasConcept C169760540 @default.
- W58377777 hasConcept C169900460 @default.
- W58377777 hasConcept C195324797 @default.
- W58377777 hasConcept C204321447 @default.
- W58377777 hasConcept C2775852435 @default.
- W58377777 hasConcept C2776264592 @default.
- W58377777 hasConcept C2777415597 @default.
- W58377777 hasConcept C2780876879 @default.
- W58377777 hasConcept C41008148 @default.
- W58377777 hasConcept C41895202 @default.
- W58377777 hasConcept C542102704 @default.
- W58377777 hasConcept C74672266 @default.
- W58377777 hasConcept C89267518 @default.
- W58377777 hasConceptScore W58377777C11693617 @default.
- W58377777 hasConceptScore W58377777C138885662 @default.
- W58377777 hasConceptScore W58377777C154945302 @default.
- W58377777 hasConceptScore W58377777C15744967 @default.
- W58377777 hasConceptScore W58377777C169760540 @default.
- W58377777 hasConceptScore W58377777C169900460 @default.
- W58377777 hasConceptScore W58377777C195324797 @default.
- W58377777 hasConceptScore W58377777C204321447 @default.
- W58377777 hasConceptScore W58377777C2775852435 @default.
- W58377777 hasConceptScore W58377777C2776264592 @default.
- W58377777 hasConceptScore W58377777C2777415597 @default.
- W58377777 hasConceptScore W58377777C2780876879 @default.
- W58377777 hasConceptScore W58377777C41008148 @default.
- W58377777 hasConceptScore W58377777C41895202 @default.
- W58377777 hasConceptScore W58377777C542102704 @default.
- W58377777 hasConceptScore W58377777C74672266 @default.
- W58377777 hasConceptScore W58377777C89267518 @default.
- W58377777 hasIssue "31" @default.
- W58377777 hasLocation W583777771 @default.
- W58377777 hasOpenAccess W58377777 @default.
- W58377777 hasPrimaryLocation W583777771 @default.
- W58377777 hasRelatedWork W144381363 @default.
- W58377777 hasRelatedWork W1488195538 @default.
- W58377777 hasRelatedWork W1495039577 @default.
- W58377777 hasRelatedWork W1533917153 @default.
- W58377777 hasRelatedWork W165330127 @default.
- W58377777 hasRelatedWork W1969787028 @default.
- W58377777 hasRelatedWork W1974955132 @default.
- W58377777 hasRelatedWork W1993979041 @default.
- W58377777 hasRelatedWork W2002206192 @default.
- W58377777 hasRelatedWork W2018725327 @default.
- W58377777 hasRelatedWork W2024754409 @default.
- W58377777 hasRelatedWork W2141038596 @default.
- W58377777 hasRelatedWork W2144874553 @default.
- W58377777 hasRelatedWork W2151516755 @default.
- W58377777 hasRelatedWork W2264742718 @default.
- W58377777 hasRelatedWork W2396032199 @default.
- W58377777 hasRelatedWork W2401314204 @default.
- W58377777 hasRelatedWork W2577547690 @default.
- W58377777 hasRelatedWork W2588887699 @default.
- W58377777 hasRelatedWork W2783001876 @default.
- W58377777 hasVolume "31" @default.
- W58377777 isParatext "false" @default.
- W58377777 isRetracted "false" @default.
- W58377777 magId "58377777" @default.
- W58377777 workType "article" @default.
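As a follow-up sketch, the creator IDs listed above could be resolved to author names. The dcterms:creator and foaf:name predicates used here are assumptions based on common Linked Data vocabularies; the actual predicate URIs behind the abbreviated labels in this listing should be checked before use.

    PREFIX dcterms: <http://purl.org/dc/terms/>
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>

    # Assumed vocabulary: authorship via dcterms:creator, author names via foaf:name.
    SELECT ?author ?name
    WHERE {
      <https://semopenalex.org/work/W58377777> dcterms:creator ?author .
      ?author foaf:name ?name .
    }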