Matches in SemOpenAlex for { <https://semopenalex.org/work/W49630116> ?p ?o ?g. }
Showing items 1 to 84 of 84, with 100 items per page.
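The match pattern above can be reproduced programmatically. Below is a minimal sketch of issuing that query over HTTP; the endpoint URL (`https://semopenalex.org/sparql`) and the JSON result shape are assumptions about the public service, not taken from this listing, and the request is not executed here.

```python
# Hedged sketch: building and (optionally) running the query shown above
# against SemOpenAlex. Endpoint URL and response format are assumptions.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public SPARQL endpoint
WORK = "https://semopenalex.org/work/W49630116"

# Same triple pattern as the match above: all predicate/object pairs for the work.
query = f"SELECT ?p ?o WHERE {{ <{WORK}> ?p ?o . }}"

def fetch_triples(endpoint: str, q: str) -> list:
    """POST the query and parse SPARQL JSON results (requires network access)."""
    data = urllib.parse.urlencode({"query": q}).encode()
    req = urllib.request.Request(
        endpoint, data=data,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]

# fetch_triples(ENDPOINT, query) would return the 84 bindings listed below;
# it is left uncalled so the sketch stays offline.
```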
- W49630116 abstract "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented, and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. 
Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference immediately leads to a computational account of how theories might be learned. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can help to explain both knowledge representation and learning. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages that are richer than propositional logic. 
The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. Methods for learning predicate logic theories have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework, and introduces a specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem, and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on the possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse" @default.
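The abstract's central mechanism — a prior over theories that favors short descriptions in a compositional language, combined with Bayesian inference — can be sketched numerically. The theory descriptions and likelihood values below are invented for illustration; only the form of the prior (probability decaying with description length) and the use of Bayes' rule follow the abstract.

```python
# Hedged sketch of a description-length prior plus Bayes' rule, as the
# abstract describes. Candidate theories and likelihoods are made up.

def prior(theory: str) -> float:
    # Shorter descriptions get exponentially higher prior probability:
    # P(T) proportional to 2^(-len(T)), a simple description-length prior.
    return 2.0 ** (-len(theory))

def posterior(candidates: dict) -> dict:
    # candidates maps a theory's description to P(data | theory).
    # Bayes' rule: P(T | data) proportional to P(data | T) * P(T).
    unnorm = {t: lik * prior(t) for t, lik in candidates.items()}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

# Invented candidates for the spouse example: a bare symmetry theory vs.
# a longer theory that also encodes the male/non-male marriage rule.
candidates = {
    "symmetric(S)": 0.02,
    "symmetric(S) & male/non-male marriage rule": 0.10,
}
post = posterior(candidates)
# The longer theory fits the data better (higher likelihood), but the
# length prior strongly favors the shorter one at these toy values.
```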
- W49630116 created "2016-06-24" @default.
- W49630116 creator A5001961716 @default.
- W49630116 creator A5071093940 @default.
- W49630116 creator A5080087902 @default.
- W49630116 date "2008-01-01" @default.
- W49630116 modified "2023-09-26" @default.
- W49630116 title "Theory Acquisition and the Language of Thought" @default.
- W49630116 cites W1778685146 @default.
- W49630116 cites W2081981374 @default.
- W49630116 cites W2093777421 @default.
- W49630116 cites W2098314833 @default.
- W49630116 cites W2142336306 @default.
- W49630116 cites W2146722303 @default.
- W49630116 cites W2169382901 @default.
- W49630116 cites W2799061192 @default.
- W49630116 hasPublicationYear "2008" @default.
- W49630116 type Work @default.
- W49630116 sameAs 49630116 @default.
- W49630116 citedByCount "17" @default.
- W49630116 countsByYear W496301162012 @default.
- W49630116 countsByYear W496301162014 @default.
- W49630116 countsByYear W496301162016 @default.
- W49630116 countsByYear W496301162017 @default.
- W49630116 countsByYear W496301162018 @default.
- W49630116 crossrefType "journal-article" @default.
- W49630116 hasAuthorship W49630116A5001961716 @default.
- W49630116 hasAuthorship W49630116A5071093940 @default.
- W49630116 hasAuthorship W49630116A5080087902 @default.
- W49630116 hasConcept C111472728 @default.
- W49630116 hasConcept C138885662 @default.
- W49630116 hasConcept C145420912 @default.
- W49630116 hasConcept C15744967 @default.
- W49630116 hasConcept C162324750 @default.
- W49630116 hasConcept C180747234 @default.
- W49630116 hasConcept C187736073 @default.
- W49630116 hasConcept C188147891 @default.
- W49630116 hasConcept C2779018934 @default.
- W49630116 hasConcept C2780451532 @default.
- W49630116 hasConcept C2780586882 @default.
- W49630116 hasConcept C41008148 @default.
- W49630116 hasConcept C74672266 @default.
- W49630116 hasConcept C92393732 @default.
- W49630116 hasConceptScore W49630116C111472728 @default.
- W49630116 hasConceptScore W49630116C138885662 @default.
- W49630116 hasConceptScore W49630116C145420912 @default.
- W49630116 hasConceptScore W49630116C15744967 @default.
- W49630116 hasConceptScore W49630116C162324750 @default.
- W49630116 hasConceptScore W49630116C180747234 @default.
- W49630116 hasConceptScore W49630116C187736073 @default.
- W49630116 hasConceptScore W49630116C188147891 @default.
- W49630116 hasConceptScore W49630116C2779018934 @default.
- W49630116 hasConceptScore W49630116C2780451532 @default.
- W49630116 hasConceptScore W49630116C2780586882 @default.
- W49630116 hasConceptScore W49630116C41008148 @default.
- W49630116 hasConceptScore W49630116C74672266 @default.
- W49630116 hasConceptScore W49630116C92393732 @default.
- W49630116 hasLocation W496301161 @default.
- W49630116 hasOpenAccess W49630116 @default.
- W49630116 hasPrimaryLocation W496301161 @default.
- W49630116 hasRelatedWork W1488309867 @default.
- W49630116 hasRelatedWork W1580530448 @default.
- W49630116 hasRelatedWork W1594369375 @default.
- W49630116 hasRelatedWork W1990150421 @default.
- W49630116 hasRelatedWork W2030127770 @default.
- W49630116 hasRelatedWork W2045656233 @default.
- W49630116 hasRelatedWork W2081981374 @default.
- W49630116 hasRelatedWork W2118373646 @default.
- W49630116 hasRelatedWork W2119017831 @default.
- W49630116 hasRelatedWork W2123713131 @default.
- W49630116 hasRelatedWork W2136445846 @default.
- W49630116 hasRelatedWork W2142336306 @default.
- W49630116 hasRelatedWork W2162188269 @default.
- W49630116 hasRelatedWork W2169382901 @default.
- W49630116 hasRelatedWork W2184999107 @default.
- W49630116 hasRelatedWork W2278310934 @default.
- W49630116 hasRelatedWork W2411711353 @default.
- W49630116 hasRelatedWork W2589653196 @default.
- W49630116 hasRelatedWork W3140968660 @default.
- W49630116 hasRelatedWork W2175859969 @default.
- W49630116 isParatext "false" @default.
- W49630116 isRetracted "false" @default.
- W49630116 magId "49630116" @default.
- W49630116 workType "article" @default.