Matches in SemOpenAlex for { <https://semopenalex.org/work/W104421572> ?p ?o ?g. }
Showing items 1 to 68 of 68 with 100 items per page.
- W104421572 abstract "EUREKA is a problem-solving system that operates through a form of analogical reasoning. The system was designed to study how relatively low-level memory, reasoning, and learning mechanisms can account for high-level learning in human problem solvers. Thus, EUREKA’s design has focused on issues of memory representation and retrieval of analogies, at the expense of complex problem-solving ability or sophisticated analogical elaboration techniques. Two computational systems for analogical reasoning, ARCS/ACME and MAC/FAC, are relatively powerful and well-known in the cognitive science literature. However, they have not addressed issues of learning, and they have not been implemented in the context of a performance task that can dictate what makes an analogy “good”. Thus, it appears that these different research directions have much to offer each other. We describe the EUREKA system and compare its analogical retrieval mechanism with those in ARCS and MAC/FAC. We then discuss the issues involved in incorporating ARCS and MAC/FAC into a learning problem solver such as EUREKA. We are interested in the low-level memory, learning, and reasoning processes that give rise to improvement in problem-solving behavior over time. EUREKA is the problem-solving architecture we are using to study these processes. An explicit assumption within EUREKA’s design is that all processes are aspects of analogical reasoning. In addition, we designed the system so that the low-level retrieval and matching processes would dominate its behavior. The system does not possess or learn the types of high-level control knowledge found in other problem-solving systems. Our intent is to investigate how much of human learning in problem solving can be modeled with such low-level mechanisms. This paper presents an overview of EUREKA’s architecture and some of the learning results it accounts for.
We then turn our attention to two well-known analogical retrieval mechanisms in the cognitive science literature. ARCS (Thagard, Holyoak, Nelson, & Gochfeld, 1990) and MAC/FAC (Gentner & Forbus, 1991) model psychological findings on analogical retrieval and reasoning. However, neither has been examined in the context of a problem-solving system, or in a system that learns with experience. The remainder of the paper focuses on the issues of analogical retrieval and learning, and discusses the possibilities of incorporating these alternative analogical retrieval mechanisms into a problem-solving system.
Terminology
Before continuing, it is worth defining some terms to avoid future confusion. For analogical reasoning, a basic unit of knowledge is the analogical case, which is further decomposed into a set of concepts and relations between those concepts. For our purposes, every analogical case corresponds to a problem situation. A problem situation is a specific set of relations describing a state of the world, together with a set of goal relations that should be achievable by applying a sequence of operators to that state. Note that cases in EUREKA are a bit different from those in case-based reasoning, where “case” typically denotes an entire problem solution. At any given time, EUREKA will have a current problem situation, for which it must decide on an operator to apply. This is the target problem situation. The analogical reasoning process is generally divided into three stages. First, a retrieval mechanism identifies a number of candidate sources from the potential analogies stored in memory. Next, the set of candidate sources undergoes further elaboration to fill out the potential mappings between each source and the target. Finally, evaluation of each candidate source determines how well each candidate will serve as an analogical source for the target. Let us now turn to a description of EUREKA in these terms.
An overview of EUREKA
Jones (1993) presents the computational details of EUREKA, but here we provide a general overview of the system. EUREKA adopts a reasoning formulation called flexible means-ends analysis (Jones & VanLehn, 1994; Langley & Allen, 1991). As described above, each problem situation includes a current world state and a set of goal conditions to which the state should be transformed. Operator selection creates a goal to apply a particular operator to the current state of the problem situation. If the preconditions of the operator can all be matched to the current state, the operator executes, leading to a new problem situation with a different state but the same goals. Otherwise, the system sets up a new problem situation with the same current state, but with the operator’s preconditions as the new goals. EUREKA then treats this new problem situation in a recursive manner. The difference between flexible means-ends analysis and standard means-ends analysis (Ernst & Newell, 1969; Fikes & Nilsson, 1971) is that the flexible form does not require selected operators to apply directly to the current goal conditions (i.e., it is not necessary that the selected operator obviously “reduce any differences”). Rather than using this heuristic to limit search, EUREKA relies on its retrieval and learning mechanisms to control which operators are suggested to apply to any particular problem situation. Because operator selection depends on the entire problem situation (and not just the goals), EUREKA can blend goal-driven and opportunistic behavior when appropriate. Every time EUREKA generates a new problem situation, it stores a representation of the situation (as well as the operator that led to this situation) into its long-term semantic network. Each object and relation in a problem situation becomes a node in the semantic network.
In addition, the network stores nodes representing instances of architecturally defined concepts, such as problem situations and operators. Items are never deleted from long-term memory, and memories are never stored in an abstract form. Rather, the semantic memory stores all the specific problem situations that it encounters. Situations become linked together in memory when they share objects, relations, or object types. If a particular concept from a problem situation already exists in memory, EUREKA increases the trace strengths of the links from the concept, rather than adding a new copy of the concept. When EUREKA is working on a particular problem situation, it must select an operator to apply to the problem. To this end, EUREKA retrieves a subset of the stored problem situations from long-term memory. This small set of candidate sources is further elaborated and evaluated, to see which would provide the best candidate analogy for the current problem situation. EUREKA chooses one candidate stochastically, based on the evaluation score, and identifies the operator associated with that source analogy. Finally, the system creates a goal to apply the newly mapped operator to the current state. EUREKA proceeds in this manner until it solves the problem or the current solution path fails (by exceeding a time limit or detecting a cycle in the solution path). Upon failure, EUREKA does not have the luxury of backtracking, which would allow the system to search the problem space systematically and possibly exhaustively. Rather, EUREKA begins the problem anew from the initial problem situation. The inability to backtrack systematically greatly hinders the system’s ability to solve problems, but we feel that this is a psychologically plausible limitation. The limitation also places further importance on effective learning.
The combination of EUREKA’s learning mechanisms and its stochastic selection process encourages the system to explore alternative solution paths on subsequent attempts to solve a problem. However, there is no guarantee that a previous search will not be duplicated. If the system fails to find a solution after a preset number of attempts (50 in our experiments), it abandons the problem completely.
Analogical retrieval in EUREKA
EUREKA’s analogical reasoner incorporates two stages.
Table 1: EUREKA’s algorithm for spreading activation. Let ACTIVATION_THRESHOLD be 0.01; Let DAMPING_FACTOR be 0.4; Let INITIAL_ACTIVATION be 1.0;" @default.
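The abstract literal above is truncated just as Table 1 introduces three constants for EUREKA's spreading-activation retrieval. A minimal sketch of how spreading activation with those constants might operate over the semantic network described in the text; the graph representation (a node-to-neighbors dict) and the first-visit, damp-per-hop propagation rule are assumptions for illustration, since the paper's actual algorithm is cut off here.

```python
# Hedged sketch of spreading activation using the constants named in the
# abstract's Table 1 fragment. Graph structure and update rule are assumed.

ACTIVATION_THRESHOLD = 0.01  # stop propagating below this level (from Table 1)
DAMPING_FACTOR = 0.4         # activation decay per link traversed (from Table 1)
INITIAL_ACTIVATION = 1.0     # activation of the cue node (from Table 1)

def spread_activation(graph, source):
    """Propagate activation outward from `source` through `graph`
    (a dict mapping node -> list of neighbor nodes), damping at each
    hop and pruning paths whose activation falls below the threshold."""
    activation = {source: INITIAL_ACTIVATION}
    frontier = [source]
    while frontier:
        next_frontier = []
        for node in frontier:
            passed = activation[node] * DAMPING_FACTOR
            if passed < ACTIVATION_THRESHOLD:
                continue  # prune: too little activation to pass along
            for neighbor in graph.get(node, []):
                if neighbor not in activation:  # first visit wins
                    activation[neighbor] = passed
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

# Example: activation decays by 0.4 per hop from the cue node "a".
graph = {"a": ["b"], "b": ["c"]}
print(spread_activation(graph, "a"))  # {'a': 1.0, 'b': 0.4, 'c': 0.16...}
```

In the system described above, the stored problem situations reached with the highest activation would form the small candidate set that EUREKA then elaborates, evaluates, and samples from stochastically.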
- W104421572 created "2016-06-24" @default.
- W104421572 creator A5076552726 @default.
- W104421572 creator A5081275482 @default.
- W104421572 date "2007-01-01" @default.
- W104421572 modified "2023-09-27" @default.
- W104421572 title "Retrieval and Learning in Analogical Problem Solving" @default.
- W104421572 cites W1519594419 @default.
- W104421572 cites W1567972407 @default.
- W104421572 cites W1601488840 @default.
- W104421572 cites W1606754314 @default.
- W104421572 cites W1743328599 @default.
- W104421572 cites W1994925058 @default.
- W104421572 cites W202407119 @default.
- W104421572 cites W2078294512 @default.
- W104421572 cites W2108270708 @default.
- W104421572 cites W2118554906 @default.
- W104421572 cites W2145454741 @default.
- W104421572 cites W2151115426 @default.
- W104421572 cites W2156159857 @default.
- W104421572 cites W2402164836 @default.
- W104421572 cites W3139316869 @default.
- W104421572 cites W3200958767 @default.
- W104421572 cites W3202912068 @default.
- W104421572 cites W74704794 @default.
- W104421572 cites W81339653 @default.
- W104421572 cites W87778534 @default.
- W104421572 hasPublicationYear "2007" @default.
- W104421572 type Work @default.
- W104421572 sameAs 104421572 @default.
- W104421572 citedByCount "5" @default.
- W104421572 countsByYear W1044215722012 @default.
- W104421572 crossrefType "journal-article" @default.
- W104421572 hasAuthorship W104421572A5076552726 @default.
- W104421572 hasAuthorship W104421572A5081275482 @default.
- W104421572 hasConcept C154945302 @default.
- W104421572 hasConcept C204321447 @default.
- W104421572 hasConcept C41008148 @default.
- W104421572 hasConceptScore W104421572C154945302 @default.
- W104421572 hasConceptScore W104421572C204321447 @default.
- W104421572 hasConceptScore W104421572C41008148 @default.
- W104421572 hasLocation W1044215721 @default.
- W104421572 hasOpenAccess W104421572 @default.
- W104421572 hasPrimaryLocation W1044215721 @default.
- W104421572 hasRelatedWork W1018163281 @default.
- W104421572 hasRelatedWork W145012249 @default.
- W104421572 hasRelatedWork W1506565537 @default.
- W104421572 hasRelatedWork W1537184953 @default.
- W104421572 hasRelatedWork W166584829 @default.
- W104421572 hasRelatedWork W181073337 @default.
- W104421572 hasRelatedWork W2078451758 @default.
- W104421572 hasRelatedWork W2085621600 @default.
- W104421572 hasRelatedWork W2098443349 @default.
- W104421572 hasRelatedWork W2147446588 @default.
- W104421572 hasRelatedWork W2229002394 @default.
- W104421572 hasRelatedWork W236064050 @default.
- W104421572 hasRelatedWork W2402701428 @default.
- W104421572 hasRelatedWork W2469488073 @default.
- W104421572 hasRelatedWork W2506513026 @default.
- W104421572 hasRelatedWork W2905723176 @default.
- W104421572 hasRelatedWork W3103281223 @default.
- W104421572 hasRelatedWork W3137420402 @default.
- W104421572 hasRelatedWork W836707 @default.
- W104421572 hasRelatedWork W11517654 @default.
- W104421572 isParatext "false" @default.
- W104421572 isRetracted "false" @default.
- W104421572 magId "104421572" @default.
- W104421572 workType "article" @default.