Matches in SemOpenAlex for { <https://semopenalex.org/work/W344869927> ?p ?o ?g. }
Showing items 1 to 68 of 68, with 100 items per page.
- W344869927 abstract "Graph search has been employed by many AI techniques and applications. A natural way to improve the efficiency of search is to utilize advanced, more powerful computing platforms. However, expensive computing infrastructures, such as supercomputers and large-scale clusters, are traditionally available to only a limited number of projects and researchers. As a results, most AI applications, with access to only commodity computers and clusters, cannot benefit from the efficiency improvements of high-performance parallel search algorithms. Cloud computing provides an attractive, highly accessible alternative to other traditional highperformance computing platforms. In this paper, we first show that the run-time of our stochastic search algorithm in planning is a heavy-tailed distribution, which Type of Report: Other Department of Computer Science & Engineering Washington University in St. Louis Campus Box 1045 St. Louis, MO 63130 ph: (314) 935-6160 ied and applied to several areas of automated planning, such as sampling possible trajectories in probabilistic planning (Bryce, Kambhampati, and Smith 2006) and robot motion planning (LaValle 2006). (Fern, Yoon, and Givan 2004) uses random walk exploration to lean domain-specific control knowledge. This paper generally has two contributions. First, we show that the run-time distribution Monte Carlo Random Walk (MRW) algorithm in planning is a heavytailed distribution, which has a remarkable variability. Second, we propose a parallel MRW algorithm which takes advantage of short runs in thus heavy-tailed distribution. Our parallel MRW algorithm is a parallel stochastic search which use low frequency communication, even no communication between computing nodes which is perfectly suitable for cloud computing architecture. The remainder of this paper is organized as follows: Section briefly reviews the cloud computing, SAS+ formalism of classic planning and Monte Carlo Random Walk method. Section explains details of MRW algorithm and its heavy-tailed run-time distribution. Section introduces basic parallel MRW algorithm and a communication technique to improve the efficiency and probability of solving problems. Section discusses the performance on standard planning benchmarks from the IPC-4 competition. Section contains concluding remarks and some potential directions for future work. Background Cloud computing SAS+ Formalism In this paper, we work on the SAS+ formalism (Jonsson and Backstrom 1998) of classical planning. In the following, we review this formalism and introduce our notations. Definition 1. A SAS+ planning task Π is defined as a tuple {X,O, S, sI , sG}. • X = {x1, · · · , xN} is a set of multi-valued state variables, each with an associated finite domain Dom(xi). • O is a set of actions and each action o ∈ O is a tuple (pre(o), eff(o)), where both pre(o) and eff(o) define some partial assignments of variables in the form xi = vi, vi ∈ Dom(xi). sG is a partial assignment that defines the goal. • S is the set of states. A state s ∈ S is a full assignment to all the state variables. sI ∈ S is the initial state. A state s is a goal state if sG ⊆ s. For a given state s and an action o, when all variable assignments in pre(o) are met in state s, action o is applicable at state s. After applying o to s, the state variable assignment will be changed to a new state s according to eff(o): the state variables that appear in eff(o) will be changed to the assignments in eff(o) while other state variables remain the same. 
We denote the resulting state of applying an applicable action o to s as s = apply(s, o). apply(s, o) is undefined if o is not applicable at S. The planning task is to find a plan, a sequence of actions that transits the initial state sI to a goal state that includes sG. An important structure for a given SAS+ task is the domain transition graph defined as follows. Definition 2. For a SAS+ planning task, each state variable xi, i = 1, · · · , N corresponds to a domain transition graph (DTG) Gi, a directed graph with a vertex set V (Gi) = Dom(xi) ∪ v0, where v0 is a special vertex, and an edge set E(Gi) determined by the following. • If there is an action o such that (xi = vi) ∈ pre(o) and (xi = v ′ i) ∈ eff(o), then (vi, v ′ i) belongs to E(Gi) and we say that o is associated with the edge ei = (vi, v ′ i) (denoted as o ⊢ ei). It is conventional to call the edges in DTGs as transitions. • If there is an action o such that (xi = v ′ i) ∈ eff(o) and no assignment to xi is in pre(o), then (v0, v ′ i) belongs to E(Gi) and we say that o is associated with the transition ei = (v0, v ′ i) (denoted as o ⊢ ei). Intuitively, a SAS+ task can be decomposed into multiple objects, each corresponding to one DTG, which models the transitions of the possible values of that object. Monte-Carlo Random Walk In Monte-Carlo Random Walk planning (Nakhost and Mller 2009), fast Monte-Carlo random walks are used for exploring the neighborhood of a search state. A relatively large set of states S in the neighborhood of the current state s0 is sampled before greedily selecting a most promising next state s ∈ S. For example, a new random walk starts from s0, builds a sequence of actions o0 → o1 → ... → ok and changes s0 to s. At the end of the random walk, s is evaluated by a heuristic function h, for instance by the FF heuristic, and added to S. When a stopping criterion is satisfied, the algorithm chooses a state in S with the minimum h-value to replace s0. The MRW method uniformly deals with both problems of local search methods: it quickly escapes from local minima and can recover from areas where the evaluation is poor. The MRW method does not rely on any assumptions about the local properties of the search space or heuristic function. Monte-Carlo Random Walk Search Alogorithm 1 shows the framework of Monte-Carlo Random Walk method. Given a SAS+ planning problem Π, MRW search builds a chain of states sI → s1 → ... → sn such that sI is the initial state, sn is a goal state, and each transition si → si+1 uses an action sequence found by RandomWalk exploring the neighborhood of si (Line 9). MRW search fails to find a solution when the minimum obtained h-value does not improve within MAX STEPS times, or si is a dead-end Algorithm 1: MRW(Π) Input: SAS+ planning problem Π Output: a solution plan s ← sI ; 1" @default.
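The abstract above spells out the SAS+ formalism (Definition 1) and the apply(s, o) transition rule: an action is applicable when its precondition assignments hold, and applying it overwrites exactly the variables in its effect. The following is a minimal Python sketch of that representation; all names (State, Action, apply_action, etc.) are illustrative assumptions, since the paper does not prescribe a concrete implementation.

```python
from dataclasses import dataclass

# A state is a full assignment: variable index -> value.
# A partial assignment (precondition, effect, or goal) has the same shape
# but may mention only some of the variables.
State = dict
Partial = dict

@dataclass(frozen=True)
class Action:
    name: str
    pre: Partial   # pre(o): partial assignment that must hold in s
    eff: Partial   # eff(o): partial assignment written into s

def is_applicable(s: State, o: Action) -> bool:
    """o is applicable at s iff every assignment in pre(o) is met in s."""
    return all(s.get(var) == val for var, val in o.pre.items())

def apply_action(s: State, o: Action) -> State:
    """s' = apply(s, o): variables in eff(o) change, all others stay the same."""
    if not is_applicable(s, o):
        raise ValueError(f"{o.name} is not applicable in this state")
    s_next = dict(s)
    s_next.update(o.eff)
    return s_next

def is_goal(s: State, goal: Partial) -> bool:
    """A state s is a goal state if sG is contained in s."""
    return all(s.get(var) == val for var, val in goal.items())
```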
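The abstract also outlines the MRW search framework: repeatedly sample random walks from the current state, evaluate each endpoint with a heuristic h, jump to the endpoint with the minimum h-value, and treat a long stretch without improvement (or a dead end) as a failed episode. The sketch below, built on the SAS+ helpers above, follows that loop under stated assumptions: the heuristic, walk length, number of walks, and the choice to restart from the initial state are placeholders, not the paper's exact parameters or policy.

```python
import random

WALK_LENGTH = 10   # length of each random walk (assumption)
NUM_WALKS = 30     # walks sampled per search step (assumption)
MAX_STEPS = 7      # episodes allowed without h-value improvement (assumption)

def random_walk(s, actions, heuristic):
    """Run one random walk from s; return (end_state, action_sequence, h-value)."""
    path = []
    for _ in range(WALK_LENGTH):
        applicable = [o for o in actions if is_applicable(s, o)]
        if not applicable:          # dead end: stop the walk early
            break
        o = random.choice(applicable)
        s = apply_action(s, o)
        path.append(o)
    return s, path, heuristic(s)

def mrw_search(s_init, goal, actions, heuristic):
    """Build a chain s_I -> s_1 -> ... -> s_n, each hop chosen greedily
    from the endpoints of NUM_WALKS random walks."""
    s, plan = s_init, []
    h_min, since_improvement = heuristic(s_init), 0
    while not is_goal(s, goal):
        # Sample the neighborhood of s with fast random walks.
        samples = [random_walk(s, actions, heuristic) for _ in range(NUM_WALKS)]
        best_state, best_path, best_h = min(samples, key=lambda t: t[2])
        s, plan = best_state, plan + best_path
        if best_h < h_min:
            h_min, since_improvement = best_h, 0
        else:
            since_improvement += 1
            if since_improvement >= MAX_STEPS:
                # No improvement within MAX_STEPS steps: one possible handling is
                # to restart from the initial state (the paper treats this as a
                # failed search episode).
                s, plan = s_init, []
                h_min, since_improvement = heuristic(s_init), 0
    return plan
```

Because each restart is independent, many such searches can run in parallel with little or no communication, which is the property the paper exploits for its cloud-based parallel MRW algorithm.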
- W344869927 created "2016-06-24" @default.
- W344869927 creator A5021424158 @default.
- W344869927 creator A5024923124 @default.
- W344869927 creator A5028755343 @default.
- W344869927 creator A5047318884 @default.
- W344869927 date "2010-01-01" @default.
- W344869927 modified "2023-09-23" @default.
- W344869927 title "Cloud Computing for Scalable Planning by Stochastic Search" @default.
- W344869927 cites W1964094258 @default.
- W344869927 cites W2015420561 @default.
- W344869927 cites W2095709533 @default.
- W344869927 cites W2102567944 @default.
- W344869927 cites W2128361632 @default.
- W344869927 cites W2611243847 @default.
- W344869927 cites W84074851 @default.
- W344869927 cites W99755553 @default.
- W344869927 cites W1980073965 @default.
- W344869927 cites W2611957926 @default.
- W344869927 doi "https://doi.org/10.7936/k7jd4v06" @default.
- W344869927 hasPublicationYear "2010" @default.
- W344869927 type Work @default.
- W344869927 sameAs 344869927 @default.
- W344869927 citedByCount "0" @default.
- W344869927 crossrefType "journal-article" @default.
- W344869927 hasAuthorship W344869927A5021424158 @default.
- W344869927 hasAuthorship W344869927A5024923124 @default.
- W344869927 hasAuthorship W344869927A5028755343 @default.
- W344869927 hasAuthorship W344869927A5047318884 @default.
- W344869927 hasConcept C111919701 @default.
- W344869927 hasConcept C2522767166 @default.
- W344869927 hasConcept C41008148 @default.
- W344869927 hasConcept C48044578 @default.
- W344869927 hasConcept C77088390 @default.
- W344869927 hasConcept C79974875 @default.
- W344869927 hasConceptScore W344869927C111919701 @default.
- W344869927 hasConceptScore W344869927C2522767166 @default.
- W344869927 hasConceptScore W344869927C41008148 @default.
- W344869927 hasConceptScore W344869927C48044578 @default.
- W344869927 hasConceptScore W344869927C77088390 @default.
- W344869927 hasConceptScore W344869927C79974875 @default.
- W344869927 hasLocation W3448699271 @default.
- W344869927 hasOpenAccess W344869927 @default.
- W344869927 hasPrimaryLocation W3448699271 @default.
- W344869927 hasRelatedWork W1552810675 @default.
- W344869927 hasRelatedWork W1898392233 @default.
- W344869927 hasRelatedWork W1961501816 @default.
- W344869927 hasRelatedWork W1970199308 @default.
- W344869927 hasRelatedWork W2028159002 @default.
- W344869927 hasRelatedWork W2038722889 @default.
- W344869927 hasRelatedWork W225519807 @default.
- W344869927 hasRelatedWork W2295860214 @default.
- W344869927 hasRelatedWork W2401889159 @default.
- W344869927 hasRelatedWork W2474843036 @default.
- W344869927 hasRelatedWork W2475785066 @default.
- W344869927 hasRelatedWork W2952763248 @default.
- W344869927 hasRelatedWork W2963382491 @default.
- W344869927 hasRelatedWork W2963792447 @default.
- W344869927 hasRelatedWork W2964005090 @default.
- W344869927 hasRelatedWork W2964120271 @default.
- W344869927 hasRelatedWork W3004466950 @default.
- W344869927 hasRelatedWork W3008738889 @default.
- W344869927 hasRelatedWork W3046624484 @default.
- W344869927 hasRelatedWork W3212762553 @default.
- W344869927 isParatext "false" @default.
- W344869927 isRetracted "false" @default.
- W344869927 magId "344869927" @default.
- W344869927 workType "article" @default.