Matches in SemOpenAlex for { <https://semopenalex.org/work/W4287663655> ?p ?o ?g. }
Showing items 1 to 69 of 69, with 100 items per page.
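The quad pattern in the header is browser shorthand and is not itself valid SPARQL (a triple pattern cannot take four terms). A minimal self-contained query that should reproduce this listing is sketched below, with the `?g` position expressed as a `GRAPH` wrapper; the public endpoint URL `https://semopenalex.org/sparql` is an assumption.

```sparql
# Sketch: list every predicate/object pair (and its named graph)
# for work W4287663655. Assumes a quad store exposing named graphs;
# endpoint URL https://semopenalex.org/sparql is an assumption.
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W4287663655> ?p ?o .
  }
}
LIMIT 100
```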
- W4287663655 abstract "Several applications in the scientific simulation of physical systems can be formulated as control/optimization problems. The computational models for such systems generally contain hyperparameters, which control solution fidelity and computational expense. The tuning of these parameters is non-trivial and the general approach is to manually `spot-check' for good combinations. This is because optimal hyperparameter configuration search becomes impractical when the parameter space is large and when they may vary dynamically. To address this issue, we present a framework based on deep reinforcement learning (RL) to train a deep neural network agent that controls a model solve by varying parameters dynamically. First, we validate our RL framework for the problem of controlling chaos in chaotic systems by dynamically changing the parameters of the system. Subsequently, we illustrate the capabilities of our framework for accelerating the convergence of a steady-state CFD solver by automatically adjusting the relaxation factors of discretized Navier-Stokes equations during run-time. The results indicate that the run-time control of the relaxation factors by the learned policy leads to a significant reduction in the number of iterations for convergence compared to the random selection of the relaxation factors. Our results point to potential benefits for learning adaptive hyperparameter learning strategies across different geometries and boundary conditions with implications for reduced computational campaign expenses. footnote{Data and codes available at url{https://github.com/Romit-Maulik/PAR-RL}}" @default.
- W4287663655 created "2022-07-25" @default.
- W4287663655 creator A5028149900 @default.
- W4287663655 creator A5048243433 @default.
- W4287663655 date "2020-09-21" @default.
- W4287663655 modified "2023-10-16" @default.
- W4287663655 title "Distributed deep reinforcement learning for simulation control" @default.
- W4287663655 doi "https://doi.org/10.48550/arxiv.2009.10306" @default.
- W4287663655 hasPublicationYear "2020" @default.
- W4287663655 type Work @default.
- W4287663655 citedByCount "0" @default.
- W4287663655 crossrefType "posted-content" @default.
- W4287663655 hasAuthorship W4287663655A5028149900 @default.
- W4287663655 hasAuthorship W4287663655A5048243433 @default.
- W4287663655 hasBestOaLocation W42876636551 @default.
- W4287663655 hasConcept C119857082 @default.
- W4287663655 hasConcept C126255220 @default.
- W4287663655 hasConcept C134306372 @default.
- W4287663655 hasConcept C154945302 @default.
- W4287663655 hasConcept C15744967 @default.
- W4287663655 hasConcept C162324750 @default.
- W4287663655 hasConcept C199360897 @default.
- W4287663655 hasConcept C2776029896 @default.
- W4287663655 hasConcept C2777303404 @default.
- W4287663655 hasConcept C2778770139 @default.
- W4287663655 hasConcept C33923547 @default.
- W4287663655 hasConcept C41008148 @default.
- W4287663655 hasConcept C50522688 @default.
- W4287663655 hasConcept C50644808 @default.
- W4287663655 hasConcept C73000952 @default.
- W4287663655 hasConcept C77805123 @default.
- W4287663655 hasConcept C8642999 @default.
- W4287663655 hasConcept C91765299 @default.
- W4287663655 hasConcept C97541855 @default.
- W4287663655 hasConceptScore W4287663655C119857082 @default.
- W4287663655 hasConceptScore W4287663655C126255220 @default.
- W4287663655 hasConceptScore W4287663655C134306372 @default.
- W4287663655 hasConceptScore W4287663655C154945302 @default.
- W4287663655 hasConceptScore W4287663655C15744967 @default.
- W4287663655 hasConceptScore W4287663655C162324750 @default.
- W4287663655 hasConceptScore W4287663655C199360897 @default.
- W4287663655 hasConceptScore W4287663655C2776029896 @default.
- W4287663655 hasConceptScore W4287663655C2777303404 @default.
- W4287663655 hasConceptScore W4287663655C2778770139 @default.
- W4287663655 hasConceptScore W4287663655C33923547 @default.
- W4287663655 hasConceptScore W4287663655C41008148 @default.
- W4287663655 hasConceptScore W4287663655C50522688 @default.
- W4287663655 hasConceptScore W4287663655C50644808 @default.
- W4287663655 hasConceptScore W4287663655C73000952 @default.
- W4287663655 hasConceptScore W4287663655C77805123 @default.
- W4287663655 hasConceptScore W4287663655C8642999 @default.
- W4287663655 hasConceptScore W4287663655C91765299 @default.
- W4287663655 hasConceptScore W4287663655C97541855 @default.
- W4287663655 hasLocation W42876636551 @default.
- W4287663655 hasOpenAccess W4287663655 @default.
- W4287663655 hasPrimaryLocation W42876636551 @default.
- W4287663655 hasRelatedWork W10786582 @default.
- W4287663655 hasRelatedWork W1279312 @default.
- W4287663655 hasRelatedWork W1407330 @default.
- W4287663655 hasRelatedWork W2203340 @default.
- W4287663655 hasRelatedWork W256534 @default.
- W4287663655 hasRelatedWork W2683128 @default.
- W4287663655 hasRelatedWork W361876 @default.
- W4287663655 hasRelatedWork W8539471 @default.
- W4287663655 hasRelatedWork W8636990 @default.
- W4287663655 hasRelatedWork W1512708 @default.
- W4287663655 isParatext "false" @default.
- W4287663655 isRetracted "false" @default.
- W4287663655 workType "article" @default.
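The hasConcept objects above are opaque identifiers. A follow-up query can resolve them to human-readable labels; the sketch below assumes the SemOpenAlex ontology namespace `https://semopenalex.org/ontology/` and `skos:prefLabel` on concept resources (both are assumptions; check the endpoint's actual vocabulary before relying on them).

```sparql
PREFIX soa:  <https://semopenalex.org/ontology/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

# Sketch: resolve the opaque concept IDs attached to this work into
# readable labels. The soa: namespace and the use of skos:prefLabel
# are assumptions, not confirmed by this listing.
SELECT ?concept ?label
WHERE {
  <https://semopenalex.org/work/W4287663655> soa:hasConcept ?concept .
  ?concept skos:prefLabel ?label .
}
```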