Matches in SemOpenAlex for { <https://semopenalex.org/work/W2951278471> ?p ?o ?g. }
Showing items 1 to 83 of 83, with 100 items per page.
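The pattern in the header (`{ <…W2951278471> ?p ?o ?g. }`) is a SPARQL quad pattern, with `?g` binding the named graph. A minimal sketch of how one might build an equivalent query and request URL in Python follows; the endpoint path `https://semopenalex.org/sparql` and the `format=json` parameter are assumptions about the SemOpenAlex service, and the sketch uses a plain triple pattern (dropping `?g`) for portability:

```python
# Hypothetical sketch: construct the SPARQL query for the work above and the
# GET request URL for a SemOpenAlex endpoint (endpoint URL is an assumption).
from urllib.parse import urlencode

WORK_IRI = "https://semopenalex.org/work/W2951278471"

# Standard triple pattern; the page header's ?g (named graph) is omitted here.
query = f"SELECT ?p ?o WHERE {{ <{WORK_IRI}> ?p ?o . }}"

endpoint = "https://semopenalex.org/sparql"  # assumed endpoint path
request_url = endpoint + "?" + urlencode({"query": query, "format": "json"})
print(request_url)
```

Fetching `request_url` (e.g. with `urllib.request.urlopen`) would return one binding per row of the listing below.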
- W2951278471 endingPage "3290" @default.
- W2951278471 startingPage "3283" @default.
- W2951278471 abstract "Reinforcement Learning (RL) has achieved impressive performance in many complex environments due to the integration with Deep Neural Networks (DNNs). At the same time, Genetic Algorithms (GAs), often seen as a competing approach to RL, had limited success in scaling up to the DNNs required to solve challenging tasks. Contrary to this dichotomic view, in the physical world, evolution and learning are complementary processes that continuously interact. The recently proposed Evolutionary Reinforcement Learning (ERL) framework has demonstrated mutual benefits to performance when combining the two methods. However, ERL has not fully addressed the scalability problem of GAs. In this paper, we show that this problem is rooted in an unfortunate combination of a simple genetic encoding for DNNs and the use of traditional biologically-inspired variation operators. When applied to these encodings, the standard operators are destructive and cause catastrophic forgetting of the traits the networks acquired. We propose a novel algorithm called Proximal Distilled Evolutionary Reinforcement Learning (PDERL) that is characterised by a hierarchical integration between evolution and learning. The main innovation of PDERL is the use of learning-based variation operators that compensate for the simplicity of the genetic representation. Unlike traditional operators, our proposals meet the functional requirements of variation operators when applied on directly-encoded DNNs. We evaluate PDERL in five robot locomotion settings from the OpenAI gym. Our method outperforms ERL, as well as two state-of-the-art RL algorithms, PPO and TD3, in all tested environments." @default.
- W2951278471 created "2019-06-27" @default.
- W2951278471 creator A5049090076 @default.
- W2951278471 creator A5053576615 @default.
- W2951278471 creator A5056748708 @default.
- W2951278471 date "2020-04-03" @default.
- W2951278471 modified "2023-10-04" @default.
- W2951278471 title "Proximal Distilled Evolutionary Reinforcement Learning" @default.
- W2951278471 doi "https://doi.org/10.1609/aaai.v34i04.5728" @default.
- W2951278471 hasPublicationYear "2020" @default.
- W2951278471 type Work @default.
- W2951278471 sameAs 2951278471 @default.
- W2951278471 citedByCount "25" @default.
- W2951278471 countsByYear W29512784712019 @default.
- W2951278471 countsByYear W29512784712020 @default.
- W2951278471 countsByYear W29512784712021 @default.
- W2951278471 countsByYear W29512784712022 @default.
- W2951278471 countsByYear W29512784712023 @default.
- W2951278471 crossrefType "journal-article" @default.
- W2951278471 hasAuthorship W2951278471A5049090076 @default.
- W2951278471 hasAuthorship W2951278471A5053576615 @default.
- W2951278471 hasAuthorship W2951278471A5056748708 @default.
- W2951278471 hasBestOaLocation W29512784711 @default.
- W2951278471 hasConcept C111472728 @default.
- W2951278471 hasConcept C119857082 @default.
- W2951278471 hasConcept C121332964 @default.
- W2951278471 hasConcept C125411270 @default.
- W2951278471 hasConcept C138885662 @default.
- W2951278471 hasConcept C154945302 @default.
- W2951278471 hasConcept C159149176 @default.
- W2951278471 hasConcept C199505168 @default.
- W2951278471 hasConcept C2777212361 @default.
- W2951278471 hasConcept C2778334786 @default.
- W2951278471 hasConcept C2780586882 @default.
- W2951278471 hasConcept C41008148 @default.
- W2951278471 hasConcept C41895202 @default.
- W2951278471 hasConcept C44870925 @default.
- W2951278471 hasConcept C48044578 @default.
- W2951278471 hasConcept C50644808 @default.
- W2951278471 hasConcept C7149132 @default.
- W2951278471 hasConcept C77088390 @default.
- W2951278471 hasConcept C97541855 @default.
- W2951278471 hasConceptScore W2951278471C111472728 @default.
- W2951278471 hasConceptScore W2951278471C119857082 @default.
- W2951278471 hasConceptScore W2951278471C121332964 @default.
- W2951278471 hasConceptScore W2951278471C125411270 @default.
- W2951278471 hasConceptScore W2951278471C138885662 @default.
- W2951278471 hasConceptScore W2951278471C154945302 @default.
- W2951278471 hasConceptScore W2951278471C159149176 @default.
- W2951278471 hasConceptScore W2951278471C199505168 @default.
- W2951278471 hasConceptScore W2951278471C2777212361 @default.
- W2951278471 hasConceptScore W2951278471C2778334786 @default.
- W2951278471 hasConceptScore W2951278471C2780586882 @default.
- W2951278471 hasConceptScore W2951278471C41008148 @default.
- W2951278471 hasConceptScore W2951278471C41895202 @default.
- W2951278471 hasConceptScore W2951278471C44870925 @default.
- W2951278471 hasConceptScore W2951278471C48044578 @default.
- W2951278471 hasConceptScore W2951278471C50644808 @default.
- W2951278471 hasConceptScore W2951278471C7149132 @default.
- W2951278471 hasConceptScore W2951278471C77088390 @default.
- W2951278471 hasConceptScore W2951278471C97541855 @default.
- W2951278471 hasIssue "04" @default.
- W2951278471 hasLocation W29512784711 @default.
- W2951278471 hasLocation W29512784712 @default.
- W2951278471 hasOpenAccess W2951278471 @default.
- W2951278471 hasPrimaryLocation W29512784711 @default.
- W2951278471 hasRelatedWork W1999726363 @default.
- W2951278471 hasRelatedWork W2026024497 @default.
- W2951278471 hasRelatedWork W2176531811 @default.
- W2951278471 hasRelatedWork W2186275702 @default.
- W2951278471 hasRelatedWork W2230011309 @default.
- W2951278471 hasRelatedWork W3188940671 @default.
- W2951278471 hasRelatedWork W4247102092 @default.
- W2951278471 hasRelatedWork W4307934390 @default.
- W2951278471 hasRelatedWork W4319083788 @default.
- W2951278471 hasRelatedWork W4381586542 @default.
- W2951278471 hasVolume "34" @default.
- W2951278471 isParatext "false" @default.
- W2951278471 isRetracted "false" @default.
- W2951278471 magId "2951278471" @default.
- W2951278471 workType "article" @default.