Matches in SemOpenAlex for { <https://semopenalex.org/work/W4287723908> ?p ?o ?g. }
Showing items 1 to 68 of 68, with 100 items per page.
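The listing below corresponds to a SPARQL pattern over the work's URI. As a minimal sketch, the query behind it might be built and encoded as follows; the endpoint URL `https://semopenalex.org/sparql` and the `GRAPH` form of the quad pattern are assumptions, so check the SemOpenAlex service documentation before relying on them:

```python
# Build the SPARQL query behind the listing above. The endpoint URL and the
# GRAPH-based rendering of the { <work> ?p ?o ?g . } pattern are assumptions.
import urllib.parse

WORK = "https://semopenalex.org/work/W4287723908"

def build_query(work_uri: str) -> str:
    """Return a SPARQL query listing every (predicate, object, graph) for the work."""
    return (
        "SELECT ?p ?o ?g WHERE { "
        f"GRAPH ?g {{ <{work_uri}> ?p ?o . }} "
        "}"
    )

query = build_query(WORK)
# URL-encode the query for a GET request to a SPARQL endpoint (hypothetical URL).
url = "https://semopenalex.org/sparql?" + urllib.parse.urlencode({"query": query})
```

The encoded `url` could then be fetched with any HTTP client to reproduce the 68 items shown here.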
- W4287723908 abstract "One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem following a decoupled approach composed of: 1) An objective-specific algorithm that (adaptively) prescribes how many samples to collect at which states, as if it has access to a generative model (i.e., a simulator of the environment); 2) An objective-agnostic sample collection exploration strategy responsible for generating the prescribed samples as fast as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given as input the number of samples $b(s,a)$ needed in each state-action pair, requires $\tilde{O}(B D + D^{3/2} S^2 A)$ time steps to collect the $B=\sum_{s,a} b(s,a)$ desired samples, in any unknown communicating MDP with $S$ states, $A$ actions and diameter $D$. Then we show how this general-purpose exploration algorithm can be paired with objective-specific strategies that prescribe the sample requirements to tackle a variety of settings -- e.g., model estimation, sparse reward discovery, goal-free cost-free exploration in communicating MDPs -- for which we obtain improved or novel sample complexity guarantees." @default.
- W4287723908 created "2022-07-26" @default.
- W4287723908 creator A5014791481 @default.
- W4287723908 creator A5070500506 @default.
- W4287723908 creator A5071798388 @default.
- W4287723908 creator A5091526684 @default.
- W4287723908 date "2020-07-13" @default.
- W4287723908 modified "2023-09-27" @default.
- W4287723908 title "A Provably Efficient Sample Collection Strategy for Reinforcement Learning" @default.
- W4287723908 doi "https://doi.org/10.48550/arxiv.2007.06437" @default.
- W4287723908 hasPublicationYear "2020" @default.
- W4287723908 type Work @default.
- W4287723908 citedByCount "0" @default.
- W4287723908 crossrefType "posted-content" @default.
- W4287723908 hasAuthorship W4287723908A5014791481 @default.
- W4287723908 hasAuthorship W4287723908A5070500506 @default.
- W4287723908 hasAuthorship W4287723908A5071798388 @default.
- W4287723908 hasAuthorship W4287723908A5091526684 @default.
- W4287723908 hasBestOaLocation W42877239081 @default.
- W4287723908 hasConcept C105795698 @default.
- W4287723908 hasConcept C11413529 @default.
- W4287723908 hasConcept C119857082 @default.
- W4287723908 hasConcept C126255220 @default.
- W4287723908 hasConcept C136197465 @default.
- W4287723908 hasConcept C154945302 @default.
- W4287723908 hasConcept C185592680 @default.
- W4287723908 hasConcept C198531522 @default.
- W4287723908 hasConcept C2778445095 @default.
- W4287723908 hasConcept C33923547 @default.
- W4287723908 hasConcept C41008148 @default.
- W4287723908 hasConcept C43617362 @default.
- W4287723908 hasConcept C48103436 @default.
- W4287723908 hasConcept C50817715 @default.
- W4287723908 hasConcept C72434380 @default.
- W4287723908 hasConcept C97541855 @default.
- W4287723908 hasConceptScore W4287723908C105795698 @default.
- W4287723908 hasConceptScore W4287723908C11413529 @default.
- W4287723908 hasConceptScore W4287723908C119857082 @default.
- W4287723908 hasConceptScore W4287723908C126255220 @default.
- W4287723908 hasConceptScore W4287723908C136197465 @default.
- W4287723908 hasConceptScore W4287723908C154945302 @default.
- W4287723908 hasConceptScore W4287723908C185592680 @default.
- W4287723908 hasConceptScore W4287723908C198531522 @default.
- W4287723908 hasConceptScore W4287723908C2778445095 @default.
- W4287723908 hasConceptScore W4287723908C33923547 @default.
- W4287723908 hasConceptScore W4287723908C41008148 @default.
- W4287723908 hasConceptScore W4287723908C43617362 @default.
- W4287723908 hasConceptScore W4287723908C48103436 @default.
- W4287723908 hasConceptScore W4287723908C50817715 @default.
- W4287723908 hasConceptScore W4287723908C72434380 @default.
- W4287723908 hasConceptScore W4287723908C97541855 @default.
- W4287723908 hasLocation W42877239081 @default.
- W4287723908 hasLocation W42877239082 @default.
- W4287723908 hasOpenAccess W4287723908 @default.
- W4287723908 hasPrimaryLocation W42877239081 @default.
- W4287723908 hasRelatedWork W1925875298 @default.
- W4287723908 hasRelatedWork W3049166411 @default.
- W4287723908 hasRelatedWork W3191556308 @default.
- W4287723908 hasRelatedWork W4226283576 @default.
- W4287723908 hasRelatedWork W4287688416 @default.
- W4287723908 hasRelatedWork W4287723908 @default.
- W4287723908 hasRelatedWork W4292701710 @default.
- W4287723908 hasRelatedWork W4319083788 @default.
- W4287723908 hasRelatedWork W4320487853 @default.
- W4287723908 hasRelatedWork W4376653367 @default.
- W4287723908 isParatext "false" @default.
- W4287723908 isRetracted "false" @default.
- W4287723908 workType "article" @default.
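The abstract describes a decoupled scheme: an objective-specific step prescribes how many samples $b(s,a)$ to collect per state-action pair, and an objective-agnostic collector then gathers them by acting in the environment. The toy sketch below illustrates only that division of labor, not the paper's actual algorithm or its $\tilde{O}(BD + D^{3/2}S^2A)$ guarantee; the 3-state chain MDP and the naive random-walk collector are hypothetical stand-ins:

```python
# Toy illustration (NOT the paper's algorithm) of the decoupled scheme:
# requirements b(s, a) are prescribed up front, then a generic collector
# acts in the environment until every requirement is met.
import random

# Deterministic 3-state chain MDP: action 0 moves left, action 1 moves right.
S, A = 3, 2

def env_step(s: int, a: int) -> int:
    """Transition function of the toy chain environment."""
    return max(0, s - 1) if a == 0 else min(S - 1, s + 1)

def collect(b: dict, seed: int = 0) -> tuple[dict, int]:
    """Act until every (s, a) has been sampled at least b[(s, a)] times.
    Returns the visit counts and the number of environment steps used."""
    rng = random.Random(seed)
    counts = {sa: 0 for sa in b}
    s, steps = 0, 0
    while any(counts[sa] < b[sa] for sa in b):
        # Prefer an action that is still under-sampled in the current state;
        # otherwise move at random to reach other under-sampled states.
        needy = [a for a in range(A) if counts[(s, a)] < b[(s, a)]]
        a = rng.choice(needy) if needy else rng.randrange(A)
        counts[(s, a)] += 1
        s = env_step(s, a)
        steps += 1
    return counts, steps

# Objective-specific step (here trivially uniform): 5 samples per pair.
b = {(s, a): 5 for s in range(S) for a in range(A)}
counts, steps = collect(b)
```

In the paper's setting the collector is built on stochastic-shortest-path exploration rather than a random walk, which is what yields the stated time-step bound in any unknown communicating MDP.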