Matches in SemOpenAlex for { <https://semopenalex.org/work/W2913811934> ?p ?o ?g. }
Showing items 1 to 62 of 62, with 100 items per page.
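The header above shows only the quad pattern behind this listing. A minimal sketch of the full lookup, rendered in standard SPARQL, is given below; it assumes the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql (the endpoint is not named in the listing itself), and it exposes the graph binding `?g` that the result page displays as "@default.".

```sparql
# Sketch of the lookup behind this listing (assumed endpoint:
# https://semopenalex.org/sparql). The quad pattern "?p ?o ?g" is
# rewritten here with a GRAPH clause; ?g is the graph shown as "@default.".
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W2913811934> ?p ?o .
  }
}
LIMIT 100
```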
- W2913811934 endingPage "599" @default.
- W2913811934 startingPage "583" @default.
- W2913811934 abstract "The core of Reinforcement learning lies in learning from experiences. The performance of the agent is hugely impacted by the training conditions, reward functions and exploration policies. Deep Deterministic Policy Gradient (DDPG) is a well known approach to solve continuous control problems in RL. We use DDPG with intelligent choice of reward function and exploration policy to learn various driving behaviors (Lanekeeping, Overtaking, Blocking, Defensive, Opportunistic) for a simulated car in unstructured environments. In cluttered scenes, where the opponent agents are not following any driving pattern, it is difficult to anticipate their behavior and henceforth decide our agent’s actions. DDPG enables us to propose a solution which requires only the sensor information at current time step to predict the action to be taken. Our main contribution is generating a behavior based motion model for simulated cars, which plans for every instant." @default.
- W2913811934 created "2019-02-21" @default.
- W2913811934 creator A5012183445 @default.
- W2913811934 creator A5075816776 @default.
- W2913811934 date "2019-01-01" @default.
- W2913811934 modified "2023-09-25" @default.
- W2913811934 title "Learning Driving Behaviors for Automated Cars in Unstructured Environments" @default.
- W2913811934 cites W1599632510 @default.
- W2913811934 cites W1952057695 @default.
- W2913811934 cites W2017957151 @default.
- W2913811934 cites W2041225042 @default.
- W2913811934 cites W2041911815 @default.
- W2913811934 cites W2094387729 @default.
- W2913811934 cites W2101771142 @default.
- W2913811934 cites W2119112357 @default.
- W2913811934 cites W2164424353 @default.
- W2913811934 cites W2164569010 @default.
- W2913811934 cites W2296073425 @default.
- W2913811934 cites W2575705757 @default.
- W2913811934 cites W2582616844 @default.
- W2913811934 cites W2596750703 @default.
- W2913811934 cites W2604173613 @default.
- W2913811934 cites W2896066033 @default.
- W2913811934 cites W2962851396 @default.
- W2913811934 cites W2963833733 @default.
- W2913811934 cites W4232280717 @default.
- W2913811934 cites W2602167503 @default.
- W2913811934 doi "https://doi.org/10.1007/978-3-030-11021-5_36" @default.
- W2913811934 hasPublicationYear "2019" @default.
- W2913811934 type Work @default.
- W2913811934 sameAs 2913811934 @default.
- W2913811934 citedByCount "2" @default.
- W2913811934 countsByYear W29138119342020 @default.
- W2913811934 countsByYear W29138119342022 @default.
- W2913811934 crossrefType "book-chapter" @default.
- W2913811934 hasAuthorship W2913811934A5012183445 @default.
- W2913811934 hasAuthorship W2913811934A5075816776 @default.
- W2913811934 hasConcept C127413603 @default.
- W2913811934 hasConcept C22212356 @default.
- W2913811934 hasConcept C41008148 @default.
- W2913811934 hasConceptScore W2913811934C127413603 @default.
- W2913811934 hasConceptScore W2913811934C22212356 @default.
- W2913811934 hasConceptScore W2913811934C41008148 @default.
- W2913811934 hasLocation W29138119341 @default.
- W2913811934 hasOpenAccess W2913811934 @default.
- W2913811934 hasPrimaryLocation W29138119341 @default.
- W2913811934 hasRelatedWork W2093578348 @default.
- W2913811934 hasRelatedWork W2350741829 @default.
- W2913811934 hasRelatedWork W2358668433 @default.
- W2913811934 hasRelatedWork W2376932109 @default.
- W2913811934 hasRelatedWork W2382290278 @default.
- W2913811934 hasRelatedWork W2390279801 @default.
- W2913811934 hasRelatedWork W2748952813 @default.
- W2913811934 hasRelatedWork W2766271392 @default.
- W2913811934 hasRelatedWork W2899084033 @default.
- W2913811934 hasRelatedWork W3004735627 @default.
- W2913811934 isParatext "false" @default.
- W2913811934 isRetracted "false" @default.
- W2913811934 magId "2913811934" @default.
- W2913811934 workType "book-chapter" @default.
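The 62 matches above group naturally by predicate (for example, 20 "cites" and 10 "hasRelatedWork" statements). A small aggregation sketch is shown below, under the same assumed endpoint as before; it uses only the subject IRI that appears in this listing and counts how many objects each predicate carries.

```sparql
# Summarize the listing above: one row per predicate with the number
# of statements it contributes for this work.
SELECT ?p (COUNT(?o) AS ?objects)
WHERE {
  <https://semopenalex.org/work/W2913811934> ?p ?o .
}
GROUP BY ?p
ORDER BY DESC(?objects)
```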