Matches in SemOpenAlex for { <https://semopenalex.org/work/W3206843203> ?p ?o ?g. }
Showing items 1 to 74 of 74, with 100 items per page.
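The triples below were returned for the query pattern shown above. As a minimal sketch, the same listing could be retrieved programmatically via SPARQL; the endpoint URL `https://semopenalex.org/sparql` and the JSON result handling below are assumptions about the public SemOpenAlex service, not details taken from this record.

```python
# Hedged sketch: fetch all predicate/object pairs for work W3206843203
# from the (assumed) SemOpenAlex SPARQL endpoint.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint URL
QUERY = """
SELECT ?p ?o
WHERE {
  <https://semopenalex.org/work/W3206843203> ?p ?o .
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Print each predicate/object binding, mirroring the listing below.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```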
- W3206843203 abstract "Planning and decision-making for autonomous driving is an active and challenging research topic. Deep reinforcement learning-based approaches seek to solve the problem in an end-to-end manner, but they generally require large amounts of sample data and are confronted with high-dimensional inputs and complex models, which lead to slow convergence and ineffective learning from noisy data. Most deep reinforcement learning-based approaches use a simple reward function, which cannot satisfy the requirements of driving policy learning in complicated and volatile traffic scenarios. To address these issues, a multi-sensing and multi-constraint reward function based deep reinforcement learning method (MSMC-SAC) is proposed. The inputs of the proposed method include the front-view image, the point cloud from LiDAR, and the bird's-eye view generated from the perception results. The multi-sensing input is first passed to an encoding network to obtain a latent-space representation and then forwarded to a SAC-based learning module. A multi-constraint reward function is designed that accounts for the transverse-longitudinal distance error, the heading angle error, smoothness, velocity, and the possibility of collision. The performance of the proposed method in typical traffic scenarios is validated with CARLA [1], and the effects of multiple reward functions are compared. The simulation results show that the presented approach can learn driving policies in many complex scenarios, such as driving straight ahead, passing intersections, and making turns, and that it outperforms existing typical deep reinforcement learning methods." @default.
- W3206843203 created "2021-10-25" @default.
- W3206843203 creator A5023197773 @default.
- W3206843203 creator A5033581609 @default.
- W3206843203 creator A5036499113 @default.
- W3206843203 creator A5059219510 @default.
- W3206843203 date "2021-01-01" @default.
- W3206843203 modified "2023-10-16" @default.
- W3206843203 title "A Multi-sensing Input and Multi-constraint Reward Mechanism Based Deep Reinforcement Learning Method for Self-driving Policy Learning" @default.
- W3206843203 cites W1968962398 @default.
- W3206843203 cites W2032924574 @default.
- W3206843203 cites W2082764616 @default.
- W3206843203 cites W2159956441 @default.
- W3206843203 cites W2343568200 @default.
- W3206843203 cites W2905173465 @default.
- W3206843203 cites W2963917788 @default.
- W3206843203 cites W2968983352 @default.
- W3206843203 doi "https://doi.org/10.1007/978-3-030-89092-6_63" @default.
- W3206843203 hasPublicationYear "2021" @default.
- W3206843203 type Work @default.
- W3206843203 sameAs 3206843203 @default.
- W3206843203 citedByCount "0" @default.
- W3206843203 crossrefType "book-chapter" @default.
- W3206843203 hasAuthorship W3206843203A5023197773 @default.
- W3206843203 hasAuthorship W3206843203A5033581609 @default.
- W3206843203 hasAuthorship W3206843203A5036499113 @default.
- W3206843203 hasAuthorship W3206843203A5059219510 @default.
- W3206843203 hasConcept C108583219 @default.
- W3206843203 hasConcept C111030470 @default.
- W3206843203 hasConcept C119857082 @default.
- W3206843203 hasConcept C127413603 @default.
- W3206843203 hasConcept C14036430 @default.
- W3206843203 hasConcept C154945302 @default.
- W3206843203 hasConcept C185592680 @default.
- W3206843203 hasConcept C198531522 @default.
- W3206843203 hasConcept C2776036281 @default.
- W3206843203 hasConcept C41008148 @default.
- W3206843203 hasConcept C43617362 @default.
- W3206843203 hasConcept C78458016 @default.
- W3206843203 hasConcept C78519656 @default.
- W3206843203 hasConcept C86803240 @default.
- W3206843203 hasConcept C97541855 @default.
- W3206843203 hasConceptScore W3206843203C108583219 @default.
- W3206843203 hasConceptScore W3206843203C111030470 @default.
- W3206843203 hasConceptScore W3206843203C119857082 @default.
- W3206843203 hasConceptScore W3206843203C127413603 @default.
- W3206843203 hasConceptScore W3206843203C14036430 @default.
- W3206843203 hasConceptScore W3206843203C154945302 @default.
- W3206843203 hasConceptScore W3206843203C185592680 @default.
- W3206843203 hasConceptScore W3206843203C198531522 @default.
- W3206843203 hasConceptScore W3206843203C2776036281 @default.
- W3206843203 hasConceptScore W3206843203C41008148 @default.
- W3206843203 hasConceptScore W3206843203C43617362 @default.
- W3206843203 hasConceptScore W3206843203C78458016 @default.
- W3206843203 hasConceptScore W3206843203C78519656 @default.
- W3206843203 hasConceptScore W3206843203C86803240 @default.
- W3206843203 hasConceptScore W3206843203C97541855 @default.
- W3206843203 hasLocation W32068432031 @default.
- W3206843203 hasOpenAccess W3206843203 @default.
- W3206843203 hasPrimaryLocation W32068432031 @default.
- W3206843203 hasRelatedWork W10379689 @default.
- W3206843203 hasRelatedWork W12291563 @default.
- W3206843203 hasRelatedWork W14471487 @default.
- W3206843203 hasRelatedWork W4412456 @default.
- W3206843203 hasRelatedWork W6915741 @default.
- W3206843203 hasRelatedWork W8447228 @default.
- W3206843203 hasRelatedWork W868042 @default.
- W3206843203 hasRelatedWork W9860846 @default.
- W3206843203 hasRelatedWork W9942637 @default.
- W3206843203 hasRelatedWork W13071157 @default.
- W3206843203 isParatext "false" @default.
- W3206843203 isRetracted "false" @default.
- W3206843203 magId "3206843203" @default.
- W3206843203 workType "book-chapter" @default.
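The abstract stored above describes a multi-constraint reward combining transverse-longitudinal distance error, heading angle error, smoothness, velocity, and collision risk. The sketch below is purely illustrative of that kind of reward shaping; the paper's actual terms, coefficients, and helper names are not given in this record, so every weight and field name here is hypothetical.

```python
# Hedged sketch of a multi-constraint reward in the spirit of the abstract.
# All coefficients and observation fields are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class StepObservation:
    lateral_error: float       # transverse distance to the reference path (m)
    longitudinal_error: float  # longitudinal distance error (m)
    heading_error: float       # heading angle error (rad)
    jerk: float                # smoothness proxy: change in acceleration (m/s^3)
    speed: float               # current speed (m/s)
    target_speed: float        # desired speed (m/s)
    collision: bool            # collision detected at this step


def multi_constraint_reward(obs: StepObservation) -> float:
    """Combine the constraints named in the abstract into a single scalar reward."""
    r_track = -1.0 * abs(obs.lateral_error) - 0.5 * abs(obs.longitudinal_error)
    r_heading = -0.5 * abs(obs.heading_error)
    r_smooth = -0.1 * abs(obs.jerk)
    r_speed = -0.2 * abs(obs.speed - obs.target_speed)
    r_collision = -100.0 if obs.collision else 0.0
    return r_track + r_heading + r_smooth + r_speed + r_collision


# Example usage with made-up values for a single simulation step.
if __name__ == "__main__":
    obs = StepObservation(
        lateral_error=0.3, longitudinal_error=0.1, heading_error=0.05,
        jerk=0.2, speed=8.0, target_speed=10.0, collision=False,
    )
    print(multi_constraint_reward(obs))
```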