SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Li D, Okhrin O. Transp. Res. C Emerg. Technol. 2023; 147: e103987.

Copyright

(Copyright © 2023, Elsevier Publishing)

DOI

10.1016/j.trc.2022.103987

PMID

unavailable

Abstract

In the autonomous driving field, the fusion of human knowledge into Deep Reinforcement Learning (DRL) is often based on human demonstrations recorded in a simulated environment, which limits generalization and the feasibility of application to real-world traffic. We propose a two-stage DRL method to train a car-following agent that refines its policy by leveraging real-world human driving experience and achieves performance superior to that of a pure DRL agent. The DRL agent is trained within the CARLA framework with the Robot Operating System (ROS). For evaluation, we designed different driving scenarios to compare the proposed two-stage DRL car-following agent with other agents. After extracting the "good" behavior from the human driver, the agent becomes more efficient and reasonable, making it more suitable for Human-Robot Interaction (HRI) traffic.
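The abstract gives no implementation details, but the two-stage idea — first fitting a policy to recorded human driving, then fine-tuning it with a reward signal — can be sketched minimally. Everything below is an illustrative assumption, not the authors' method: the state layout (gap, relative speed, ego speed), the linear policy, the synthetic stand-in for real human logs, and the use of least squares and hill climbing as simple stand-ins for supervised pretraining and the DRL stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for real-world human car-following logs:
# state = [gap (m), relative speed (m/s), ego speed (m/s)], action = acceleration.
states = rng.uniform([5.0, -3.0, 0.0], [50.0, 3.0, 30.0], size=(500, 3))
human_gain = np.array([0.05, 0.4, -0.05])  # assumed "human" response weights
actions = states @ human_gain + rng.normal(0.0, 0.1, size=500)

# Stage 1: behavior cloning -- fit a linear policy to the human data
# (ordinary least squares as a stand-in for supervised network training).
w, *_ = np.linalg.lstsq(states, actions, rcond=None)

def reward(weights):
    """Toy car-following reward: penalize deviation from human actions
    plus a small comfort penalty on acceleration magnitude."""
    pred = states @ weights
    return -np.mean((pred - actions) ** 2) - 0.01 * np.mean(pred ** 2)

# Stage 2: derivative-free fine-tuning of the cloned policy
# (hill climbing as a stand-in for the DRL stage).
best, best_r = w.copy(), reward(w)
for _ in range(200):
    cand = best + rng.normal(0.0, 0.01, size=3)
    r = reward(cand)
    if r > best_r:
        best, best_r = cand, r

print(best_r >= reward(w))  # fine-tuning never degrades the cloned policy
```

Because stage 2 only accepts candidates that improve the reward, the fine-tuned policy is guaranteed to score at least as well as the cloned one — the same monotone-improvement property the two-stage design aims for, here in a toy setting.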


Language: en

Keywords

Car-following model; CARLA; DRL; Real driving dataset; ROS
