SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Ying J, Feng Y. Transp. Res. Rec. 2022; 2676(7): 186-198.

Copyright

(Copyright © 2022, Transportation Research Board, National Research Council, National Academy of Sciences USA, Publisher SAGE Publishing)

DOI

10.1177/03611981221077263

PMID

unavailable

Abstract

Connected and automated vehicles (CAVs) extend urban traffic control from the temporal to the spatiotemporal dimension by enabling control of CAV trajectories. Most existing studies on CAV trajectory planning consider only longitudinal behaviors (i.e., in-lane driving) or assume that lane changing can be performed instantaneously; the resulting CAV trajectories are unrealistic and cannot be executed at the vehicle level. The aim of this paper is to propose a full trajectory planning model that considers both in-lane driving and lane changing maneuvers. Trajectory generation is modeled as an optimization problem whose cost function accounts for multiple driving features, including safety, efficiency, and comfort; ten features are selected to capture both in-lane driving and lane changing behaviors. One major challenge in generating a trajectory that reflects a given driving policy is balancing the weights of the different features in the cost function. To address this challenge, it is proposed to optimize the weights of the cost function by imitation learning: maximum entropy inverse reinforcement learning is applied to obtain the optimal weight for each feature, and CAV trajectories are then generated with the learned weights. Experiments using the Next Generation Simulation (NGSIM) dataset show that the generated trajectories closely match the original trajectories in terms of Euclidean displacement, with a mean average error of less than 1 m. Meanwhile, the generated trajectories maintain safe gaps with surrounding vehicles and have comparable fuel consumption.
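The weight-learning idea described in the abstract, a linear cost over driving features whose weights are fit by maximum entropy inverse reinforcement learning so that expert (human-driven) trajectories receive low cost, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature set (three features rather than the paper's ten), the trajectory representation, and the finite candidate set standing in for the trajectory distribution are all simplifying assumptions.

```python
import numpy as np

def trajectory_features(traj, dt=0.1):
    """Map a trajectory (T x 2 array of longitudinal/lateral positions,
    sampled every dt seconds) to a feature vector.

    Hypothetical features standing in for the paper's ten: mean speed
    (efficiency), mean acceleration magnitude (comfort), and mean lateral
    offset (lane keeping).
    """
    vel = np.diff(traj, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    return np.array([
        np.mean(np.linalg.norm(vel, axis=1)),  # efficiency: mean speed
        np.mean(np.linalg.norm(acc, axis=1)),  # comfort: mean |acceleration|
        np.mean(np.abs(traj[:, 1])),           # lane keeping: mean lateral offset
    ])

def maxent_irl(expert_trajs, candidate_trajs, lr=0.05, iters=200):
    """Learn cost-function weights w by maximum entropy IRL.

    Under the max-entropy model, P(traj) is proportional to exp(-w . f(traj)).
    The log-likelihood gradient is the model's expected features minus the
    expert feature expectation; ascending it makes expert-like trajectories
    low-cost. A fixed candidate set approximates the partition function.
    """
    expert_f = np.mean([trajectory_features(t) for t in expert_trajs], axis=0)
    cand_f = np.array([trajectory_features(t) for t in candidate_trajs])
    w = np.zeros_like(expert_f)
    for _ in range(iters):
        costs = cand_f @ w                     # linear cost per candidate
        p = np.exp(-(costs - costs.min()))     # shifted for numerical stability
        p /= p.sum()                           # softmin distribution over candidates
        w += lr * (p @ cand_f - expert_f)      # gradient ascent on log-likelihood
    return w
```

With the learned `w`, trajectory generation reduces to selecting (or optimizing for) the candidate with minimal cost `trajectory_features(traj) @ w`; candidates whose features resemble the expert's end up cheapest.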


Language: en

Keywords

imitation learning; in-lane driving; lane changing; maximum entropy inverse reinforcement learning; vehicle trajectory planning
