SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Shi Y, Liu Y, Qi Y, Han Q. Sensors (Basel) 2022; 22(3): e779.

Copyright

(Copyright © 2022, MDPI: Multidisciplinary Digital Publishing Institute)

DOI

10.3390/s22030779

PMID

35161523

Abstract

Controlling autonomous vehicles (AVs) at urban unsignalized intersections is a challenging problem, especially in a mixed traffic environment where self-driving vehicles coexist with human-driven vehicles. In this study, a coordinated control method based on proximal policy optimization (PPO) in a Vehicle-Road-Cloud Integration System (VRCIS) is proposed, in which the control problem is formulated as a reinforcement learning (RL) problem. In this system, vehicle-to-everything (V2X) communication maintains the links between vehicles, and vehicle-to-infrastructure (V2I) wireless communication allows equipped vehicles to be detected, yielding a cost-efficient method. The connected and autonomous vehicle (CAV) defined in the VRCIS then learns, via RL, a policy for crossing the intersection safely while adapting to human-driven vehicles (HDVs). We developed a valid, scalable RL framework that can handle the dynamic communication topologies arising in traffic. The state, action, and reward of the RL problem are designed for the urban unsignalized intersection, deployment within the RL framework is described, and several experiments with this framework verify the effectiveness of the proposed method.
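
The abstract names PPO as the learning algorithm but gives no implementation detail. Below is a minimal sketch of the PPO clipped-surrogate update that such a controller would rely on; the state dimension, discrete action set, and dummy rollout batch are illustrative placeholders (not the paper's state/action/reward design), and PyTorch is assumed.

# Minimal sketch of the PPO clipped-surrogate policy update.
# All dimensions below are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

STATE_DIM = 16   # e.g., positions/speeds of nearby CAVs and HDVs (assumed)
N_ACTIONS = 3    # e.g., accelerate / hold / brake (assumed)
CLIP_EPS = 0.2   # standard PPO clipping range

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                       nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def ppo_update(states, actions, old_log_probs, advantages):
    """One clipped-surrogate policy update on a batch of transitions."""
    dist = torch.distributions.Categorical(logits=policy(states))
    log_probs = dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)            # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS) * advantages
    loss = -torch.min(unclipped, clipped).mean()             # pessimistic bound
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for rollouts collected at the intersection.
batch = 32
states = torch.randn(batch, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (batch,))
with torch.no_grad():
    old_log_probs = torch.distributions.Categorical(
        logits=policy(states)).log_prob(actions)
advantages = torch.randn(batch)
print(ppo_update(states, actions, old_log_probs, advantages))

In a full training loop, the advantages would come from a learned value function (e.g., generalized advantage estimation) over trajectories collected at the simulated intersection, and the update would be repeated over several epochs per batch, as is standard for PPO.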


Language: en

Keywords

Communication; Humans; Learning; *Accidents, Traffic; *Automobile Driving; connected and autonomous vehicles; reinforcement learning; urban unsignalized intersection
