SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Pham DT, Tran PN, Alam S, Duong V, Delahaye D. Transp. Res. C Emerg. Technol. 2022; 135: e103463.

Copyright

(Copyright © 2022, Elsevier Publishing)

DOI

10.1016/j.trc.2021.103463

PMID

unavailable

Abstract

With the continuous growth in air transportation demand, air traffic controllers will have to handle increased traffic and, consequently, more potential conflicts. This gives rise to the need for conflict resolution advisory tools that perform well in high-density traffic scenarios in a noisy environment. Unlike model-based approaches, learning-based approaches can take advantage of historical traffic data and flexibly encapsulate environmental uncertainty. In this study, we propose a reinforcement learning approach that is capable of resolving conflicts, in the presence of traffic and the inherent uncertainties of conflict resolution maneuvers, without requiring prior knowledge of a set of rules mapping conflict scenarios to expected actions. The conflict resolution task is formulated as a decision-making problem in a large and complex action space. The research also includes the development of a learning environment, a scenario state representation, a reward function, and a reinforcement learning algorithm inspired by the Q-learning and Deep Deterministic Policy Gradient algorithms. The proposed algorithm, with a two-stage decision-making process, is used to train an agent that can serve as an advisory tool for air traffic controllers in resolving air traffic conflicts, learning from historical data and evolving over time. Our findings show that the proposed model gives the agent the capability to suggest high-quality conflict resolutions under different environmental conditions, and it outperforms two baseline algorithms. The trained model performs well under low uncertainty (success rate ≥95%) and medium uncertainty (success rate ≥87%) with high traffic density. The effects of different impact factors, such as environmental uncertainty and traffic density, on learning performance are analyzed and discussed; environmental uncertainty is the factor that most strongly affects performance. Moreover, the combination of high-density traffic and high uncertainty will be a challenge for any learning model.
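The abstract states that the proposed algorithm draws on Q-learning. As a purely illustrative sketch (not the paper's actual algorithm), the tabular Q-learning update it builds on can be written as follows; the state and action names here are invented for the example:

```python
# Toy illustration of the tabular Q-learning update rule that the paper's
# algorithm draws on. States, actions, and reward values are hypothetical.
ALPHA, GAMMA = 0.5, 0.9  # learning rate, discount factor

def q_update(q, state, action, reward, next_state, actions):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
    return q[(state, action)]

# Toy episode: resolving a "conflict" state with a heading-change action.
q = {}
actions = ["turn_left", "turn_right", "maintain"]
new_q = q_update(q, "conflict", "turn_left",
                 reward=1.0, next_state="clear", actions=actions)
print(round(new_q, 3))  # 0.5
```

The paper's method extends beyond this tabular setting: Deep Deterministic Policy Gradient replaces the explicit Q-table with actor and critic networks so the agent can act in the large, continuous action space the abstract describes.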


Language: en

Keywords

Actor-critic; Air traffic control; Conflict resolution; Deep deterministic policy gradient; Learning environment; Reinforcement learning
