SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Kim SS, Gwak IY, Lee SW. IEEE Trans. Intell. Transp. Syst. 2020; 21(6): 2522-2533.

Copyright

(Copyright © 2020, IEEE (Institute of Electrical and Electronics Engineers))

DOI

10.1109/TITS.2019.2919920

PMID

unavailable

Abstract

Continuous orientation estimation of a moving pedestrian is a crucial problem in autonomous driving, where detecting whether a pedestrian intends to cross the road is essential. The task remains challenging for several reasons: the diversity of pedestrian appearances, the subtle pose differences between adjacent orientations, and similar poses that correspond to different orientations, such as axisymmetric ones. Recent studies based on convolutional neural networks (CNNs) have attempted to address these problems, but their performance remains far from satisfactory for use in intelligent vehicles. In this paper, we propose a CNN-based two-stream network for continuous orientation estimation. The network learns representations based on the spatial co-occurrence of visual patterns among pedestrians. To boost estimation performance, we apply a coarse-to-fine learning approach consisting of two learning stages. We evaluated continuous orientation estimation on the TUD Multiview Pedestrian and KITTI datasets and compared the results with state-of-the-art methods. The results show that our method outperforms existing methods.
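The coarse-to-fine idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' network: the bin count, function names, and decoding scheme below are assumptions for illustration only. In a common formulation, a coarse stage classifies the orientation into one of N discrete sectors, and a fine stage predicts a continuous offset within the chosen sector; evaluation then uses a wrap-around angular error so that axisymmetric confusions are penalized correctly.

```python
def coarse_bin(theta_deg: float, n_bins: int = 8) -> int:
    """Assign a continuous orientation (degrees) to one of n_bins coarse
    sectors, with bin 0 centered at 0 degrees. (Illustrative choice; the
    paper's actual binning, if any, may differ.)"""
    width = 360.0 / n_bins
    return int(((theta_deg % 360.0) + width / 2.0) // width) % n_bins

def decode(bin_idx: int, offset_deg: float, n_bins: int = 8) -> float:
    """Recover a continuous orientation from a coarse bin plus a fine
    within-bin offset, as a second refinement stage might predict."""
    width = 360.0 / n_bins
    return (bin_idx * width + offset_deg) % 360.0

def angular_error(pred_deg: float, true_deg: float) -> float:
    """Wrap-around absolute angular error: 350 vs. 10 degrees is 20,
    not 340 -- the usual metric for continuous orientation estimation."""
    return abs((pred_deg - true_deg + 180.0) % 360.0 - 180.0)
```

For example, an orientation of 50 degrees falls in coarse bin 1 (centered at 45 degrees) with a fine offset of 5 degrees, and `decode(1, 5.0)` recovers 50.0.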


Language: en



All SafetyLit records are available for automatic download to Zotero & Mendeley