SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Liang J, Qiao YL, Guan T, Manocha D. IEEE Robot. Autom. Lett. 2021; 6(4): 6148-6155.

Copyright

(Copyright © 2021, Institute of Electrical and Electronics Engineers)

DOI

10.1109/LRA.2021.3090660

PMID

unavailable

Abstract

We present a modified velocity-obstacle (VO) algorithm that uses probabilistic partial observations of the environment to compute velocities and navigate a robot to a target. Our system uses commodity visual sensors, including a mono-camera and a 2D Lidar, to explicitly predict the velocities and positions of surrounding obstacles through optical flow estimation, object detection, and sensor fusion. A key aspect of our work is coupling the perception (OF: optical flow) and planning (VO) components for reliable navigation. Overall, our OF-VO algorithm using learning-based perception and model-based planning methods offers better performance than prior algorithms in terms of navigation time and success rate of collision avoidance. Our method also provides bounds on the probabilistic collision avoidance algorithm. We highlight the real-time performance of OF-VO on a Turtlebot navigating among pedestrians in both simulated and real-world scenes. A demo video is available at https://gamma.umd.edu/ofvo/
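The abstract builds on the classic velocity-obstacle formulation: a candidate robot velocity is rejected if, under the obstacle's predicted velocity, the relative motion leads to a collision within a time horizon. As a rough illustration only (a minimal deterministic VO test, not the authors' probabilistic OF-VO algorithm; all function and parameter names here are hypothetical):

```python
import numpy as np

def in_velocity_obstacle(p_robot, p_obs, v_candidate, v_obs,
                         r_robot, r_obs, horizon=5.0):
    """Return True if moving at v_candidate collides with the obstacle
    (modeled as a disc) within `horizon` seconds -- the basic VO test."""
    rel_p = np.asarray(p_obs, float) - np.asarray(p_robot, float)
    rel_v = np.asarray(v_candidate, float) - np.asarray(v_obs, float)
    r = r_robot + r_obs  # combined radii of the two discs

    # Obstacle position relative to robot over time: d(t) = rel_p - t*rel_v.
    # Collision when |d(t)| <= r, i.e. the quadratic a*t^2 + b*t + c <= 0.
    a = rel_v @ rel_v
    b = -2.0 * (rel_p @ rel_v)
    c = rel_p @ rel_p - r * r

    if c <= 0.0:      # already overlapping
        return True
    if a == 0.0:      # no relative motion, gap never closes
        return False
    disc = b * b - 4.0 * a * c
    if disc < 0.0:    # relative ray misses the disc entirely
        return False
    t_hit = (-b - np.sqrt(disc)) / (2.0 * a)  # earliest crossing time
    return 0.0 <= t_hit <= horizon
```

A planner in this style would evaluate many candidate velocities and pick the one closest to the goal direction that this test does not reject; the paper's contribution is feeding the test with learned, probabilistic obstacle estimates rather than the exact states assumed here.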


Language: en

Keywords

Autonomous vehicle navigation; Cameras; Collision avoidance; Laser radar; Navigation; Optical imaging; Optical sensors; Robots; Vision-based navigation
