SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Syed A, Morris BT. Mach. Vis. Appl. 2023; 34(2): e23.

Copyright

(Copyright © 2023, Holtzbrinck Springer Nature Publishing Group)

DOI

10.1007/s00138-022-01357-z

PMID

36712952

PMCID

PMC9870204

Abstract

Understanding pedestrian motion is critical for many real-world applications, e.g., autonomous driving and social robot navigation. It is a challenging problem, since autonomous agents require a complete understanding of their surroundings, including complex spatial, social, and scene dependencies. In trajectory prediction research, spatial and social interactions are widely studied, while scene understanding has received less attention. In this paper, we study the effectiveness of different encoding mechanisms for capturing the influence of the scene on pedestrian trajectories. We leverage a recurrent Variational Autoencoder to encode a pedestrian's motion history, their social interactions with other pedestrians, and semantic scene information. We then evaluate performance on various public datasets, such as ETH-UCY, Stanford Drone, and Grand Central Station. Experimental results show that utilizing a fully segmented map, for explicit scene semantics, outperforms other variants of scene representation (semantic and CNN embedding) for trajectory prediction tasks.
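The pipeline the abstract describes (motion history plus semantic scene context encoded into a variational latent space) can be sketched in minimal form. This is an illustrative stand-in only: the displacement-averaging "encoder", the class list, and the latent parameters below are assumptions for demonstration, not the paper's recurrent architecture.

```python
import math
import random

def one_hot(label, classes):
    """Encode a semantic scene label (e.g. 'sidewalk') as a one-hot vector."""
    v = [0.0] * len(classes)
    v[classes.index(label)] = 1.0
    return v

def encode(history, scene_label, classes):
    """Toy encoder: summarize a pedestrian's motion history as mean
    displacement and append explicit scene semantics. A real model would
    run a recurrent network over the history instead of averaging."""
    dxs = [b[0] - a[0] for a, b in zip(history, history[1:])]
    dys = [b[1] - a[1] for a, b in zip(history, history[1:])]
    motion = [sum(dxs) / len(dxs), sum(dys) / len(dys)]
    return motion + one_hot(scene_label, classes)

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1),
    so sampling stays differentiable with respect to mu and log_var."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# Four observed positions (x, y) and a hypothetical 3-class semantic map.
classes = ["sidewalk", "road", "grass"]
history = [(0.0, 0.0), (0.4, 0.1), (0.8, 0.2), (1.2, 0.3)]
feat = encode(history, "sidewalk", classes)

# Pretend a learned network mapped `feat` to latent Gaussian parameters.
mu, log_var = [0.4, 0.1], [-2.0, -2.0]
z = reparameterize(mu, log_var, random.Random(0))
```

In the paper's setting, a decoder would then roll the latent sample `z` forward into future trajectory positions; the point of the sketch is only how motion, social, and scene features are fused before the variational bottleneck.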


Language: en

Keywords

Deep learning; Semantic segmentation; Trajectory prediction; Variational autoencoder
