SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Sandini G, Noceti N, Vignolo A, Sciutti A, Rea F, Verri A, Odone F. J. Vis. 2015; 15(12): e497.

Copyright

(Copyright © 2015, Association for Research in Vision and Ophthalmology)

DOI

10.1167/15.12.497

PMID

26326185

Abstract

Understanding the actions and intentions of others is at the basis of social communication in humans and relies on our ability to associate the view of others' actions with (the view of) our own motor acts. In this work, we focus on the low-level, bottom-up detection and segmentation of biological motion, and present a computational model to extract perspective-invariant visual features based on known regularities of body motion trajectories. The segmentation approach presented does not require any a priori knowledge of the kinematics of the body and relies on rather coarse early visual features. Specifically, we focus on features describing an invariant property of biological movements, known as the Two-Thirds Power Law, which relates the instantaneous velocity and the local curvature of the trajectory of moving body parts. Starting from video streams acquired during different kinds of motion, the algorithm initially computes the optical flow and detects the regions where motion is occurring by computing a motion saliency map. It then describes these regions in the visual stream with low-level features corresponding to the computational counterparts of the quantities involved in the Two-Thirds Power Law. Finally, on the basis of these quantities, we demonstrate the validity of this approach in distinguishing biological motion from dynamic events due to non-biological phenomena. The model proposed represents a pre-categorical biological motion segmentation tool that exploits the regularities of human motion to build and tune low-level visual feature extraction, requires no a priori knowledge of the scene or of the kinematics of the body, and guarantees view invariance. Hence it is an ideal tool to support the matching between visual information extracted during action execution and action observation, which underlies the human ability to understand others' actions. Meeting abstract presented at VSS 2015.
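
As a rough illustration of the regularity the abstract builds on (not the authors' implementation, which works on optical-flow features extracted from video), the Two-Thirds Power Law can be written as v(t) = K * kappa(t)^(-1/3), where v is the instantaneous tangential speed and kappa the local curvature of the trajectory; equivalently, angular velocity scales as curvature to the power 2/3. The minimal Python sketch below estimates the velocity-curvature exponent of a sampled 2-D trajectory by log-log regression; the function name, tolerances, and the ellipse test trajectory are illustrative assumptions.

# Illustrative sketch only: estimate the velocity-curvature exponent of a
# sampled 2-D trajectory. The Two-Thirds Power Law predicts
# v(t) = K * curvature(t)**(-1/3), i.e. a slope of about -1/3 in
# log-log coordinates for biological movements.
import numpy as np

def power_law_exponent(x, y, dt):
    # Finite-difference estimates of velocity and acceleration.
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    v = np.hypot(dx, dy)                                # instantaneous speed
    curv = np.abs(dx * ddy - dy * ddx) / np.maximum(v, 1e-12) ** 3
    ok = (v > 1e-6) & (curv > 1e-6)                     # keep well-conditioned samples
    slope, _ = np.polyfit(np.log(curv[ok]), np.log(v[ok]), 1)
    return slope

# An ellipse traced at constant angular rate obeys the law exactly:
# v = (a*b)**(1/3) * curvature**(-1/3), so the estimated slope is ~ -1/3.
t = np.linspace(0.0, 2.0 * np.pi, 2000)
x, y = 3.0 * np.cos(t), 1.0 * np.sin(t)
print(power_law_exponent(x, y, dt=t[1] - t[0]))         # ~ -0.333

In the model described in the abstract, the per-pixel velocities would come from optical flow over the detected motion regions rather than from an analytic trajectory; the point of the sketch is only that the -1/3 exponent gives a view-invariant cue separating biological from non-biological motion.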


Language: en
