Journal Article

Citation

Jodoin PM, Saligrama V, Konrad J. Behavior subtraction. IEEE Trans. Image Process. 2012; 21(9): 4244-4255.

Copyright

(Copyright © 2012, IEEE (Institute of Electrical and Electronics Engineers))

DOI

10.1109/TIP.2012.2199326

PMID

22614646

Abstract

Background subtraction has been a driving engine for many computer vision and video analytics tasks. Although many variants of it exist, they all share the underlying assumption that photometric scene properties are either static or exhibit temporal stationarity. While this works in many applications, the model fails when one is interested in discovering changes in scene dynamics rather than changes in a scene's photometric properties; unusual pedestrian and motor traffic patterns are but two examples. We propose a new model and computational framework that assume the dynamics of a scene, not its photometry, to be stationary, i.e., a dynamic background serves as the reference for the dynamics of an observed scene. Central to our approach is the concept of an event, which we define as short-term scene dynamics captured over a time window at a specific spatial location in the camera field of view. Unlike in our earlier work, we compute events by time-aggregating vector object descriptors that can combine multiple features, such as object size, direction of movement, and speed. We characterize events probabilistically but use low-memory, low-complexity surrogates in a practical implementation. Using these surrogates amounts to behavior subtraction, a new algorithm for effective and efficient temporal anomaly detection and localization. Behavior subtraction is resilient to spurious background motion, such as that caused by camera jitter, and is content-blind, i.e., it works equally well on humans, cars, animals, and other objects in both uncluttered and highly cluttered scenes. Clearly, treating video as a collection of events rather than colored pixels opens new possibilities for video analytics.
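Because the abstract stays at the conceptual level, the Python sketch below only illustrates the general "accumulate events, then compare against a reference of scene dynamics" idea, not the authors' actual method: frame differencing as the per-pixel feature, a plain window sum as the time aggregation, and the names motion_feature, event_map, behavior_subtraction, window, and margin are all assumptions introduced here for illustration.

import numpy as np

# Illustrative sketch only: per-pixel activity is aggregated over a time
# window into an "event" map, and observed dynamics are compared against a
# reference (training) video's dynamics. The feature, aggregation, and
# threshold below are simplifying assumptions, not the paper's algorithm.

def motion_feature(prev_frame, frame, thresh=25):
    """Crude per-pixel activity indicator: 1 where intensity changed noticeably."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > thresh).astype(np.float32)

def event_map(frames, window):
    """Time-aggregate per-pixel activity over the most recent window of frames."""
    feats = np.stack([motion_feature(a, b) for a, b in zip(frames, frames[1:])])
    return feats[-window:].sum(axis=0)           # (H, W) event accumulation

def behavior_subtraction(train_frames, test_frames, window=50, margin=1.2):
    """Flag pixels whose observed short-term activity exceeds the reference dynamics."""
    reference = event_map(train_frames, window)  # dynamic background (behavior)
    observed = event_map(test_frames, window)    # current short-term dynamics
    return observed > margin * reference         # boolean anomaly mask

In the paper, the aggregated descriptors can combine richer object features (size, direction of movement, speed), and low-memory, low-complexity surrogates replace the explicit frame stacks used in this sketch.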


Language: en
