Journal Article

Citation

Altun M, Celenk M. IEEE Trans. Intell. Transp. Syst. 2017; 18(12): 3398-3407.

Copyright

(Copyright © 2017, IEEE (Institute of Electrical and Electronics Engineers))

DOI

10.1109/TITS.2017.2688352

PMID

unavailable

Abstract

This paper aims to develop a vision-based driver assistance system for scene awareness using video frames obtained from a dashboard camera. A saliency image map is devised with features pertinent to the driving scene. This saliency map mimics human contour- and motion-sensitive visual perception by extracting spatial, spectral, and temporal information from the input frames and applying entropy-driven image-context-feature data fusion. The resultant fusion output comprises high-level descriptors for still segment boundaries and non-stationary object appearance. Following the segmentation and foreground object detection stage, an adaptive maximum likelihood classifier selects road surface regions. The proposed scene-driven vision system improves the driver's situational awareness by enabling adaptive road surface classification. As the experimental results demonstrate, context-aware low-level to high-level information fusion based on a human vision model produces superior segmentation, tracking, and classification results that lead to a high-level abstraction of the driving scene.
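
The pipeline described above (entropy-weighted fusion of spatial, spectral, and temporal feature maps, followed by a maximum-likelihood road-surface decision) can be illustrated with a minimal sketch. This is not the authors' implementation: the feature choices, the diagonal-Gaussian class model, and the function names (channel_entropy, saliency_fusion, ml_road_mask) are assumptions made purely for illustration.

# Illustrative sketch only: entropy-weighted fusion of three per-pixel feature
# maps and a per-pixel Gaussian maximum-likelihood road/non-road decision.
import numpy as np

def channel_entropy(feature_map, bins=64):
    # Shannon entropy of a feature map, used here as its fusion weight.
    hist, _ = np.histogram(feature_map, bins=bins, density=True)
    hist = hist[hist > 0]
    return float(-np.sum(hist * np.log2(hist)))

def saliency_fusion(spatial, spectral, temporal):
    # Weight each map by its entropy and sum into a single saliency map.
    maps = [spatial, spectral, temporal]
    weights = np.array([channel_entropy(m) for m in maps])
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, maps))

def fit_gaussian(pixel_features):
    # Per-class diagonal-Gaussian parameters estimated from labeled pixels.
    return pixel_features.mean(axis=0), pixel_features.var(axis=0) + 1e-6

def ml_road_mask(pixel_features, road_params, nonroad_params):
    # Maximum-likelihood decision: pick the class with the higher log-likelihood.
    def loglik(x, params):
        mu, var = params
        return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var,
                             axis=-1)
    return loglik(pixel_features, road_params) > loglik(pixel_features,
                                                        nonroad_params)

# Toy usage with stand-in feature maps for one 120x160 frame pair.
rng = np.random.default_rng(0)
frame_t, frame_prev = rng.random((2, 120, 160))
spatial = np.abs(np.gradient(frame_t)[0])             # contour-like cue
spectral = np.abs(np.fft.fft2(frame_t))
spectral = spectral / spectral.max()                   # frequency-domain cue
temporal = np.abs(frame_t - frame_prev)                # motion cue
saliency = saliency_fusion(spatial, spectral, temporal)

features = saliency.reshape(-1, 1)                     # one feature per pixel
road = fit_gaussian(features[:4000])                   # pretend-labeled samples
nonroad = fit_gaussian(features[4000:])
mask = ml_road_mask(features, road, nonroad).reshape(frame_t.shape)

In practice the labeled road and non-road samples would come from annotated frames rather than the arbitrary split used in this toy example.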


Language: en

Keywords

adaptive maximum likelihood classifier; adaptive road surface classification; autonomous driving; cameras; classification results; computer vision; content analysis; context-aware low-level-high-level information fusion; dashboard camera; driver assistance system; driver information systems; driving scene; entropy; entropy-driven image-context-feature data fusion; entropy-driven context-feature fusion; feature extraction; foreground object detection stage; high-level abstraction; human contour; human vision model; image classification; image color analysis; image segmentation; image sequences; input frames; motion sensitive visual perception; object detection; optical imaging; optical sensors; resultant fusion output; road safety; road scene; road scene content analysis; road surface regions; saliency image map; saliency map; scene awareness; scene driven vision system; segment boundaries; segmentation; sensor fusion; spatial information; spectral information; temporal information; traffic engineering computing; video frames; video signal processing; visual perception
