SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Liu T, Stathaki T. Front. Neurorobotics 2018; 12: e64.

Affiliation

Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom.

Copyright

(Copyright © 2018, Frontiers Research Foundation)

DOI

10.3389/fnbot.2018.00064

PMID

30344486

PMCID

PMC6182048

Abstract

Convolutional neural networks (CNNs) have enabled significant improvements in pedestrian detection owing to the strong representation ability of CNN features. However, it is generally difficult to reduce false positives on hard negative samples such as tree leaves, traffic lights, and poles. Some of these hard negatives can be removed by making use of high-level semantic vision cues. In this paper, we propose a region-based CNN method that uses semantic cues for better pedestrian detection. Our method extends the Faster R-CNN detection framework by adding a network branch for semantic image segmentation. The semantic network computes complementary higher-level semantic features that are integrated with the convolutional features. We use multi-resolution feature maps extracted from different network layers to ensure good detection accuracy for pedestrians at different scales. A boosted forest is trained on the integrated features in a cascaded manner for hard negative mining. Experiments on the Caltech pedestrian dataset show improved detection accuracy with the semantic network. With the deep VGG16 model, our pedestrian detection method achieves robust detection performance on the Caltech dataset.
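The pipeline the abstract describes — per-region convolutional features integrated with semantic-segmentation features, then a boosted ensemble trained in stages for hard-negative mining — can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the feature dimensions and the use of scikit-learn's GradientBoostingClassifier as the boosted-forest stand-in are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-region convolutional features (e.g. RoI-pooled
# from several network layers) and per-region semantic-segmentation cues.
# Dimensions here are arbitrary, chosen only for the sketch.
n_regions = 1000
conv_feats = rng.normal(size=(n_regions, 128))  # multi-resolution conv features
sem_feats = rng.normal(size=(n_regions, 16))    # higher-level semantic features
labels = rng.integers(0, 2, size=n_regions)     # 1 = pedestrian, 0 = background

# Integrate the complementary features by concatenation.
X = np.concatenate([conv_feats, sem_feats], axis=1)

# A boosted ensemble fitted stage by stage behaves like a cascade:
# later trees concentrate on the regions earlier stages misclassified,
# which is the spirit of hard-negative mining.
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X, labels)

# Per-region pedestrian confidence scores.
scores = clf.predict_proba(X)[:, 1]
```

In the paper's actual system the region proposals and feature maps come from the Faster R-CNN backbone rather than random arrays; the sketch only shows how concatenated detection and semantic features feed a cascaded boosted classifier.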


Language: en

Keywords

convolutional neural network; deep learning; pedestrian detection; region proposal; semantic segmentation


