SAFETYLIT WEEKLY UPDATE

Journal Article

Citation

Song Z, Tuo Y. Sensors (Basel) 2021; 21(16): e5614.

Copyright

(Copyright © 2021, MDPI: Multidisciplinary Digital Publishing Institute)

DOI

10.3390/s21165614

PMID

unavailable

Abstract

Flood depth monitoring is crucial for flood warning systems and damage control, especially in the event of an urban flood. Existing gauge station and remote sensing data still have limited spatial and temporal resolution and coverage. Therefore, to expand flood depth data sources by making efficient use of online image resources, an automated, low-cost, real-time framework called FloodMask was developed to obtain flood depth from online images containing flooded traffic signs. The method was built on the deep learning framework Mask R-CNN (regional convolutional neural network), trained on collected and manually annotated traffic sign images. Following the proposed image processing framework, flood depth data were retrieved more efficiently than by manual estimation. As the main result, the flood depth estimates from images (without mirror reflection or other interference problems) have an average error of 0.11 m when compared to human visual inspection measurements. The developed method can be further coupled with street CCTV cameras, social media photos, and on-board vehicle cameras to facilitate the development of a smart city with a prompt and efficient flood monitoring system. In future studies, distortion and mirror reflection should be tackled properly to improve the quality of the flood depth estimates.
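
The abstract outlines a two-step idea: instance segmentation of flooded traffic signs, followed by a geometric estimate of how much of the sign and pole is submerged. The sketch below illustrates that idea with torchvision's COCO-pretrained Mask R-CNN rather than the paper's FloodMask model; the estimate_flood_depth helper, the sign dimensions, the 0.7 score threshold, and the externally supplied waterline row are illustrative assumptions, not details taken from the paper.

# A minimal sketch of the pipeline the abstract describes, using torchvision's
# off-the-shelf, COCO-pretrained Mask R-CNN instead of the authors' FloodMask
# model trained on annotated traffic-sign images. The sign dimensions, the
# score threshold, and the waterline input are illustrative assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained instance segmentation model; class id 13 is "stop sign".
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

STOP_SIGN_CLASS = 13         # COCO label id for "stop sign"
SIGN_FACE_HEIGHT_M = 0.75    # assumed real-world height of the sign face
SIGN_BOTTOM_HEIGHT_M = 1.8   # assumed height of the sign's lower edge above ground


def estimate_flood_depth(image_path: str, waterline_row_px: int) -> float | None:
    """Estimate flood depth (m) from a photo of a partially submerged traffic
    sign. The pixel row of the water surface is assumed to come from a separate
    water-detection step, which this sketch does not implement."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]

    # Keep the most confident stop-sign detection, if any.
    keep = (pred["labels"] == STOP_SIGN_CLASS) & (pred["scores"] > 0.7)
    if not keep.any():
        return None
    mask = pred["masks"][keep][0, 0] > 0.5             # boolean mask of the sign face
    rows = torch.nonzero(mask)[:, 0].float()
    sign_height_px = (rows.max() - rows.min()).item()  # visible sign-face height in pixels
    if sign_height_px == 0:
        return None

    # Pixel-to-metre scale from the assumed sign-face height.
    px_per_m = sign_height_px / SIGN_FACE_HEIGHT_M

    # Metres of pole still visible between the sign's lower edge and the water.
    visible_pole_m = max(waterline_row_px - rows.max().item(), 0) / px_per_m

    # Whatever is not visible below the sign is assumed submerged.
    return max(SIGN_BOTTOM_HEIGHT_M - visible_pole_m, 0.0)

In the paper's setting, the segmentation model is trained specifically on annotated traffic-sign images and the water surface is located by the proposed image-processing steps; both are simplified away in this sketch.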


Language: en

Keywords

deep learning; computer vision; flood depth; flood monitoring; instance segmentation; water level
