SAFETYLIT WEEKLY UPDATE


Journal Article

Citation: Zhukov A, Rivero A, Benois-Pineau J, Zemmari A, Mosbah M. Sensors (Basel) 2024; 24(4).

Copyright: © 2024, MDPI (Multidisciplinary Digital Publishing Institute)

DOI: 10.3390/s24041171

PMID: 38400331

PMCID: PMC10892099

Abstract

Defect detection on rail lines is essential for ensuring safe and efficient transportation. Current image analysis methods with deep neural networks (DNNs) for defect detection often focus on the defects themselves while ignoring the related context. In this work, we propose a fusion model that combines a targeted defect search with a context analysis, which is seen as a multimodal fusion task. Our model performs rule-based decision-level fusion, merging the confidence scores of multiple individual models to classify rail-line defects. We call the model "hybrid" in the sense that it is composed of supervised learning components and rule-based fusion. We first propose an improvement to existing vision-based defect detection methods by incorporating a convolutional block attention module (CBAM) into the you only look once (YOLO) version 5 (YOLOv5) and version 8 (YOLOv8) architectures for the detection of defects and contextual image elements. This attention module is applied at different detection scales. Domain-knowledge rules are then applied to fuse the detection results. Our method demonstrates improvements over baseline models in vision-based defect detection. The model is open to the integration of modalities other than images, e.g., sound and accelerometer data.
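As a rough illustration of the decision-level fusion described in the abstract, the sketch below merges the confidence scores of two hypothetical detectors (one for defects, one for contextual elements) with a simple domain rule: a defect detection is boosted when a supporting context element overlaps it spatially. The class names, rule table, boost value, and thresholds are assumptions made for illustration; they are not the authors' exact fusion scheme.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str     # e.g. "crack", "missing_bolt", "fastener", "joint"
        score: float   # model confidence in [0, 1]
        box: tuple     # (x1, y1, x2, y2) in image coordinates

    def iou(a, b):
        """Intersection-over-union of two boxes; used to test spatial overlap."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # Hypothetical domain rule: a defect is more plausible when a related
    # context element is detected in the same image region.
    SUPPORTING_CONTEXT = {"crack": {"joint", "weld"}, "missing_bolt": {"fastener"}}

    def fuse(defects, context, boost=0.15, threshold=0.5):
        """Rule-based decision-level fusion of two detectors' outputs."""
        fused = []
        for d in defects:
            score = d.score
            for c in context:
                if c.label in SUPPORTING_CONTEXT.get(d.label, set()) and iou(d.box, c.box) > 0.1:
                    score = min(1.0, score + boost)  # supporting context raises confidence
            if score >= threshold:
                fused.append(Detection(d.label, score, d.box))
        return fused

    # Example: a low-confidence crack detection survives because a joint overlaps it.
    defects = [Detection("crack", 0.42, (100, 50, 140, 90))]
    context = [Detection("joint", 0.80, (90, 40, 160, 110))]
    print(fuse(defects, context))  # crack kept with boosted score ≈ 0.57

In the same spirit, the abstract's mention of extending the fusion to sound or accelerometer data would amount to adding further score sources to the rule table above; the image-only case is shown here only because it is the modality the abstract describes.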


Language: en

Keywords: attention models; fusion; image sensors; object recognition
