SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.
Journal Article

Citation

Wang J, Yu J, He Z. Appl. Intell. (Dordr.) 2022; 52(2): 1362-1375.

Copyright

(Copyright © 2022, Springer)

DOI

10.1007/s10489-021-02496-y

PMID

unavailable

Abstract

Channel attention mechanisms have attracted more and more researchers because of their generality and effectiveness in deep convolutional neural networks (DCNNs). However, the signal encoding methods of the current popular channel attention mechanisms are limited. For example, SENet uses a fully connected layer to encode channel relevance, which is parameter-costly; ECANet uses a 1D convolution, which requires fewer parameters but can only encode k adjacent channels at a single fixed scale. This paper proposes a novel dilated efficient channel attention module (DECA), which consists of a novel multi-scale channel encoding method and a novel channel relevance feature fusion method. We empirically show that channel relevance at different scales also contributes to performance, and that fusing channel relevance features across scales yields a more powerful channel feature representation. In addition, we make extensive use of weight sharing in the DECA module to make it more efficient. Specifically, we applied our module to the real-life fire image detection task to evaluate its effectiveness. Extensive experiments on different backbone depths, detectors, and fire datasets show that the average performance boost of the DECA module is more than 4.5% compared with the baselines. Meanwhile, DECA outperforms other state-of-the-art attention modules while keeping a lower or comparable parameter count in the experiments. The experimental results on different datasets also show that the DECA module has strong generalization ability.
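
The abstract only sketches the mechanism at a high level. The following is a minimal Python (PyTorch-style) sketch of a multi-scale, weight-shared channel attention block of the kind described; the class name DilatedChannelAttention, the dilation rates (1, 2, 4), and the sum-based fusion are illustrative assumptions, not the authors' actual DECA architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedChannelAttention(nn.Module):
    """Sketch of multi-scale channel attention with a single shared 1D kernel."""

    def __init__(self, kernel_size=3, dilations=(1, 2, 4)):
        super().__init__()
        self.kernel_size = kernel_size
        self.dilations = dilations
        # One shared 1D kernel reused at every dilation rate (weight sharing).
        self.conv = nn.Conv1d(1, 1, kernel_size, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: (batch, channels, height, width)
        b, c, _, _ = x.shape
        # Global average pooling gives one descriptor per channel: (b, 1, c).
        y = x.mean(dim=(2, 3)).unsqueeze(1)
        fused = torch.zeros_like(y)
        for d in self.dilations:
            # Padding keeps the channel-dimension length unchanged at each scale.
            pad = d * (self.kernel_size - 1) // 2
            fused = fused + F.conv1d(y, self.conv.weight, padding=pad, dilation=d)
        # Simple average fusion of the per-scale encodings, then sigmoid gating.
        attn = self.sigmoid(fused / len(self.dilations))
        # Reshape to (b, c, 1, 1) and re-weight the input feature map.
        return x * attn.transpose(1, 2).unsqueeze(-1)

As a rough usage example, DilatedChannelAttention()(torch.randn(2, 64, 32, 32)) returns a tensor of the same shape with channel-wise re-weighting applied, which is how such a block would typically be dropped into a detector backbone.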


Language: en
