Journal Article


Lei B, Liu X, Liang S, Hang W, Wang Q, Choi KS, Qin J. IEEE Trans. Neural Syst. Rehabil. Eng. 2019; 27(3): 497-506.


(Copyright © 2019, IEEE (Institute of Electrical and Electronics Engineers))

Brain-computer interfaces (BCIs) based on motor imagery (MI) have been widely used to support the rehabilitation of upper-limb motor functions, but far less often for lower limbs, probably because the brain activity associated with lower-limb MI is more difficult to detect. To reliably detect lower-limb brain activity and thereby restore or improve the walking ability of disabled users, we propose a new walking imagery (WI) paradigm in a virtual environment (VE) designed to elicit reliable brain activity and achieve a significant training effect. First, we extract and fuse spatial and time-frequency features into a multi-view feature (MVF) that represents the patterns in the brain activity. Second, we design a multi-view multi-level deep polynomial network (MMDPN) that exploits the complementarity among the features to improve the detection of walking imagery versus an idle state. Extensive experimental results show that the VE-based paradigm performs significantly better than the traditional text-based paradigm. In addition, the VE-based paradigm effectively helps users modulate their brain activity and improves the quality of the electroencephalography (EEG) signals. We also observe that the MMDPN outperforms other deep learning methods in classification performance.
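As a rough illustration of the multi-view feature idea described in the abstract (not the authors' actual pipeline, which is not specified here), one can extract a simple spatial-domain feature (per-channel log-variance, a stand-in for CSP-style features) and a time-frequency feature (FFT band power) from an EEG epoch and fuse the two views by concatenation; the band edges, sampling rate, and channel count below are illustrative assumptions:

```python
import numpy as np

def spatial_features(epoch):
    # epoch: (channels, samples); per-channel log-variance is a
    # common spatial-domain EEG feature (stand-in for CSP outputs)
    return np.log(np.var(epoch, axis=1) + 1e-12)

def bandpower_features(epoch, fs, band=(8.0, 30.0)):
    # mean power per channel in a frequency band (here mu/beta),
    # a simple time-frequency-style feature
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

def multi_view_feature(epoch, fs):
    # fuse the two feature views by concatenation into one vector
    return np.concatenate([spatial_features(epoch),
                           bandpower_features(epoch, fs)])

rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 256))   # 8 channels, 1 s at 256 Hz
mvf = multi_view_feature(epoch, fs=256)
print(mvf.shape)  # 8 spatial + 8 band-power features -> (16,)
```

In the paper's method the fused views feed a learned network (the MMDPN) rather than being used directly; this sketch only shows how heterogeneous feature views can be combined into a single representation.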

Language: en
