Daniel Karl I. Weidele, Hendrik Strobelt, et al.
SysML 2019
A brain–machine interface (BMI) based on motor imagery (MI) enables the control of devices using brain signals while the subject imagines performing a movement. It plays a key role in prosthesis control and motor rehabilitation. To improve user comfort, preserve data privacy, and reduce the system's latency, a new trend in wearable BMIs is to execute algorithms on low-power microcontroller units (MCUs) embedded on edge devices to process the electroencephalographic (EEG) data in real time close to the sensors. However, most of the classification models presented in the literature are too resource-demanding for low-power MCUs. This article proposes an efficient convolutional neural network (CNN) for EEG-based MI classification that achieves comparable accuracy while being orders of magnitude less resource-demanding and significantly more energy-efficient than state-of-the-art (SoA) models. To further reduce the model complexity, we propose an automatic channel selection method based on spatial filters and quantize both weights and activations to 8-bit precision with negligible accuracy loss. Finally, we implement and evaluate the proposed models on leading-edge parallel ultra-low-power (PULP) MCUs. The final two-class solution consumes as little as 30 μJ/inference with a runtime of 2.95 ms/inference and an accuracy of 82.51% while using 6.4× fewer EEG channels, becoming the new SoA for embedded MI-BMI and defining a new Pareto frontier in the three-way trade-off among accuracy, resource cost, and power usage.
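As a rough sanity check on the reported numbers, 30 μJ spent over a 2.95 ms inference corresponds to roughly 10 mW of average power during inference. The abstract does not give the exact quantization pipeline; the sketch below is only a minimal NumPy illustration of the kind of 8-bit symmetric post-training quantization of weights and activations it describes, with a hypothetical kernel shape and function names chosen for the example, not the authors' implementation.

```python
import numpy as np

def quantize_symmetric_int8(x, num_bits=8):
    """Symmetric uniform quantization of a float tensor to signed 8-bit.

    Returns the quantized tensor and the scale needed to dequantize.
    Generic post-training scheme for illustration only.
    """
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for 8-bit signed
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / qmax if max_abs > 0 else 1.0  # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to float to measure quantization error."""
    return q.astype(np.float32) * scale

# Toy example: quantize a (hypothetical) convolutional weight tensor.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(16, 1, 3, 3)).astype(np.float32)
w_q, s = quantize_symmetric_int8(w)
err = float(np.max(np.abs(w - dequantize(w_q, s))))
print(f"scale={s:.6f}, max abs quantization error={err:.6f}")
```

With 8-bit weights and activations, the maximum reconstruction error stays within half a quantization step (scale/2), which is consistent with the negligible accuracy loss the abstract reports for this precision.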