TY - JOUR
T1 - GlobalFusion: A global attentional deep learning framework for multisensor information fusion
AU - Liu, Shengzhong
AU - Yao, Shuochao
AU - Li, Jinyang
AU - Liu, Dongxin
AU - Wang, Tianshi
AU - Shao, Huajie
AU - Abdelzaher, Tarek
N1 - Publisher Copyright:
© 2020 Association for Computing Machinery.
PY - 2020/3/18
Y1 - 2020/3/18
AB - The paper enhances deep-neural-network-based inference in sensing applications by introducing a lightweight attention mechanism, called the global attention module, for multisensor information fusion. This mechanism uses information collected from higher layers of the neural network to selectively amplify the influence of informative features and suppress unrelated noise at the fusion layer. We integrate this mechanism into a new end-to-end learning framework, called GlobalFusion, in which two global attention modules are deployed for spatial fusion and sensing-modality fusion, respectively. Through an extensive evaluation on four public human activity recognition (HAR) datasets, we demonstrate the effectiveness of GlobalFusion at improving information fusion quality. The new approach outperforms state-of-the-art algorithms on all four datasets by a clear margin. We also show that the learned attention weights agree well with human intuition. We then validate the efficiency of GlobalFusion by measuring its inference time and energy consumption on commodity IoT devices; the global attention modules induce only negligible overhead.
KW - Internet of Things (IoT)
KW - Multisensor information fusion
KW - Neural networks
UR - http://www.scopus.com/inward/record.url?scp=85089768158&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089768158&partnerID=8YFLogxK
U2 - 10.1145/3380999
DO - 10.1145/3380999
M3 - Article
AN - SCOPUS:85089768158
SN - 2474-9567
VL - 4
JO - Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
JF - Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
IS - 1
M1 - 3380999
ER -