GlobalFusion: A global attentional deep learning framework for multisensor information fusion

Shengzhong Liu, Shuochao Yao, Jinyang Li, Dongxin Liu, Tianshi Wang, Huajie Shao, Tarek Abdelzaher

Research output: Contribution to journal › Article › peer-review


The paper enhances deep-neural-network-based inference in sensing applications by introducing a lightweight attention mechanism, called the global attention module, for multi-sensor information fusion. The mechanism uses information collected from higher layers of the neural network to selectively amplify the influence of informative features and suppress unrelated noise at the fusion layer. We integrate this mechanism into a new end-to-end learning framework, called GlobalFusion, in which two global attention modules are deployed for spatial fusion and sensing-modality fusion, respectively. Through an extensive evaluation on four public human activity recognition (HAR) datasets, we demonstrate the effectiveness of GlobalFusion at improving information fusion quality. The new approach outperforms state-of-the-art algorithms on all four datasets by a clear margin. We also show that the learned attention weights agree well with human intuition. We then validate the efficiency of GlobalFusion by measuring its inference time and energy consumption on commodity IoT devices; the global attention modules induce only negligible overhead.
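The core idea described in the abstract, using a higher-layer (global) context to weight per-sensor features at the fusion layer, can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a scaled dot-product attention form, and all names (`global_attention_fusion`, `W_q`, `W_k`, etc.) are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention_fusion(locals_, global_ctx, W_q, W_k):
    """Fuse per-sensor local features using attention weights derived
    from a higher-layer (global) context vector (hypothetical sketch).

    locals_    : (n_sensors, d) local feature vectors, one per sensor
    global_ctx : (d_g,) context summarizing higher network layers
    W_q        : (d_g, d_a) projection producing the global query
    W_k        : (d, d_a)   projection producing the per-sensor keys
    Returns the fused (d,) vector and the (n_sensors,) attention weights.
    """
    q = global_ctx @ W_q                  # (d_a,) global query
    k = locals_ @ W_k                     # (n_sensors, d_a) sensor keys
    scores = k @ q / np.sqrt(k.shape[1])  # scaled dot-product scores
    weights = softmax(scores)             # amplify informative sensors,
                                          # suppress noisy ones
    fused = weights @ locals_             # attention-weighted sum
    return fused, weights

# Toy usage with random features for three sensors.
rng = np.random.default_rng(0)
n_sensors, d, d_g, d_a = 3, 8, 16, 4
locals_ = rng.normal(size=(n_sensors, d))
global_ctx = rng.normal(size=d_g)
W_q = rng.normal(size=(d_g, d_a))
W_k = rng.normal(size=(d, d_a))
fused, weights = global_attention_fusion(locals_, global_ctx, W_q, W_k)
```

In GlobalFusion, per the abstract, one such module handles spatial fusion (across sensor placements) and another handles modality fusion (across sensing modalities); the sketch above shows only the generic weighting step.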

Original language: English (US)
Article number: 3380999
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Issue number: 1
State: Published - Mar 18 2020


Keywords

  • Internet of Things (IoT)
  • Multisensor information fusion
  • Neural networks

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Hardware and Architecture
  • Human-Computer Interaction


