DeepSense: A unified deep learning framework for time-series mobile sensing data processing

Shuochao Yao, Shaohan Hu, Yiran Zhao, Aston Zhang, Tarek Abdelzaher

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Mobile sensing and computing applications usually require time-series inputs from sensors, such as accelerometers, gyroscopes, and magnetometers. Some applications, such as tracking, can use sensed acceleration and rate of rotation to calculate displacement based on physical system models. Other applications, such as activity recognition, extract manually designed features from sensor inputs for classification. Such applications face two challenges. On one hand, on-device sensor measurements are noisy. For many mobile applications, it is hard to find a distribution that exactly describes the noise in practice. Unfortunately, calculating target quantities based on physical system and noise models is only as accurate as the noise assumptions. Similarly, in classification applications, although manually designed features have proven to be effective, it is not always straightforward to find the most robust features to accommodate diverse sensor noise patterns and heterogeneous user behaviors. To this end, we propose DeepSense, a deep learning framework that directly addresses the aforementioned noise and feature customization challenges in a unified manner. DeepSense integrates convolutional and recurrent neural networks to exploit local interactions among similar mobile sensors, merge local interactions of different sensory modalities into global interactions, and extract temporal relationships to model signal dynamics. DeepSense thus provides a general signal estimation and classification framework that accommodates a wide range of applications. We demonstrate the effectiveness of DeepSense using three representative and challenging tasks: car tracking with motion sensors, heterogeneous human activity recognition, and user identification with biometric motion analysis. DeepSense significantly outperforms the state-of-the-art methods for all three tasks. In addition, we show that DeepSense is feasible to implement on smartphones and embedded devices thanks to its moderate energy consumption and low latency.
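
To make the architecture described in the abstract more concrete, below is a minimal, illustrative sketch of a DeepSense-style pipeline in PyTorch: per-sensor convolutions capture local interactions within each sensor, a merge convolution fuses the sensors into global interactions, and a recurrent network models temporal dynamics across time windows. All layer sizes, kernel shapes, window counts, and the classification head are assumptions chosen for illustration; they are not the exact configuration from the paper, which additionally uses per-window Fourier-domain inputs and its own stacked-GRU and convolutional settings.

# Simplified DeepSense-style sketch (assumed hyperparameters, not the paper's exact design).
import torch
import torch.nn as nn

class DeepSenseSketch(nn.Module):
    def __init__(self, n_sensors=3, freq_bins=20, axes=3,
                 conv_ch=64, rnn_hidden=128, n_classes=6):
        super().__init__()
        # Per-sensor "local" convolutions: interactions within one sensor
        # (e.g., across accelerometer axes and frequency bins).
        self.local_convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, conv_ch, kernel_size=(axes, 3)), nn.ReLU(),
                nn.Conv2d(conv_ch, conv_ch, kernel_size=(1, 3)), nn.ReLU(),
            )
            for _ in range(n_sensors)
        ])
        # "Merge" convolution: fuse per-sensor feature maps into global,
        # cross-modality interactions.
        self.merge_conv = nn.Sequential(
            nn.Conv2d(conv_ch, conv_ch, kernel_size=(n_sensors, 3)), nn.ReLU(),
        )
        merged_width = freq_bins - 2 - 2 - 2  # width after three width-3 convolutions
        # Recurrent layers model signal dynamics across time windows.
        self.rnn = nn.GRU(conv_ch * merged_width, rnn_hidden,
                          num_layers=2, batch_first=True)
        self.classifier = nn.Linear(rnn_hidden, n_classes)

    def forward(self, x):
        # x: (batch, time_windows, n_sensors, axes, freq_bins)
        b, t = x.shape[0], x.shape[1]
        window_feats = []
        for ti in range(t):
            sensor_maps = []
            for si, conv in enumerate(self.local_convs):
                m = conv(x[:, ti, si].unsqueeze(1))    # (b, C, 1, f')
                sensor_maps.append(m.squeeze(2))       # (b, C, f')
            stacked = torch.stack(sensor_maps, dim=2)  # (b, C, sensors, f')
            merged = self.merge_conv(stacked)          # (b, C, 1, f'')
            window_feats.append(merged.flatten(1))     # (b, C * f'')
        seq = torch.stack(window_feats, dim=1)         # (b, t, C * f'')
        out, _ = self.rnn(seq)
        return self.classifier(out[:, -1])             # class logits

if __name__ == "__main__":
    # Fake batch: 8 samples, 10 time windows, 3 sensors, 3 axes, 20 frequency bins.
    model = DeepSenseSketch()
    logits = model(torch.randn(8, 10, 3, 3, 20))
    print(logits.shape)  # torch.Size([8, 6])

In this sketch the final hidden state feeds a classifier, matching the classification tasks (activity recognition, user identification); a regression variant such as tracking would instead produce per-window outputs.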

Original language: English (US)
Title of host publication: 26th International World Wide Web Conference, WWW 2017
Publisher: International World Wide Web Conferences Steering Committee
Pages: 351-360
Number of pages: 10
ISBN (Print): 9781450349130
DOIs: https://doi.org/10.1145/3038912.3052577
State: Published - Jan 1 2017
Event: 26th International World Wide Web Conference, WWW 2017 - Perth, Australia
Duration: Apr 3 2017 - Apr 7 2017

Publication series

Name: 26th International World Wide Web Conference, WWW 2017

Other

Other: 26th International World Wide Web Conference, WWW 2017
Country: Australia
City: Perth
Period: 4/3/17 - 4/7/17

Fingerprint

Time series
Sensors
Recurrent neural networks
Smartphones
Gyroscopes
Magnetometers
Biometrics
Deep learning
Accelerometers
Railroad cars
Energy utilization

Keywords

  • Activity recognition
  • Deep learning
  • Internet of things
  • Mobile computing
  • Mobile sensing
  • Tracking
  • User identification

ASJC Scopus subject areas

  • Software
  • Computer Networks and Communications

Cite this

Yao, S., Hu, S., Zhao, Y., Zhang, A., & Abdelzaher, T. (2017). DeepSense: A unified deep learning framework for time-series mobile sensing data processing. In 26th International World Wide Web Conference, WWW 2017 (pp. 351-360). [3052577] (26th International World Wide Web Conference, WWW 2017). International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/3038912.3052577

DeepSense: A unified deep learning framework for time-series mobile sensing data processing. / Yao, Shuochao; Hu, Shaohan; Zhao, Yiran; Zhang, Aston; Abdelzaher, Tarek.

26th International World Wide Web Conference, WWW 2017. International World Wide Web Conferences Steering Committee, 2017. p. 351-360 3052577 (26th International World Wide Web Conference, WWW 2017).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Yao, S, Hu, S, Zhao, Y, Zhang, A & Abdelzaher, T 2017, DeepSense: A unified deep learning framework for time-series mobile sensing data processing. in 26th International World Wide Web Conference, WWW 2017., 3052577, 26th International World Wide Web Conference, WWW 2017, International World Wide Web Conferences Steering Committee, pp. 351-360, 26th International World Wide Web Conference, WWW 2017, Perth, Australia, 4/3/17. https://doi.org/10.1145/3038912.3052577
Yao S, Hu S, Zhao Y, Zhang A, Abdelzaher T. DeepSense: A unified deep learning framework for time-series mobile sensing data processing. In 26th International World Wide Web Conference, WWW 2017. International World Wide Web Conferences Steering Committee. 2017. p. 351-360. 3052577. (26th International World Wide Web Conference, WWW 2017). https://doi.org/10.1145/3038912.3052577
Yao, Shuochao; Hu, Shaohan; Zhao, Yiran; Zhang, Aston; Abdelzaher, Tarek. / DeepSense: A unified deep learning framework for time-series mobile sensing data processing. 26th International World Wide Web Conference, WWW 2017. International World Wide Web Conferences Steering Committee, 2017. pp. 351-360 (26th International World Wide Web Conference, WWW 2017).
@inproceedings{9db7f63851a34ae98812daddf445e6c6,
title = "DeepSense: A unified deep learning framework for time-series mobile sensing data processing",
abstract = "Mobile sensing and computing applications usually require time-series inputs from sensors, such as accelerometers, gyroscopes, and magnetometers. Some applications, such as tracking, can use sensed acceleration and rate of rotation to calculate displacement based on physical system models. Other applications, such as activity recognition, extract manually designed features from sensor inputs for classification. Such applications face two challenges. On one hand, on-device sensor measurements are noisy. For many mobile applications, it is hard to find a distribution that exactly describes the noise in practice. Unfortunately, calculating target quantities based on physical system and noise models is only as accurate as the noise assumptions. Similarly, in classification applications, although manually designed features have proven to be effective, it is not always straightforward to find the most robust features to accommodate diverse sensor noise patterns and heterogeneous user behaviors. To this end, we propose DeepSense, a deep learning framework that directly addresses the aforementioned noise and feature customization challenges in a unified manner. DeepSense integrates convolutional and recurrent neural networks to exploit local interactions among similar mobile sensors, merge local interactions of different sensory modalities into global interactions, and extract temporal relationships to model signal dynamics. DeepSense thus provides a general signal estimation and classification framework that accommodates a wide range of applications. We demonstrate the effectiveness of DeepSense using three representative and challenging tasks: car tracking with motion sensors, heterogeneous human activity recognition, and user identification with biometric motion analysis. DeepSense significantly outperforms the state-of-the-art methods for all three tasks. In addition, we show that DeepSense is feasible to implement on smartphones and embedded devices thanks to its moderate energy consumption and low latency.",
keywords = "Activity recognition, Deep learning, Internet of things, Mobile computing, Mobile sensing, Tracking, User identification",
author = "Shuochao Yao and Shaohan Hu and Yiran Zhao and Aston Zhang and Tarek Abdelzaher",
year = "2017",
month = "1",
day = "1",
doi = "10.1145/3038912.3052577",
language = "English (US)",
isbn = "9781450349130",
series = "26th International World Wide Web Conference, WWW 2017",
publisher = "International World Wide Web Conferences Steering Committee",
pages = "351--360",
booktitle = "26th International World Wide Web Conference, WWW 2017",

}

TY - GEN

T1 - DeepSense

T2 - A unified deep learning framework for time-series mobile sensing data processing

AU - Yao, Shuochao

AU - Hu, Shaohan

AU - Zhao, Yiran

AU - Zhang, Aston

AU - Abdelzaher, Tarek

PY - 2017/1/1

Y1 - 2017/1/1

N2 - Mobile sensing and computing applications usually require time-series inputs from sensors, such as accelerometers, gyroscopes, and magnetometers. Some applications, such as tracking, can use sensed acceleration and rate of rotation to calculate displacement based on physical system models. Other applications, such as activity recognition, extract manually designed features from sensor inputs for classification. Such applications face two challenges. On one hand, on-device sensor measurements are noisy. For many mobile applications, it is hard to find a distribution that exactly describes the noise in practice. Unfortunately, calculating target quantities based on physical system and noise models is only as accurate as the noise assumptions. Similarly, in classification applications, although manually designed features have proven to be effective, it is not always straightforward to find the most robust features to accommodate diverse sensor noise patterns and heterogeneous user behaviors. To this end, we propose DeepSense, a deep learning framework that directly addresses the aforementioned noise and feature customization challenges in a unified manner. DeepSense integrates convolutional and recurrent neural networks to exploit local interactions among similar mobile sensors, merge local interactions of different sensory modalities into global interactions, and extract temporal relationships to model signal dynamics. DeepSense thus provides a general signal estimation and classification framework that accommodates a wide range of applications. We demonstrate the effectiveness of DeepSense using three representative and challenging tasks: car tracking with motion sensors, heterogeneous human activity recognition, and user identification with biometric motion analysis. DeepSense significantly outperforms the state-of-the-art methods for all three tasks. In addition, we show that DeepSense is feasible to implement on smartphones and embedded devices thanks to its moderate energy consumption and low latency.

AB - Mobile sensing and computing applications usually require time-series inputs from sensors, such as accelerometers, gyroscopes, and magnetometers. Some applications, such as tracking, can use sensed acceleration and rate of rotation to calculate displacement based on physical system models. Other applications, such as activity recognition, extract manually designed features from sensor inputs for classification. Such applications face two challenges. On one hand, on-device sensor measurements are noisy. For many mobile applications, it is hard to find a distribution that exactly describes the noise in practice. Unfortunately, calculating target quantities based on physical system and noise models is only as accurate as the noise assumptions. Similarly, in classification applications, although manually designed features have proven to be effective, it is not always straightforward to find the most robust features to accommodate diverse sensor noise patterns and heterogeneous user behaviors. To this end, we propose DeepSense, a deep learning framework that directly addresses the aforementioned noise and feature customization challenges in a unified manner. DeepSense integrates convolutional and recurrent neural networks to exploit local interactions among similar mobile sensors, merge local interactions of different sensory modalities into global interactions, and extract temporal relationships to model signal dynamics. DeepSense thus provides a general signal estimation and classification framework that accommodates a wide range of applications. We demonstrate the effectiveness of DeepSense using three representative and challenging tasks: car tracking with motion sensors, heterogeneous human activity recognition, and user identification with biometric motion analysis. DeepSense significantly outperforms the state-of-the-art methods for all three tasks. In addition, we show that DeepSense is feasible to implement on smartphones and embedded devices thanks to its moderate energy consumption and low latency.

KW - Activity recognition

KW - Deep learning

KW - Internet of things

KW - Mobile computing

KW - Mobile sensing

KW - Tracking

KW - User identification

UR - http://www.scopus.com/inward/record.url?scp=85029121698&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85029121698&partnerID=8YFLogxK

U2 - 10.1145/3038912.3052577

DO - 10.1145/3038912.3052577

M3 - Conference contribution

AN - SCOPUS:85029121698

SN - 9781450349130

T3 - 26th International World Wide Web Conference, WWW 2017

SP - 351

EP - 360

BT - 26th International World Wide Web Conference, WWW 2017

PB - International World Wide Web Conferences Steering Committee

ER -