Sparse Spatio-Temporal Neural Network for Large-Scale Forecasting

Abstract

We introduce sSTNN, a sparse, parallelized version of the spatio-temporal neural network (STNN) that enables training on much larger datasets. First, we describe the model architecture and the modifications required to support a sparse data structure and multi-GPU parallelization. We then present empirical results demonstrating sSTNN's ability to train and run inference on a dataset 17 times larger than STNN can handle. Finally, we discuss the effect of sparsification on runtime and present evidence that sSTNN achieves up to a 117× reduction in memory usage compared to STNN.
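As a rough, hypothetical illustration of where such memory savings can come from (this is not the authors' implementation), the sketch below stores a mostly-zero spatio-temporal array in PyTorch's COO sparse format and compares its footprint to the dense layout; the tensor sizes and density are assumptions chosen purely for illustration.

    # Minimal sketch (assumed sizes/density, not the paper's code): memory
    # footprint of a mostly-zero spatio-temporal array, dense vs. sparse COO.
    import torch

    n_series, n_steps = 10_000, 500   # hypothetical: locations x time steps
    density = 0.005                   # hypothetical fraction of nonzero cells

    # A dense layout stores every cell, zero or not.
    dense = (torch.rand(n_series, n_steps) < density).float()

    # A COO layout stores only the coordinates and values of nonzero cells.
    sparse = dense.to_sparse().coalesce()
    nnz = sparse.values().numel()

    dense_bytes = dense.numel() * dense.element_size()  # 4 bytes per float32 cell
    sparse_bytes = nnz * (4 + 2 * 8)  # per nonzero: float32 value + two int64 indices

    print(f"dense:  {dense_bytes / 1e6:6.1f} MB")
    print(f"sparse: {sparse_bytes / 1e6:6.1f} MB")
    print(f"reduction: ~{dense_bytes / sparse_bytes:.0f}x")

With these assumed values the dense array occupies about 20 MB while the COO copy is roughly 0.5 MB (about a 40× reduction); the savings achievable in practice depend entirely on the sparsity of the data.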

Original language: English (US)
Title of host publication: Proceedings - 2022 IEEE International Conference on Big Data, Big Data 2022
Editors: Shusaku Tsumoto, Yukio Ohsawa, Lei Chen, Dirk Van den Poel, Xiaohua Hu, Yoichi Motomura, Takuya Takagi, Lingfei Wu, Ying Xie, Akihiro Abe, Vijay Raghavan
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665480451
State: Published - 2022
Event: 2022 IEEE International Conference on Big Data, Big Data 2022 - Osaka, Japan
Duration: Dec 17, 2022 - Dec 20, 2022

Publication series

Name: Proceedings - 2022 IEEE International Conference on Big Data, Big Data 2022
Volume: 2022-January

Conference

Conference: 2022 IEEE International Conference on Big Data, Big Data 2022
Country/Territory: Japan
City: Osaka
Period: 12/17/22 - 12/20/22

ASJC Scopus subject areas

  • Modeling and Simulation
  • Computer Networks and Communications
  • Information Systems
  • Information Systems and Management
  • Safety, Risk, Reliability and Quality
  • Control and Optimization
