Distributed-memory tensor completion for generalized loss functions in Python using new sparse tensor kernels

Navjot Singh, Zecheng Zhang, Xiaoxiao Wu, Naijing Zhang, Siyuan Zhang, Edgar Solomonik

Research output: Contribution to journal › Article › peer-review

Abstract

Tensor computations are increasingly prevalent numerical techniques in data science, but pose unique challenges for high-performance implementation. We provide novel algorithms and systems infrastructure that enable efficient parallel implementation of algorithms for tensor completion with generalized loss functions. Specifically, we consider alternating minimization, coordinate minimization, and a quasi-Newton (generalized Gauss-Newton) method. By extending the Cyclops library, we implement all of these methods in high-level Python syntax. To make tensor completion feasible for very sparse tensors, we introduce new multi-tensor primitives, for which we provide specialized parallel implementations. We compare these routines to pairwise contraction of sparse tensors by reduction to hypersparse matrix formats, and find that the multi-tensor routines are more efficient both in theoretical cost and in measured execution time. We provide microbenchmarking results on the Stampede2 supercomputer to demonstrate the efficiency of the new primitives and Cyclops functionality. We then study the performance of the tensor completion methods for a synthetic tensor with 10 billion nonzeros and for the Netflix dataset, considering both least-squares and Poisson loss functions.
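To make the alternating-minimization approach concrete, the following is a minimal NumPy sketch of alternating least squares for CP tensor completion under a least-squares loss on a 3-way tensor. It is an illustrative assumption, not the paper's Cyclops implementation: the function names (cp_completion_als, update_mode) and the dense boolean-mask representation of the observed entries are invented for exposition, whereas Cyclops operates on distributed sparse tensors and the paper's multi-tensor primitives avoid ever materializing dense intermediates.

    import numpy as np

    def cp_completion_als(T, Omega, rank, n_iter=20, reg=1e-3, seed=0):
        """ALS for rank-`rank` CP completion of a 3-way tensor T,
        observed on the boolean mask Omega, with least-squares loss."""
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A = rng.standard_normal((I, rank))
        B = rng.standard_normal((J, rank))
        C = rng.standard_normal((K, rank))

        def update_mode(U, V, W, T_m, Omega_m):
            # Solve one small regularized normal-equations system per row
            # of U, restricted to the observed entries of that row's slice.
            for i in range(U.shape[0]):
                js, ks = np.nonzero(Omega_m[i])
                if js.size == 0:
                    continue
                H = V[js] * W[ks]                    # Khatri-Rao rows, nnz x rank
                G = H.T @ H + reg * np.eye(U.shape[1])
                U[i] = np.linalg.solve(G, H.T @ T_m[i, js, ks])

        for _ in range(n_iter):
            update_mode(A, B, C, T, Omega)
            update_mode(B, A, C, T.transpose(1, 0, 2), Omega.transpose(1, 0, 2))
            update_mode(C, A, B, T.transpose(2, 0, 1), Omega.transpose(2, 0, 1))
        return A, B, C

In this dense sketch, evaluating the CP model at the observed entries (the inner fancy-indexed products) is the step that the paper's new multi-tensor sparse kernels perform directly on the sparse observation pattern, which is what makes the method scale to tensors with billions of nonzeros.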

Original language: English (US)
Pages (from-to): 269-285
Number of pages: 17
Journal: Journal of Parallel and Distributed Computing
Volume: 169
DOIs
State: Published - Nov 2022
Externally published: Yes

Keywords

  • CP decomposition
  • Cyclops tensor framework
  • Tensor completion

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence

