TY - GEN
T1 - AutoHOOT: Automatic High-Order Optimization for Tensors
T2 - 2020 ACM International Conference on Parallel Architectures and Compilation Techniques, PACT 2020
AU - Ma, Linjian
AU - Ye, Jiayu
AU - Solomonik, Edgar
N1 - Linjian Ma and Edgar Solomonik were supported by the US NSF OAC SSI program, award No. 1931258. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. We used XSEDE to employ Stampede2 at the Texas Advanced Computing Center (TACC) through allocation TG-CCR180006.
PY - 2020/9/30
Y1 - 2020/9/30
AB - High-order optimization methods, including Newton's method and its variants as well as alternating minimization methods, dominate the optimization algorithms for tensor decompositions and tensor networks. These tensor methods are used for data analysis and simulation of quantum systems. In this work, we introduce AutoHOOT, the first automatic differentiation (AD) framework targeting high-order optimization for tensor computations. AutoHOOT takes input tensor computation expressions and generates optimized derivative expressions. In particular, AutoHOOT contains a new explicit Jacobian/Hessian expression generation kernel whose outputs maintain the input tensors' granularity and are easy to optimize. The expressions are then optimized by both traditional compiler optimization techniques and specific tensor algebra transformations. Experimental results show that AutoHOOT achieves competitive CPU and GPU performance for both tensor decomposition and tensor network applications compared to existing AD software and other tensor computation libraries with manually written kernels. The tensor methods generated by AutoHOOT are also well-parallelizable, and we demonstrate good scalability on a distributed memory supercomputer.
KW - Automatic differentiation
KW - Computational graph optimization
KW - Tensor computation
KW - Tensor decomposition
KW - Tensor network
UR - http://www.scopus.com/inward/record.url?scp=85094203518&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85094203518&partnerID=8YFLogxK
DO - 10.1145/3410463.3414647
M3 - Conference contribution
AN - SCOPUS:85094203518
T3 - Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT
SP - 125
EP - 137
BT - PACT 2020 - Proceedings of the ACM International Conference on Parallel Architectures and Compilation Techniques
PB - Association for Computing Machinery
Y2 - 3 October 2020 through 7 October 2020
ER -