TY - JOUR
T1 - Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee
AU - Liu, Yuanshi
AU - Fang, Cong
AU - Zhang, Tong
N1 - C. Fang was supported by the National Key R&D Program of China (2022ZD0114902), NSF China (No. 62376008), and the Wudao Foundation. T. Zhang was supported by the General Research Fund (GRF) of Hong Kong (No. 16310222).
PY - 2023
Y1 - 2023
N2 - This paper focuses on the high-dimensional sampling of log-concave distributions with composite structures: p∗(dx) ∝ exp(−g(x) − f(x))dx. We develop a double randomization technique, which leads to a fast underdamped Langevin algorithm with a dimension-independent convergence guarantee. We prove that the algorithm enjoys an overall Õ((tr(H))^{1/3}/ϵ^{2/3}) iteration complexity to reach an ϵ-tolerated sample whose distribution p admits W2(p, p∗) ≤ ϵ. Here, H is an upper bound of the Hessian matrices for f and does not explicitly depend on the dimension d. For posterior sampling over linear models with normalized data, we show a clearly superior convergence rate that is dimension-free and outperforms the previous best-known results by a factor of d^{1/3}. The analysis used to achieve this faster convergence rate brings new insights into high-dimensional sampling.
UR - http://www.scopus.com/inward/record.url?scp=85188672449&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85188672449&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85188672449
SN - 1049-5258
VL - 36
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 37th Conference on Neural Information Processing Systems, NeurIPS 2023
Y2 - 10 December 2023 through 16 December 2023
ER -