Abstract
This paper focuses on high-dimensional sampling of log-concave distributions with composite structures: p∗(dx) ∝ exp(−g(x) − f(x)) dx. We develop a double randomization technique, which leads to a fast underdamped Langevin algorithm with a dimension-independent convergence guarantee. We prove that the algorithm enjoys an overall (equation presented) iteration complexity to reach an ϵ-tolerated sample whose distribution p admits W2(p, p∗) ≤ ϵ. Here, H is an upper bound on the Hessian matrices of f and does not explicitly depend on the dimension d. For posterior sampling over linear models with normalized data, we show a clear superiority in convergence rate: it is dimension-free and outperforms the previous best-known results by a factor of d^{1/3}. The analysis used to achieve this faster convergence rate brings new insights into high-dimensional sampling.
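The abstract describes an underdamped Langevin algorithm for targets of the form p∗(dx) ∝ exp(−g(x) − f(x)) dx. The paper's double randomization technique is not reproduced in this record; the sketch below shows only a generic Euler-Maruyama discretization of underdamped (kinetic) Langevin dynamics for a smooth potential U = f + g, as a baseline for comparison. The function names (`ulmc_sample`, `grad_U`) and parameter values are hypothetical and chosen purely for illustration.

```python
import numpy as np

def ulmc_sample(grad_U, x0, step=0.01, gamma=2.0, n_steps=5000, rng=None):
    """Euler-Maruyama discretization of underdamped (kinetic) Langevin dynamics.

    Targets p(x) ∝ exp(-U(x)) with U = f + g assumed differentiable here;
    this is a plain baseline sketch, not the paper's double-randomized scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                       # auxiliary momentum variable
    noise_scale = np.sqrt(2.0 * gamma * step)  # diffusion scale for the momentum update
    for _ in range(n_steps):
        x = x + step * v                       # position update
        v = v - step * (gamma * v + grad_U(x)) + noise_scale * rng.standard_normal(x.shape)
    return x

# Illustrative target: standard Gaussian, U(x) = ||x||^2 / 2, so grad U(x) = x.
if __name__ == "__main__":
    d = 100
    samples = np.stack([ulmc_sample(lambda x: x, np.zeros(d)) for _ in range(200)])
    print("empirical per-coordinate variance ≈", samples.var(axis=0).mean())  # should be near 1
```

The momentum variable is what distinguishes underdamped from overdamped Langevin; injecting noise only through the momentum is what typically enables the improved discretization error that dimension-dependence analyses like this paper's build on.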
Original language | English (US)
---|---
Journal | Advances in Neural Information Processing Systems
Volume | 36
State | Published - 2023
Externally published | Yes
Event | 37th Conference on Neural Information Processing Systems, NeurIPS 2023, New Orleans, United States, Dec 10 2023 – Dec 16 2023
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing