Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee

Yuanshi Liu, Cong Fang, Tong Zhang

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper focuses on the high-dimensional sampling of log-concave distributions with composite structures: p*(dx) ∝ exp(−g(x) − f(x)) dx. We develop a double randomization technique, which leads to a fast underdamped Langevin algorithm with a dimension-independent convergence guarantee. We prove that the algorithm enjoys an overall Õ(tr(H)^{1/3}/ϵ^{2/3}) iteration complexity to reach an ϵ-tolerated sample whose distribution p admits W2(p, p*) ≤ ϵ. Here, H is an upper bound on the Hessian matrices of f and does not explicitly depend on the dimension d. For posterior sampling over linear models with normalized data, we show a clear superiority in convergence rate: the rate is dimension-free and outperforms the previous best-known results by a factor of d^{1/3}. The analysis behind this faster convergence rate brings new insights into high-dimensional sampling.
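For orientation only, the sketch below shows a plain Euler-Maruyama discretization of underdamped Langevin dynamics targeting p*(x) ∝ exp(−g(x) − f(x)), the generic baseline that the paper improves upon. It does not implement the paper's double randomization or its dimension-independent guarantee; the target, step size h, and friction γ are illustrative assumptions.

```python
import numpy as np

def underdamped_langevin(grad_U, x0, n_steps=5000, h=0.01, gamma=2.0, rng=None):
    """Euler-Maruyama discretization of underdamped Langevin dynamics
    targeting p*(x) ∝ exp(-U(x)) with U = g + f.  A generic baseline
    sampler, NOT the paper's double-randomized scheme."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_steps):
        # Position update, then velocity update: friction, gradient drift, noise.
        x = x + h * v
        v = (v
             - h * gamma * v
             - h * grad_U(x)
             + np.sqrt(2.0 * gamma * h) * rng.standard_normal(x.shape))
    return x

# Hypothetical composite target: f quadratic with Hessian A, g a smooth ridge term.
A = np.array([[2.0, 0.3], [0.3, 1.0]])   # plays the role of H; tr(A) replaces d
grad_f = lambda x: A @ x
grad_g = lambda x: 0.1 * x
draw = underdamped_langevin(lambda x: grad_g(x) + grad_f(x), x0=np.ones(2))
print(draw)
```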

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Externally published: Yes
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: Dec 10 2023 – Dec 16 2023

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
