Keywords: log-concave distribution, strongly convex, gradient evaluation, restricted Gaussian oracle, proximal operator, total variation distance, Poincaré inequality, log-Sobolev inequality, Lipschitz continuity, sampling algorithms
Abstract
We propose an algorithm to sample from composite log-concave distributions over $\mathbb{R}^d$, i.e., densities of the form $\pi \propto e^{-f-g}$, assuming access to gradient evaluations of $f$ and a restricted Gaussian oracle (RGO) for $g$. The latter requirement means that we can easily sample from the density $\text{RGO}_{g,h,y}(x) \propto \exp(-g(x) - \frac{1}{2h}\|y-x\|^2)$, which is the sampling analogue of the proximal operator for $g$. If $f + g$ is $\alpha$-strongly convex and $f$ is $\beta$-smooth, our sampler achieves $\varepsilon$ error in total variation distance in $\widetilde{\mathcal O}(\kappa\sqrt d \log^4(1/\varepsilon))$ iterations where $\kappa := \beta/\alpha$, which matches prior state-of-the-art results for the case $g=0$. We further extend our results to cases where (1) $\pi$ is non-log-concave but satisfies a Poincaré or log-Sobolev inequality, and (2) $f$ is non-smooth but Lipschitz.
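As a concrete illustration (not from the paper), consider $g$ equal to the indicator of a box constraint $[\ell, u]^d$. Then $\text{RGO}_{g,h,y}$ is just a Gaussian $\mathcal N(y, h I)$ truncated to the box, and each coordinate can be drawn independently. The sketch below implements this special case by simple per-coordinate rejection; the function name `rgo_box` and the rejection strategy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rgo_box(y, h, lo, hi, rng):
    """Restricted Gaussian oracle for g = indicator of the box [lo, hi]^d.

    For this g, exp(-g(x) - ||y - x||^2 / (2h)) is proportional to a
    Gaussian N(y, h*I) truncated to the box, so each coordinate is an
    independent 1-d truncated Gaussian, sampled here by rejection.
    (Illustrative sketch only; rejection is slow if y lies far outside
    the box relative to sqrt(h).)
    """
    x = np.empty_like(y, dtype=float)
    sd = np.sqrt(h)
    for i in range(len(y)):
        while True:
            z = rng.normal(y[i], sd)
            if lo <= z <= hi:
                x[i] = z
                break
    return x

rng = np.random.default_rng(0)
y = np.array([0.5, -0.8, 0.9])          # proximal center, inside/near the box
sample = rgo_box(y, h=0.1, lo=-1.0, hi=1.0, rng=rng)
```

A proximal-style sampler of the kind described above would alternate a gradient-informed update for $f$ with one such RGO draw for $g$ each iteration.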