Machine Learning & Scientific Computing Series
Optimal Langevin Samplers
Sampling from a probability distribution in a high-dimensional space is a standard problem in computational statistical mechanics, Bayesian statistics, and other applications. A standard approach is to construct an appropriate Markov process that is ergodic with respect to the measure from which we wish to sample. There are infinitely many different Markov processes that are ergodic with respect to the same measure, and a natural question is how to choose the process that is optimal according to an appropriate optimality criterion. In this talk we will consider this problem for sampling schemes based on Langevin-type diffusions. We will consider nonreversible dynamics, preconditioned dynamics, and interacting particle-based sampling schemes. We will show that an appropriate choice of the nonreversible drift, of the preconditioning, and of the interaction between agents can lead to sampling schemes with better properties than the standard Langevin-based (MALA) sampling scheme, in the sense of accelerating convergence to the target measure and reducing the asymptotic variance.
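The MALA scheme mentioned above can be sketched as follows: an Euler-Maruyama step of the overdamped Langevin dynamics serves as a proposal, which is then corrected by a Metropolis-Hastings accept/reject step so that the chain targets the desired measure exactly. The target (a standard 2-D Gaussian), step size, and chain length below are illustrative choices, not details from the talk.

```python
import numpy as np

# Target: pi(x) proportional to exp(-V(x)) with V(x) = ||x||^2 / 2 (standard 2-D Gaussian).
def log_pi(x):
    return -0.5 * np.dot(x, x)

def grad_log_pi(x):
    return -x  # gradient of log pi for the Gaussian example

def mala(n_steps=20000, dt=0.5, seed=0):
    """Metropolis-adjusted Langevin algorithm (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    samples = np.empty((n_steps, 2))
    for i in range(n_steps):
        # Euler-Maruyama proposal for dX = grad log pi(X) dt + sqrt(2) dW
        mean_fwd = x + dt * grad_log_pi(x)
        prop = mean_fwd + np.sqrt(2.0 * dt) * rng.standard_normal(2)
        # Metropolis-Hastings correction; the Gaussian proposal is asymmetric,
        # so both forward and backward proposal densities enter the ratio.
        mean_bwd = prop + dt * grad_log_pi(prop)
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (4.0 * dt)
        log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4.0 * dt)
        log_alpha = log_pi(prop) + log_q_bwd - log_pi(x) - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples[i] = x
    return samples

samples = mala()
print(samples.mean(axis=0))  # close to (0, 0) for the Gaussian target
```

The accept/reject step is what distinguishes MALA from the unadjusted Langevin algorithm: it removes the discretization bias at the cost of occasional rejections.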
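As background for the nonreversible dynamics discussed in the talk, the standard construction adds a divergence-free perturbation to the reversible Langevin drift; the specific choice below (a skew-symmetric matrix applied to the gradient) is one common example, not necessarily the one the speaker will use.

```latex
% Target measure: \pi(dx) \propto e^{-V(x)}\,dx.
% Reversible overdamped Langevin dynamics:
%   dX_t = -\nabla V(X_t)\,dt + \sqrt{2}\,dW_t.
% Nonreversible perturbation: add a drift \gamma satisfying
%   \nabla \cdot \bigl(\gamma(x)\, e^{-V(x)}\bigr) = 0,
% e.g. \gamma(x) = J\,\nabla V(x) with J = -J^{T}. Then
%   dX_t = \bigl(-\nabla V(X_t) + J\,\nabla V(X_t)\bigr)\,dt + \sqrt{2}\,dW_t
% is still ergodic with respect to \pi, but no longer reversible;
% such perturbations can accelerate convergence to \pi.
```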
Contact: Diana Bohler at 626-395-1768 or firstname.lastname@example.org
For more information visit: https://caltech.zoom.us/j/86320762422?pwd=RDlFZXUrUnhUN2ovZmJsUjBNcEhRUT09