Andre Wibisono is an assistant professor in the Department of Computer Science at Yale University, with a secondary appointment in the Department of Statistics and Data Science. His research interests are in the design and analysis of algorithms for machine learning, in particular for problems in optimization, sampling, and game theory. He received his BS degrees from MIT; his master's degrees from MIT and from UC Berkeley; and his PhD in Computer Science from UC Berkeley. Before joining Yale in 2021, he did postdoctoral research at UW-Madison and Georgia Tech.
Talk: On Independent Samples along the Langevin Dynamics and Algorithm
Abstract: Sampling from a probability distribution is a fundamental algorithmic task, and one way to do so is to run a Markov chain. The mixing time of a Markov chain characterizes how long we should run the chain until the random variable converges to the stationary distribution, and it has been well studied. In this talk, we discuss the “independence time” of a Markov chain: how long we should run the chain until the initial and final random variables have small mutual information, which means they are approximately independent. We study this question for two popular Markov chains: the Langevin dynamics in continuous time, and the Unadjusted Langevin Algorithm in discrete time. When the target distribution is strongly log-concave, we prove that the mutual information between the initial and final random variables decreases exponentially fast along both Markov chains. These convergence rates are tight and lead to estimates of the independence times that are similar to the mixing time guarantees of these Markov chains. To prove our results, we use a strong data processing inequality and the regularity properties of these Markov chains. Based on joint work with Jiaming Liang and Siddharth Mitra, https://arxiv.org/abs/2402.17067.
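For readers unfamiliar with the Unadjusted Langevin Algorithm mentioned in the abstract, a minimal sketch follows. It runs the standard ULA update x_{k+1} = x_k - h ∇f(x_k) + √(2h) ξ_k for a target π(x) ∝ exp(-f(x)); the specific target (a standard Gaussian, f(x) = x²/2), step size, and chain length are illustrative choices, not parameters from the talk.

```python
import numpy as np

def ula(grad_f, x0, step, n_steps, rng):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - step * grad_f(x_k) + sqrt(2*step) * xi_k,
    where xi_k are i.i.d. standard Gaussian noise vectors."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)

# Illustrative strongly log-concave target: standard Gaussian,
# f(x) = x^2 / 2, so grad_f(x) = x.
# Start many independent chains far from stationarity; after enough
# steps their empirical law is close to N(0, 1), up to an O(step)
# discretization bias inherent to ULA.
x0 = 10.0 * np.ones(10_000)
samples = ula(lambda x: x, x0, step=0.01, n_steps=2_000, rng=rng)
print(samples.mean(), samples.var())  # both approximately 0 and 1
```

The exponential decay of mutual information discussed in the talk concerns the dependence between `x0` and `samples` as `n_steps` grows; this sketch only illustrates the chain itself, not that analysis.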