Foundations of Information, Networks, and Decision Systems

Talk Information 09/26/2024

Title: Connections Between Gradient Based Optimization, Sampling and Lyapunov Functions

Speaker: Karthik Sridharan
Date and Time: 09/26/2024, 4:10 PM ET
Location: Rhodes 310 and Zoom

Abstract: The similarity between gradient-based optimization and learning methods, such as gradient descent and stochastic gradient descent, and gradient-based sampling algorithms, such as Langevin Monte Carlo, has always been striking, and the analyses of these methods share many commonalities. The most general analyses of Langevin-based sampling methods rely on functional inequalities such as the Poincaré and Log-Sobolev inequalities. In this talk, we explore this connection through the lens of the existence of Lyapunov functions. The connection proves useful because it expands our knowledge of both gradient-based optimization and Langevin sampling. Specifically, in the sampling-to-optimization direction, we show that if a functional inequality such as the Poincaré or Log-Sobolev inequality holds for the Gibbs measure of a function F, then optimization, and even stochastic optimization, of F using SGLD succeeds. Conversely, in the optimization-to-sampling direction, we show that if F is unimodal and optimizable by gradient descent, then Langevin sampling from the Gibbs measure of F at appropriate temperatures succeeds. At the heart of these results lies the construction of appropriate Lyapunov functions, which we show exist.
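To make the algorithm named in the abstract concrete, below is a minimal sketch of the Stochastic Gradient Langevin Dynamics (SGLD) update; the function and parameter names (sgld, grad_estimate, step_size, inv_temperature) and the example objective are illustrative choices, not the speaker's notation or method. At inverse temperature beta, the iteration x_{k+1} = x_k - eta * g(x_k) + sqrt(2*eta/beta) * xi_k, with g an unbiased stochastic estimate of grad F and xi_k standard Gaussian noise, approximately samples the Gibbs measure proportional to exp(-beta * F); taking beta large turns the same iteration into a stochastic optimizer for F, which is the sampling-to-optimization direction discussed above.

import numpy as np

def sgld(grad_estimate, x0, step_size, inv_temperature, n_steps, rng=None):
    """Minimal SGLD sketch: a stochastic gradient step plus injected Gaussian noise.

    Iterates x_{k+1} = x_k - eta * g(x_k) + sqrt(2 * eta / beta) * xi_k,
    where g is an unbiased stochastic estimate of grad F and xi_k ~ N(0, I).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    noise_scale = np.sqrt(2.0 * step_size / inv_temperature)
    for _ in range(n_steps):
        # Gradient step on F plus Langevin noise scaled by the temperature.
        x = x - step_size * grad_estimate(x) + noise_scale * rng.standard_normal(x.shape)
    return x

# Example (assumed for illustration): F(x) = ||x||^2 / 2, so grad F(x) = x.
# At beta = 1 the Gibbs measure exp(-F) is the standard Gaussian, so x_final
# is approximately a draw from N(0, I).
x_final = sgld(grad_estimate=lambda x: x, x0=np.zeros(2),
               step_size=1e-2, inv_temperature=1.0, n_steps=10_000)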

Bio: Karthik Sridharan is currently an Associate Professor in the Computer Science Department at Cornell University. His research interests span the theory of machine learning, stochastic optimization, online learning, and reinforcement learning. He is the recipient of a Sloan Research Fellowship and an NSF CAREER Award.