**Title:** Adversarial Filtering and Inverse Reinforcement Learning

**Speaker:** Vikram Krishnamurthy

**Date and Time:** 11/18/2021, 4:10 PM ET

**Location:** Phillips 233 and Zoom

**Abstract:** Inverse reinforcement learning aims to estimate the utility function of a decision maker by observing its decisions. This talk presents three approaches to inverse reinforcement learning. The first approach addresses inverse (adversarial) filtering problems: reconstructing the posterior distribution given noisy information. The second approach uses passive Langevin dynamics stochastic approximation to reconstruct utility functions given noisy gradient information. Although the proof of weak convergence involves the so-called martingale problem of Stroock and Varadhan, we show that much intuition can be gleaned from simple multi-time-scale averaging arguments. We also briefly discuss a third approach that uses Afriat's theorem and its generalizations to reconstruct the utility function of a decision maker.

**Bio:** Vikram Krishnamurthy is a professor in the ECE department at Cornell. His research interests are in stochastic optimization, partially observed decision problems, and statistical signal processing.