**Title:** Example Selection Methods for Stochastic Gradient Descent

**Speaker:** Chris De Sa

**Date and Time:** 02/10/2022, 4:10 PM ET

**Location:** Phillips 233 and Zoom

**Abstract**: Training example order in SGD has long been known to affect the convergence rate. Recent results show that accelerated rates are possible in a variety of cases for permutation-based sample orders, in which each example from the training set is used once before any example is reused. This talk will cover a line of recent results from my lab on sample-ordering schemes. We will discuss the limits of the classic random-reshuffling scheme and explore how it can be improved (both in terms of theoretical rates and empirical performance) in various ways, including the use of quasi-Monte Carlo methods for data augmentation. The talk will conclude with some ongoing work from my lab on greedy example selection via apportionment.
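To make the distinction concrete, here is a minimal sketch (not from the talk itself) contrasting classic with-replacement SGD sampling against the permutation-based random-reshuffling order described above, where every example is used exactly once per epoch before any example is reused:

```python
import random

def with_replacement_order(n, epochs, rng):
    # Classic SGD sampling: each index is drawn independently,
    # so an example may repeat before others are seen.
    return [rng.randrange(n) for _ in range(n * epochs)]

def random_reshuffling_order(n, epochs, rng):
    # Permutation-based order: shuffle the dataset once per epoch,
    # so every example is used exactly once before any is reused.
    order = []
    for _ in range(epochs):
        epoch = list(range(n))
        rng.shuffle(epoch)
        order.extend(epoch)
    return order

rng = random.Random(0)
order = random_reshuffling_order(5, 3, rng)
# Each consecutive block of 5 indices is a permutation of 0..4.
assert all(sorted(order[i * 5:(i + 1) * 5]) == list(range(5))
           for i in range(3))
```

The function names here are illustrative, not from the papers; the sketch only shows the ordering property that the accelerated-rate analyses rely on.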

**Papers Covered**: “A General Analysis of Example-Selection for Stochastic Gradient Descent,” ICLR 2022

“Random Reshuffling is Not Always Better,” NeurIPS 2020

**Bio**: Chris De Sa is an Assistant Professor in the Computer Science department at Cornell University. He is a member of the Cornell Machine Learning Group and leads the Relax ML Lab. His research interests include algorithmic, software, and hardware techniques for high-performance machine learning, with a focus on relaxed-consistency variants of stochastic algorithms such as asynchronous and low-precision stochastic gradient descent (SGD) and Markov chain Monte Carlo. The Relax ML Lab uses these techniques to build data analytics and machine learning frameworks, including for deep learning, that are efficient, parallel, and distributed.