Over the past decade, stochastic gradient descent and its many variance-reduced variants have emerged as powerful tools in the theory and practice of optimization and machine learning. In this talk I will discuss recent advances in leveraging these techniques to obtain provably faster running times for solving fundamental structured problems. In particular, I will highlight how new sampling techniques can be used to obtain improved methods for approximately solving two-player matrix games, a prevalent class of optimization problems that encompasses zero-sum games, linear programming, classification, and support vector machines.
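To make the matrix-game setting concrete, here is a minimal sketch (not the method from the talk) of a classical sublinear stochastic approach: each player runs multiplicative-weights updates on the simplex, but instead of computing full gradients, each step samples a single row and column of the payoff matrix according to the current mixed strategies, giving unbiased gradient estimates. The matrix `A`, step size `eta`, and iteration count are illustrative choices, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random two-player matrix game: the row player maximizes y^T A x,
# the column player minimizes it, both over probability simplices.
n, m = 50, 40
A = rng.uniform(-1, 1, size=(n, m))

def sampled_mirror_descent(A, T=20000, eta=0.05):
    """Stochastic mirror descent for matrix games: each iteration
    touches one sampled row and one sampled column of A rather than
    forming the full gradients A x and A^T y."""
    n, m = A.shape
    y = np.ones(n) / n        # maximizer's mixed strategy
    x = np.ones(m) / m        # minimizer's mixed strategy
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(T):
        j = rng.choice(m, p=x)        # column sampled from x
        i = rng.choice(n, p=y)        # row sampled from y
        # A[:, j] is an unbiased estimate of A x; A[i, :] of A^T y.
        y = y * np.exp(eta * A[:, j])   # maximizer ascends
        y /= y.sum()
        x = x * np.exp(-eta * A[i, :])  # minimizer descends
        x /= x.sum()
        x_avg += x
        y_avg += y
    return x_avg / T, y_avg / T       # averaged iterates

x, y = sampled_mirror_descent(A)
# Duality gap of (x, y): max_i (A x)_i - min_j (A^T y)_j, which is
# nonnegative and small near an approximate equilibrium.
gap = (A @ x).max() - (A.T @ y).min()
print(f"duality gap: {gap:.3f}")
```

Each iteration costs O(n + m) rather than O(nm), which is the sense in which such sampling-based methods can run faster than deterministic gradient methods on large payoff matrices.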
Time permitting, I will also touch upon how variance-reduction-based techniques have led to improved running times for a broader class of problems, including Markov decision processes and submodular optimization.
This talk is based primarily on joint work with Yair Carmon, Yujia Jin, and Kevin Tian, available at
https://arxiv.org/abs/1907.02056