A Simple Convergence Proof for Stochastic Approximation and Applications to Reinforcement Learning
STCS Colloquium
Speaker:
M. Vidyasagar (Indian Institute of Technology Hyderabad)
Organiser:
Sandeep K Juneja
Date:
Tuesday, 10 May 2022, 16:00 to 17:00
Venue:
In person @ A-201 and also via Zoom
Abstract:
Since its invention by Robbins and Monro in 1951, the stochastic approximation (SA) algorithm has been a widely used tool for finding solutions of equations, or minimizing functions, from noisy measurements. Current methods for proving its convergence rely on the "ODE" method, in which the sample paths of the algorithm are approximated by the trajectories of an associated ODE; this approach involves considerable technical machinery. Interestingly, as far back as 1965, Gladyshev gave a simple convergence proof based on martingale methods; however, that proof applies only to a restricted class of problems. In this talk I will combine martingale methods with a new "converse theorem" for Lyapunov stability to arrive at a simple proof that works in the same situations where the ODE method applies. The advantage of this approach is that it can potentially be applied to several problems in Reinforcement Learning (RL), such as actor-critic learning (which is two-time-scale SA) and RL with value function approximation (which is SA with projection onto a lower-dimensional subspace). These directions are under investigation.
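For reference, the Robbins-Monro recursion mentioned above can be stated as follows; this is the standard textbook formulation, not material taken from the talk itself. To find a zero of a function $f$ observed only through noisy measurements, the iterates are

\[
\theta_{k+1} = \theta_k + \alpha_k \bigl( f(\theta_k) + \xi_{k+1} \bigr),
\qquad
\sum_{k=0}^{\infty} \alpha_k = \infty,
\quad
\sum_{k=0}^{\infty} \alpha_k^2 < \infty,
\]

where $\xi_{k+1}$ is the measurement noise (typically modeled as a martingale difference sequence, which is where the martingale methods of the abstract enter) and $\alpha_k$ is a decaying step size such as $\alpha_k = 1/(k+1)$.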
Zoom Link - https://zoom.us/j/91983281364?pwd=Wkl3MHMzWUFiYnVhV1d1U1E3bXhpZz09