Tata Institute of Fundamental Research

Fast Provable Non-convex Algorithms for Matrix Decomposition

STCS Seminar
Speaker: Praneeth Netrapalli (Microsoft Research New England, 1 Memorial Drive, Cambridge, MA 02142, United States of America)
Organiser: Rahul Vaze
Date: Friday, 23 Oct 2015, 11:00 to 12:00
Venue: A-212 (STCS Seminar Room)

Abstract: Matrix decomposition, where we wish to approximate a given matrix as a sum or product of two or more structured matrices, plays a fundamental role in many machine learning applications. Principal component analysis (PCA) and k-means clustering, which are ubiquitous in machine learning, are, for instance, specializations of this problem. For many such problems, there is a dichotomy between algorithms that have provable guarantees and those that are used in practice. The algorithms with provable guarantees, though polynomial time, are time- and memory-intensive, whereas the ones used in practice are not as well understood but turn out to be very efficient and to have good empirical performance. In this talk, we focus on one such empirically popular heuristic, called alternating minimization (AltMin). Given a function that we wish to minimize, AltMin divides the variables into two or more sets and sequentially optimizes over one set while holding the rest fixed. In many settings, it turns out that although minimizing the function jointly over all variables is computationally hard, optimizing over any one set of variables can be done efficiently. We provide the first performance guarantees for AltMin for the problems of:

1. Matrix Completion, where we wish to recover a low-rank matrix from observations of only a few of its entries, and
2. Robust PCA, where we wish to decompose a given matrix as the sum of a low-rank matrix and a sparse matrix.

For both of these problems, our results give faster algorithms than those previously known. More broadly, our results indicate the potential of non-convex optimization for designing faster algorithms. I will mention some ongoing work and future research avenues in this direction.
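
To make the AltMin template concrete, the following is a minimal NumPy sketch of alternating minimization for matrix completion: with one factor held fixed, updating the other reduces to a set of independent small least-squares problems over the observed entries. The function name, iteration count, and SVD-based initialization below are illustrative assumptions for this sketch, not the speaker's exact algorithm or its provable parameter choices.

# Minimal sketch of alternating minimization (alternating least squares)
# for matrix completion. Illustrative only; not the analyzed algorithm.
import numpy as np

def altmin_matrix_completion(M_obs, mask, rank, n_iters=50):
    """Approximate the matrix observed on `mask` as U @ V.T with `rank` columns.

    M_obs : (m, n) array holding the observed entries (zeros elsewhere)
    mask  : (m, n) boolean array, True where an entry is observed
    """
    m, n = M_obs.shape
    # Initialize U from the top singular vectors of the observed matrix
    # (a common choice; the provable variant initializes in a similar spirit).
    U0, _, _ = np.linalg.svd(M_obs, full_matrices=False)
    U = U0[:, :rank].copy()
    V = np.zeros((n, rank))

    for _ in range(n_iters):
        # Fix U: each column of V solves a least-squares problem
        # restricted to the rows observed in that column.
        for j in range(n):
            rows = mask[:, j]
            if rows.any():
                V[j] = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)[0]
        # Fix V: symmetric update for each row of U.
        for i in range(m):
            cols = mask[i, :]
            if cols.any():
                U[i] = np.linalg.lstsq(V[cols], M_obs[i, cols], rcond=None)[0]
    return U, V

The point of the alternation is visible in the sketch: although the joint problem over (U, V) is non-convex, each inner update is an ordinary least-squares solve, so every alternating step is cheap.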
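In the same spirit, here is a hedged sketch of a non-convex alternating scheme for robust PCA: alternate between a truncated SVD (the low-rank step) and entrywise hard thresholding of the residual (the sparse step). The threshold value and its decay schedule below are arbitrary illustrative choices; the analyzed algorithm sets thresholds and grows the target rank in a specific stage-wise manner not reproduced here.

# Minimal sketch of alternating projections for robust PCA.
# Illustrative only; threshold schedule is an assumption of this sketch.
import numpy as np

def altproj_robust_pca(M, rank, n_iters=30, thresh=None):
    """Decompose M approximately as L (rank at most `rank`) plus S (sparse)."""
    if thresh is None:
        thresh = 3.0 * np.std(M)  # crude starting threshold, for illustration
    S = np.zeros_like(M)
    L = np.zeros_like(M)
    for _ in range(n_iters):
        # Low-rank step: best rank-`rank` approximation of M - S via SVD.
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
        # Sparse step: keep only residual entries larger than the threshold.
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
        thresh *= 0.9  # gently shrink the threshold (illustrative schedule)
    return L, S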
 
Bio: Praneeth Netrapalli is currently a postdoctoral researcher at Microsoft Research New England in Cambridge, MA. He obtained his MS and PhD from UT Austin and his B.Tech from IIT Bombay, all in electrical engineering. Before pursuing his PhD, he spent two years at Goldman Sachs, Bangalore, as a quantitative analyst working on pricing derivatives. His research focuses on designing efficient and provable algorithms for machine learning problems.