The Generalized Linear Model (GLM) is a unified framework that brings linear regression over real-valued labels, gamma and Poisson regression over positive labels, and logistic regression over binary labels under the same umbrella, and solves a Maximum Likelihood Estimation (MLE) problem to estimate the model generating the data. Even unsupervised estimators such as the empirical mean are maximum likelihood estimates. However, MLE-based algorithms often fail to recover the true model when a fraction of the observed data points is adversarially contaminated.
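As a minimal illustration of this failure mode (not from the paper), consider the empirical mean, which is the Gaussian MLE for the location parameter: corrupting even a small fraction of the sample can move the estimate arbitrarily far from the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(loc=5.0, scale=1.0, size=100)  # true mean is 5.0

data = clean.copy()
data[:10] = 1000.0  # adversarially corrupt 10% of the points

# The empirical mean (the MLE) is dragged far away by the corruption,
# while the mean of the clean sample stays near 5.0.
print(np.mean(clean), np.mean(data))
```

Here the corrupted mean lands near 100 rather than 5, which is the behaviour robust estimators such as SVAM are designed to avoid.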
In this talk, I will discuss our work on robust learning algorithms for GLMs under adversarial corruptions. We introduce a variant of the Expectation Maximization (EM) algorithm that adaptively alters the noise variance while solving a weighted MLE problem. The algorithm, called SVAM (Sequential Variance-Altered MLE), offers provable model recovery guarantees superior to the state of the art for robust regression, even when a constant fraction of the training labels is corrupted. The algorithm is also efficient, offering a linear rate of convergence to the true optimum. Beyond linear regression, the technique and its guarantees extend to gamma regression, logistic regression, and mean estimation. SVAM also empirically outperforms several existing problem-specific techniques for robust regression and classification.
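The variance-altered weighted MLE idea can be sketched for robust linear regression roughly as follows. This is a hypothetical illustration, not the paper's exact algorithm: the weight formula, the parameter names (`beta0`, `xi`), the initialization, and the schedule are all illustrative assumptions.

```python
import numpy as np

def variance_altered_wls(X, y, beta0=0.01, xi=1.1, iters=50):
    """Sketch of a sequential variance-altered weighted MLE for robust
    linear regression. Points with large residuals under the current
    model receive exponentially small weights; the inverse variance
    beta is sharpened geometrically each round."""
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least-squares init
    beta = beta0                              # initial inverse variance
    for _ in range(iters):
        r = y - X @ w                         # residuals under current model
        s = np.exp(-0.5 * beta * r**2)        # Gaussian likelihood weights
        W = s / s.sum()                       # normalized per-point weights
        # Weighted least-squares step: solve X^T W X w = X^T W y
        Xw = X * W[:, None]
        w = np.linalg.solve(X.T @ Xw + 1e-8 * np.eye(d), Xw.T @ y)
        beta *= xi                            # alter (sharpen) the variance
    return w
```

With exact clean labels and a constant fraction of grossly corrupted ones, the early, high-variance rounds behave like ordinary least squares, while later rounds suppress the corrupted points and refit on the clean ones.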
The talk is based on a published article co-authored with Bhaskar Mukhoty and Purushottam Kar.
Short Bio:
Debojyoti Dey is a final-year PhD student at IIT Kanpur. His research interests span non-convex and robust optimization, and distribution learning in probabilistic graphical models.