In this talk, I will discuss a robust online convex optimization framework in which an adversary can introduce outliers by corrupting the loss functions in an arbitrary number of rounds k, unknown to the learner. Our focus is on a novel setting that allows unbounded domains and large gradients for the losses, without relying on a Lipschitz assumption. We introduce a non-convex (invex) robust loss to mitigate the effect of outliers and, leveraging this loss, develop a robust variant of the online gradient descent algorithm. We establish regret guarantees, both dynamic and static, that are tight up to constants with respect to the uncorrupted rounds, and we conduct experiments to validate the theory. Furthermore, we present a unified analysis framework for developing online optimization algorithms for non-convex (invex) losses and use it to derive regret bounds for our robust loss; this framework may be of independent interest.
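The specific invex robust loss and algorithm from the talk are not given in this abstract. As a rough illustration of the general idea only, the sketch below runs online gradient descent on a Huber-robustified squared loss (a standard bounded-gradient surrogate chosen here for illustration, not the paper's construction): because the robust gradient is bounded in the residual, rounds with arbitrarily corrupted labels can only perturb the iterate by a bounded amount.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # Derivative of the Huber loss in the residual r: quadratic near zero,
    # linear for |r| > delta, so its magnitude is capped at delta.
    return r if abs(r) <= delta else delta * np.sign(r)

def robust_ogd(rounds, dim, step=0.5, delta=1.0):
    """Online gradient descent on a Huber-robustified squared loss.

    `rounds` yields pairs (x_t, y_t); corrupted rounds may carry arbitrary
    labels y_t, but the bounded robust gradient limits their influence.
    """
    w = np.zeros(dim)
    for t, (x, y) in enumerate(rounds, start=1):
        r = float(w @ x - y)               # residual of the current prediction
        g = huber_grad(r, delta) * x       # chain rule: bounded scalar times x_t
        w = w - (step / np.sqrt(t)) * g    # standard 1/sqrt(t) step-size schedule
    return w
```

Even when a fraction of rounds receive wildly corrupted labels, the iterate stays close to the parameter that fits the uncorrupted rounds, since each corrupted round moves it by at most `step * delta * ||x_t|| / sqrt(t)`.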
Short Bio:
Adarsh Barik is a postdoc at the Institute of Data Science at the National University of Singapore. He received his PhD in Computer Science from Purdue University and, before that, his undergraduate degree from IIT Madras. His research interests lie broadly in the theoretical and computational aspects of optimization, machine learning, information theory, and high-dimensional data analytics.