STCS Seminar
Speaker: Arpit Agarwal (Columbia University)
Organiser: Sandeep K Juneja
Date: Monday, 14 Nov 2022, 16:00 to 17:00
Venue: A201
Abstract:
In many machine learning applications, the learner faces a *stochastic environment* and (sequentially) probes or influences it so as to optimize a given objective function. Examples of such applications include recommendation systems, web advertising, viral marketing, clinical trials, and search ranking. For instance, in recommendation systems, the learner attempts to identify good recommendations by probing the stochastic preferences of users. Similarly, in viral marketing, the learner attempts to spread information through a social network using marketing campaigns that influence (stochastic) subsets of the network.
Most existing learning algorithms for these applications operate in one of two settings: (1) non-adaptive, and (2) fully adaptive. In the non-adaptive setting, all the selections/probes are completely determined ahead of time. However, these a priori selections might be inefficient, as some of them might turn out to be unnecessary in hindsight. In the fully adaptive setting, the selection policy is updated after each observation from the environment. However, this fully adaptive setting might not be practical in many applications due to delays in receiving observations from many parallel sources. In this talk, we introduce a semi-adaptive setting that interpolates between these two extremes for a wide range of learning and decision-making problems, such as best arm identification in multi-armed bandits, ranking from pairwise comparisons, dueling bandits, and stochastic submodular maximization. We show that semi-adaptive policies enjoy the power of fully adaptive policies while requiring very few updates to the selection/probing rules. We also identify the trade-offs between rounds of adaptivity and performance.
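To illustrate the semi-adaptive idea in the best-arm-identification setting, the sketch below runs a batched successive-elimination strategy: within each round the probing plan is fixed (every surviving arm is pulled equally often), and the plan is updated only once per round, so the number of adaptive updates equals the number of rounds rather than the number of observations. This is a generic textbook-style illustration, not the speaker's algorithm; the arm means, round counts, and elimination rule are illustrative assumptions.

```python
import math
import random

def batched_best_arm(true_means, num_rounds=3, pulls_per_round=200, seed=0):
    """Semi-adaptive best-arm identification (illustrative sketch).

    Arms are Bernoulli with the given means (unknown to the learner).
    The selection rule is updated only `num_rounds` times, in contrast
    to a fully adaptive policy that updates after every single pull.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    active = list(range(n_arms))
    totals = [0.0] * n_arms  # cumulative reward per arm
    counts = [0] * n_arms    # cumulative pulls per arm

    for _ in range(num_rounds):
        # Non-adaptive within the round: pull every active arm equally often.
        for arm in active:
            for _ in range(pulls_per_round):
                totals[arm] += 1.0 if rng.random() < true_means[arm] else 0.0
                counts[arm] += 1
        # One adaptive update per round: drop clearly suboptimal arms
        # using a Hoeffding-style confidence radius.
        means = {a: totals[a] / counts[a] for a in active}
        radius = math.sqrt(
            math.log(4.0 * n_arms * num_rounds) / (2 * counts[active[0]])
        )
        best = max(means.values())
        active = [a for a in active if means[a] >= best - 2 * radius]

    # Return the empirically best surviving arm.
    return max(active, key=lambda a: totals[a] / counts[a])

# With well-separated means, the best arm survives all elimination rounds.
print(batched_best_arm([0.9, 0.5, 0.4, 0.3]))
```

The trade-off mentioned in the abstract shows up directly in the parameters: increasing `num_rounds` (more adaptivity) lets the learner discard bad arms earlier and spend its pull budget more efficiently, while a single round recovers the purely non-adaptive strategy.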
Bio: Dr. Arpit Agarwal is a Postdoctoral Research Fellow at the Data Science Institute at Columbia University. His research interests primarily lie in machine learning, with connections to algorithmic economics and theoretical computer science. Before joining Columbia, he received a Ph.D. in Computer Science from the University of Pennsylvania and a master's degree in Computer Science from the Indian Institute of Science.