Speaker: | Anand D. Sarwate (Rutgers University, USA) |
Organiser: | Vinod M. Prabhakaran |
Date: | Tuesday, 7 Jan 2025, 16:00 to 17:00 |
Venue: | HBA Foyer |
Many measurements and signals are multidimensional, or tensor-valued. Vectorizing tensor data for statistical and machine learning tasks often means fitting a very large number of parameters. Modeling such data with tensor decompositions instead gives a flexible and useful framework whose complexity can adapt to the amount of data available. This talk will introduce classical decompositions (CP, Tucker) as well as more recent ones (tensor train, block tensor decomposition, and low separation rank) and show how they can be used to learn scalable representations of tensor-valued data and to make predictions from it. Time permitting, we will describe applications of these ideas in neural networks and federated learning.
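To illustrate the parameter-count gap the abstract alludes to, the following sketch compares an unstructured (vectorized) model of a 3-way tensor against rank-R CP and Tucker parameterizations. The shapes, the rank R, and the counting conventions are illustrative assumptions, not taken from the talk:

```python
# Hypothetical illustration: number of free parameters for a 3-way
# tensor of shape (I, J, K) under different models. Values of I, J, K
# and the rank R are assumed for the sake of the example.
I, J, K = 100, 100, 100
R = 10  # assumed decomposition rank

# Vectorized / unstructured model: one parameter per tensor entry.
full = I * J * K

# Rank-R CP decomposition: one I x R, J x R, and K x R factor matrix.
cp = R * (I + J + K)

# Tucker decomposition with an R x R x R core plus the factor matrices.
tucker = R**3 + R * (I + J + K)

print(full, cp, tucker)  # 1000000 3000 4000
```

The decompositions shrink the parameter count by orders of magnitude here, and increasing R lets the model complexity grow with the amount of available data.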
Note: This talk does not assume the audience has prior familiarity with tensor algebra.
Short Bio: Anand D. Sarwate received his Ph.D. in electrical engineering from UC Berkeley. He is currently an Associate Professor at Rutgers and was previously a Research Assistant Professor at TTI-Chicago and a postdoc at the ITA Center at UCSD. His research interests include information theory, machine learning, signal processing, optimization, and privacy and security. Dr. Sarwate is a 2024--2025 Distinguished Lecturer of the IEEE Information Theory Society and serves on its Board of Governors.