Abstract: Generalization error is the gap between an algorithm's performance on the true data distribution (unknown to us) and its performance on the given dataset (known to us). Establishing upper bounds on the generalization error is therefore of natural interest. In 2020, Steinke and Zakynthinou derived such a bound in terms of how well one can guess the algorithm's input (its training data) by observing its output. In this talk, we will go over the proof of this result.
Link to the paper: https://arxiv.org/abs/2001.09122
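
For concreteness, here is an informal sketch of the quantities involved, following the paper's supersample construction as I recall it; the notation may differ slightly from the paper itself.

% Generalization error of an algorithm A trained on a sample Z = (z_1, ..., z_n)
% drawn i.i.d. from D, with a loss function \ell bounded in [0,1]:
\[
  \mathrm{gen}(A, Z)
  = \mathbb{E}_{z \sim \mathcal{D}}\!\big[\ell(A(Z), z)\big]
  - \frac{1}{n} \sum_{i=1}^{n} \ell(A(Z), z_i).
\]
% Supersample setup: draw \tilde{Z}, an n x 2 array of i.i.d. samples from D, and an
% independent uniform selector S \in \{0,1\}^n that picks one sample from each pair to
% form the training set \tilde{Z}_S. "Guessing the input from the output" is formalized
% as the conditional mutual information
\[
  \mathrm{CMI}_{\mathcal{D}}(A) = I\big(A(\tilde{Z}_S);\, S \,\big|\, \tilde{Z}\big),
\]
% and the main bound (roughly) reads
\[
  \Big|\,\mathbb{E}\big[\mathrm{gen}(A, \tilde{Z}_S)\big]\Big|
  \le \sqrt{\frac{2\,\mathrm{CMI}_{\mathcal{D}}(A)}{n}}.
\]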