In distribution testing, the goal is to determine whether a given unknown probability distribution satisfies a particular property or is far from having that property. In the standard sampling model, the algorithm can only draw independent samples from the distribution. Samples alone are often too weak for testing many properties, as evidenced by lower bounds of the form $\Omega(n^c)$ for some constant $c < 1$, where $n$ is the size of the domain.
In the conditional sampling model, the algorithm can draw samples conditioned on any subset of the domain. This model turns out to be significantly more powerful, often leading to exponential or even double-exponential improvements in the sample complexity required for testing. In this talk, we will discuss algorithms and tight lower bounds for testing several properties, such as equivalence, in this model.
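To make the oracle concrete, here is a minimal sketch of a conditional sampling oracle over a finite domain, assuming the distribution is given explicitly as a dictionary of probabilities (the helper name `cond_sample` and the dictionary representation are illustrative choices, not from the talk):

```python
import random

def cond_sample(dist, subset, rng=random):
    """Draw one sample from `dist` (a dict mapping domain elements to
    probabilities) conditioned on the event that the sample lies in
    `subset`. Illustrative sketch of the conditional-sampling oracle."""
    # Restrict attention to elements of the subset with positive mass.
    support = [x for x in subset if dist.get(x, 0) > 0]
    total = sum(dist[x] for x in support)
    if total == 0:
        raise ValueError("conditioning set has zero probability mass")
    # Sample proportionally to the (renormalized) masses within the subset.
    r = rng.random() * total
    acc = 0.0
    for x in support:
        acc += dist[x]
        if r <= acc:
            return x
    return support[-1]  # guard against floating-point rounding
```

For example, conditioning the distribution `{1: 0.5, 2: 0.25, 3: 0.25}` on the subset `{2, 3}` returns `2` and `3` each with probability 1/2; a testing algorithm in this model may issue such queries for arbitrary subsets of its choosing.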
Short Bio: