One of the basic questions in complexity theory is how the complexity of computing $k$ instances of a function relates to the complexity of computing a single instance. More precisely, we want to know whether we can save any computation by solving $k$ instances together instead of solving each instance individually. In the jargon of the literature, this is known as the direct-sum problem.
We will consider an easier version of this problem, called the elimination problem, in the communication setting. Consider a bipartite boolean function $f: \mathcal{X} \times \mathcal{Y} \rightarrow \{0,1\}$, where Alice gets $X \in \mathcal{X}$, Bob gets $Y \in \mathcal{Y}$, and jointly they compute $f(X, Y)$ by communicating with each other. In the elimination problem, Alice gets $(X_1, \dots, X_k) \in \mathcal{X}^k$ and Bob gets $(Y_1, \dots, Y_k) \in \mathcal{Y}^k$, and their goal is to output a string $\sigma_1, \dots, \sigma_k$ such that $f(X_i, Y_i) \neq \sigma_i$ for at least one $i \in [k]$. Clearly this is an easier task than the direct-sum problem:
Consider the case where $f$ is the EQUALITY function, $X_1 = \dots = X_k$ and $Y_1 = \dots = Y_k$. To solve the direct-sum problem one needs to solve at least one instance of $f$, whereas for elimination no communication is necessary under this promise: all $k$ answers $f(X_i, Y_i)$ are equal, so outputting any non-constant string (say $\sigma_1 = 0$, $\sigma_2 = 1$) is guaranteed to be wrong in at least one coordinate.
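As a sanity check on the definitions above, here is a minimal sketch in Python: the function names (`equality`, `eliminate_under_promise`, `is_valid_elimination`) are illustrative, not from the reference, and the strategy assumes the promise $X_1 = \dots = X_k$, $Y_1 = \dots = Y_k$ from the EQUALITY example.

```python
def equality(x, y):
    """EQUALITY: 1 if Alice's and Bob's inputs are equal, else 0."""
    return int(x == y)

def eliminate_under_promise(k):
    """Zero-communication strategy for elimination under the promise
    that all k instances are identical: every coordinate then has the
    same answer, so any non-constant output string must be wrong in at
    least one coordinate. (Assumes k >= 2.)"""
    assert k >= 2
    return [0, 1] + [0] * (k - 2)

def is_valid_elimination(f, xs, ys, sigma):
    """sigma is a valid elimination output iff it differs from
    f(X_i, Y_i) in at least one coordinate i."""
    return any(f(x, y) != s for x, y, s in zip(xs, ys, sigma))

# k = 3 identical instances; the answers are all equality("abc","abc") = 1,
# so the output [0, 1, 0] is wrong in coordinate 1 (and 3).
xs = ["abc"] * 3
ys = ["abc"] * 3
sigma = eliminate_under_promise(3)
print(is_valid_elimination(equality, xs, ys, sigma))  # True
```

The same strategy works whether the common answer is 0 or 1, since a non-constant string disagrees with every constant string somewhere.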
We will prove upper and lower bounds on the randomized communication complexity of the elimination problem.
Ref: Amos Beimel, Sebastian Ben Daniel, Eyal Kushilevitz, and Enav Weinreb. Choosing, Agreeing, and Eliminating in Communication Complexity.