We investigate the following question: if NP contains functions that are slightly hard to compute on average, does NP also contain functions that are very hard to compute on average? The analogous result outside NP was shown by Yao via his celebrated XOR lemma, but the same technique breaks down inside NP, since the XOR of an NP function need not itself be in NP.
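For reference, Yao's XOR lemma can be stated as follows (this is one standard formulation; the exact size loss and advantage bound vary across proofs):

\[
f^{\oplus k}(x_1,\dots,x_k) \;=\; f(x_1)\oplus f(x_2)\oplus \cdots \oplus f(x_k).
\]

If every circuit of size $s$ fails to compute $f:\{0,1\}^n \to \{0,1\}$ on at least a $\delta$ fraction of inputs, then every circuit of some polynomially smaller size $s'$ agrees with $f^{\oplus k}$ on at most a $1/2+\epsilon$ fraction of inputs, where $\epsilon$ decays roughly like $(1-\delta)^k$. Thus mild hardness is amplified to near-maximal hardness, but the construction $f^{\oplus k}$ involves complementation and so does not obviously stay within NP.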
We will restrict our attention to Boolean functions, though the results we present can be generalized to non-Boolean-valued functions. O'Donnell [OD02] showed that hardness can be amplified within NP in the non-uniform setting, from $1/poly(n)$ to $1/2-1/n^{1/2+\epsilon}$ against polynomial-size circuits. Healy et al. [HVV04] extended this result and amplified the hardness to the optimal $1/2 - 1/poly(n)$.
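The obstacle to using XOR inside NP is sidestepped by combining independent copies of $f$ with a monotone function instead. Schematically (following the approach of [OD02], whose combiner is built from constructions such as recursive majority and tribes):

\[
f'(x_1,\dots,x_k) \;=\; C\bigl(f(x_1),\dots,f(x_k)\bigr), \qquad C \text{ monotone}.
\]

Monotonicity keeps $f'$ in NP when $f$ is: to certify $f'(x_1,\dots,x_k)=1$, it suffices to exhibit NP-witnesses for enough coordinates with $f(x_i)=1$ to force $C$ to output $1$.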
Trevisan, in his papers [Tre03, Tre05], amplified the hardness from $1/poly(n)$ to $1/2 - 1/poly\log(n)$ in the uniform setting. Trevisan's results can be explained in terms of error-correcting codes, as done by Buresh-Oppenheim et al. [BKS06]. This, in turn, leads to the open question of finding a monotone code with an efficient list-decoding algorithm.
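In this coding-theoretic view, amplification within NP corresponds, roughly, to encoding the truth table of $f$ with a code each of whose output bits is computed monotonically from the message:

\[
C:\{0,1\}^N \to \{0,1\}^M \text{ is monotone if every output bit } C(x)_j \text{ is a monotone Boolean function of } x.
\]

The open question above then asks for such a code that additionally admits an efficient list-decoding algorithm.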
In this talk, we will describe the gradual development of hardness amplification in NP.