In this talk, we will examine bandit algorithms in the special and practically relevant case of rarely occurring rewards. We study this problem in both the fixed-confidence and fixed-budget settings. In both settings, approximate methods allow us to drastically reduce the computational complexity of existing algorithms. We have also devised an algorithm for selecting the best system from a given collection of highly reliable systems, where failures are rare events.
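To make the rare-event setting concrete, here is a minimal Python sketch of the fixed-budget best-system problem, using a naive uniform-allocation baseline rather than the algorithms from the talk. All names and numbers (the function `uniform_allocation_most_reliable`, the failure probabilities, the budget) are hypothetical illustrations.

```python
import numpy as np

def uniform_allocation_most_reliable(failure_probs, budget, rng):
    """Naive fixed-budget baseline (not the algorithm from the talk):
    split the budget evenly across systems, observe Bernoulli failure
    indicators, and return the system with the lowest empirical
    failure rate."""
    k = len(failure_probs)
    pulls = budget // k                            # equal allocation per system
    failures = rng.binomial(pulls, failure_probs)  # failure count per system
    return int(np.argmin(failures / pulls))

# Hypothetical instance: five highly reliable systems whose failure
# probabilities are all below 1e-3 (values made up for illustration).
rng = np.random.default_rng(0)
failure_probs = np.array([5e-4, 3e-4, 8e-4, 1e-4, 2e-4])
choice = uniform_allocation_most_reliable(failure_probs, budget=500_000, rng=rng)
print("selected system:", choice, "true best:", int(np.argmin(failure_probs)))
```

Because failures are rare, many systems may record zero failures under a modest budget, so the naive empirical ranking above is uninformative; this sampling inefficiency is precisely the difficulty that motivates the more careful allocation strategies discussed in the talk.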