In this talk, we will look at the Adversarial Multi-Armed Bandit problem. In this model, as the name suggests, the rewards are chosen by an adversary. We then present EXP3, a well-known algorithm for regret minimization in Adversarial Bandits, and analyze its regret.
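To fix ideas before the formal treatment, here is a standard formulation of the setting (the symbols $K$, $T$, $x_t(i)$, and $I_t$ are notational choices made here and may differ from those used later): at each round $t = 1, \dots, T$, the adversary fixes rewards $x_t(i) \in [0,1]$ for each of the $K$ arms, the learner selects an arm $I_t$ (possibly at random), and observes only the reward $x_t(I_t)$ of the chosen arm. The expected regret against the best fixed arm in hindsight is
\[
R_T \;=\; \max_{i \in [K]} \sum_{t=1}^{T} x_t(i) \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} x_t(I_t)\right].
\]
EXP3 achieves expected regret of order $O\!\left(\sqrt{T K \log K}\right)$ in this setting, which is the guarantee we will work toward.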