The ability to learn and control tail risks, besides being an integral part of quantitative risk management, is important for running operations that require high service levels and cyber-physical systems that require strong reliability guarantees. Despite this significance, scalable algorithmic approaches have remained elusive because of the rarity with which the relevant risky samples are observed. In this talk we examine this bottleneck along two avenues: (i) statistical learning for the minimization of tail risks and (ii) simulation of tail risks. We show that efficient learning and simulation are possible by exploiting the similarity with which risk events unfold at different scales. This self-similarity, being a nonparametric characteristic, leads to richly expressive model classes and scalable algorithms that require exponentially fewer samples than their benchmark counterparts. Efficient learning is made possible by a novel, targeted approach to robustness, which may be of interest in broader contexts owing to its automatic bias-correction property. Specifically, the self-similar structure provides fertile ground for demonstrating how mildly restrictive structures can be used to debias the error introduced by first-step model estimation.