How does one partition the vertex set of a graph into two parts $(S, S^C)$ so that the ratio of the number of edges crossing the cut to the volume of $S$ (the sum of the degrees of the vertices in $S$) is as small as possible? This quantity is known as the sparsity $\phi(S)$ of the cut, and finding the sparsest cut $\phi(G)$ of a graph $G$ is an important problem, used as a subroutine in many algorithmic tasks such as spectral clustering. A widely used method is based on the spectrum of the Laplacian of the graph and is referred to as spectral partitioning.
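Concretely, under one common convention (the normalization here is a choice of this write-up, not fixed by the abstract), writing $E(S, S^C)$ for the set of edges crossing the cut and $\mathrm{vol}(S) = \sum_{v \in S} \deg(v)$:
$$\phi(S) = \frac{|E(S, S^C)|}{\min\big(\mathrm{vol}(S), \mathrm{vol}(S^C)\big)}, \qquad \phi(G) = \min_{\emptyset \neq S \subsetneq V} \phi(S).$$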
The method is based on the following inequality, known as Cheeger's inequality: $\frac{\lambda_2}{2} \leq \phi(G) \leq \sqrt{2\lambda_2}$. Here $\lambda_2$ is the second smallest eigenvalue of the normalized Laplacian of $G$.
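A standard example (an illustration added here, not part of the abstract) shows the square-root loss in the upper bound is unavoidable: for the cycle $C_n$, the normalized Laplacian has $\lambda_2 = 1 - \cos(2\pi/n) = \Theta(1/n^2)$, while cutting the cycle into two paths crosses $2$ edges against volume $n$, so $\phi(C_n) = \Theta(1/n) = \Theta(\sqrt{\lambda_2})$, far larger than $\lambda_2$ itself.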
This is a special case of a more general inequality on Riemannian manifolds proved by Cheeger, and was adapted to the discrete setting by Alon and Milman.
We shall see a proof of this inequality in the talk, and how it can be used to algorithmically generate a cut whose sparsity satisfies the guarantee above.
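To make the algorithmic side concrete, here is a minimal sketch (added here, not from the talk) of the standard sweep-cut procedure in Python/NumPy: compute the second eigenvector of the normalized Laplacian, sort the vertices by their coordinates in it, and return the sparsest of the $n-1$ prefix cuts. It assumes a connected simple graph with no isolated vertices, given as a symmetric 0/1 adjacency matrix; `sweep_cut` is an illustrative name.

```python
import numpy as np

def sweep_cut(A):
    """Return (phi, S): the sparsest of the n-1 sweep cuts and its sparsity."""
    n = A.shape[0]
    deg = A.sum(axis=1).astype(float)
    d_inv_sqrt = 1.0 / np.sqrt(deg)          # assumes no isolated vertices
    # Normalized Laplacian:  L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    x = d_inv_sqrt * eigvecs[:, 1]           # embed vertices by the second eigenvector
    order = np.argsort(x)
    total_vol = deg.sum()
    in_S = np.zeros(n, dtype=bool)
    vol = cut = 0.0
    best_phi, best_k = np.inf, 0
    for k in range(n - 1):                   # sweep over the n-1 prefix cuts
        v = order[k]
        vol += deg[v]
        # adding v: its edges into S stop crossing, its remaining edges start crossing
        cut += deg[v] - 2.0 * A[v, in_S].sum()
        in_S[v] = True
        phi = cut / min(vol, total_vol - vol)
        if phi < best_phi:
            best_phi, best_k = phi, k + 1
    return best_phi, order[:best_k]
```

The proof of the upper bound in Cheeger's inequality is constructive in exactly this way: among the sweep cuts of the second eigenvector there is always one with $\phi \leq \sqrt{2\lambda_2}$, so the returned cut meets the guarantee above.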