The Johnson-Lindenstrauss lemma asserts that any $n$-point set in any Euclidean space can be mapped to a Euclidean space of dimension $k = O(\epsilon^{-2} \log n)$ so that all distances are preserved up to a multiplicative factor between $1 - \epsilon$ and $1 + \epsilon$. There are several proofs of the JL lemma, notably those by Indyk and Motwani, by Dasgupta and Gupta, and by Achlioptas. All of these proofs obtain such a mapping as a linear map $\mathbb{R}^n \to \mathbb{R}^k$ given by a suitable random matrix (since $n$ points span a subspace of dimension at most $n$, one may assume the point set lies in $\mathbb{R}^n$). In this talk I intend to present a portion of a 2008 paper by Matoušek, in which the author gives a simple and self-contained proof of the JL lemma that subsumes several of the earlier proofs. The paper uses a distribution on $n \times k$ matrices whose entries are independent random variables with mean $0$ and bounded variance, and with a uniform subgaussian tail. The distributions used by Indyk-Motwani and by Achlioptas both turn out to fall into this category.
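
To make the construction concrete, here is a minimal numerical sketch (not from the paper): it projects a point set with an Achlioptas-style matrix of independent $\pm 1$ entries, one of the distributions covered by Matoušek's theorem, and reports the empirical distortion of all pairwise distances. The dimensions, the Gaussian point set, and the constant $8$ in the choice of $k$ are illustrative assumptions, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

n, d, eps = 500, 10_000, 0.5
k = int(np.ceil(8 * np.log(n) / eps**2))   # k = O(eps^-2 log n); constant 8 is illustrative

X = rng.standard_normal((n, d))            # an arbitrary n-point set, here in R^d

# Projection matrix with i.i.d. +/-1 entries (Achlioptas-style): mean 0,
# variance 1, uniform subgaussian tail.  Scaling by 1/sqrt(k) gives
# E ||x R||^2 = ||x||^2 for every row vector x.
R = rng.choice([-1.0, 1.0], size=(d, k)) / np.sqrt(k)
Y = X @ R                                   # projected points in R^k

def pairwise_dists(A):
    """Euclidean distances over all unordered pairs of rows of A."""
    sq = np.sum(A * A, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (A @ A.T)
    i, j = np.triu_indices(len(A), 1)
    return np.sqrt(np.maximum(d2[i, j], 0.0))

ratio = pairwise_dists(Y) / pairwise_dists(X)
print(f"k = {k}; distance ratios lie in [{ratio.min():.3f}, {ratio.max():.3f}]")

The theorem guarantees that, with suitable constants, all ratios fall within $[1 - \epsilon, 1 + \epsilon]$ with high probability; the script simply reports the empirical range, which concentrates around $1$.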