In Mining Massive Datasets, page 365, the following theorem is stated without proof:
> Let A be a symmetric matrix. Then the second-smallest eigenvalue of A is equal to $\displaystyle \min_{x} {x^TAx}$, when the minimum is taken over all unit vectors x that are orthogonal to the eigenvector $v_1$ associated with the smallest eigenvalue.
This phrasing seems somewhat sloppy: consider the case where the smallest eigenvalue has an eigenspace of dimension $>1$. Then we do not really get the second-smallest eigenvalue $\lambda _2$ but the smallest eigenvalue $\lambda _1$ itself, because we can take another unit eigenvector of $\lambda _1$ that is orthogonal to $v_1$; it satisfies the constraints of the minimization problem and gives $x^TAx=x^T \lambda _1 x =\lambda _1$.
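For instance, here is a minimal numerical sketch of this observation (the matrix $\operatorname{diag}(1,1,2)$ and the vectors below are my own toy choices, not from the book):

```python
import numpy as np

# Toy example: the smallest eigenvalue 1 of A = diag(1, 1, 2) has a
# two-dimensional eigenspace; the second-smallest *distinct* eigenvalue is 2.
A = np.diag([1.0, 1.0, 2.0])

v1 = np.array([1.0, 0.0, 0.0])   # a unit eigenvector for the smallest eigenvalue
x = np.array([0.0, 1.0, 0.0])    # unit vector orthogonal to v1, yet still an
                                 # eigenvector for the smallest eigenvalue

print(x @ v1)      # 0.0 -> x satisfies the orthogonality constraint
print(x @ A @ x)   # 1.0 -> the minimum is at most lambda_1 = 1, not 2
```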
So the correct statements are probably these two:
1. The $k$-th smallest (distinct) eigenvalue of $A$ is equal to $\displaystyle \min_{x} {x^TAx}$, when the minimum is taken over all unit vectors $x$ that are orthogonal to all of the eigenvectors associated with the $k-1$ smallest distinct eigenvalues.
2. The second-smallest eigenvalue of $A$, counted with multiplicity according to the dimensions of the eigenspaces, is equal to $\displaystyle \min_{x} {x^TAx}$, when the minimum is taken over all unit vectors $x$ that are orthogonal to an eigenvector associated with the smallest eigenvalue.
For the $k$-th smallest eigenvalue in the second statement, we ask for orthogonality to $k-1$ orthonormal eigenvectors associated with the $k-1$ smallest eigenvalues, counted with multiplicity.
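As a sanity check (not a proof) of the multiplicity-counted version, here is a small numerical experiment; the matrix size, random seed, number of samples and the choice $k=3$ are arbitrary assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                      # random symmetric test matrix

eigvals, eigvecs = np.linalg.eigh(A)   # eigh returns eigenvalues in ascending order
k = 3                                  # k-th smallest, counted with multiplicity
V = eigvecs[:, :k - 1]                 # orthonormal eigenvectors of the k-1 smallest eigenvalues

values = []
for _ in range(100_000):
    x = rng.standard_normal(6)
    x -= V @ (V.T @ x)                 # project onto the orthogonal complement of span(V)
    x /= np.linalg.norm(x)             # normalize to a unit vector
    values.append(x @ A @ x)

print(eigvals[k - 1])                  # lambda_k
print(min(values))                     # never below lambda_k, and close to it
                                       # (the bound is attained at the eigenvector v_k)
```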
Is this right? And how do I prove, in both cases, that $\displaystyle \min_{x} {x^TAx} \leq \lambda _k$?
This answer shows that without the orthogonality constraint we get the smallest eigenvalue: the $x$ that attains the minimum is an eigenvector of $\frac{A+A^T}{2}=A$, and hence the value of the quadratic form is its associated eigenvalue. The answer there also seems to hint that an assumption of positive semi-definiteness of $A$ is necessary; is it needed? The book does not require it.
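On that last point, a quick numerical experiment (again not a proof; the indefinite matrix below is my own choice) suggests that the unconstrained minimum over unit vectors matches the smallest eigenvalue even when $A$ is not positive semi-definite:

```python
import numpy as np

# Indefinite symmetric example (my own choice): eigenvalues -2, 0, 3, so A is not PSD.
A = np.diag([-2.0, 0.0, 3.0])

rng = np.random.default_rng(1)
samples = rng.standard_normal((100_000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)   # random unit vectors

quad = np.einsum('ij,jk,ik->i', samples, A, samples)        # x^T A x for each sample
print(quad.min())   # close to -2, the smallest eigenvalue, despite A being indefinite
```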