The following is an exercise from Marcelo Viana's Lectures on Lyapunov Exponents. The goal is to calculate the extremal Lyapunov exponents. I am having trouble calculating the limit of the product of random matrices, which I believe should be done by applying the law of large numbers.
Consider an i.i.d. sequence of matrices: $$A_1 = \begin{pmatrix} \sigma & 0 \\ 0 & \frac{1}{\sigma}\end{pmatrix}\qquad A_2 = \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}$$
These are a scaling/squeezing by a factor of $\sigma>1$ and a rotation by $\frac{\pi}{2}$, respectively. Consider the sequence $L^n = L_nL_{n-1}\cdots L_1$ where $L_i = A_1$ with probability $0 < p < 1$ and $L_i = A_2$ with probability $(1-p)$. By the theorem of Furstenberg and Kesten, the following limits exist almost surely: $$\lim_{n\to\infty} \frac{1}{n} \log \|L^n\| = \lambda_+ \qquad \lim_{n\to\infty} \frac{1}{n} \log \|(L^n)^{-1}\|^{-1} = \lambda_-$$
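As a sanity check (not part of the exercise), $\lambda_+$ can be estimated by a quick Monte Carlo simulation; the values $\sigma = 2$, $p = 1/2$, and $n = 10^4$ below are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, p, n = 2.0, 0.5, 10_000  # illustrative values; the exercise fixes neither

A1 = np.array([[sigma, 0.0], [0.0, 1.0 / sigma]])
A2 = np.array([[0.0, -1.0], [1.0, 0.0]])

# Build L^n with per-step renormalization to avoid overflow.  Because the
# running product is rescaled to norm 1 after every step, the rescaling
# factors telescope: s_1 * ... * s_n = ||L^n||, so log_norm = log ||L^n||.
M = np.eye(2)
log_norm = 0.0
for _ in range(n):
    M = (A1 if rng.random() < p else A2) @ M
    s = np.linalg.norm(M, 2)  # spectral norm = largest singular value
    log_norm += np.log(s)
    M /= s

lambda_plus = log_norm / n  # estimate of lambda_+
print(lambda_plus)
```

For these parameters the estimate comes out small, though of course a finite simulation cannot by itself distinguish $\lambda_+ = 0$ from a very small positive value.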
Since $\det(A_i) = 1$ we have $\det(L^n)=1$ for all $n$. For a $2\times 2$ matrix $A$ with $\det(A) = 1$ the inverse is the adjugate, so $\operatorname{tr}(A^TA) = \operatorname{tr}\!\left((A^{-1})^TA^{-1}\right)$; using the characteristic polynomial $\rho_{A^TA}(\lambda) = \lambda^2 - \operatorname{tr}(A^TA)\lambda + \det(A^TA)$, it follows that $L^n$ and $(L^n)^{-1}$ have the same singular values, hence the same norm, for all $n$. Thus: $$\lambda_+ + \lambda_- = \lim_{n\to\infty} \frac{1}{n}\log\left(\|L^n\|\frac{1}{\|(L^n)^{-1}\|}\right) =\lim_{n\to\infty} \frac{1}{n} \log(1) = 0$$
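Both identities ($\det(L^n) = 1$ and $\|L^n\| = \|(L^n)^{-1}\|$) are easy to sanity-check numerically on random words in $A_1, A_2$ (the value $\sigma = 2$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0  # arbitrary sigma > 1; the identities hold for any such sigma
A1 = np.array([[sigma, 0.0], [0.0, 1.0 / sigma]])
A2 = np.array([[0.0, -1.0], [1.0, 0.0]])

for _ in range(200):
    # Random word of length 1..29 in the two generators.
    M = np.eye(2)
    for _ in range(rng.integers(1, 30)):
        M = (A1 if rng.random() < 0.5 else A2) @ M
    Minv = np.linalg.inv(M)
    norm, norm_inv = np.linalg.norm(M, 2), np.linalg.norm(Minv, 2)
    assert abs(np.linalg.det(M) - 1.0) < 1e-6
    assert abs(norm - norm_inv) < 1e-9 * norm

print("det(L^n) = 1 and ||L^n|| = ||(L^n)^{-1}|| on every sampled word")
```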
What I am having trouble with is computing $\lambda_+$ directly. My thought is this: for all $n$ the matrix $L^n$ is either diagonal or anti-diagonal, with nonzero entries $\pm\sigma^{k_n}$ and $\pm\sigma^{-k_n}$ for some integer $k_n \geq 0$. Since the matrix $A_2$ is just a rotation, the singular values of $L^n$ can only change at the steps where $L_i = A_1$. Therefore: $$\frac{1}{n}\log\|L^n\| = \frac{1}{n}\log \sigma^{k_n} = \frac{k_n}{n}\log\sigma$$ with $0 \leq \frac{k_n}{n} < 1$. This ratio should approach zero rather fast, since $A_2$ swaps the axes that are getting scaled, so a later $A_1$ can cancel an earlier one, e.g.: $$A_1A_2A_1 = A_2 = \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}$$
There should be a way to account for the $L_i = A_2$ factors so that $\frac{k_n}{n} \to 0$ in the limit rather than $\frac{k_n}{n} \to p$, but I can't figure it out.
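For what it's worth, the ratio $k_n/n$ can be monitored directly: since $L^n$ is diagonal or anti-diagonal with nonzero entries $\pm\sigma^{\pm k_n}$, the exponent $k_n$ is just $\log_\sigma$ of the largest entry of $L^n$ in absolute value ($\sigma = 2$ and $p = 1/2$ are again arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, p = 2.0, 0.5  # arbitrary illustrative parameters
A1 = np.array([[sigma, 0.0], [0.0, 1.0 / sigma]])
A2 = np.array([[0.0, -1.0], [1.0, 0.0]])

# L^n stays diagonal or anti-diagonal with entries +-sigma^k and
# +-sigma^(-k), so k_n = log_sigma of the largest entry in absolute value.
M = np.eye(2)
for n in range(1, 10_001):
    M = (A1 if rng.random() < p else A2) @ M
    if n in (10, 100, 1_000, 10_000):
        k = np.log(np.abs(M).max()) / np.log(sigma)
        print(n, k / n)  # the ratio k_n / n at this n
```

The printed ratios are consistent with $k_n/n \to 0$, but a simulation obviously is not a proof, hence the question.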