I am following a proof in Gilbert Strang's book "Introduction to Linear Algebra", and I am confused by one step of it.
Suppose we have an $n$ by $n$ stochastic matrix $A$: all entries are nonnegative and the entries in each column sum to $1$ (this is the convention Strang uses for a Markov matrix).
There is a proof that the largest eigenvalue of $A$ equals $1$ and that all other eigenvalues are less than $1$ in absolute value. I found it here: Proof that the largest eigenvalue of a stochastic matrix is 1
Now I want to prove that the Markov chain has a steady state that does not depend on the initial probability distribution:
$$ u_0 = \left( \begin{smallmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{smallmatrix} \right) $$
Let's apply diagonalization to our matrix $A$:
$$ A = S \Lambda S^{-1} $$
where $S$ has the eigenvectors $x_1, \ldots, x_n$ of $A$ as its columns, and $\Lambda$ is the diagonal matrix of the corresponding eigenvalues.
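As a sanity check, here is a quick NumPy sketch with a made-up $2 \times 2$ column-stochastic matrix (not from the book), verifying the diagonalization and that the largest eigenvalue is $1$:

```python
import numpy as np

# Hypothetical 2x2 Markov matrix: nonnegative entries, each column sums to 1.
A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

eigvals, S = np.linalg.eig(A)   # columns of S are the eigenvectors x_1, ..., x_n
Lam = np.diag(eigvals)          # Lambda: diagonal matrix of the eigenvalues

print(eigvals)                                     # 1.0 and 0.5 (order may vary)
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))  # True: A = S Lambda S^{-1}
```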
Suppose we want to represent our initial distribution as a linear combination of the eigenvectors of $A$:
$$ u_0 = c_1 x_1 + c_2x_2 + \ldots + c_nx_n $$
In matrix form, with $C = \left( \begin{smallmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{smallmatrix} \right)$:
$$u_0 = SC$$
We can get $C$ from:
$$C = S^{-1}u_0$$
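Continuing the sketch above, the coefficients come straight out of a linear solve (equivalent to multiplying by $S^{-1}$):

```python
u0 = np.array([0.5, 0.5])      # some initial probability distribution
C = np.linalg.solve(S, u0)     # C = S^{-1} u0, i.e. the coefficients c_i
print(np.allclose(S @ C, u0))  # True: u0 = S C
```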
So when we apply $A$ to $u_0$ repeatedly ($k$ times), the inner $S^{-1}S$ factors cancel:
$$ u_k = A \cdots A u_0 = S \Lambda S^{-1} \cdots S \Lambda S^{-1} u_0 = S \Lambda S^{-1} \cdots S \Lambda S^{-1} SC = S \Lambda^{k} C$$
Writing this out in terms of the eigenvectors:
$$ u_k = c_1(\lambda_1)^kx_1 + c_2(\lambda_2)^kx_2 + \ldots + c_n(\lambda_n)^kx_n $$
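Numerically (continuing the sketch), multiplying by $A$ $k$ times agrees with $S \Lambda^k C$:

```python
k = 10
u_k_direct = np.linalg.matrix_power(A, k) @ u0  # A applied to u0 k times
u_k_eig = S @ np.diag(eigvals**k) @ C           # S Lambda^k C
print(np.allclose(u_k_direct, u_k_eig))         # True
```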
But then the author writes this in the book:
$$ u_k = x_1 + c_2(\lambda_2)^kx_2 + \ldots + c_n(\lambda_n)^kx_n $$
I understand why $(\lambda_1)^k$ disappears: $\lambda_1 = 1$. But why does the author also omit $c_1$?
EDIT: I found out that $c_1$ equals $1$, which is why the author omits it. But I don't understand why $c_1 = 1$.
Later in his proof the author shows that:
$$ \lim_{k\rightarrow \infty } u_k = x_1 $$
So the author concludes that the steady state equals the eigenvector whose eigenvalue is $1$.
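For what it's worth, my numerical sketch above agrees with this: if the $\lambda = 1$ eigenvector is rescaled so its entries sum to $1$ (as a probability vector should), then the coefficient on that rescaled eigenvector comes out as $1$, and $u_k$ converges to it:

```python
i = np.argmax(eigvals.real)                  # index of the eigenvalue 1
x1 = S[:, i] / S[:, i].sum()                 # eigenvector rescaled to sum to 1
c1 = C[i] * S[:, i].sum()                    # coefficient with respect to the rescaled x1
print(c1)                                    # ~1.0
u_inf = np.linalg.matrix_power(A, 100) @ u0  # u_k for large k
print(np.allclose(u_inf, x1))                # True: the limit is x1
```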