Stochastic matrices $\bf P$ with entries $p_{i,j}$ at row $i$ and column $j$ satisfy some specific constraints, namely:
$$\cases{\displaystyle\sum_{j} p_{i,j} = 1\\ p_{i,j} \geq 0}$$
Is there some way we can use these constraints to derive what restrictions hold on their inverse ${\bf P}^{-1}$?
The properties I know of so far are limited to all eigenvalues satisfying $|\lambda({\bf P})| \leq 1$, with one eigenvalue always guaranteed at $1$ and a corresponding eigenvector representing a steady state. But there can exist several of these. For example, for the matrix $${\bf P} = \begin{bmatrix}1&0&0\\0&1&0\\1/3&1/3&1/3\\\end{bmatrix}$$ the first two states are steady and the third is not.
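As a quick numerical sanity check of the claim above (a sketch using numpy; `P` is the example matrix):

```python
import numpy as np

# The example row-stochastic matrix from above (each row sums to 1).
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1/3, 1/3, 1/3]])

# P is lower triangular, so its eigenvalues are its diagonal entries:
# 1, 1, 1/3 -- the eigenvalue 1 appears twice, one per steady state.
print(np.sort(np.linalg.eigvals(P).real))  # [1/3, 1, 1] up to rounding
```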
Also, as mentioned by @Stefan, it is possible that $\lambda({\bf P}) = 0$, making the matrix singular and hence not invertible. We will consider only the matrices for which this does not happen.
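For concreteness, here is a sketch of the excluded case (my own illustrative example, not from @Stefan): a stochastic matrix with identical rows has linearly dependent rows, hence an eigenvalue $0$ and no inverse.

```python
import numpy as np

# Stochastic (rows sum to 1) but singular: identical rows => rank 1.
Q = np.array([[0.5, 0.5],
              [0.5, 0.5]])

print(np.sort(np.linalg.eigvals(Q).real))  # eigenvalues 0 and 1
print(np.linalg.matrix_rank(Q))            # rank 1, so Q is not invertible
```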
My own work is limited to concluding that if $\bf P$ has an eigenvector $v$ with eigenvalue $1$, i.e. ${\bf P}v = v$, then $v = {\bf P}^{-1}{\bf P}v = {\bf P}^{-1}v$, so the steady-state eigenvector of $\bf P$ is also an eigenvector of ${\bf P}^{-1}$ with eigenvalue $1$.
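This can be checked numerically on the example matrix above (a sketch assuming numpy; `P` and its inverse `P_inv` are my labels):

```python
import numpy as np

# Invertible row-stochastic example from above (eigenvalues 1, 1, 1/3).
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1/3, 1/3, 1/3]])
P_inv = np.linalg.inv(P)

# Row-stochasticity means P @ 1 = 1, and the eigenvector carries over:
# if P v = v then v = P^{-1} P v = P^{-1} v.
v = np.ones(3)
print(np.allclose(P @ v, v), np.allclose(P_inv @ v, v))  # True True

# Standard fact: the eigenvalues of P^{-1} are the reciprocals 1/lambda,
# here 1, 1, 3 -- note they can now exceed 1 in magnitude.
print(np.sort(np.linalg.eigvals(P_inv).real))
```

Note also that `P_inv` here has negative entries, so the inverse of a stochastic matrix need not be stochastic itself.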
But I don't know how to classify the other eigenvectors of ${\bf P}^{-1}$ (or any other property of the inverse that could be of interest).
