
I have a Markov chain with the following transition probability matrix. The possible states are $\{0,1,2\}$: $$ P = \begin{bmatrix} 0.6 & 0.2 & 0.2 \\ 0.2 & 0.5 & 0.3 \\ 0 & 0 & 1 \end{bmatrix} $$

The question asks me for the probability that the last non-absorbing state is $0$, given that the chain starts at $X_{0} = 0$.

I attempt the following: I let $T = \min\{n \geq 0 : X_{n} = 2\}$ be the time of absorption and consider $u_{i} = P(X_{T-1} = 0 \mid X_{0} = i)$. Using First Step Analysis, I set up the equations:

$$ u_{0} = 0.6 u_0 + 0.2u_1 + 0.2u_2 $$

$$ u_{1} = 0.2 u_0 + 0.5 u_1 + 0.3 u_2 $$

$$ u_2 = 1 $$
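(For reference, this linear system can be checked numerically in R; the matrix A and vector b below are simply my encoding of the three equations exactly as written.)

A <- rbind(c( 0.4, -0.2, -0.2),
           c(-0.2,  0.5, -0.3),
           c( 0.0,  0.0,  1.0))
b <- c(0, 0, 1)
solve(A, b)   # (u_0, u_1, u_2) for the system as written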

I solve this and get $u_1 = 44/50$ and $u_0 = 47/50$, but the answer in the back of the book says $u_0 = u_1 = 1$. Did I misunderstand the statement when expressing it in terms of $u_{i}$?

Dong Le
  • You've done something wrong because we should have $u_0+u_1=1$. The answer $u_0=u_1=1$ doesn't make sense for the same reason. – Math1000 Feb 23 '20 at 05:08

2 Answers


I wrote this $\texttt R$ code to simulate the process:

# Simulate N independent runs of the chain started at state 0 and record,
# for each run, the last non-absorbing state visited before hitting state 2.
rm(list = ls())

N <- 10000

X <- rep(0, N)   # X[i] will hold the last non-absorbing state of run i

for (i in 1:N) {
  state <- 0

  while (state != 2) {
    U <- runif(1)
    if (state == 0) {
      # transitions out of state 0: 0.6 -> 0, 0.2 -> 1, 0.2 -> 2
      if (U < 0.6)
        state <- 0
      else if (U < 0.8)
        state <- 1
      else {
        X[i] <- 0   # absorbed directly from state 0
        state <- 2
      }
    }
    else {
      # transitions out of state 1: 0.2 -> 0, 0.5 -> 1, 0.3 -> 2
      if (U < 0.2)
        state <- 0
      else if (U < 0.7)
        state <- 1
      else {
        X[i] <- 1   # absorbed directly from state 1
        state <- 2
      }
    }
  }
}

cat(sprintf("P(last non-absorbing state=0) = %f\n", length(which(X == 0))/N))

This gives an approximate probability of $0.62$ that the last non-absorbing state is $0$. I hope it is useful.

Math1000
  • Can you explain this in terms of First Step Analysis? – Dong Le Feb 23 '20 at 19:39
  • It isn't a first-step analysis. It simulates the process, starting at state $0$ until it reaches state $2$, and records the last state visited before reaching state $2$. It does this 10000 times (you can increase $N$ for less variance on the point estimate of the mean) and outputs the fraction of times the last state visited was state $0$. – Math1000 Feb 23 '20 at 20:23

My approach is mostly a brute-force attack.

Since the needed probability is $$ \sum_{n=1}^\infty \Pr(X_{n-1}=0,\ X_n=2 \mid X_0 = 0) =\sum_{n=1}^\infty P^{n-1}(0,0)\, P(0,2), $$ we need the top-left element of the matrix $P^{n-1}$.
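(A quick numerical sanity check of this series, truncating at $n=200$, which is enough since $P^{n-1}(0,0)$ decays geometrically; the R code below only encodes the matrix from the question.)

P <- matrix(c(0.6, 0.2, 0.2,
              0.2, 0.5, 0.3,
              0,   0,   1), nrow = 3, byrow = TRUE)
Pn <- diag(3)      # holds P^(n-1), starting from P^0 = I
total <- 0
for (n in 1:200) {
  total <- total + Pn[1, 1] * P[1, 3]   # P^(n-1)(0,0) * P(0,2)
  Pn <- Pn %*% P
}
total              # approximately 0.625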

Let the eigendecomposition of $P$ be $P=\Gamma\,\mathrm{diag}\lbrace 1,\lambda_2,\lambda_3\rbrace\,\Gamma^{-1}$. By solving $\det(P-\lambda I)=0$, $\lambda_2$ and $\lambda_3$ are found to be $\frac{11\pm\sqrt{17}}{20}$.

Notice that the $3$rd components of the $2$nd and the $3$rd eigenvectors are both $0$ (the last row of $P$ is $(0,0,1)$, so any eigenvector with eigenvalue $\lambda\neq 1$ must have third component $0$). So we can write $$ \Gamma=\begin{bmatrix}\frac1{\sqrt{3}} & v_1 & v_2 \\ \frac1{\sqrt{3}} & u_1 & u_2 \\ \frac1{\sqrt{3}} & 0 & 0 \end{bmatrix}. $$ Moreover, the $2\times 2$ transient block of $P$ is symmetric, so $(v_1,u_1)$ and $(v_2,u_2)$ can be chosen orthonormal, and then the inverse is $$\Gamma^{-1}=\begin{bmatrix}0 & 0 & \sqrt{3} \\ v_1 & u_1 & -v_1 - u_1 \\ v_2 & u_2 & -v_2 - u_2 \end{bmatrix}. $$

From $$(P-\lambda_2 I)\begin{bmatrix}v_1\\u_1\\0\end{bmatrix}=0 \quad\text{and}\quad v_1^2+u_1^2=1$$ (and the analogous equations for $\lambda_3$), it can be deduced after some algebra that $v_1^2$ and $v_2^2$ are respectively $\frac{8}{17\mp\sqrt{17}}.$
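(These closed forms can be double-checked numerically. The $2\times 2$ transient block of $P$, which I call $Q$ below, is symmetric, so eigen() in R returns orthonormal eigenvectors, and its eigenvalues are exactly $\lambda_2,\lambda_3$.)

Q <- matrix(c(0.6, 0.2,
              0.2, 0.5), nrow = 2, byrow = TRUE)
e <- eigen(Q)
e$values                                   # (11 + sqrt(17))/20 and (11 - sqrt(17))/20
e$vectors[1, ]^2                           # v_1^2 and v_2^2 (signs are irrelevant after squaring)
c(8/(17 - sqrt(17)), 8/(17 + sqrt(17)))    # the closed forms above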

Therefore, the needed probability is $$ P(0,2) \left(v_1^2 \sum_{i=0}^\infty \lambda_2^i +v_2^2 \sum_{i=0}^\infty \lambda_3^i \right) =\frac15 \left(\frac{v_1^2}{1-\lambda_2} + \frac{v_2^2}{1-\lambda_3}\right)=\frac58=0.625. $$
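(Plugging the eigenvalues and squared components above into the last display, again just arithmetic in R, confirms the value.)

lambda2 <- (11 + sqrt(17))/20
lambda3 <- (11 - sqrt(17))/20
v1sq <- 8/(17 - sqrt(17))
v2sq <- 8/(17 + sqrt(17))
(1/5) * (v1sq / (1 - lambda2) + v2sq / (1 - lambda3))   # 0.625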

Zack Fisher