Suppose we have a Markov chain with transition matrix $$\textbf{P}= \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0&0&0&0 \\ \frac{1}{3} & \frac{2}{3} &0&0&0&0 \\\frac{1}{5}&\frac{1}{5}&\frac{1}{5}&\frac{1}{5}&\frac{1}{5}&0 \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{8} & 0 & \frac{1}{8} \\ \frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{6} \\ \frac{1}{4}&\frac{1}{8}&\frac{1}{8}&\frac{1}{8}&\frac{1}{8}&\frac{1}{4} \end{pmatrix}$$ We want to calculate $$\mathbb{P}(X_n=3 \ \text{for some} \ n \geq 1|X_0=3).$$

My teacher gave the following solution: $$\mathbb{P}(X_n=3 \ \text{for some} \ n \geq 1|X_0=3) =\sum_{i=3}^6\mathbb{P}(X_1=i, \ X_n=3 \ \text{for some} \ n \geq 1|X_0=3)=p_{33}+\sum_{i=4}^6\mathbb{P}(X_1=i|X_0=3)\,\mathbb{P}(X_n=3 \ \text{for some} \ n \geq 1|X_0=i)=p_{33}+\sum_{i=4}^6 p_{3i}\,\mathbb{P}(X_n=3 \ \text{for some} \ n \geq 1|X_0=i),$$ where $p_{ij}$ is the $(i, j)^{\text{th}}$ entry of $\textbf{P}$. (The terms $i=1,2$ drop out because $\{1,2\}$ is a closed class from which state $3$ is unreachable.)

He then made state $3$ absorbing, giving the new transition matrix $$\tilde{\textbf{P}}= \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0&0&0&0 \\ \frac{1}{3} & \frac{2}{3} &0&0&0&0 \\ 0&0&1&0&0&0 \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{8} & 0 & \frac{1}{8} \\ \frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{6} \\ \frac{1}{4}&\frac{1}{8}&\frac{1}{8}&\frac{1}{8}&\frac{1}{8}&\frac{1}{4} \end{pmatrix}$$ and computed the absorption probabilities $$\alpha_i=\mathbb{P}(\tilde{X}_n=3 \ \text{for some} \ n \geq 1|\tilde{X}_0=i), \quad i=4, 5, 6,$$ where $\tilde{X}_n$ is the Markov chain with transition matrix $\tilde{\textbf{P}}$. Using these absorption probabilities he computed the required probability.

My question is: how can we make a state absorbing when it is not absorbing in the original problem? Doing so changes the entire Markov chain, so won't the probabilities under consideration change as well?
How can we use the probabilities computed for this new Markov chain for our original problem?
1 Answer
This is a pretty clever technique if you ask me. So you want to compute $$\mathbb{P}(X_n = 3 \text{ for some } n \geq 1|X_0 = i).$$ Now consider the following modification of $X_n$, which I denote by $\tilde{X}_n$: the new chain* follows exactly the same path as the original one until it first visits state $3$; once $X_n$ hits $3$, the new chain remains in that state forever. In symbols, letting $T=\inf\{n \geq 1\colon \, X_n = 3\}$ be the hitting time of state $3$, we have: $$\tilde{X}_n = \begin{cases} X_n & \text{if }n< T,\\ 3 &\text{if }n\geq T. \end{cases}$$
Two facts about $\tilde{X}_n$. First of all, it should not be very difficult to show that it's a Markov chain with transition matrix $\tilde{\mathbf{P}}$ (that is, same transitions as $\mathbf{P}$ but the state $3$ is now absorbing).
Second of all, from the construction it should be clear that the event "$X_n$ visits the state $3$ starting from state $i$" is equal to the event "$\tilde{X}_n$ gets absorbed in state $3$". Thus the probability we were trying to compute is simply the absorption probability for this new chain: $$\mathbb{P}(X_n = 3 \text{ for some } n \geq 1|X_0 = i) = \mathbb{P}(\tilde{X}_n = 3 \text{ for some } n \geq 1|\tilde{X}_0 = i) = \alpha_i.$$
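To make this concrete, here is a short numerical sketch (my own addition, not part of the original answer) that solves the standard first-step system $\alpha_i = \tilde{p}_{i3} + \sum_{j} \tilde{p}_{ij}\alpha_j$ over the states $\{4,5,6\}$, and then assembles the required return probability $p_{33} + \sum_{i=4}^6 p_{3i}\alpha_i$. Note that $\alpha_1 = \alpha_2 = 0$, since $\{1,2\}$ is a closed class that never reaches state $3$, so those states can be left out of the linear system.

```python
import numpy as np

# Transition matrix of the original chain; row/column k corresponds to state k+1.
P = np.array([
    [1/2, 1/2, 0,   0,   0,   0  ],
    [1/3, 2/3, 0,   0,   0,   0  ],
    [1/5, 1/5, 1/5, 1/5, 1/5, 0  ],
    [1/4, 1/4, 1/4, 1/8, 0,   1/8],
    [1/6, 1/6, 1/6, 1/6, 1/6, 1/6],
    [1/4, 1/8, 1/8, 1/8, 1/8, 1/4],
])

# First-step analysis on the absorbing chain P~ gives, for i in {4, 5, 6},
#   alpha_i = p_{i3} + sum_{j in {4,5,6}} p_{ij} * alpha_j,
# i.e. (I - Q) alpha = r, with Q and r read off from P (rows 4-6 of P and
# P~ coincide).  States 1 and 2 contribute nothing since alpha_1 = alpha_2 = 0.
idx = [3, 4, 5]                      # 0-based indices of states 4, 5, 6
Q = P[np.ix_(idx, idx)]              # transitions among {4, 5, 6}
r = P[idx, 2]                        # one-step probability of jumping to state 3
alpha = np.linalg.solve(np.eye(3) - Q, r)   # alpha_4, alpha_5, alpha_6

# Return probability from state 3: either hit 3 immediately (p_33),
# or step to some i in {4, 5, 6} and get absorbed from there.
return_prob = P[2, 2] + P[2, idx] @ alpha
print(alpha)        # [64/197, 63/197, 54/197]
print(return_prob)  # 324/985, approximately 0.3289
```

Solving the three equations by hand gives the same values, $\alpha_4 = \frac{64}{197}$, $\alpha_5 = \frac{63}{197}$, $\alpha_6 = \frac{54}{197}$, and hence $\mathbb{P}(X_n = 3 \text{ for some } n \geq 1|X_0 = 3) = \frac{324}{985}$.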
*I say chain but it is not yet clear that it's indeed a Markov chain.
Suppose we toss a fair coin infinitely many times and we want to find the probability that the pattern $HTH$ appears before the pattern $HHH$. In that case we also make these states absorbing and then compute the absorption probabilities. How can we do that? These two problems seem completely different: in the first case the coin is tossed infinitely many times, but in the second case we stop flipping once one of the patterns $HTH$ or $HHH$ is observed. – user2545 Nov 11 '23 at 19:34
I don't see why the same argument doesn't apply. You can simply use the convention that once you get one of the mentioned patterns, your auxiliary chain stays there. Notice that you're not tossing the coin anymore since you're forcing the chain to stay in a certain state (so there's no randomness). @user671269 – Michh Nov 12 '23 at 15:42
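The comment's suggestion can be sketched the same way (my own illustration, not from the thread): track how much of each pattern the most recent tosses have completed, treat the two completed-pattern states as absorbing, and solve the same kind of first-step linear system.

```python
import numpy as np

# Progress states for the HTH-vs-HHH race: "" (no useful suffix), "H",
# "HH", "HT"; the two absorbing states HTH and HHH are handled implicitly.
# u[s] = P(HTH is completed before HHH | current suffix is s).
# Transitions under a fair coin (suffix -> suffix after one toss):
#   ""  : H -> "H",            T -> ""
#   "H" : H -> "HH",           T -> "HT"
#   "HH": H -> HHH (absorbed), T -> "HT"
#   "HT": H -> HTH (absorbed), T -> ""
Q = np.array([
    [1/2, 1/2, 0,   0  ],   # from ""
    [0,   0,   1/2, 1/2],   # from "H"
    [0,   0,   0,   1/2],   # from "HH"
    [1/2, 0,   0,   0  ],   # from "HT"
])
r = np.array([0, 0, 0, 1/2])   # one-step probability of completing HTH

u = np.linalg.solve(np.eye(4) - Q, r)
print(u[0])   # P(HTH before HHH) starting from no tosses = 3/5
```

Forcing the auxiliary chain to stay in the absorbing state is exactly the "we stop flipping" convention: what happens after absorption never affects the absorption probability.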