
Consider a game with six states 1, 2, 3, 4, 5, 6. Initially a player starts either in state 1 or in state 6. At each step the player jumps from one state to another as per the following rules. A perfectly balanced die is tossed at each step.

i) When the player is in state 1 or 6: If the roll of the die results in k then the player moves to state k, for k = 1, ..., 6.

ii) When the player is in state 2 or 3: If the roll of the die results in 1, 2 or 3 then the player moves to state 4. Otherwise the player moves to state 5.

iii) When the player is in state 4 or 5: If the roll of the die results in 4, 5 or 6 then the player moves to state 2. Otherwise the player moves to state 3.

The player wins when s/he visits 2 more states, besides the starting one.

(a) Calculate the probability that the player will eventually move out of states 1 and 6.

(b) Calculate the expected time the player will remain within states 1 and 6.

(c) Calculate the expected time for a player to win, i.e., to visit 2 more states, besides the starting one. [ISI 2021]

I understand it's probably a Markov Chain Problem but I am unable to solve it.

My attempt: lump the chain into four states. State 1: starting state 1; State 2: starting state 6; State 3: original states 2 or 3; State 4: original states 4 or 5. To calculate the probability that the player will eventually move out of states 1 and 6, we need to find the probability of reaching states 2, 3, 4, or 5 from states 1 and 6. We can do this by calculating the absorption probabilities, using the fundamental matrix $(I - Q)^{-1}$. But my problem is formulating the transition probabilities for the lumped states 3 and 4.
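For concreteness, here is a minimal sketch (my own code, using numpy; none of the names here are given in the problem) of the one-step transition matrix of the full six-state chain, built straight from rules (i)-(iii):

```python
import numpy as np

# One-step transition matrix of the full chain; index s = 0..5 stands for state s+1.
P = np.zeros((6, 6))

# Rule (i): from state 1 or 6, a roll of k sends the player to state k.
for s in (0, 5):
    P[s, :] = 1 / 6

# Rule (ii): from state 2 or 3, rolls 1-3 go to state 4, rolls 4-6 go to state 5.
for s in (1, 2):
    P[s, 3] = 1 / 2
    P[s, 4] = 1 / 2

# Rule (iii): from state 4 or 5, rolls 4-6 go to state 2, rolls 1-3 go to state 3.
for s in (3, 4):
    P[s, 1] = 1 / 2
    P[s, 2] = 1 / 2

print(P)
print(P.sum(axis=1))  # sanity check: every row sums to 1
```

The rows for states 2 and 3 come out identical, as do the rows for states 4 and 5, which is why I tried lumping them into single states; what I cannot pin down is the reduced matrix and the $Q$ block to feed into $(I - Q)^{-1}$.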

Akina

1 Answer


a) The probability of eventually moving out of states 1 and 6 has to be 1. This is because the player cannot win while staying inside states 1 and 6, and the chance of the game staying there forever is 0.
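To make that last claim explicit (this uses nothing beyond rule (i)): from state 1 or 6, only a roll of 1 or 6 keeps the player inside $\{1, 6\}$, so $$ \mathbb{P}(\text{still in } \{1,6\} \text{ after } n \text{ steps}) = \left(\frac{2}{6}\right)^n = \left(\frac{1}{3}\right)^n \to 0, $$ and hence the probability of eventually leaving is 1.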

b) To find the expected time we use the definition of expectation: $$ \sum_{n \ge 1} n\,\mathbb{P}(N = n), $$ where $N$ is the number of steps the player spends before leaving states 1 and 6. The player starts in state 1 or 6 with probability 1, and at each step the chance of staying in $\{1,6\}$ is $1/3$, so $N$ is geometric with $\mathbb{P}(N = n) = \frac{2}{3}\left(\frac{1}{3}\right)^{n-1}$. The sum becomes $$ \sum_{n \ge 1} n\,\frac{2}{3}\left(\frac{1}{3}\right)^{n-1} = 2\sum_{n \ge 1} n\left(\frac{1}{3}\right)^{n} = 2\,\frac{1/3}{(1-1/3)^2} = \frac{3}{2}. $$
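A quick numerical sanity check (a Monte Carlo sketch of my own; the function name and trial count are arbitrary choices, not part of the problem):

```python
import random

def time_in_16():
    """Number of steps until the player first leaves {1, 6}, starting in state 1."""
    state, steps = 1, 0
    while state in (1, 6):
        roll = random.randint(1, 6)
        state = roll          # rule (i): from state 1 or 6, move to the rolled state
        steps += 1
    return steps

trials = 200_000
print(sum(time_in_16() for _ in range(trials)) / trials)  # should be close to 1.5
```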

c) There is a 4/6 chance you will move out of states 1 and 6 on your first move. If this is the case, then you are guaranteed to win after 2 turns, since on the second turn you cannot move to a state you have already visited. Now if you start in state 1 and roll a 6 (1/6 chance), the game ends once you roll a 2, 3, 4, or 5; the expected time for this is the expected time spent in states 1 and 6 (which we found in b), plus one for the move already taken, i.e. $1 + \frac{3}{2} = \frac{5}{2}$. We now have: $$ \bar n = \frac{1}{6}(\bar n +1) + \frac{4}{6}\,(2) + \frac{1}{6}\left(\frac{5}{2}\right), $$ where $\bar n$ is the expected time to win. Here, the $\bar n +1$ is because if you roll a 1 on your first turn, the expected remaining time is unchanged, plus the one roll you already took. From here we can solve for $\bar n$: $$ \frac{5}{6}\bar n = \frac{1}{6}+\frac{8}{6}+\frac{5}{12} = \frac{23}{12}, $$ $$ \bar n = \frac{23}{10} = 2.3. $$
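And a matching simulation sketch of the full game (again my own code and names, just the standard random module), which should land near 2.3:

```python
import random

def time_to_win(start=1):
    """Steps until the player has visited two states other than the starting one."""
    visited = {start}
    state, steps = start, 0
    while len(visited) < 3:               # win = starting state plus 2 new states
        roll = random.randint(1, 6)
        if state in (1, 6):               # rule (i)
            state = roll
        elif state in (2, 3):             # rule (ii)
            state = 4 if roll <= 3 else 5
        else:                             # rule (iii): state is 4 or 5
            state = 2 if roll >= 4 else 3
        visited.add(state)
        steps += 1
    return steps

trials = 200_000
print(sum(time_to_win() for _ in range(trials)) / trials)  # should be close to 2.3
```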

Nic