Consider a game with six states 1, 2, 3, 4, 5, 6. Initially a player starts either in state 1 or in state 6. At each step the player jumps from one state to another as per the following rules. A perfectly balanced die is tossed at each step.

i) When the player is in state 1 or 6: if the roll of the die results in k, then the player moves to state k, for k = 1, . . . , 6.

ii) When the player is in state 2 or 3: if the roll of the die results in 1, 2 or 3, then the player moves to state 4. Otherwise the player moves to state 5.

iii) When the player is in state 4 or 5: if the roll of the die results in 4, 5 or 6, then the player moves to state 2. Otherwise the player moves to state 3.

The player wins when s/he visits 2 more states, besides the starting one.
(a) Calculate the probability that the player will eventually move out of states 1 and 6.
(b) Calculate the expected time the player will remain within states 1 and 6.
(c) Calculate the expected time for a player to win, i.e., to visit 2 more states, besides the starting one. [ISI 2021]
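Before attempting the exact calculation, I can at least sanity-check candidate answers by simulating the chain directly. This is only a rough sketch; the function names are mine, and the win condition encodes my reading of "visits 2 more states besides the starting one" (i.e., 2 distinct states other than the start, so moving between states 1 and 6 counts as visiting a new state):

```python
import random

def step(s):
    """One move of the chain from state s, per rules i)-iii)."""
    r = random.randint(1, 6)
    if s in (1, 6):
        return r                      # rule i): move to the rolled state
    if s in (2, 3):
        return 4 if r <= 3 else 5     # rule ii)
    return 2 if r >= 4 else 3         # rule iii), s in (4, 5)

def time_to_win(start):
    """Steps until 2 distinct states besides the start have been visited."""
    visited = {start}
    s, t = start, 0
    while len(visited - {start}) < 2:
        s = step(s)
        visited.add(s)
        t += 1
    return t

n = 100_000
est = sum(time_to_win(random.choice((1, 6))) for _ in range(n)) / n
print(est)
```

Any exact answer to (c) should agree with this estimate up to Monte Carlo noise (under my interpretation of the win condition).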
I understand it is probably a Markov chain problem, but I am unable to solve it.
My attempt at lumping the states: State 1: starting state 1; State 2: starting state 6; State 3: state 2 or 3; State 4: state 4 or 5. To calculate the probability that the player will eventually move out of states 1 and 6, we need to find the probability of reaching states 2, 3, 4, or 5 from states 1 and 6. We can do this by calculating the absorption probabilities, using the fundamental matrix $(I-Q)^{-1}$. But my problem is formulating the rows of the transition matrix for lumped states 3 and 4.
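If my lumping is valid, I believe rows 3 and 4 are actually deterministic: from {2, 3} every roll leads to 4 or 5 (both inside lumped state 4), and from {4, 5} every roll leads to 2 or 3 (both inside lumped state 3). A NumPy sketch of the setup, with my own state ordering {1}, {6}, {2,3}, {4,5}, treating the middle states as absorbing for part (b):

```python
import numpy as np

# Lumped transition matrix over {1}, {6}, {2,3}, {4,5}.
P = np.array([
    [1/6, 1/6, 2/6, 2/6],   # from state 1: rule i)
    [1/6, 1/6, 2/6, 2/6],   # from state 6: rule i)
    [0,   0,   0,   1  ],   # from {2,3}: always lands in {4,5}
    [0,   0,   1,   0  ],   # from {4,5}: always lands in {2,3}
])
assert np.allclose(P.sum(axis=1), 1)

# For part (b), treat {2,3} and {4,5} as absorbing; Q is the
# transient block over {1, 6}.
Q = P[:2, :2]
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix (I - Q)^{-1}
print(N.sum(axis=1))                # expected steps spent in {1, 6}
```

The row sums of N come out to 3/2 from either starting state, which matches the direct argument that the exit time from {1, 6} is geometric with success probability 4/6 = 2/3. I would still appreciate confirmation that this lumping is legitimate.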