Questions tagged [markov-chains]

Stochastic processes (with either discrete or continuous time dependence) on a discrete (finite or countably infinite) state space in which the distribution of the next state depends only on the current state. For Markov processes on continuous state spaces please use (markov-process) instead.

A Markov chain is a stochastic process on a discrete (finite or countably infinite) state space in which the distribution of the next state depends only on the current state. These objects show up in probability and computer science, in both discrete-time and continuous-time models. For Markov processes on continuous state spaces, please use (markov-process) instead.

A discrete-time Markov chain is a sequence of random variables $\{X_n\}_{n\geq1}$ with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states, i.e. $$\mathbb P(X_{n+1}=x\mid X_{1}=x_{1},X_{2}=x_{2},\ldots ,X_{n}=x_{n})=\mathbb P(X_{n+1}=x\mid X_{n}=x_{n}),$$ if both conditional probabilities are well defined, i.e. if $\mathbb P(X_{1}=x_{1},\ldots ,X_{n}=x_{n})>0.$
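
As a concrete illustration of the definition above, here is a minimal simulation sketch of a discrete-time chain on a finite state space (the 3-state transition matrix and all names are illustrative, not drawn from any question below):

```python
import numpy as np

# Illustrative 3-state transition matrix: row i is the distribution of
# X_{n+1} given X_n = i, so every row sums to 1 (row-stochastic).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

rng = np.random.default_rng(0)

def simulate_chain(P, x0, n_steps):
    """Sample a trajectory X_0, X_1, ..., X_{n_steps}."""
    states = [x0]
    for _ in range(n_steps):
        # The Markov property in action: the next state is drawn from the
        # row of P indexed by the current state only; the rest of the
        # recorded history is never consulted.
        states.append(int(rng.choice(len(P), p=P[states[-1]])))
    return states

print(simulate_chain(P, x0=0, n_steps=10))
```

The single line doing the sampling reads only `P[states[-1]]`, which is exactly the displayed conditional-probability identity.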

6166 questions
118
votes
6 answers

How often does it happen that the oldest person alive dies?

Today, we are brought the sad news that Europe's oldest woman died. A little over a week ago the oldest person in the U.S. unfortunately died. Yesterday, the Netherlands' oldest man died peacefully. The Gerontology Research Group keeps records:…
Řídící
  • 3,268
52
votes
5 answers

Time to reach a final state in a random dynamical system (answer known, proof unknown)

Consider a dynamical system with state space $2^n$ represented as a sequence of $n$ black or white characters, such as $BWBB\ldots WB$. At every step, we choose a random pair $(i,j)$ with $i$…
41
votes
9 answers

Probability brain teaser with infinite loop

I found this problem and I've been stuck on how to solve it. A miner is trapped in a mine containing 3 doors. The first door leads to a tunnel that will take him to safety after 3 hours of travel. The second door leads to a tunnel that will return…
38
votes
3 answers

Is an ergodic Markov chain both irreducible and aperiodic, or just irreducible?

Some definitions I find say: ergodic = irreducible, and then irreducible + aperiodic + positive recurrent gives a regular Markov chain. A Markov chain is called an ergodic chain if it is possible to go from every state to every state (not necessarily in one…
colinfang
  • 857
31
votes
5 answers

What is a Markov Chain?

What is an intuitive explanation of Markov chains, and how do they work? Please provide at least one practical example.
30
votes
4 answers

Markov process vs. Markov chain vs. random process vs. stochastic process vs. collection of random variables

I'm trying to understand each of the above terms, and I'm having a lot of trouble deciphering the differences between them. According to Wikipedia: A Markov chain is a memoryless, random process. A Markov process is a stochastic process, which…
30
votes
3 answers

'Intuitive' difference between Markov Property and Strong Markov Property

It seems that similar questions have come up a few times regarding this, but I'm struggling to understand the answers. My question is a bit more basic: can the difference between the strong Markov property and the ordinary Markov property be…
30
votes
5 answers

When the product of dice rolls yields a square

Succinct Question: Suppose you roll a fair six-sided die $n$ times. What is the probability that the product of the rolls is a square? Context: I used this as one question in a course for elementary school teachers when $n=2$, and thought the…
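
The excerpt's setup reduces to a small Markov chain worth sketching: track only the parities of the exponents of $2$, $3$ and $5$ in the running product, a random walk on $\mathbb{Z}_2^3$; the product is a square exactly when all three parities are even. A sketch under that reduction (the function name is mine):

```python
from itertools import product

# Parities (mod 2) of the exponents of 2, 3, 5 for each face of the die.
FACE_PARITY = {1: (0, 0, 0), 2: (1, 0, 0), 3: (0, 1, 0),
               4: (0, 0, 0), 5: (0, 0, 1), 6: (1, 1, 0)}

def p_square(n):
    """Exact P(product of n fair d6 rolls is a perfect square)."""
    dist = {s: 0.0 for s in product((0, 1), repeat=3)}
    dist[(0, 0, 0)] = 1.0  # empty product is the square 1
    for _ in range(n):
        new = {s: 0.0 for s in dist}
        for state, p in dist.items():
            for parity in FACE_PARITY.values():
                nxt = tuple((a + b) % 2 for a, b in zip(state, parity))
                new[nxt] += p / 6
        dist = new
    return dist[(0, 0, 0)]

print(p_square(2))  # 8/36 = 2/9 for the n = 2 case mentioned in the excerpt
```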
29
votes
2 answers

Nice references on Markov chains/processes?

I am currently learning about Markov chains and Markov processes as part of my study of stochastic processes. I feel there are so many properties of Markov chains, but the book I have makes me miss the big picture, and I might better look…
Tim
  • 49,162
28
votes
2 answers

Drunkard's walk on the $n^{th}$ roots of unity.

Fix an integer $n\geq 2$. Suppose we start at the origin in the complex plane, and on each step we choose an $n^{th}$ root of unity at random and go $1$ unit distance in that direction. Let $X_N$ be the distance from the origin after the $N^{th}$…
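
One quick numerical handle on this walk: for $n \ge 2$ the steps are i.i.d. unit vectors with mean zero, so $\mathbb{E}\,|X_N|^2 = N$ whatever $n$ is. A Monte Carlo sketch checking this (all names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_sq_distance(n, N, trials=100_000):
    """Estimate E[|X_N|^2] for the walk on the n-th roots of unity."""
    # Each step is a uniformly random n-th root of unity.
    ks = rng.integers(0, n, size=(trials, N))
    steps = np.exp(2j * np.pi * ks / n)
    endpoints = steps.sum(axis=1)
    return np.mean(np.abs(endpoints) ** 2)

print(mean_sq_distance(n=6, N=100))  # should be close to 100
```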
27
votes
6 answers

Why Markov matrices always have 1 as an eigenvalue

Also called a stochastic matrix. Let $A=[a_{ij}]$ be a matrix over $\mathbb{R}$ with $0\le a_{ij} \le 1$ for all $i,j$ and $\sum_{j}a_{ij}=1$ for all $i$, i.e. the sum along each row of $A$ is $1$. I want to show $A$ has an eigenvalue of $1$. The way I've seen…
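
For the row-sum convention that the displayed formula states, the standard argument is one line: the all-ones vector is a right eigenvector. A sketch (for the column-sum variant, pass to the transpose):

```latex
% For a row-stochastic A, the all-ones vector \mathbf{1} satisfies
(A\mathbf{1})_i = \sum_{j} a_{ij} \cdot 1 = 1 \quad \text{for every } i,
\qquad \text{so } A\mathbf{1} = \mathbf{1}.
% Hence 1 is an eigenvalue of A. If instead the columns sum to 1,
% apply this to A^{\mathsf{T}}: A and A^{\mathsf{T}} share a
% characteristic polynomial, so 1 is again an eigenvalue of A.
```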
27
votes
7 answers

Die fixed so it can't roll the same number twice in a row, using Markov chains?

While studying for a probability test, I came across the following question: A six-sided die is 'fixed' so that it cannot roll the same number twice consecutively. The other 5 sides each show up with probability $\frac{1}{5}$. Calculate P($X_{n+1} = 5 \mid…
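
The target probability is cut off in the excerpt, but questions of this form all reduce to powers of one $6 \times 6$ transition matrix. A sketch (the printed two-step probability is purely illustrative, not the question's actual event):

```python
import numpy as np

# Transition matrix of the 'fixed' die: from face i, each of the other
# five faces appears with probability 1/5, and repeats are impossible.
P = (np.ones((6, 6)) - np.eye(6)) / 5

# n-step transition probabilities are the entries of P^n, e.g.
# P(X_{n+2} = 5 | X_n = 5) = (P^2)[4, 4] = 1/5.
P2 = np.linalg.matrix_power(P, 2)
print(P2[4, 4])  # 0.2
```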
27
votes
2 answers

Knight returning to corner on chessboard -- average number of steps

Context: My friend gave me a problem at breakfast some time ago. It is supposed to have an easy, trick-involving solution. I can't figure it out. Problem: Let there be a knight (horse) at a particular corner (0,0) on an 8×8 chessboard. The knight…
SSF
  • 1,372
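
The easy trick such problems usually turn on is worth recording: a simple random walk on a connected undirected graph has stationary distribution $\pi(v) = \deg(v)/2|E|$, so the expected return time to $v$ is $2|E|/\deg(v)$. A sketch applying this fact to the knight's graph (helper names are mine):

```python
# Legal knight moves as coordinate offsets.
MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def degree(x, y, size=8):
    """Number of legal knight moves from square (x, y)."""
    return sum(0 <= x + dx < size and 0 <= y + dy < size for dx, dy in MOVES)

# Sum of degrees over all squares equals 2|E| for the knight's graph.
total_degree = sum(degree(x, y) for x in range(8) for y in range(8))  # 336
print(total_degree // degree(0, 0))  # 336 / 2 = 168 expected steps
```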
26
votes
1 answer

Average swaps needed for a random bubble sort algorithm

Suppose we have $n$ elements in a random permutation (each permutation has equal probability initially). While the elements are not fully sorted, we swap two adjacent elements at random (e.g. the permutation $(1, 3, 2)$ can go to $(1, 2, 3)$ or $(3,…
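
Since the excerpt's exact question is truncated, a hedged empirical handle on the process: simulate the random adjacent-swap walk and average the swap counts for small $n$ (the printed estimate is illustrative only):

```python
import random

def swaps_to_sort(n):
    """Run the random adjacent-swap process once; return the swap count."""
    perm = list(range(n))
    random.shuffle(perm)
    swaps = 0
    while any(perm[i] > perm[i + 1] for i in range(n - 1)):
        i = random.randrange(n - 1)  # pick a random adjacent pair
        perm[i], perm[i + 1] = perm[i + 1], perm[i]
        swaps += 1
    return swaps

random.seed(0)
print(sum(swaps_to_sort(4) for _ in range(20_000)) / 20_000)
```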
25
votes
7 answers

Example of a stochastic process which does not have the Markov property

According to this definition, a stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state. [...] given the present, the future does not depend on the…