4

Consider the sequence of random variables where $X_0=0$ and $X_{n+1}$ is an integer selected uniformly from $[0,X_n+1]$. Equivalently, $(X_n)$ is a Markov chain with the $0$-indexed transition matrix $$P=\begin{bmatrix} \frac12 & \frac12 & 0 & 0 &0&\ldots\\ \frac13 & \frac13 & \frac13 & 0 & 0 & \ldots\\ \frac14 & \frac14 & \frac14 & \frac14& 0 & \ldots\\ \vdots & \vdots & \vdots & \vdots& \vdots & \ddots \end{bmatrix}.$$ Calculations indicate a Poisson limiting distribution: $$X_n\overset{d}{\rightarrow}\text{Pois(1)}$$ How can we prove this?
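The conjecture is easy to check empirically. Here is a quick Monte Carlo sketch (a sanity check, not part of any proof) that simulates the chain and compares the empirical distribution of $X_n$ against the $\text{Pois}(1)$ pmf $e^{-1}/m!$:

```python
import random
from collections import Counter
from math import exp, factorial

def simulate(n_steps, n_trials, seed=0):
    """Empirical distribution of X_n for the chain X_{k+1} ~ Unif{0, ..., X_k + 1}."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_trials):
        x = 0
        for _ in range(n_steps):
            x = rng.randint(0, x + 1)  # uniform integer on {0, ..., x + 1}
        counts[x] += 1
    return {m: c / n_trials for m, c in sorted(counts.items())}

dist = simulate(n_steps=50, n_trials=100_000)
for m in range(5):
    print(m, round(dist.get(m, 0.0), 4), round(exp(-1) / factorial(m), 4))
```

With $10^5$ trials the empirical frequencies agree with $e^{-1}/m!$ to roughly two decimal places.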

By looking at which states can reach state $m$, we have $$P(X_n=m) =\sum_{k\geq m}\frac{P(X_{n-1}=k-1)}{k+1}$$ and, summing over paths, $$P(X_n=m) =\sum_{\substack{0\leq k_{i+1}\leq k_i+1\\ k_0=0,\ k_n=m}}\prod_{i=0}^{n-1}\frac{1}{k_i+2}.$$ But I'm not really sure what to do with this. I thought of likening it to a binomial distribution so that I could use the Poisson limit theorem, but I couldn't get anywhere.
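The first recurrence can at least be iterated exactly. A short sketch (assuming only that recurrence, i.e. that state $j$ sends mass $P(X_k=j)/(j+2)$ to each of $0,\dots,j+1$) shows the probabilities settling on $e^{-1}/m!$ quickly:

```python
from math import exp, factorial

def exact_dist(n):
    """Exact distribution of X_n, obtained by pushing mass through the transition rule."""
    p = [1.0]  # X_0 = 0 with probability 1
    for step in range(n):
        q = [0.0] * (len(p) + 1)  # X_{step+1} is supported on {0, ..., step + 1}
        for j, pj in enumerate(p):
            w = pj / (j + 2)      # from state j, each of 0, ..., j + 1 gets mass p_j/(j + 2)
            for m in range(j + 2):
                q[m] += w
        p = q
    return p

p = exact_dist(30)
for m in range(5):
    print(m, round(p[m], 6), round(exp(-1) / factorial(m), 6))
```

Already at $n=30$ the values are indistinguishable from the Poisson pmf at this precision, which is what motivated the conjecture.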

Jacob
  • $X_n$ does not seem to have a binomial distribution for $n>1$. Note $E[X_n]=1-\frac1{2^n} \to 1$ and $X_n$ is supported on ${0,1,\ldots,n}$. You could perhaps show $\text{Pois(1)}$ is a stable distribution with that transition matrix. – Henry Aug 07 '24 at 22:23
  • @Henry Yes, it was clear to me that $X_n$ is not binomially distributed (otherwise it could not have a Poisson limiting distribution). More precisely, the angle of attack I was thinking of was approximating $P(X_n=m)$ with $\binom{n}{m}p_{n,m}^m(1-p_{n,m})^{n-m}$ for some sequence $p_{n,m}\sim1/n$ for each $m$, the idea being that the approximation could be made using some sort of counting method. I think we can also show that $X\sim\text{Pois}(1)$ is stable under that transition matrix, since $P(X=m)=\sum_{k\geq m}\frac{P(X=k-1)}{k+1}$ can be easily verified, but I don't know if that's sufficient – Jacob Aug 07 '24 at 23:17
  • There are theorems about unique stationary distributions and convergence to this distribution, using properties such as it being aperiodic and irreducible. – Henry Aug 07 '24 at 23:57
  • Related: https://math.stackexchange.com/questions/1171283/find-the-stationary-distribution-of-an-infinite-state-markov-chain – Jacob Aug 08 '24 at 21:19

2 Answers

4

Let $U_1, U_2, \ldots$ be IID RVs having the uniform distribution on $[0, 1]$. Then we construct $(X_n)$ and $(Y_n)$ as follows:

  • $X_0 = 0$ and $Y_0 \sim \text{Poisson}(1)$.
  • $X_{n+1} = \lfloor (X_n + 2) U_{n+1} \rfloor $ and $Y_{n+1} = \lfloor (Y_n + 2) U_{n+1} \rfloor$.

This realizes OP's Markov chain, and in particular, we know that $(Y_n)$ is a stationary process having the $\text{Pois}(1)$ distribution. (@Esteban G.'s computation shows that $\text{Pois}(1)$ is indeed a stationary distribution of OP's Markov chain. I also verified this using the probability generating function, but the computation was nasty.) Moreover,

  1. the construction shows that $X_n \leq Y_n$ for all $n$, and
  2. $Y_n = 0$ for some $n$ with probability 1, which implies that $X_n = Y_n$ eventually holds with probability 1. In particular, $\mathbb{P}(X_n \neq Y_n) \to 0$ as $n \to \infty$.

Hence, for each $k$,

$$ \mathbb{P}(X_n = k) = \mathbb{P}(Y_n = k) + \mathcal{O}(\mathbb{P}(X_n \neq Y_n)) \to p_{\text{Pois}(1)}(k). $$
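A minimal simulation of this coupling illustrates how fast $\mathbb{P}(X_n \neq Y_n)$ decays. (This is a sketch; `sample_pois1` is a helper using Knuth's product method, not part of the argument above.)

```python
import random
from math import exp

def sample_pois1(rng):
    """Knuth's product method for Poisson(1): draw uniforms until their product drops below e^{-1}."""
    k, prod = 0, rng.random()
    while prod >= exp(-1):
        prod *= rng.random()
        k += 1
    return k

def coupling_mismatch(n_steps, n_trials, seed=1):
    """Estimate P(X_n != Y_n) for the two chains driven by the same uniforms U_1, U_2, ..."""
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(n_trials):
        x, y = 0, sample_pois1(rng)  # X_0 = 0, Y_0 ~ Pois(1)
        for _ in range(n_steps):
            u = rng.random()
            x = int((x + 2) * u)  # floor((x + 2) U), uniform on {0, ..., x + 1}
            y = int((y + 2) * u)
        mismatches += (x != y)
    return mismatches / n_trials

for n in (1, 3, 10):
    print(n, coupling_mismatch(n, 20_000))
```

The mismatch probability visibly collapses toward $0$, in line with item 2: once $Y_n$ hits $0$, the two chains coincide forever.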

Sangchul Lee
3

Stationary Distribution

You can actually build the stationary distribution by noting that if $\pi P = \pi$, then subtracting the $(j+1)$-th stationarity equation from the $j$-th (indexing the components from $1$, so that $\pi_j$ is the mass on state $j-1$) gives $$\pi_{j+1}=\pi_{j}-\frac{\pi_{j-1}}{j}.$$ Then, fix $\pi_1=c\in\mathbb{R}$ and assume as the induction hypothesis that $\pi_j=c/(j-1)!$ for $j=1,2,\dots,n$ (you can guess this hypothesis by computing the first few terms), and observe that $$ \pi_{n+1}=\pi_{n}-\frac{\pi_{n-1}}{n}=\frac{c}{(n-1)!}-\frac{c}{n(n-2)!}=\frac{nc-(n-1)c}{n!}=\frac{c}{n!}.$$ It follows that any stationary distribution must satisfy $\pi_j=\pi_1/(j-1)!$, and since $\sum_{j=1}^\infty \pi_j = 1$, we get $c=e^{-1}$, so the only stationary distribution for this Markov chain is the $\text{Pois}(1)$ distribution.
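One can also confirm the fixed-point identity numerically. The sketch below (truncating the state space at an arbitrary cutoff $N=60$, and indexing states from $0$ so that `pi[j]` is the mass on state $j$) checks that $\pi_j = e^{-1}/j!$ satisfies $\pi P = \pi$ componentwise:

```python
from math import exp, factorial

# Candidate stationary distribution: pi[j] = e^{-1}/j! on states j = 0, 1, 2, ...
N = 60  # truncation level; the neglected tail mass is smaller than 1/60!
pi = [exp(-1) / factorial(j) for j in range(N)]

# (pi P)_m = sum over states j >= m - 1 of pi[j]/(j + 2),
# since state j moves to each of 0, ..., j + 1 with probability 1/(j + 2).
piP = [sum(pi[j] / (j + 2) for j in range(max(m - 1, 0), N)) for m in range(N - 1)]

for m in range(5):
    print(m, round(pi[m], 10), round(piP[m], 10))
```

The two columns agree to machine precision, matching the telescoping computation above.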

Ergodicity

In order to prove that the Markov chain is ergodic it is enough to prove that it is aperiodic and irreducible:

  1. Aperiodicity follows from the fact that every state has a self-loop, $P(j,j)=\frac{1}{j+2}>0$, so the chain can return to $j$ in any number of steps and the period of any state $j$ is $$d(j)=\gcd\{n \mid P^n(j,j)>0\}=\gcd\{1,2,\dots\}=1.$$
  2. The chain is irreducible since for every pair of states $i,j$, you can find paths connecting them in both directions: let's assume without loss of generality that $i\leq j$ and note that the following are paths with positive probability $$i\mapsto i+1\mapsto \cdots\mapsto j\quad\text{and}\quad j\mapsto i.$$

Conclusion

Since the Markov chain is irreducible and aperiodic and admits a stationary distribution, it is positive recurrent, so it has a limiting distribution, and that limiting distribution coincides with the stationary one :)

  • Thanks, this looks good, but I have one tiny point of contention. It is my understanding that to guarantee the existence of the limiting distribution you also need positive recurrence, since this is an infinite-state DTMC. Luckily for us, in an irreducible DTMC the existence of a stationary distribution is equivalent to positive recurrence. So I don't think that we should be concluding that there is one stationary distribution because the chain is irreducible and aperiodic, but rather that it has a limiting distribution because it is irreducible, aperiodic, and has a stationary distribution. – Jacob Aug 08 '24 at 21:09
  • Pedantic, I know, but I just want to make sure I have the flow of logic down correctly. – Jacob Aug 08 '24 at 21:13
  • Hey! I actually don't think it is pedantic and that you are totally right. I just edited the conclusion so that the logic flow is now correct. – Esteban G. Aug 09 '24 at 13:44