2

So, I asked a question earlier about an interesting Markov chain that I found, and I learnt that when $k\leq\frac12$, we start getting an ordinary distribution on a measurable set.

But I was wondering: what exactly is this steady-state distribution? My current way of thinking about it is that I start with the uniform distribution over $(0,1)$ and then keep running the following intuitive recipe:

  • At a given point $x$, take an infinitesimal slice from $x$ to $x+\delta x$, transport a fraction $p$ of that mass to the interval from $k+(1-k)x$ to $k+(1-k)(x+\delta x)$, and take the rest to the interval from $(1-k)x$ to $(1-k)(x+\delta x)$.
  • Repeat indefinitely
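A minimal simulation of this recipe, realised pathwise (the step counts and parameter values here are my own choices, not from the question):

```python
import random

def simulate(k, p, steps=200, n_paths=10_000, seed=0):
    """Run X <- k + (1-k)*X with probability p, else X <- (1-k)*X,
    starting from X0 ~ Uniform(0,1); return the final values."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n_paths):
        x = rng.random()  # X0 ~ Uniform(0,1)
        for _ in range(steps):
            if rng.random() < p:
                x = k + (1 - k) * x
            else:
                x = (1 - k) * x
        xs.append(x)
    return xs

xs = simulate(k=0.1, p=0.5)
mean = sum(xs) / len(xs)  # the stationary mean works out to p
```

Plotting a histogram of `xs` for small $k$ reproduces the smooth, beta-looking shape the question describes.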

I don't really have the analysis tools to think about what this does in various scenarios. I'm particularly interested in what happens when $k$ is pretty small, since my simulations show something that looks like a Beta distribution. But there appears to be a lot of complex behaviour hiding in this idea that I would like to know more about, if people can explain it or point to where it's been discussed before.

Thanks again for all your help!

  • What do you mean by "ordinary distribution"? Actually, for small $k$ it seems like neither answer tells you even what kind of distribution you have. Misha's answer in particular only handles $k \geq 1/2$, while Small Deviation's answer writes it down as a random variable but that doesn't give a qualitative description. – Ian Oct 04 '23 at 14:40
  • 1
    Of course you can get the characteristic function from Small Deviation's answer (assuming it is correct); it is $\prod_{j=1}^\infty \left ( p e^{itk(1-k)^j} + 1 - p \right )$. But that doesn't help you very much, does it? – Ian Oct 04 '23 at 14:43
  • I'd say by "ordinary distribution", I mean a distribution with a continuous PDF. Anyway, isn't there a bijection between PDFs and the Characteristic Function given by the Fourier transform under certain constraints - you could be onto something here, Ian! – Thomas Pluck Oct 04 '23 at 14:54
  • Yeah, I just ran this through Mathematica; unhelpful is the word, hah. I wouldn't be surprised if this won't settle down unless you maybe convolve the Dirac deltas with a normal distribution. Which, given this problem arises out of electronic engineering, I'll say is an acceptable compromise. – Thomas Pluck Oct 04 '23 at 15:02
  • 1
    For very small $k$, linearizing the logs and the exponential suggests that maybe you have just a Dirac delta at $p(1-k)$. – Ian Oct 04 '23 at 16:07
  • Yes, but we were hoping for something a little wider than a Dirac delta, haha – Thomas Pluck Oct 04 '23 at 17:13
  • I wonder if there is some kind of phase transition from a Dirac delta to a continuous distribution to a singular distribution. It would be interesting to identify the boundary between those regions in the parameter space... – Ian Oct 04 '23 at 17:37

2 Answers

2

So, I think a rigorous answer drops out of SmallDeviation's answer and John Dawkin's comment beneath it. A good way to define the Markov chain in question is:

$$X_t=(1-k)^t\cdot X_0+k\sum_{j=0}^{t-1} (1-k)^{j}\epsilon_{t-j}$$

where the $\epsilon_j$ are i.i.d. Bernoulli variables with mean $p$ (the factor $k$ comes from the update $X_{t+1}=k\epsilon_{t+1}+(1-k)X_t$). So, as $t\to \infty$, we have in distribution:

$$X_\infty = k\sum_{j=0}^\infty (1-k)^j\epsilon_j$$

When $\mathbb{E}[\epsilon_j]=\frac12$, this series is, up to an affine rescaling, a Bernoulli convolution with parameter $\lambda=1-k$, and that is a well-studied object.
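As a sanity check on this series representation, one can sample the limit directly by truncating the geometric sum; the truncation depth and parameter values below are my own choices:

```python
import random

def sample_x_inf(k, p, n_terms=60, rng=random):
    """One draw of k * sum_{j>=0} (1-k)^j * eps_j with eps_j ~ Bernoulli(p),
    truncated after n_terms terms (error at most (1-k)^n_terms)."""
    s = 0.0
    w = 1.0  # current weight (1-k)^j
    for _ in range(n_terms):
        if rng.random() < p:
            s += w
        w *= (1 - k)
    return k * s

random.seed(1)
k, p = 0.3, 0.5
samples = [sample_x_inf(k, p) for _ in range(50_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Geometric sums give E[X] = p and Var[X] = k*p*(1-p)/(2-k),
# which the empirical moments should match.
```

The empirical histogram of `samples` agrees with the pathwise simulation of the chain, which is one quick way to convince yourself the two descriptions coincide.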

Theorem 1 (Shmerkin). The set $\{ \lambda \in (1/2, 1) : X_\infty\text{ is singular}\}$ has Hausdorff dimension $0$; in terms of $k=1-\lambda$, this is an exceptional set of $k\in(0,1/2)$.

Theorem 2 (Solomyak). The Bernoulli convolution $X_\infty$ is absolutely continuous with density in $L^2$ for almost every $\lambda\in(1/2,1)$, i.e. for almost every $k\in(0,1/2)$.

This turns out to be a rich research subject in its own right, so hoping that the stationary distribution is exactly a normal or a beta distribution seems overly optimistic, even though that is what the simulations resemble.

1

You can write the master equation of your Markov chain. For the pdf $f$, it is:
$$
f_{n+1} = T[f_n],\qquad T[f](x) = \frac{p}{1-k}f\left(1-\frac{1-x}{1-k}\right)1_{(k,1)}(x)+\frac{1-p}{1-k}f\left(\frac{x}{1-k}\right)1_{(0,1-k)}(x)
$$
Your stationary pdf is just a fixed point of $T$. You can look at the asymptotics as $x\to0^+$ and $x\to1^-$. Assuming power laws:
$$
\begin{align}
f(x)&\sim x^{\alpha-1} &&x\to 0^+ \\
f(x)&\sim (1-x)^{\beta-1} &&x\to 1^-
\end{align}
$$
you get from the functional equation:
$$
\begin{align}
\alpha &= \frac{\ln(1-p)}{\ln(1-k)} & \beta &= \frac{\ln p}{\ln(1-k)}
\end{align}
$$
so instead of checking whether any beta distribution works, you only need to check the single candidate:
$$
f(x)\propto x^{\alpha-1}(1-x)^{\beta-1}
$$
You can check analytically and numerically that it is not a fixed point.
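One way to carry out that numerical check is to apply the operator $T$ to the candidate beta density on a few interior points and measure the residual; this sketch uses my own grid points and parameter values:

```python
import math

k, p = 0.3, 0.5
alpha = math.log(1 - p) / math.log(1 - k)  # tail exponent as x -> 0+
beta = math.log(p) / math.log(1 - k)       # tail exponent as x -> 1-

def f(x):
    """Candidate stationary density Beta(alpha, beta),
    normalised via the gamma function."""
    B = math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)
    return x ** (alpha - 1) * (1 - x) ** (beta - 1) / B

def T(g, x):
    """One application of the master-equation operator to a density g at x."""
    out = 0.0
    if k < x < 1:
        out += p / (1 - k) * g(1 - (1 - x) / (1 - k))
    if 0 < x < 1 - k:
        out += (1 - p) / (1 - k) * g(x / (1 - k))
    return out

# If f were stationary, T[f](x) - f(x) would vanish everywhere;
# a visibly nonzero residual rules the beta candidate out.
residual = max(abs(T(f, x) - f(x)) for x in (0.2, 0.4, 0.6, 0.8))
```

The residual comes out well away from zero, confirming the answer's claim that the beta density with the matching tail exponents is still not a fixed point of $T$.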

Hope this helps.

LPZ
  • Good observation. Do you think there's anything that could be said about asymptotic normality as k approaches 0, assuming p = 1/2? – Thomas Pluck Oct 04 '23 at 22:47