
Let $X=\{0,1\}$ be the space consisting of the two points $0$ and $1$, equipped with the measure $\mu$ given by $\mu(\{0\})=\mu(\{1\})=\frac{1}{2}$, and let $\Omega:=\prod_{i=1}^{\infty}X_{i}$, where each $X_{i}$ is a copy of $X$, carry the product $\sigma$-algebra and the product measure $\lambda$. Consider the map $f:\Omega\longrightarrow [0,1]$ that sends $\omega=(x_{1},\cdots, x_{n},\cdots)$ to $$f(\omega):=\sum_{j=1}^{\infty}\dfrac{x_{j}}{2^{j}}\in [0,1].$$
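
For intuition (not a proof), here is a small numerical sketch: I truncate each sequence at finitely many coordinates, sample fair coin flips, and compare the empirical distribution of $f$ with the uniform distribution on $[0,1]$. The truncation depth and the sample size are arbitrary choices made only for illustration.

```python
import random

# Numerical sketch only: truncate the binary expansion at `depth` coordinates
# and sample i.i.d. fair coin flips for x_1, ..., x_depth.
def f_truncated(depth=30):
    return sum(random.getrandbits(1) / 2**j for j in range(1, depth + 1))

samples = [f_truncated() for _ in range(100_000)]

# Compare the empirical CDF with the uniform CDF at a few dyadic points.
for t in [1/4, 1/2, 5/8, 3/4]:
    empirical = sum(s <= t for s in samples) / len(samples)
    print(f"t = {t}: empirical F(t) = {empirical:.3f}, uniform predicts {t:.3f}")
```

The output suggests that $f_{*}\lambda$ is the uniform (Lebesgue) measure on $[0,1]$, which is what I would like to prove.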

I want to compute the pushforward $f_{*}\lambda=\lambda f^{-1}.$ A related question has been asked here, and the OP posted a proof there: identifying the measure $\lambda f^{-1}$ on the interval $[0,1]$.

I did not quite understand that proof. The OP wanted to compute the preimage $f^{-1}(E)$, where $E$ is a dyadic interval $E=(k/2^{j}, (k+1)/2^{j})$, argued that $\lambda f^{-1}(E)$ coincides with the Lebesgue (Borel) measure of $E$, and then generalized to all Borel sets (this last step is easy, since the dyadic intervals generate the Borel $\sigma$-algebra).

However, I don't think that post's solution is correct; the problem is in the first step. See the post here: Compute the preimage of dyadic interval via binary expansion map. It seems that the preimage $f^{-1}(E)$ cannot be described by a finite binary expansion, since only dyadic rationals have terminating binary expansions.

I tried to figure the problem out but failed. Is there another way to solve this problem? Or can the solution in the post above be refined and then applied here?

I saw something like this when I learned about the Bernoulli shift, so I read several online notes and tried to approach the problem from a dynamical-systems point of view, but that did not work out either.

Thank you!

Arctic Char
  • 16,972
  • Try showing that $f$ is a bijection a.e. that preserves the measure. That should be enough. – Matematleta Sep 28 '20 at 05:03
  • @Matematleta yeah. It is bijective except at countably many points (those mapping to dyadic rationals), and thus it is bijective a.e. Even though it preserves measure, how can I show that the measure it pushes forward is the Borel measure? (I am not sure how to prove it is measure preserving, but that is more doable than computing the inverse.) – JacobsonRadical Sep 28 '20 at 05:26
  • I think it's not too hard to show that $\{A\subseteq \Omega : m(f(A))=\lambda (A)\}$ is a $\sigma$-algebra. BTW I think you can find an inverse using the Rademacher functions. I seem to remember doing this a long time ago, but I have forgotten the details! – Matematleta Sep 28 '20 at 05:55

2 Answers


I'll write the proof in probabilistic terminology, which I'm more used to.

In other words, $f = \sum_{j=1}^\infty \frac{Y_j}{2^j}$, where $Y_j(\omega) = x_j$ is the $j$-th coordinate; the $Y_1,Y_2,\ldots$ are independent, identically distributed random variables (because $\lambda$ is the product measure on the product space) with $\lambda(\{\omega : Y_1(\omega)=1 \})=\lambda(\{\omega : Y_1(\omega)=0 \}) = \frac{1}{2}$.

It is clear that $f(\omega) \in [0,1]$ for every $\omega$.

First approach

Let $F:\mathbb R \to \mathbb R$ be given by $F(t) = f_*\lambda( (-\infty,t])$. By what we said above, we get $F(t) = 0$ for $t < 0$ and $F(t) = 1$ for $t>1$.

Now, take any $t = \frac{k}{2^n}$ for some $n \in \mathbb N_+$ and $k \in \{1,...,2^n-1\}$. Such a number has a binary representation of the form $t = \sum_{i=1}^n \frac{a_i}{2^i}$ (with every $a_i \in \{0,1\}$).

We want to compute $F(t) = \lambda (\{\omega : f(\omega) \le t \}) = \lambda (\{ \omega: \sum_{j=1}^\infty \frac{Y_j(\omega)}{2^j} \le \sum_{i=1}^n \frac{a_i}{2^i} \})$

Now, let's say that $1 \le i_1 <\dots< i_k \le n$ are the indices with $a_{i_1}=\dots=a_{i_k} =1$, and the remaining $a_i$ are $0$.

We first look at $i_1$. Clearly, we must have $Y_1(\omega)=\dots=Y_{i_1-1}(\omega)=0$ (which happens with probability $\frac{1}{2^{i_1-1}}$), since any earlier digit equal to $1$ would already force $f(\omega) > t$. Now there are $2$ cases:

  1. if $Y_{i_1}(\omega)$ equals $0$ (with probability $\frac{1}{2}$), then the remaining digits can be arbitrary, since $\sum_{i=i_1+1}^\infty \frac{1}{2^i} = \frac{1}{2^{i_1}}$, so $f(\omega) \le \frac{1}{2^{i_1}} \le t$ in any case.

  2. if $Y_{i_1}(\omega) = 1$ (with probability $\frac{1}{2}$), then we must have $Y_{i_1+1}(\omega)=\dots=Y_{i_2-1}(\omega)=0$ (with probability $\frac{1}{2^{i_2-i_1-1}}$), and for $Y_{i_2}$ we again have two cases: either it is $0$ and the remaining digits can be arbitrary, or it is $1$ and some further digits (if any) must be $0$, and so on up to $Y_{i_k}$.

Putting this together, we get $$F(t) = \frac{1}{2^{i_1}} + \frac{1}{2^{i_1}}\cdot \frac{1}{2^{i_2-i_1}} + \dots + \frac{1}{2^{i_{k-1}}}\cdot \frac{1}{2^{i_k - i_{k-1}}} = \sum_{j=1}^k \frac{1}{2^{i_j}} = \sum_{i=1}^n \frac{a_i}{2^i} = t.$$

We showed the result for dyadic $t$; by density of the dyadics in $[0,1]$, monotonicity of $F$, and right continuity of $F$ (which follows from continuity from above of a finite measure), we get $F(t) = t$ for every $t \in [0,1]$.

Now, since the CDF of a random variable uniquely determines its distribution, $f$ is a random variable with the uniform distribution on $[0,1]$, hence $f_*\lambda(E) = m(E \cap [0,1])$, where $m$ is Lebesgue measure.

(Or you can proceed without referring to probability. We get $f_*\lambda( (-\infty,t]) = t\, 1_{t \in [0,1]} + 1_{t \in (1,+\infty)}$, so that $f_*\lambda( (a,b]) = 1_{b > 1} + b\,1_{b \in [0,1]} - 1_{a>1} - a\,1_{a \in [0,1]}$; again, since such intervals generate the Borel sets, we get $f_*\lambda(E) = m(E \cap [0,1])$.)
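
As a sanity check on this case analysis, one can verify $F(k/2^n)=k/2^n$ exactly by enumerating all $n$-bit prefixes: conditioned on the first $n$ digits, the tail contributes at most $1/2^n$, so (up to a null event) $f(\omega)\le k/2^n$ exactly when the prefix value is strictly below $k/2^n$. A minimal brute-force sketch, where the choice of $n$ is arbitrary:

```python
from itertools import product
from fractions import Fraction

def cdf_at_dyadic(k, n):
    """Exact F(k/2^n): count the n-bit prefixes whose value is strictly below k/2^n.

    Conditioned on the first n digits, the tail adds at most 1/2^n, so
    f(omega) <= k/2^n iff the prefix value is < k/2^n (up to a null event).
    """
    t = Fraction(k, 2**n)
    count = sum(
        1
        for bits in product((0, 1), repeat=n)
        if sum(Fraction(b, 2**i) for i, b in enumerate(bits, start=1)) < t
    )
    return Fraction(count, 2**n)

n = 5
for k in range(1, 2**n):
    assert cdf_at_dyadic(k, n) == Fraction(k, 2**n)
print("F(k/2^n) = k/2^n verified for all k, with n =", n)
```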

Second approach

Using the notion of characteristic functions, we can do it even more simply. The characteristic function of a random variable $f$ is given by $\varphi_f:\mathbb R \to \mathbb C$, $\varphi_f(t) = \mathbb E[\exp(itf)] = \int_{\Omega} \exp(itf(\omega))\,d\lambda(\omega)$. Letting $g=2f-1 = \sum_{j=1}^\infty \frac{2Y_j - 1}{2^j}$, we get, by independence and dominated convergence,

$$ \varphi_g(t) = \mathbb E [ \prod_{j=1}^\infty \exp(i \frac{t}{2^j} (2Y_j-1)) ] = \prod_{j=1}^\infty \varphi_{2Y_j-1}(\frac{t}{2^j})$$

We can easily calculate $\varphi_{2Y_j-1}(s) = \frac{1}{2}(e^{is} + e^{-is}) = \cos(s)$, so that for $t \neq 0$ we get $$\varphi_g(t) = \lim_{ M \to \infty} \prod_{j=1}^M \cos\Big(\frac{t}{2^j}\Big) = \lim_{M \to \infty} \prod_{j=1}^M \frac{\sin(\frac{t}{2^{j-1}})}{2 \sin(\frac{t}{2^j})} = \lim_{M \to \infty} \sin(t)\, \frac{\frac{1}{2^M}}{\sin(\frac{t}{2^M})} = \frac{\sin(t)}{t}.$$

This means that $g$ has the uniform distribution on $[-1,1]$ (whose characteristic function is exactly $\frac{\sin t}{t}$), hence $f=\frac{g+1}{2}$ has the uniform distribution on $[0,1]$, and we get the same result.
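
As a quick numerical check of the telescoping product $\prod_{j\ge 1}\cos(t/2^j)=\frac{\sin t}{t}$, here is a small sketch; the number of factors and the test points are arbitrary choices:

```python
import math

def cos_product(t, terms=60):
    """Partial product of cos(t / 2^j), which should approach sin(t)/t."""
    p = 1.0
    for j in range(1, terms + 1):
        p *= math.cos(t / 2**j)
    return p

for t in [0.5, 1.0, 2.0, 3.0]:
    print(t, cos_product(t), math.sin(t) / t)
```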


Here is an approach using the Rademacher functions. First note that, apart from the all-$0$ sequence and the all-$1$ sequence, the fiber $f^{-1}(f(\omega))$ contains exactly two elements when $\omega$ is eventually $0$ or eventually $1$, and otherwise it is a singleton.

So, we may define a function $\epsilon: [0, 1]\to \{0,1\}^{\mathbb N}$ by letting $\epsilon(t)$ be the unique element of $f^{-1}(t)$ when this fiber is a singleton, and otherwise the element of $f^{-1}(t)$ that is eventually $0$.

Then the coordinate functions $\epsilon_i:[0,1]\to \{0,1\}$, $t\mapsto (\epsilon(t))_i$, satisfy $t=\sum_{i=1}^\infty\frac{\epsilon_i(t)}{2^i}.$

Now define $r_i(t)=1-2\epsilon_i(t).$ ($r_i$ maps to $\{1,-1\}$ but this change does not affect the result). The graphs of the first few $r_i$'s will help in verifying the claim that follows:

[Graphs of the first few Rademacher functions $r_i$.]

On each interval $\left(\frac{k}{2^n},\frac{k+1}{2^n}\right)$ with $0\le k\le 2^n-1$, the first $n$ binary digits of $t$ are constant, so $r_i(t)=r_i$ is a constant equal to $\pm 1$ for $1\le i\le n$, and hence the sum $\sum_{i=1}^n\frac{r_i(t)}{2^i}$ is constant on that interval. It follows that

$\lambda f^{-1}\left(\left(\frac{k}{2^n},\frac{k+1}{2^n}\right)\right )=\lambda (\{\omega: x_1=\epsilon_1,\cdots, x_n=\epsilon_n\})=\frac{1}{2^n},$ where $\epsilon_1,\dots,\epsilon_n$ denote the constant values of the digit functions on that interval (the two sets differ only by a $\lambda$-null set). Since this agrees with the Lebesgue measure of the interval and the dyadic intervals generate the Borel $\sigma$-algebra, $\lambda f^{-1}$ is Lebesgue measure on $[0,1]$.
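
As a sanity check on the constancy claim, here is a small sketch. It computes the digits as $\epsilon_i(t)=\lfloor 2^i t\rfloor \bmod 2$, which agrees with the convention above of choosing the eventually-$0$ expansion; the order $n$ and the sample points inside each interval are arbitrary choices:

```python
import math
from fractions import Fraction

def eps(i, t):
    """i-th binary digit of t, using the expansion that is eventually 0."""
    return math.floor(2**i * t) % 2

def r(i, t):
    """Rademacher-type function r_i(t) = 1 - 2*eps_i(t), with values +-1."""
    return 1 - 2 * eps(i, t)

n = 4
for k in range(2**n):
    # a few interior points of the dyadic interval (k/2^n, (k+1)/2^n)
    pts = [Fraction(k, 2**n) + Fraction(j, 10 * 2**n) for j in (1, 3, 7, 9)]
    values = {tuple(r(i, p) for i in range(1, n + 1)) for p in pts}
    assert len(values) == 1  # r_1, ..., r_n are constant on that interval
print("r_1, ..., r_n are constant on every dyadic interval of order", n)
```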

Matematleta
  • 30,081