
Suppose I have a sequence of positive random vectors $\vec{X}_N$ of fixed length $l$. That is, $\vec{X}_N = (x_N^{(1)}, x_N^{(2)},\cdots, x_N^{(l)})$ where each entry $x_N^{(i)} > 0$.

Suppose I further have that $\mathbb{E}\left[ \sum_{i=1}^l \frac{(x_N^{(i)}-1)^2}{x_N^{(i)}} \right] = O(1/N)$, so this expectation tends to $0$ as $N \to \infty$.

It seems clear that $\vec{X}_N$ should converge to the all-ones vector ${\bf{1}} = (1,1,\cdots, 1)$ as $N \to \infty$.

But is this convergence almost sure, or only convergence in probability? Or is the assertion false?

1 Answer


Let $(\Omega,\mathcal{A},\mathbb{P})$ be our probability space. For $1\leqslant p<\infty$, let $L_p$ denote the space of (equivalence classes of) random variables $X:\Omega\to \mathbb{R}$ (that is, $\mathcal{A}$-$\mathcal{B}$-measurable functions, where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}$) having finite $p^{\text{th}}$ moment, equipped with the norm $\|\cdot\|_p$ satisfying $\|X\|_p^p=\mathbb{E}[|X|^p]$. Recall that for $1\leqslant p<q<\infty$ and $X\in L_q$, we have $X\in L_p$ and $\|X\|_p\leqslant \|X\|_q$ (from, for example, Jensen's inequality applied with the convex function $\phi(t)=t^{q/p}$).
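This norm monotonicity is easy to sanity-check by Monte Carlo. Here's a quick Python sketch; the lognormal is just an arbitrary positive example, not anything tied to the question:

```python
import numpy as np

# Monte Carlo sanity check of ||X||_1 <= ||X||_2 (norm monotonicity via Jensen).
# The lognormal distribution here is an arbitrary illustrative choice.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

norm1 = np.mean(np.abs(X))           # ||X||_1 = E|X|
norm2 = np.sqrt(np.mean(X ** 2))     # ||X||_2 = (E X^2)^(1/2)
print(norm1, norm2, norm1 <= norm2)  # expect True
```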

Let's start with a single sequence $(x_N)_{N=1}^\infty\subset L_2$ and assume that each $x_N$ is almost surely positive. Let $$t_N=\frac{(x_N-1)^2}{x_N}.$$ Then $\lim_N \mathbb{E}\frac{(x_N-1)^2}{x_N}=0$ is exactly the assumption that $\lim_N \|t_N\|_1=0$. This implies that every subsequence of $(t_N)_{N=1}^\infty$ has a further subsequence which converges almost surely to $0$. This in turn implies that every subsequence of $(x_N)_{N=1}^\infty$ has a further subsequence which converges almost surely to $1$. That's because for any $N_1<N_2<\ldots$ and $\omega\in \Omega$, $\lim_k t_{N_k}(\omega)=0$ iff $\lim_k x_{N_k}(\omega)=1$. So if $(t_{N_k})_{k=1}^\infty$ converges pointwise to $0$ on a subset of $\Omega$ with full measure, then $(x_{N_k})_{k=1}^\infty$ converges pointwise to $1$ on the same set of full measure.
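The deterministic equivalence in the last step can be made concrete: solving $(x-1)^2=tx$ for $x>0$ shows that the sublevel set $\{x>0: (x-1)^2/x\leqslant t\}$ is an interval around $1$ that shrinks to $\{1\}$ as $t\to 0$. A small numerical sketch of this (numpy, purely illustrative):

```python
import numpy as np

# t = (x-1)^2/x pins x near 1: solving (x-1)^2 = t*x, i.e.
# x^2 - (2+t)x + 1 = 0, gives x = (2 + t ± sqrt(t^2 + 4t)) / 2,
# and both roots tend to 1 as t -> 0. So t_N(w) -> 0 iff x_N(w) -> 1.
for t in [1e-1, 1e-2, 1e-4, 1e-8]:
    disc = np.sqrt(t ** 2 + 4 * t)
    lo, hi = (2 + t - disc) / 2, (2 + t + disc) / 2
    print(f"t={t:.0e}: (x-1)^2/x <= t forces x in [{lo:.6f}, {hi:.6f}]")
```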

We can then build this up to the case of vectors of length $l$ by noting that if $$\lim_N\mathbb{E}\sum_{i=1}^l \frac{(x_N^{(i)}-1)^2}{x_N^{(i)}}=0,$$ then we have the same limit for each individual $i$. Given any subsequence of the original sequence, there is a further subsequence along which $x^{(1)}_N$ converges to $1$ almost surely. This has a further subsequence along which $x^{(2)}_N$ converges to $1$ almost surely, and so on. Thus after extracting subsequences $l$ times, we get a subsequence of the random vectors converging to the all-ones vector almost surely. So every subsequence has a further subsequence which converges to the all-ones vector almost surely.

At this point, let's take an aside and note that almost sure convergence can fail. Consider the unit circle with uniform measure (so an arc of $2\pi \theta$ radians has probability $\theta$). Define $\theta_0=0$ and $\theta_N=2\pi \sum_{k=1}^N\frac{1}{k}$. Let $u_N$ be the indicator of the arc $[\theta_{N-1},\theta_N)$ (taken mod $2\pi$) and let $x_N=1+u_N$. Then $x_N-1$ is $1$ on exactly a set of measure $1/N$, since $\theta_N-\theta_{N-1}=2\pi/N$, and $\frac{(x_N-1)^2}{x_N}$ is $1/2$ on exactly that same set. It's $0$ everywhere else. So $$\mathbb{E}\frac{(x_N-1)^2}{x_N}=\frac{1}{2N}.$$ But because $\lim_N\theta_N=\infty$, these arcs wrap around the circle infinitely many times. Each $\theta$ in the circle is contained in infinitely many of the arcs $[\theta_{N-1},\theta_N)$, and it is outside infinitely many of them. Therefore for any $\theta$ in the circle, $x_N(\theta)$ is $2$ infinitely often and $1$ infinitely often, so $x_N$ does not converge to $1$ at any point, let alone on a set of full measure. So we don't have almost sure convergence. This produces a counterexample for each $l$ by letting $x^{(1)}_N$ be defined as above and letting $x^{(i)}_N=1$ for all $i=2,\ldots, l$.
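If you want to see the wrapping behavior concretely, here is a minimal Python sketch of this rotating-arc construction; the test point $\theta=1.0$ is an arbitrary choice, and any point on the circle behaves the same way:

```python
import numpy as np

# Rotating-arc counterexample: theta_N = 2*pi*(1 + 1/2 + ... + 1/N), and
# x_N = 1 + indicator of the arc [theta_{N-1}, theta_N) mod 2*pi.

def in_arc(theta, a, b):
    """Is theta in the arc from a to b (mod 2*pi), traversed counterclockwise?"""
    a, b = a % (2 * np.pi), b % (2 * np.pi)
    return (a <= theta < b) if a <= b else (theta >= a or theta < b)

theta = 1.0              # an arbitrary fixed point on the circle
partial = 2 * np.pi      # theta_1; the N = 1 arc is the full circle,
hits = 1                 # so x_1(theta) = 2 for every theta
for N in range(2, 200_001):
    new = partial + 2 * np.pi / N        # theta_N
    if in_arc(theta, partial, new):      # x_N(theta) = 2 exactly on this arc
        hits += 1
    partial = new
# hits grows (like log N) without bound, so x_N(theta) = 2 infinitely often,
# even though E[(x_N - 1)^2 / x_N] = 1/(2N) -> 0.
print("number of N <= 200000 with x_N(theta) = 2:", hits)
```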

On the other hand, convergence in probability does hold. For any $\epsilon>0$ and any positive number $x$ with $|1-x|\geqslant \epsilon$, $$\frac{(x-1)^2}{x} \geqslant \frac{\epsilon^2}{1+\epsilon}.$$ To see this, note that if $0<x\leqslant 1-\epsilon$, then $$\frac{(x-1)^2}{x}\geqslant \frac{\epsilon^2}{x}\geqslant \frac{\epsilon^2}{1}\geqslant \frac{\epsilon^2}{1+\epsilon}.$$ To handle the case $x>1$, note that the function $x\mapsto \frac{(x-1)^2}{x}$ is increasing on $(1,\infty)$ because it has positive derivative there, so on the interval $[1+\epsilon,\infty)$ its value is at least its value at $1+\epsilon$, which is $\frac{\epsilon^2}{1+\epsilon}$. For $\epsilon>0$, let $E_N$ be the event that $|x_N-1|\geqslant \epsilon$. Then $$\mathbb{E}\frac{(x_N-1)^2}{x_N}\geqslant \mathbb{E}\,1_{E_N}\cdot \frac{(x_N-1)^2}{x_N}\geqslant \frac{\epsilon^2}{1+\epsilon}\cdot \mathbb{P}(E_N).$$ Therefore if $$\lim_N \mathbb{E}\frac{(x_N-1)^2}{x_N}=0,$$ then $$\mathbb{P}(E_N) \leqslant \frac{1+\epsilon}{\epsilon^2}\cdot \mathbb{E}\frac{(x_N-1)^2}{x_N}\underset{N\to\infty}{\to}0.$$ Thus we have convergence to $1$ in probability.
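The pointwise bound above is easy to spot-check on a grid; a quick numpy sketch (the grid endpoints are arbitrary):

```python
import numpy as np

# Grid check of the pointwise bound: if x > 0 and |x - 1| >= eps, then
# (x - 1)^2 / x >= eps^2 / (1 + eps). Purely illustrative.
for eps in [0.5, 0.1, 0.01]:
    xs = np.concatenate([np.linspace(1e-6, 1 - eps, 10_000),
                         np.linspace(1 + eps, 50.0, 10_000)])
    lhs_min = np.min((xs - 1) ** 2 / xs)
    bound = eps ** 2 / (1 + eps)
    # the minimum is attained at x = 1 + eps, where equality holds
    print(f"eps={eps}: min over grid = {lhs_min:.6f}, bound = {bound:.6f}")
```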

Again, we can use this fact to get to the case of vectors of length $l$. There are several norms we could put on vectors of length $l$, but they all yield the same result. For $1\leqslant p<\infty$, let $\|(a_i)_{i=1}^l\|_p^p=\sum_{i=1}^l|a_i|^p$, and let $$\|(a_i)_{i=1}^l\|_\infty=\max_{1\leqslant i\leqslant l}|a_i|.$$ Let $1_l$ denote the length-$l$ vector having each entry equal to $1$. Convergence to $1_l$ in probability is the condition that for every $\epsilon>0$, $\lim_N \mathbb{P}(\|X_N-1_l\|_p\geqslant \epsilon)=0$. It doesn't matter which $p\in [1,\infty]$ we take here: if this conclusion holds for a single $p\in [1,\infty]$, it holds for all of them, because $$\|a\|_1\geqslant \|a\|_p\geqslant \|a\|_\infty \geqslant \|a\|_1/l$$ for every length-$l$ vector $a$. For convenience, let's work with $p=\infty$. Then $\|(x^{(1)}_N,x^{(2)}_N,\ldots, x^{(l)}_N)-1_l\|_\infty \geqslant \epsilon$ iff $|x^{(i)}_N-1|\geqslant \epsilon$ for at least one value of $i\in \{1,\ldots, l\}$, so $$\mathbb{P}(\|X_N-1_l\|_\infty\geqslant \epsilon) \leqslant \sum_{i=1}^l \mathbb{P}(|x^{(i)}_N-1|\geqslant \epsilon),$$ and we know from the preceding paragraph that each term of this sum, and therefore the entire sum, converges to $0$ as $N\to\infty$.
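The chain of norm comparisons is also easy to verify numerically; a minimal sketch (numpy, random Gaussian vectors as an arbitrary test case, with $p=2$ standing in for a general $p$):

```python
import numpy as np

# Spot-check ||a||_1 >= ||a||_p >= ||a||_inf >= ||a||_1 / l on random
# length-l vectors (p = 2 here; any p in [1, inf] sits in the same chain).
rng = np.random.default_rng(1)
l = 5
for _ in range(1000):
    a = rng.normal(size=l)
    n1 = np.sum(np.abs(a))
    n2 = np.sqrt(np.sum(a ** 2))
    ninf = np.max(np.abs(a))
    assert n1 >= n2 >= ninf >= n1 / l - 1e-12  # small float tolerance
print("norm chain held on all samples")
```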
