
Let $X \in \mathbb{R}^n$ be a random vector with independent sub-gaussian coordinates such that $\mathbb{E}[X_i^2] = 1$ and $\mathbb{E}[X_i] = 0$.

I want to show:

$\text{Var}(\|X\|_2) \le C'K^4$ for some absolute constant $C' > 0$, where $K := \max_{1\le i\le n} \|X_i\|_{\psi_2}$.

I don't know where to start; I've tried using various properties of sub-gaussian random variables with no luck. Any hints?

The concentration of the norm theorem:

Let $X \in \mathbb{R}^n$ be a random vector with independent sub-gaussian coordinates such that $\mathbb{E}[X_i^2] = 1$. Then $$\big\|\,\|X\|_2 - \sqrt{n}\,\big\|_{\psi_2} \le CK^2, \qquad C > 0, \qquad K := \max_{1\le i\le n} \|X_i\|_{\psi_2}.$$
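For reference, $\|\cdot\|_{\psi_2}$ denotes the sub-gaussian norm used in the book, $$\|Z\|_{\psi_2} := \inf\{t > 0 : \mathbb{E}\exp(Z^2/t^2) \le 2\}.$$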


P.S. I have shown the fact that:

$$\sqrt{n}-CK^2\le{}\mathbb{E}[\|X\|_2]\le{}\sqrt{n}+CK^2$$

I essentially used the theorem: the $L^1$ norm of $\|X\|_2 - \sqrt{n}$ is bounded (up to an absolute constant) by its sub-gaussian norm, i.e. I took $p = 1$ in the moment comparison, and then I used Jensen's inequality, since $f(x) = |x|$ is convex.
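In symbols, that argument reads (with $C_1$ an absolute constant from the $\psi_2$–$L^1$ moment comparison among the sub-gaussian properties in Section 2.5 of the book): $$\big|\mathbb{E}\|X\|_2 - \sqrt{n}\big| = \big|\mathbb{E}(\|X\|_2 - \sqrt{n})\big| \le \mathbb{E}\big|\|X\|_2 - \sqrt{n}\big| \le C_1 \big\|\,\|X\|_2 - \sqrt{n}\,\big\|_{\psi_2} \le C_1 C K^2,$$ which gives the displayed bounds after absorbing $C_1$ into the constant.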

I am using the following book and believe the question is similar to Exercise 3.1.4: https://www.math.uci.edu/~rvershyn/papers/HDP-book/HDP-book.pdf

  • The fact above with the assumptions provided is the concentration of the norm theorem; I will label it accordingly. – kam May 11 '20 at 16:37
  • I will provide a sketch of the proof I have done. I am using the text 'High-Dimensional Probability: An Introduction with Applications in Data Science' by Roman Vershynin. – kam May 11 '20 at 16:45
  • Well actually, it's from my lecture notes, which are based on the book. I found it in the book as Exercise 3.1.4, but it's not quite the same: in the book we want to show that the variance is $O(1)$. – kam May 11 '20 at 16:54
  • Those are the notes I am using, yes. – kam May 11 '20 at 17:00
  • Presumably $\Psi$ and $\|\cdot\|_{\Psi^2}$ mean something. – Michael May 11 '20 at 17:07
  • @Michael yes, they refer to the sub-gaussian norm. – kam May 11 '20 at 17:09
  • "found a lower bound of the subgaussian norm using the $L^p$ norm": can you elaborate here? – Sarvesh Ravichandran Iyer May 18 '20 at 04:32

2 Answers


Start with $\text{Var}\|X\|_2 = \operatorname{E}(\|X\|_2 - \operatorname{E}\|X\|_2)^2 \leq \operatorname{E}(\|X\|_2 - \sqrt{n})^2$, which holds because the mean $\operatorname{E}\|X\|_2$ minimizes the mean squared error $\operatorname{E}(\|X\|_2 - a)^2$ over all constants $a$. This trick is useful in similar questions.
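To spell out the minimizing property: for any constant $a$ and any square-integrable random variable $Y$ (here $Y = \|X\|_2$ and $a = \sqrt{n}$), $$\operatorname{E}(Y - a)^2 = \operatorname{E}\big((Y - \operatorname{E}Y) + (\operatorname{E}Y - a)\big)^2 = \text{Var}(Y) + (\operatorname{E}Y - a)^2 \ge \text{Var}(Y),$$ since the cross term $2(\operatorname{E}Y - a)\operatorname{E}(Y - \operatorname{E}Y)$ vanishes.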

Further, the theorem's bound $\|\|X\|_2 - \sqrt{n}\|_{\psi_2} \leq CK^2$ controls not only the $L^1$ norm you used but also the $L^2$ norm: $\|\|X\|_2 - \sqrt{n}\|_{L^2} \leq C_1\|\|X\|_2 - \sqrt{n}\|_{\psi_2} \leq C_1 C K^2$ for an absolute constant $C_1$. Plugging this in gives the upper bound $$\text{Var}\|X\|_2 \leq \text{E}(\|X\|_2 - \sqrt{n})^2 \leq C_1^2 C^2 K^4 =: C'K^4.$$
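If it helps to see the dimension-free behaviour numerically, here is a minimal simulation sketch; the choice of standard normal coordinates, the sample size, and the dimensions are illustrative assumptions on my part, not part of the question.

```python
import numpy as np

# Monte Carlo sanity check: for standard normal coordinates (a sub-gaussian
# example with E[X_i] = 0 and E[X_i^2] = 1), the variance of ||X||_2 should
# stay bounded by a constant as the dimension n grows, in line with Var <= C'K^4.
rng = np.random.default_rng(0)

num_samples = 5_000  # number of random vectors per dimension (illustrative choice)
for n in (10, 100, 1_000, 5_000):
    X = rng.standard_normal(size=(num_samples, n))  # rows are i.i.d. vectors in R^n
    norms = np.linalg.norm(X, axis=1)               # Euclidean norms ||X||_2
    print(f"n = {n:>5}:  mean ||X||_2 = {norms.mean():9.3f}"
          f"  (sqrt(n) = {np.sqrt(n):9.3f}),  Var(||X||_2) = {norms.var():.4f}")
```

For Gaussian coordinates the printed variance should stay close to $1/2$ for every $n$, while $\operatorname{E}\|X\|_2$ tracks $\sqrt{n}$, matching the dimension-free bound.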


You can expand the definition of variance and use linearity of expectation: $$ \begin{aligned} \text{Var}(\|X\|_{2}) &= \mathbb{E}(\|X\|_{2} - \mathbb{E}\|X\|_{2} )^{2} \\ &= \mathbb{E}\|X\|_{2}^{2} - 2(\mathbb{E}\|X\|_{2})^{2} + (\mathbb{E}\|X\|_{2})^{2} \\ &= \mathbb{E}\|X\|_{2}^{2} - (\mathbb{E}\|X\|_{2})^{2}. \end{aligned} $$ Since $\mathbb{E}X_{i}^{2} = 1$ for every coordinate, $\mathbb{E}\|X\|_{2}^{2} = n$. To bound the second term you need the lower estimate $\mathbb{E}\|X\|_{2} \geq \sqrt{n} - CK^{2}$ (subtracting $(\mathbb{E}\|X\|_{2})^{2}$ requires a lower bound on $\mathbb{E}\|X\|_{2}$, not the upper one). If $CK^{2} \leq \sqrt{n}$, this gives $$ \begin{aligned} \text{Var}(\|X\|_{2}) &= n - (\mathbb{E}\|X\|_{2})^{2} \\ &\leq n - (\sqrt{n} - CK^{2})^{2} \\ &= 2\sqrt{n}\,CK^{2} - C^{2}K^{4} \\ &\leq 2C\sqrt{n}\,K^{2}, \end{aligned} $$ while if $CK^{2} > \sqrt{n}$ then trivially $\text{Var}(\|X\|_{2}) \leq \mathbb{E}\|X\|_{2}^{2} = n < C^{2}K^{4}$. The first case still carries a factor of $\sqrt{n}$, so this route alone does not give the dimension-free bound $C'K^{4}$; to remove the $\sqrt{n}$, bound $\mathbb{E}(\|X\|_{2} - \sqrt{n})^{2}$ directly through the $\psi_2$ norm, as in the other answer.