Disclaimer: this post can now also be found on MathOverflow, see here.
I would like to understand the proof of the following theorem, which is a simpler version of Corollary 4.24 from Martin Hairer's lecture notes on stochastic PDEs, see here. The content of this post is really not about stochastic PDEs, but just about understanding a consequence of Kolmogorov's continuity criterion.
**Theorem.** Let $\{\eta_k\}_{k\ge 0}$ be countably many i.i.d. standard Gaussian random variables. Let $\{f_k\}_{k\ge 0} \subset \operatorname{Lip}([0,1],\mathbb{R})$. Suppose that there is some $\delta\in (0,2)$ such that $$ S_1^2 = \sum_{k\ge 0} \Vert f_k \Vert_{L^{\infty}}^2 < \infty \quad \text{and} \quad S_2^2 = \sum_{k\ge 0} \Vert f_k \Vert_{L^{\infty}}^{2-\delta} \operatorname{Lip}(f_k)^{\delta} <\infty, $$ and define $f = \sum_{k\ge 0} \eta_k f_k$. Then $f$ is almost surely bounded and $\alpha$-Hölder continuous for every exponent $\alpha < \delta/2$.
- My first question is: in what sense does $\sum_{k\ge 0} \eta_k f_k$ converge? I have some vague understanding, see the two points below, but I am not sure what the right way to look at this is in the context of the theorem.
- For every $x\in [0,1]$, we have $\sum_{k\ge 0} f_k(x)^2 \le S_1^2 <\infty$. Hence the series $f(x)= \sum_{k\ge 0} \eta_k f_k(x)$ converges almost surely and in $L^2(\Omega,\mathbb{R})$, being a sum of independent Gaussians with summable variances; here $\Omega$ denotes the probability space on which the $\eta_k$ live. Moreover, $f(x)$ is a real Gaussian with mean $0$ and variance $\sum_{k\ge 0} f_k(x)^2$. A priori, the exceptional null set depends on $x$; since a countable union of null sets is again a null set, the a.s. convergence extends simultaneously to any countable set of points in $[0,1]$.
- For every $m<n$, we have $$ \mathbb{E}\big[ \Vert \sum_{k=m+1}^n f_k \eta_k \Vert_{L^2([0,1])}^2 \big] = \sum_{k,l=m+1}^n \langle f_k , f_l \rangle_{L^2([0,1])} \mathbb{E} \big[ \eta_k \eta_l\big]\\ = \sum_{k=m+1}^n \Vert f_k \Vert_{L^2([0,1])}^2 \le \sum_{k=m+1}^n \Vert f_k \Vert_{L^{\infty}([0,1])}^2. $$ Hence the series converges in $L^2(\Omega, L^2([0,1]))$. Since $L^2([0,1])$ is separable, we also get convergence almost surely in $L^2([0,1])$. The latter statement is shown, e.g., in Proposition 2.14 of "Stochastic equations in infinite dimensions" by Da Prato and Zabczyk.
- I do not understand the last sentence in the proof given in the lecture notes: "The claim now follows from Kolmogorov’s continuity theorem." Let me elaborate. If we just look at the random variables $f(x) = \sum_{k\ge 0} f_k(x) \eta_k$, $x\in [0,1]$, then these form a stochastic process indexed by $x\in [0,1]$, in the sense that they are a family of random variables with a prescribed family of finite-dimensional distributions. Indeed, since the series for $f(x)$ and $f(y)$ converge in $L^2(\Omega,\mathbb{R})$, we have $$ \mathbb{E} \big[ f(x)f(y)\big] = \sum_{k,l\ge 0} f_k(x)f_l(y) \mathbb{E}\big[\eta_k \eta_l\big] =\sum_{k\ge 0} f_k(x) f_k(y). $$ Hence we know all covariances. Since the random variables are jointly Gaussian, this determines the distributions of all random vectors $(f(x_1),\dots, f(x_k))$, i.e. the finite-dimensional distributions of the process. Furthermore, this process satisfies Kolmogorov's continuity criterion, as is explained in detail in the lecture notes. It follows that there exists a Gaussian measure $\mu$ on the space $C([0,1],\mathbb{R})$ such that, if $Y$ is a random variable with law $\mu$, then $Y(x)$ is equal in law to $f(x)$ for every $x$; see Proposition 4.20 in the lecture notes. My question is: how can we conclude that, almost surely with respect to the randomness coming from the sequence $\{\eta_k\}_{k\ge 0}$, the function $f$ is continuous? (Notice that I also do not understand in what sense $f$ is a function; see the first question.)
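For reference, here is my reconstruction of the moment bound that feeds into Kolmogorov's criterion (this is how I read the estimate in the lecture notes; it uses the elementary inequality $\min(a,b)^2 \le a^{2-\delta} b^{\delta}$ for $a,b \ge 0$ and $\delta \in (0,2)$). Since $f(x)-f(y) = \sum_{k\ge 0} \eta_k (f_k(x)-f_k(y))$ is a centered Gaussian,
$$ \mathbb{E}\big[|f(x)-f(y)|^2\big] = \sum_{k\ge 0} \big(f_k(x)-f_k(y)\big)^2 \le \sum_{k\ge 0} \min\big(2\Vert f_k \Vert_{L^{\infty}},\, \operatorname{Lip}(f_k)\,|x-y|\big)^2 \le 2^{2-\delta}\, S_2^2\, |x-y|^{\delta}. $$
Gaussianity then upgrades this to all moments: $\mathbb{E}\big[|f(x)-f(y)|^{2p}\big] \le C_p\, |x-y|^{p\delta}$ for every $p \ge 1$, and Kolmogorov's criterion (with $p\delta > 1$) yields Hölder continuity for every exponent below $(p\delta - 1)/(2p)$, which tends to $\delta/2$ as $p \to \infty$.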
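To make the pointwise convergence in my first question concrete, here is a small numerical sketch with a hypothetical choice of basis functions (my own example, not taken from the notes): $f_k(x) = 2^{-k}\sin(2^k \pi x)$, for which $\Vert f_k \Vert_{L^{\infty}} = 2^{-k}$ and $\operatorname{Lip}(f_k) = \pi$, so $S_1, S_2 < \infty$ for every $\delta \in (0,2)$. At a fixed $x$, the truncated series should behave like a centered Gaussian with variance $\sum_k f_k(x)^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 30          # truncation level of the series
N = 200_000     # number of Monte Carlo samples
x = 0.3         # fixed evaluation point in [0, 1]

# hypothetical basis: f_k(x) = 2^{-k} sin(2^k pi x),
# so ||f_k||_inf = 2^{-k} and Lip(f_k) = pi
k = np.arange(K)
fk_x = 2.0 ** (-k) * np.sin(2.0 ** k * np.pi * x)

# each sample of f(x) = sum_k eta_k f_k(x) is a centered Gaussian
# with variance sum_k f_k(x)^2 (summable, since S_1 < infinity)
eta = rng.standard_normal((N, K))
samples = eta @ fk_x

print(samples.mean())        # should be close to 0
print(samples.var())         # should be close to the theoretical variance
print((fk_x ** 2).sum())     # theoretical variance of the truncated series
```

This only illustrates the pointwise (in $x$) convergence from my first bullet point; it says nothing about the a.s. continuity of $x \mapsto f(x)$, which is exactly what my second question is about.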