
In different sources I have found different parabolic scalings for space-time white noise that I believe contradict one another.

Let $\xi(t,x)$ be space-time white noise on $\mathbb{R}\times\mathbb{R}^d$. I apply a scaling $t\to t\epsilon^{-\alpha}$, $x\to x\epsilon^{-\beta}$ and I want to find $\gamma$ such that the new noise $$\epsilon^{\gamma}\xi(t\epsilon^{-\alpha},x\epsilon^{-\beta})$$ has the same distribution as $\xi(t,x)$.

What is the right $\gamma$? How can I compute it?

A covariance calculation suggests $$ \gamma=-\frac{\alpha}{2}-\beta\frac{d}{2} \ , $$ but I found some sources which say that $\dot W(t,dx)dt$ has the same distribution as $$\epsilon^{\frac{d}{2}+1}\dot W(t\epsilon^{-2},dx\epsilon^{-1})dt,$$ where $W$ is a cylindrical Wiener process on $L^2(\mathbb{R}^d)$, and this would correspond to saying that $$ \gamma=\frac{\alpha}{2}+\beta\frac{d}{2} \ .$$ Which is the right $\gamma$?

Am I missing something?

2 Answers


As I had the same question as you, and also to answer Conrado Costa's question, I tried to do said covariance calculation and came up with the following heuristic argument. Space-time white noise is supposed to satisfy (e.g. in Hairer's SPDE notes, page 4) $$ E[\xi(s,x)\xi(t,y)] = \delta^{1}(t-s)\delta^{d}(x-y), $$ where $\delta^{k}$ denotes the $k$-dimensional $\delta$-"function". Now rescale your noise in the way you described (except for signs; I would like to keep everything as notationally simple as possible): $$ \tilde{\xi}(t,x) := \epsilon^{\gamma} \xi(\epsilon^{\alpha}t,\epsilon^{\beta}x). $$ You would like to have the same covariance relation as before, i.e. \begin{equation} E[\tilde{\xi}(s,x)\tilde{\xi}(t,y)] = \delta^{1}(t-s)\delta^{d}(x-y). \end{equation}

We have $\delta^{k}(\lambda x) = \lambda^{-k}\delta^{k}(x)$, as you can see from the (heuristic) calculation $$ \int_{\mathbb{R}^{k}} f(x)\delta^{k}(\lambda x)\, dx = \int_{\mathbb{R}^{k}} f(\lambda^{-1} y) \delta^{k}(y) \lambda^{-k}\, dy = \lambda^{-k} f(0) = \int_{\mathbb{R}^{k}} f(x) \lambda^{-k}\delta^{k}(x)\, dx, $$ where we used the substitution $y = \lambda x$, which gives $dy = \lambda^{k} dx$ and hence $dx = \lambda^{-k} dy$. Since this holds for every $f$, we have $\delta^{k}(\lambda x) = \lambda^{-k} \delta^{k}(x)$ in the sense of distributions. One could justify this rigorously using distribution theory, see e.g. Grafakos, Classical Fourier Analysis, Example 2.3.12.

Now we see what is needed to get the desired covariance condition, since \begin{align*} E[\tilde{\xi}(s,x)\tilde{\xi}(t,y)] &= E[\epsilon^{2\gamma} \xi(\epsilon^{\alpha}s,\epsilon^{\beta}x)\xi(\epsilon^{\alpha}t,\epsilon^{\beta}y)] = \epsilon^{2\gamma} E[\xi(\epsilon^{\alpha}s,\epsilon^{\beta}x)\xi(\epsilon^{\alpha}t,\epsilon^{\beta}y)] \\ &= \epsilon^{2 \gamma} \delta^{1}(\epsilon^{\alpha}(t-s)) \delta^{d}(\epsilon^{\beta}(x-y)) = \epsilon^{2 \gamma} \epsilon^{-\alpha}\epsilon^{-d \beta} \delta^{1}(t-s)\delta^{d}(x-y). \end{align*} So we get the condition $$ 2 \gamma = \alpha + d \beta \quad\Leftrightarrow\quad \gamma = \frac{\alpha}{2} + d \frac{\beta}{2}. $$ According to Hairer's 2014 paper, p. 3, if we choose $d=1$, $\alpha = 2$, $\beta = 1$, we should get $\gamma = 3/2$, which is indeed the case. Now coming back to your question: if we flip the signs of $\alpha$, $\beta$ and $\gamma$ to match your convention, we see that, indeed, your calculation is correct! So I don't know what those other sources are, but I would say that your reasoning was correct.
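If you want a numerical sanity check of this exponent, here is a quick Monte Carlo sketch (my own, not from any of the references): discretized space-time white noise is paired with a test function $\varphi$, and both the original and the rescaled noise should give pairings of variance $\|\varphi\|_{L^2}^2$, here $1/4$, exactly when $2\gamma = \alpha + d\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, alpha, beta, d = 0.5, 2, 1, 1        # parabolic scaling in d = 1
gamma = alpha / 2 + d * beta / 2          # = 3/2, the exponent derived above

# test function phi on [0,1]^2, midpoint grid
n, M = 32, 4000
t = (np.arange(n) + 0.5) / n
T, X = np.meshgrid(t, t, indexing="ij")
phi = np.sin(np.pi * T) * np.sin(np.pi * X)
h = 1.0 / n
l2 = np.sum(phi**2) * h * h               # ||phi||_{L^2}^2 = 1/4

# <xi, phi>: integrals of xi over grid cells are iid N(0, h*h)
g = rng.standard_normal((M, n, n))
pair_plain = np.einsum("mij,ij->m", g, phi) * h

# <xi_tilde, phi> with xi_tilde(t, x) = eps^gamma xi(eps^alpha t, eps^beta x):
# substituting s = eps^alpha t, y = eps^beta x turns this into
# eps^(gamma - alpha - d*beta) <xi, phi(s / eps^alpha, y / eps^beta)>,
# a pairing over the shrunken support [0, eps^alpha] x [0, eps^beta]
hs, hy = eps**alpha * h, eps**beta * h    # shrunken grid spacings
phi_eps = phi                             # phi(s/eps^alpha, y/eps^beta) on that grid
g2 = rng.standard_normal((M, n, n))
pair_scaled = (eps ** (gamma - alpha - d * beta)
               * np.einsum("mij,ij->m", g2, phi_eps) * np.sqrt(hs * hy))

print(l2, pair_plain.var(), pair_scaled.var())  # all approximately 0.25
```

Changing `eps` only moves the support of the substituted test function; the variance of the scaled pairing stays at $\|\varphi\|_{L^2}^2$ precisely because the prefactor $\epsilon^{\gamma-\alpha-d\beta}$ squared cancels the Jacobian factor $\epsilon^{\alpha+d\beta}$ picked up by the $L^2$ norm.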

And, after having written all this, I realised that yes, you are missing something, namely that in the "conflicting" condition we don't have $\gamma = \frac{\alpha}{2} + \beta\frac{d}{2}$ but rather your condition $\gamma = -\frac{\alpha}{2} - \beta\frac{d}{2}$: the measure notation $\dot W(t,dx)\,dt$ hides the Jacobian of the substitution, so the source's relation is your density relation read in the inverse direction, i.e. with $(\alpha,\beta) = (-2,-1)$, and indeed $$ -\frac{-2}{2} - d\,\frac{-1}{2} = 1 + \frac{d}{2}. $$ So I think you got confused by the many minuses in the scalings ;-)
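To make the sign bookkeeping mechanical, here is a tiny check (the helper `gamma` is mine, not from any of the sources): it encodes the exponent from the covariance computation and confirms that flipping the signs of $(\alpha,\beta)$ produces exactly the $\epsilon^{d/2+1}$ of the cylindrical Wiener process statement.

```python
from fractions import Fraction

def gamma(alpha, beta, d):
    """Exponent such that eps**gamma * xi(t * eps**-alpha, x * eps**-beta)
    has the law of xi, from the covariance computation."""
    return -Fraction(alpha, 2) - d * Fraction(beta, 2)

for d in (1, 2, 3):
    # forward (density) scaling t -> t/eps^2, x -> x/eps: negative exponent
    assert gamma(2, 1, d) == -1 - Fraction(d, 2)
    # inverse scaling (alpha, beta) = (-2, -1): the eps^(d/2 + 1) of the source
    assert gamma(-2, -1, d) == 1 + Fraction(d, 2)
print("exponents match for d = 1, 2, 3")
```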

Hope that helps! Andre


One year later, I suppose the issue has been solved. Clearly your computations are correct, but I personally found it surprising that at large scales white noise "decays" instead of exploding, which is what brought me to this question. Here is a simple argument as to why your result is not only correct but also reasonable. Consider white noise on the real line; then we can write $$\xi (t) = \partial_t B(t)$$ for a Brownian motion $B$. Now use the scaling invariance of Brownian motion, namely $$B_{\epsilon}(t) := \sqrt{\epsilon}\, B(t \epsilon^{-1}) \stackrel{d}{=} B(t).$$ Taking the time derivative, we find $$ \xi_{\epsilon}(t) = \partial_t B_{\epsilon}(t) = \frac{1}{\sqrt{\epsilon}}\, \partial_t B(t\epsilon^{-1}) \stackrel{d}{=} \partial_t B(t) = \xi(t). $$
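This argument can be simulated in a few lines (a sketch of my own, with arbitrary discretization choices): build $B_\epsilon(t) = \sqrt{\epsilon}\,B(t\epsilon^{-1})$ from a sampled Brownian path and check that its increments over a time step $dt$ have the same variance $dt$ as those of a fresh Brownian motion, which is the statement $\xi_\epsilon \stackrel{d}{=} \xi$ at the level of increments.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.25                     # any eps with 1/eps an integer works for this grid
n_steps, M = 200, 5000
dt = 1.0 / n_steps

# sample M Brownian paths on [0, 1/eps] with step dt
n_long = int(n_steps / eps)
B = np.cumsum(rng.standard_normal((M, n_long)) * np.sqrt(dt), axis=1)

# B_eps(t) = sqrt(eps) * B(t / eps), sampled at t = k * dt
idx = (np.arange(1, n_steps + 1) / eps).astype(int) - 1
B_eps = np.sqrt(eps) * B[:, idx]

# white noise is the time derivative, so compare increment variances:
# increments of B_eps over dt should be N(0, dt), like those of B itself
inc_var = np.diff(B_eps, axis=1).var()
print(inc_var, dt)             # both approximately 0.005
```

Each increment of $B_\epsilon$ over $dt$ is $\sqrt{\epsilon}$ times an increment of $B$ over $dt/\epsilon$, so its variance is $\epsilon \cdot dt/\epsilon = dt$, which is what the simulation confirms.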

Kore-N