
Assume we have an Itô process of the form:

$$X_t=X_a+\int_a^t f(s)\,dB(s)+\int_a^t g(s)\,ds$$

(or $dX_t=f(t)dB(t)+g(t)dt$).

I would like to calculate the quadratic variation of the process using the definition:

$$\sum_i (X_{t_{i}}-X_{t_{i-1}})^2=\sum_i \bigg(X_a+\int_a^{t_{i}}f(s)dB(s)+\int_a^{t_{i}}g(s)ds-X_a-\int_a^{t_{i-1}}f(s)dB(s)-\int_a^{t_{i-1}}g(s)ds\bigg)^2$$

I guess I could then write this as

$$=\sum_i\bigg(\int_{t_{i-1}}^{t_{i}}f(s)dB(s)+\int_{t_{i-1}}^{t_{i}}g(s)ds\bigg)^2$$

If I assume $f$ and $g$ are simple stochastic processes (constant on each partition interval $[t_{i-1},t_i)$), I can write

$$=\sum_i\bigg(f(t_{i-1})(B_{t_i}-B_{t_{i-1}})+g(t_{i-1})(t_i-t_{i-1})\bigg)^2$$ $$=\sum_i f(t_{i-1})^2(B_{t_i}-B_{t_{i-1}})^2+2\sum_i f(t_{i-1})g(t_{i-1})(B_{t_i}-B_{t_{i-1}})(t_i-t_{i-1})+\sum_i g(t_{i-1})^2(t_i-t_{i-1})^2$$

I suppose that the second and third summations tend to zero as the mesh of the partition goes to zero (although I don't know how to formalize this here). The third should vanish because the quadratic variation of a function of finite variation is zero, and the second, I would guess, because of the continuity of Brownian motion. (?)

And the first one should converge to the Riemann integral $$\int_a^t f(s)^2 ds$$
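
Here is a quick numerical sanity check of what I expect (a sketch with an arbitrary deterministic choice $f(s)=1+s$, $g(s)=\cos s$ of my own, just to see whether the sum of squared increments gets close to $\int_a^t f(s)^2\,ds$):

```python
import numpy as np

# Sketch of a numerical check: simulate X_t = int_0^t f(s) dB_s + int_0^t g(s) ds
# on [0, 1] for an arbitrary deterministic f and g, and compare the sum of
# squared increments with int_0^1 f(s)^2 ds.
rng = np.random.default_rng(0)

n = 200_000                                     # number of partition intervals
t = np.linspace(0.0, 1.0, n + 1)
dt = np.diff(t)

f = lambda s: 1.0 + s                           # arbitrary f with f^2 integrable
g = lambda s: np.cos(s)                         # arbitrary g in L^1

dB = rng.normal(0.0, np.sqrt(dt))               # Brownian increments B_{t_i} - B_{t_{i-1}}
dX = f(t[:-1]) * dB + g(t[:-1]) * dt            # increments of the simple-process approximation of X

sum_of_squares = np.sum(dX ** 2)                # the quadratic-variation sum
riemann_integral = np.sum(f(t[:-1]) ** 2 * dt)  # left Riemann sum for int_0^1 f(s)^2 ds

print(sum_of_squares, riemann_integral)         # the two numbers should be close
```

(Of course this only illustrates the claim for one fixed mesh; it says nothing about the mode of convergence, which is exactly what I am asking about.)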

Now my doubts are:

  • I may have performed some barely legal steps in my calculations; could you point them out?
  • How can I show that the first summation indeed converges (and in which sense?) to the Riemann integral?
  • What if $f$ is not simple? How can I extend the argument to that case?

Thanks in advance.

  • What are your assumptions on $f$, $g$? (Random or deterministic? Bounded or just square integrable? Continuous or just measurable?) – saz Jan 25 '20 at 06:44
  • I should have clarified that aspect: $f$ is adapted with trajectories in $L^2[a,b]$ almost surely, while $g$ is adapted with trajectories in $L^1[a,b]$ almost surely. – Chaos Jan 25 '20 at 08:39
  • Could you please add this information to the body of your question? It makes it easier for other readers to understand what exactly you are interested in. – saz Jan 25 '20 at 10:29
  • I am editing the body of the question. Thanks for pointing that out. – Chaos Jan 25 '20 at 11:51

1 Answer


Since you only impose mild assumptions on $f$, $g$, the proof is somewhat technical, e.g. we cannot work with Riemann sums because $f^2$ might not be Riemann integrable.

Without loss of generality, I will assume that $a=0$ and $X_0=0$. Write $X_t = M_t+A_t$ where $$M_t := \int_0^t f(s) \, dB_s \qquad A_t := \int_0^t g(s) \, ds.$$ If we denote by $\langle \cdot,\cdot \rangle$ the quadratic (co)variation, then $$\langle X,X \rangle_t = \langle M+A,M+A \rangle_t = \langle M,M \rangle_t + 2 \langle M,A \rangle_t + \langle A,A \rangle_t. \tag{1}$$ This follows by a straight-forward computation similar to that in your question. We are going to show that \begin{align*} \langle M,M \rangle_t &= \int_0^t f(s)^2 \, ds \tag{2} \\ \langle M,A \rangle_t &= 0 \tag{3} \\ \langle A,A \rangle_t &= 0. \tag{4} \end{align*}
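
Spelled out: if $\Pi=\{0=t_0<\ldots<t_n=t\}$ is a partition, then expanding each squared increment gives $$\sum_{i=0}^{n-1}(X_{t_{i+1}}-X_{t_i})^2 = \sum_{i=0}^{n-1}(M_{t_{i+1}}-M_{t_i})^2 + 2\sum_{i=0}^{n-1}(M_{t_{i+1}}-M_{t_i})(A_{t_{i+1}}-A_{t_i}) + \sum_{i=0}^{n-1}(A_{t_{i+1}}-A_{t_i})^2,$$ and $(1)$ follows by letting the mesh $|\Pi|$ tend to $0$ in each of the three sums.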

Proof of $(4)$:

Let $g=g(t,\omega)$ be a measurable function such that $g(\cdot,\omega) \in L^1([0,T])$ for any $T>0$. Then $t \mapsto A_t(\omega) = \int_0^t g(s,\omega) \, ds$ is a continuous function for each $\omega$ and so $A_{\bullet}(\omega)$ is uniformly continuous on $[0,T]$. If $\Pi=\{0=t_0<\ldots<t_n=T\}$ is a partition of $[0,T]$ with mesh size $|\Pi|$, then \begin{align*} \sum_{i=0}^{n-1} |A_{t_{i+1}}-A_{t_i}|^2 &\leq \sup_{|s-t| \leq |\Pi|, s,t \in [0,T]} |A_{s}-A_t| \sum_{i=0}^{n-1} |A_{t_{i+1}}-A_{t_i}| \\ &\leq \sup_{|s-t| \leq |\Pi|, s,t \in [0,T]} |A_{s}-A_t| \int_0^T |g(s)| \, ds. \end{align*} Because of the uniform continuity on $[0,T]$, the right-hand side converges a.s. to $0$ as the mesh size $|\Pi|$ tends to zero. This proves $\langle A,A \rangle_T=0$.

Proof of $(3)$:

This is quite similar to the previous proof. Take measurable $f,g$ such that $f(\cdot,\omega) \in L^2([0,T])$ and $g(\cdot,\omega) \in L^1([0,T])$ for $T>0$. The stochastic integral $M_t = \int_0^t f(s) \, dB_s$ has continuous sample paths with probability $1$. Exactly as in the previous part, we get $$\sum_i |M_{t_{i+1}}-M_{t_i}| \, |A_{t_{i+1}}-A_{t_i}| \leq \sup_{|s-t| \leq |\Pi|, s,t \in [0,T]} |M_s-M_t| \int_0^T |g(s)| \,ds.$$ Because of the uniform continuity on compact time intervals, the right-hand side converges to $0$ as $|\Pi| \to 0$. Hence, $\langle M,A \rangle_T=0$ for all $T>0$.

Proof of $(2)$:

For simple functions this is a straightforward calculation, see this question. To extend $(2)$ to a larger class of functions, we need to use approximation techniques. For brevity of notation set $$S_{\Pi}(Y,Z) := \sum_{i} (Y_{t_{i+1}}-Y_{t_i})(Z_{t_{i+1}}-Z_{t_i})$$ and $S_{\Pi}(Y) := S_{\Pi}(Y,Y)$.
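
For instance, here is a sketch of the simple-function case: if $f(s)=\sum_j \xi_j \mathbf{1}_{[s_j,s_{j+1})}(s)$ with bounded $\mathcal{F}_{s_j}$-measurable $\xi_j$, and if (for simplicity) the partition $\Pi$ contains the points $s_j$, then $$S_{\Pi}(M)=\sum_j \xi_j^2 \sum_{t_i \in [s_j,s_{j+1})} (B_{t_{i+1}}-B_{t_i})^2 \xrightarrow[|\Pi|\to 0]{L^2} \sum_j \xi_j^2\,(s_{j+1}-s_j)=\int_0^T f(s)^2\,ds,$$ since on each block the inner sum converges to the quadratic variation $s_{j+1}-s_j$ of Brownian motion (the boundedness of the $\xi_j$ is used to control the cross terms).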

Case 1: $f$ satisfies $\mathbb{E}\int_0^T f(s)^2 \, ds < \infty$ for each $T>0$.

Since $f$ is progressively measurable and satisfies the above integrability condition, there exists a sequence of simple functions $(f_n)_{n \in \mathbb{N}}$ such that $$\mathbb{E}\int_0^T |f(s)-f_n(s)|^2 \, ds \to 0, \qquad T>0 \tag{5}$$ and $$\mathbb{E} \left| \int_0^T f_n(s) \, dB_s - \int_0^T f(s) \, dB_s \right|^2 \to 0, \qquad T>0. \tag{6}$$ Set $M_n(t):=\int_0^t f_n(s) \, dB_s$ and fix $T>0$. We have $$\langle M,M \rangle_T = \langle M-M_n,M-M_n \rangle_T + 2 \langle M-M_n,M_n \rangle_T + \langle M_n,M_n \rangle_T. \tag{7}$$ Let $\Pi$ be a partition of $[0,T]$. Taking expectation and applying Itô's isometry, we find \begin{align*} \mathbb{E}(S_{\Pi}(M-M_n)) &= \sum_i \mathbb{E}\int_{t_i}^{t_{i+1}} (f_n(s)-f(s))^2 \, ds \\ &= \mathbb{E}\int_0^T (f_n(s)-f(s))^2 \, ds. \end{align*} Letting $|\Pi| \to 0$ and using Fatou's lemma, we get

$$\mathbb{E}(\langle M-M_n \rangle_T) \leq \mathbb{E}\int_0^T (f_n(s)-f(s))^2 \, ds \xrightarrow[n \to \infty]{(5)} 0, $$ which shows that $\langle M-M_n\rangle_T \to 0$ in $L^1$. Similarly, an application of Itô's isometry (combined with the polarization identity, see here) shows that $$\mathbb{E}(S_{\Pi}(M-M_n,M_n)) = \mathbb{E}\int_0^T (f(s)-f_n(s)) f_n(s) \, ds.$$ Applying the Cauchy-Schwarz inequality and using $(5)$, it follows that the right-hand side converges to $0$ as $n \to \infty$ (uniformly in $\Pi$), and so $\langle M-M_n,M_n \rangle_T \to 0$ in $L^1$. Finally, we already know that $\langle M_n,M_n \rangle_T=\int_0^T f_n(s)^2 \, ds$, and so $$\lim_{n \to \infty} \langle M_n,M_n \rangle_T = \int_0^T f(s)^2 \, ds \quad \text{in } L^1.$$ Letting $n \to \infty$ in $(7)$ proves the assertion.
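
For completeness, the Cauchy-Schwarz bound for the cross term reads $$\mathbb{E}\int_0^T |f(s)-f_n(s)|\,|f_n(s)|\,ds \leq \left(\mathbb{E}\int_0^T (f(s)-f_n(s))^2\,ds\right)^{1/2}\left(\mathbb{E}\int_0^T f_n(s)^2\,ds\right)^{1/2},$$ where the second factor is bounded uniformly in $n$ (by $(5)$ together with $\mathbb{E}\int_0^T f(s)^2\,ds<\infty$) and the first factor tends to $0$ by $(5)$.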

Case 2: $\int_0^T f(s)^2 \, ds < \infty$ with probability $1$ for each $T>0$.

In order to extend $(2)$ to such functions we need to truncate $f$, e.g. consider $f_n := (-n) \vee f \wedge n$. Each $f_n$ satisfies the integrability assumption from Case 1, and so we know the quadratic variation of $\int_0^t f_n(s) \, dB_s$. Now, arguing similarly to the previous part, we can use this knowledge to compute the quadratic variation of $\int_0^t f(s) \, dB_s$. Let me know in case you really want to see all the details.
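
As a rough sketch of one standard route (using localization by stopping times rather than the value truncation above, and glossing over some details): for $k \in \mathbb{N}$ set $$\tau_k := \inf\left\{t \geq 0 : \int_0^t f(s)^2 \, ds \geq k\right\}, \qquad f_k := f \, \mathbf{1}_{[0,\tau_k]}.$$ Then $\mathbb{E}\int_0^T f_k(s)^2 \, ds \leq k$, so Case 1 applies to $M^{(k)}_t := \int_0^t f_k(s) \, dB_s = M_{t \wedge \tau_k}$ and gives $\langle M^{(k)},M^{(k)} \rangle_T = \int_0^{T \wedge \tau_k} f(s)^2 \, ds$. On the event $\{\tau_k > T\}$ the paths of $M$ and $M^{(k)}$ coincide on $[0,T]$, so the approximating sums agree there, and therefore $$\mathbb{P}\left(\left|S_{\Pi}(M) - \int_0^T f(s)^2 \, ds\right| > \varepsilon\right) \leq \mathbb{P}\left(\left|S_{\Pi}(M^{(k)}) - \int_0^{T \wedge \tau_k} f(s)^2 \, ds\right| > \varepsilon\right) + \mathbb{P}(\tau_k \leq T)$$ for every $\varepsilon>0$. Since $\int_0^T f(s)^2 \, ds < \infty$ almost surely, $\mathbb{P}(\tau_k \leq T) \to 0$ as $k \to \infty$; letting first $|\Pi| \to 0$ (Case 1) and then $k \to \infty$ shows that $S_{\Pi}(M) \to \int_0^T f(s)^2 \, ds$ in probability.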

saz
  • +1 Wow, such an amazing answer. I'm accepting it since it solves my original question; I'll read all the details and eventually let you know if something is not clear! Thanks again – Chaos Jan 25 '20 at 11:48
  • @RScrlli Sure, let me know if something is not clear. – saz Jan 25 '20 at 11:48
  • +1 Fantastic answer – Mdoc Jan 25 '20 at 21:20
  • Hey @saz sorry to bother you, I don't quite get the result after the formula $(7)$. I don't see how $\mathbb E(S(M-M_n))\rightarrow 0$ as $n\rightarrow \infty$ implies that the quadratic variation of $(M-M_n)$ equals $0$. As far as I understand the q.v. is calculated by letting the mesh of the partition go to $0$. But in this case we are letting $n$ go to $\infty$. Thanks in advance! – Chaos Feb 07 '20 at 08:54
  • Does it have to do with the Law of Large Numbers? – Chaos Feb 07 '20 at 09:05
  • @RScrlli You are right; there is something off. The quadratic variation of $(M-M_n)$ is not zero but tends to zero as $n \to \infty$... it seems that I forgot to put limits in several places. I will correct it later on when I have time. – saz Feb 07 '20 at 09:19
  • Thanks! I'll stay tuned – Chaos Feb 07 '20 at 09:21
  • @RScrlli It should be okay now, I think... let me know if something is unclear. – saz Feb 07 '20 at 14:37
  • @saz I am wondering if the equalities (2), (3), (4) hold true almost surely. From your proof, it seems that (2), for example, is not a.s. If yes, do you have any reference? – sakas Apr 15 '20 at 21:18
  • @sakas What do you mean by "whether they hold almost surely"? The quadratic variation is of course defined as a limit in probability (or in $L^2$) but nevertheless (2)-(4) are statements about almost sure equality of random variables, e.g. $\langle M,M \rangle_t = \int_0^t f(s)^2 \, ds$ almost surely (the exceptional set may depend on $t$). – saz Apr 16 '20 at 05:52
  • @saz I am wondering if $\langle M,M\rangle_t:=\sum\limits_{i=1}^n(M(t_i)-M(t_{i-1}))^2 \longrightarrow\int_a^tf(s)^2\,ds$ a.s. For example, you can show using Borel-Cantelli that for Brownian motion $\langle B,B\rangle_t:=\sum\limits_{i=1}^n(B(t_i)-B(t_{i-1}))^2 \longrightarrow t$ a.s. Does this make sense now? – sakas Apr 16 '20 at 06:40
  • @sakas I don't know whether the a.s. convergence holds for the quadratic variation of the stochastic integral. For the other two terms the a.s. convergence is pretty immediate from the proof. – saz Apr 17 '20 at 05:06
  • @saz May I ask the details of the last part, Case 2? – JJW Nov 28 '23 at 17:53