
I am reading the proof on the existence of solutions to SDEs from the following post: https://almostsuremath.com/2010/02/10/existence-of-solutions-to-stochastic-differential-equations/

Here, $Z^1, \dots, Z^m$ are semimartingales, $X=(X^1,\dots, X^n)$ is a cadlag adapted process, and for each $i,j$, $X \mapsto a_{ij}(X)$ is a map from the space of cadlag adapted processes to the set of $Z^j$-integrable processes, satisfying a Lipschitz-type continuity condition.

Namely, we allow $a_{ij}(X)_t$ to be a function of the process $X$ at all times up to $t$, and then assume the two properties below on it.

[Image from the linked post: statement of the two assumed properties on $a_{ij}$, including the Lipschitz-type bound (P2).]

For any cadlag adapted $X$, we define the following $n$-dimensional process, $F(X)^i = N^i + \sum_{j=1}^m \int a_{ij}(X)dZ^j$.

In the proof below, the author constructs inductively a process $X^{(r+1)}$ that approximates $X$ based on the first jump sizes with respect to $F(X)$.

$M$ is defined as $1_{[0,\tau)} (X-F(X))$.

My questions are:

  1. In the proof below, why do we have the identity $$\lambda_r (X^i)^{\tau_r} = \sum_{j=1}^m \int \lambda_r 1_{(0,\tau_r]} (a_{ij}(X)-a_{ij}(0))\,dZ^j + \lambda_r \left(\sum_{j=1}^m \int 1_{[0,\tau_r)} a_{ij}(0)\,dZ^j + (N^i)^{\tau_r}+(M^i)^{\tau_r}\right)?$$

I cannot figure out how to get this kind of an expansion.

  2. Below this equation in the proof, it says that the final term on the right-hand side tends to zero in ucp as $r \to \infty$. That $\lambda_r M^{\tau_r}$ tends to zero is clear, since $M^{\tau_r}$ is bounded by $\epsilon$ and $\lambda_r \to 0$. But why do $\lambda_r N^{\tau_r}$ and $\lambda_r \sum_j \int 1_{[0,\tau_r)}a_{ij}(0)\,dZ^j$ tend to zero? $N$ is just assumed to be some cadlag adapted process, so I cannot see why stopping it at $\tau_r$ would make it tend to $0$ when multiplied by $\lambda_r$. I also don't understand why the stochastic integral with integrand $1_{[0,\tau_r)} a_{ij}(0)$ tends to zero when multiplied by $\lambda_r$. Why do the assumptions on $a_{ij}$ give this?

  3. Moreover, I posted this as another question before. I can't figure out why the sequence $X_{\tau_r \wedge t}^*$ being bounded in probability, where $*$ denotes the running supremum of the normed process, implies that $X$ is almost surely bounded on the interval $[0,\tau)$ whenever $\tau<\infty$.

I would greatly appreciate it if anyone takes a look at that one as well: Bounded in probability of the running maximum of a stopped process implies almost sure boundedness on the interval $[0,\lim \tau_r)$?

  4. Finally, it seems that the boundedness of $X$ on the interval $[0,\tau)$ for $\tau<\infty$ is used to show that $1_{(0,\tau)}(a_{ij}(X)-a_{ij}(0))$ is a locally bounded process.

However, from (P2) we always have $$|1_{(0,\tau)}(a_{ij}(X)-a_{ij}(0))|_t \le KX^*_{t-},$$ and $X^*$ is a cadlag adapted process, hence $X^*_{t-}$ is left-continuous with right-hand limits and adapted, and therefore locally bounded. So isn't $1_{(0,\tau)}(a_{ij}(X)-a_{ij}(0))$ immediately locally bounded from (P2)? Why do we need $X$ almost surely bounded on $[0,\tau)$ whenever $\tau<\infty$ for this?

[Images from the linked post: the statement and proof of Lemma 5.]

  • Can't finish writing my answer by the end of the bounty, unfortunately. I will make sure to post it when I'm done, but this bounty might pass by before that. Nevertheless, it's the answer which is important so I'll make sure to provide that. – Sarvesh Ravichandran Iyer Feb 08 '22 at 18:57
  • @SarveshRavichandranIyer Thank you just sad I can’t award you the bounty. Did you have the chance to look at my comment on the previous answer? – nomadicmathematician Feb 08 '22 at 19:00
  • Thanks, I had a chance to look at that. I have to edit the answer there based on it, so I will do that and get back to you with a comment there. I'm focusing on both questions when I'm on MSE for the past few days, so hopefully I can try to answer this one when I'm done there. – Sarvesh Ravichandran Iyer Feb 08 '22 at 19:12
  • @SarveshRavichandranIyer Appreciate the effort so much, looking forward to your answer – nomadicmathematician Feb 08 '22 at 19:14
  • @SarveshRavichandranIyer Hello Sarvesh are you still working on this problem? – nomadicmathematician Feb 27 '22 at 10:48
  • I've been out of the site for over a week because of some off-MSE issues. I've been committing some time a day to format an answer, but it hasn't come off yet. I'll continue to try my best, as much as off-MSE issues permit. – Sarvesh Ravichandran Iyer Feb 27 '22 at 11:41
  • Hi, I'm almost done. I will not be proving 3. here, I'll do it on the other post instead. Lowther seems to have really messed up towards the end of lemma 5's proof, it looks like there are some small issues that need sorting out. I'm done with $1$ and $2$, and your argument for $4$ makes sense, so it's all about what comes towards the end now, and I am stuck at one small point that I will try to sort out. – Sarvesh Ravichandran Iyer Mar 10 '22 at 13:12
  • @SarveshRavichandranIyer Thank you great to hear that! Will look forward to your posts. – nomadicmathematician Mar 10 '22 at 14:54
  • Just one clarification : at the end, he writes $|X_{\tau_r} - X_{\tau_{r-1}}| = |X^r_{\tau_r} - F(X^r)_{\tau_r}|$ from "the definition of $X$". I'm not able to get this equality from his definition, but funnily enough I've completed the details of the rest of the proof! If you can get this then let me know and I'll add it and submit the answer, otherwise I'll submit it without an explanation, hoping that it's correct. – Sarvesh Ravichandran Iyer Mar 10 '22 at 17:45
  • @SarveshRavichandranIyer Yes I neglected this point, it seems from the definition that $X_{\tau_r} = F(X^r)_{\tau_r}$ but I don't see how $X_{\tau_{r-1}}=X_{\tau_r}^r$. I didn't understand the definition correctly here, I just looked at the Figure here that we should have by construction of $X$ that $\Vert X_{\tau_r}-X_{\tau_{r-1}}\Vert \ge \epsilon$. But is this indeed always true from the definition of $\tau_r$? Can we guarantee that $\tau_r>\tau_{r-1}$, and $X_{\tau_r}= F(X^r)_{\tau_r}$ and $X_{\tau_{r-1}}=F(X^{r-1})_{\tau_{r-1}}$ as the figure suggests and the difference is $\ge \epsilon$? – nomadicmathematician Mar 10 '22 at 18:24
  • There may be an issue in the definition of $X$ ; I think the proof should work if some sloppiness is cut out. I'll do the following : I'll post an answer, but it will try to solve questions $1,2,4$ as statements independent of lemma 5 as far as possible, rather than weave them into lemma 5. At least that way, we can understand most of the proof , leaving only the indexing to be sorted out. – Sarvesh Ravichandran Iyer Mar 11 '22 at 10:03
  • @SarveshRavichandranIyer Ok I think I understand the equality here. So the construction of the process $X$ gives that $X^r$ is equal to $X$ before time $\tau_r$ that is for $t<\tau_r$ and is constant over time $t\ge \tau_{r-1}$. Hence, $X^r_{\tau_r}$ should be equal to $X_{\tau_{r-1}}$ since at time $\tau_{r-1}$, $X^r=X$ and $X^r$ is constant onwards from $\tau_{r-1}$ so $X^r$ at $\tau_r$ should be the same as at $\tau_{r-1}$. And of course the definition just gives $X_{\tau_r}=F(X^r)_{\tau_r}$. – nomadicmathematician Mar 11 '22 at 20:20
  • Thanks very much, I got what you meant. I will be able to add it in and be done hopefully in an hour or so. – Sarvesh Ravichandran Iyer Mar 12 '22 at 06:34

1 Answer


Thanks for the additional clarification. With this, I'll be able to write a full detailed proof of lemma $5$, sans the proof of the boundedness assumption which I'll cover in the other post.

Setup

$D^n$ is the space of $n$-dimensional cadlag adapted processes. We have $N \in D^n$ and semimartingales $Z^1,\ldots ,Z^m$. Let $L^1(Z^j)$ denote the space of predictable, $Z^j$-integrable processes. Let $a_{ij} : D^n \to L^1(Z^j)$ be a map with the following property: for every $X,Y \in D^n$, $|a_{ij}(X) - a_{ij}(Y)| \leq K(X-Y)^*_-$, where $L_t^* = \sup_{s \leq t} \|L_s\|$ is the running maximum of $L$.
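As a quick sanity check (this example is mine, not part of the lemma): if $f : \mathbb{R}^n \to \mathbb{R}$ is Lipschitz with constant $K$, then a coefficient of the form $a_{ij}(X)_t = f(X_{t-})$ satisfies this property, since for $t > 0$ $$ |a_{ij}(X)_t - a_{ij}(Y)_t| = |f(X_{t-}) - f(Y_{t-})| \leq K\|X_{t-} - Y_{t-}\| \leq K\sup_{s<t}\|X_s - Y_s\| = K(X-Y)^*_{t-}. $$ More generally, as the question notes, $a_{ij}(X)_t$ is allowed to depend on the whole path of $X$ before time $t$.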

With this, define $F(X)$ for $X \in D^n$ component-wise by $$ F(X)_t^i = N_t^i + \sum_{j=1}^m\int_0^t a_{ij}(X)\,dZ^j $$
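For orientation only (this remark is mine and is not used in the proof): a fixed point $X = F(X)$ would be exactly a solution of the SDE system from the linked post, $$ X^i_t = N^i_t + \sum_{j=1}^m\int_0^t a_{ij}(X)\,dZ^j, \qquad i = 1,\ldots,n, $$ whereas Lemma 5 below only produces an approximate solution, with $\|X - F(X)\|$ uniformly smaller than $\epsilon$.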

Lemma 5 (Lowther): For all $\epsilon>0$ there is an $X\in D^n$ with $\|X-F(X)\|< \epsilon$, i.e. $\|X_t - F(X)_t\|<\epsilon$ for all $t\geq 0$.

Reducing the proof to a proposition

The idea is to construct $X$ as a jump process which remains constant across certain time intervals, within each of which it does not diverge from its $F$-value by more than $\epsilon$. We then need to prove that $X$ can be continued to infinity.

For this, we define the processes $X^r$ (which are successive extensions in time, finally giving $X$) using a sequence of stopping times $\tau_r$ (which indicate the time points at which the tolerance $\epsilon$ is breached and $X$ needs to jump). We have $X^0 = N_0, \tau_0=0$, and for $r\geq 0$, $$ X^{r+1}_{t} = \begin{cases} X^r_t & t < \tau_r \\ F(X^r)_{\tau_r} & t \geq \tau_r \end{cases} \\ \tau_{r+1} = \inf\{t \geq \tau_r : \|X_t^{r+1} - F(X^{r+1})_t\| \geq \epsilon\} $$

Note that the $\tau_r$ are increasing stopping times, which increase pointwise to some stopping time $\tau$ as $r \to \infty$. Define $X_t = X^r_t$ whenever $t < \tau_r$; this is consistent because $X^{r+1} = X^r$ on $[0,\tau_r)$, so one may equivalently write $X_t = \lim_{r\to\infty} X^r_t 1_{\{\tau_r>t\}}$. We will show that $X$ is the desired process.

To begin, we show that on $[0,\tau)$ we have $\|X_t-F(X)_t\|<\epsilon$. Indeed, let $t<\tau$. Then, there exists $r$ such that $\tau_{r+1}>t \geq \tau_{r}$ so that $X_t = X^{r+1}_t$. Since $t<\tau_{r+1}$, by definition we have $\|X^{r+1}_t - F(X^{r+1})_t\| < \epsilon$. Since $X = X^{r+1}$ on $[0,t]$, it follows that $F(X)_t = F(X^{r+1})_t$ and therefore $\|X_t - F(X)_t\|<\epsilon$.
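The step "$X = X^{r+1}$ on $[0,t]$ implies $F(X)_t = F(X^{r+1})_t$" deserves a word; here is a sketch using only the property stated in the Setup. If $X = Y$ on $[0,t]$, then $(X-Y)^*_{s-} = 0$ for every $s \leq t$, hence $$ |a_{ij}(X)_s - a_{ij}(Y)_s| \leq K(X-Y)^*_{s-} = 0 \qquad \text{for all } s \leq t, $$ so the integrands defining $F(X)$ and $F(Y)$ agree on $[0,t]$ and therefore $F(X)_t = F(Y)_t$. Applying this with $Y = X^{r+1}$ gives the claim.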

An integral formula

Let $\lambda_r \to 0$ be a sequence of positive reals, and let $M = 1_{[0,\tau)} (X-F(X))$ be the difference process, which is bounded by $\epsilon$. For a process $Y$ and a stopping time $\sigma$, we write $Y^\sigma$ for the stopped process, $(Y^\sigma)_t = Y_{\sigma \wedge t}$. With this, we now study the sequence of processes $\lambda_r X^{\tau_r}$, and show that it converges to $0$ in ucp.
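To fix terminology (recalled here for convenience): a sequence of processes $Y^{(r)}$ converges to $0$ in ucp (uniformly on compacts in probability) if, for every $t \geq 0$ and every $K > 0$, $$ \lim_{r \to \infty} P\Big(\sup_{s \leq t}\|Y^{(r)}_s\| > K\Big) = 0. $$ This is the form of convergence we verify explicitly in the next section.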

We will ignore the $\lambda_r$ for now, and write some equalities that will hold on $[0,\tau)$. For one, note that $X^{\tau_r} = M^{\tau_r} + F(X)^{\tau_r}$. Hence, for every component $i$ we have $$ X^{i,\tau_r} = M^{i,\tau_r} + F(X)^{i,\tau_r} = M^{i,\tau_r} + N^{i,\tau_r} + \sum_{j=1}^m \int 1_{[0,\tau_r]}a_{ij}(X)dZ^j $$
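The last equality uses the standard fact that stopping a stochastic integral amounts to truncating the integrand; stated with the indicator convention of this answer (the linked post writes $1_{(0,\sigma]}$ instead, and the comments below discuss why the difference is harmless), for a $Z^j$-integrable $H$ and a stopping time $\sigma$, $$ \left(\int H\,dZ^j\right)^{\sigma} = \int 1_{[0,\sigma]}H\,dZ^j. $$ Applied with $H = a_{ij}(X)$ and $\sigma = \tau_r$, this gives $F(X)^{i,\tau_r}$ in the displayed form.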

Write $a_{ij}(X) = a_{ij}(X) - a_{ij}(X_0) + a_{ij}(X_0)$ and note that $X_0=0$. This is done to invoke the difference property of the $a_{ij}$. With this, we get $$ X^{i,\tau_r}= \left(\sum_{j=1}^m \int 1_{[0,\tau_r]}(a_{ij}(X) - a_{ij}(0))dZ^j\right)+\left(M^{i,\tau_r} + N^{i,\tau_r} + \sum_{j=1}^m \int 1_{[0,\tau_r]}a_{ij}(0)dZ^j\right) $$

Finally, we multiply by $\lambda_r$ to get $$ \lambda_rX^{i,\tau_r}= \left(\sum_{j=1}^m \lambda_r\int 1_{[0,\tau_r]}(a_{ij}(X) - a_{ij}(0))dZ^j\right)+\lambda_r\left(M^{i,\tau_r} + N^{i,\tau_r} + \sum_{j=1}^m \int 1_{[0,\tau_r]}a_{ij}(0)dZ^j\right) $$

This doesn't perfectly match Lowther's display (the indicators and the placement of $\lambda_r$ differ slightly), but it turns out we need not be very precise here.
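To see how this relates to the display in the question, note that for each fixed $r$ the number $\lambda_r$ is a deterministic constant and can be moved inside the stochastic integrals, so the first term can equally be written as $$ \sum_{j=1}^m \int \lambda_r 1_{[0,\tau_r]}\big(a_{ij}(X) - a_{ij}(0)\big)\,dZ^j, $$ which is the first term of the identity in question 1, up to the choice of indicator discussed above.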

ucp convergence

We will now show that both terms go to zero in ucp.

For the second, we begin by seeing that $\sup_{s \leq t} \|M_s\| <\epsilon$ for any $t$, therefore $\sup_{s \leq t} \|\lambda_rM_s\| <\lambda_r\epsilon \to 0$ as $r \to \infty$. The other two expressions can be resolved with the same idea.

Indeed, let $Y$ be any cadlag adapted process, let $\sigma_r$ be an increasing sequence of stopping times, and let $\mu_r$ be a sequence of positive reals converging to $0$. We can show that $\mu_rY^{\sigma_r} \to 0$ in ucp as follows: pick $t, K > 0$ and let $\delta>0$ (we write $\delta$ to avoid a clash with the $\epsilon$ of the lemma). Note that on $[0,t]$, because $Y$ has cadlag paths, it is a.s. bounded, see e.g. here. That is, we know that $$ P(\sup_{[0,t]} \|Y_s\| = \infty) = 0 \implies \lim_{L \to \infty} P(\sup_{[0,t]} \|Y_s\|> L) = 0 $$

Pick $L$ large enough so that $P(\sup_{[0,t]} \|Y_s\|> L)< \delta$. Since $\sup_{[0,t]}\|Y^{\sigma_r}_s\| \leq \sup_{[0,t]} \|Y_s\|$ for all $r$, we get $P(\sup_{[0,t]} \|Y^{\sigma_r}_s\|> L)< \delta$ for all $r$, and multiplying by $\mu_r$ gives $P(\sup_{[0,t]} \|\mu_rY^{\sigma_r}_s\|> \mu_rL)< \delta$ for all $r$. As $\mu_rL \to 0$, we may take $r$ large enough so that $\mu_rL < K$. In that case, $$ P(\sup_{[0,t]} \|\mu_rY^{\sigma_r}_s\|> K)< \delta \ \text{ for all large } r \implies \limsup_{r \to \infty}P(\sup_{[0,t]} \|\mu_rY^{\sigma_r}_s\|> K) \leq \delta $$ for all $\delta>0$. Consequently, $\limsup_{r \to \infty}P(\sup_{[0,t]} \|\mu_rY^{\sigma_r}_s\|> K) \leq 0$, which forces $\lim_{r \to \infty}P(\sup_{[0,t]} \|\mu_rY^{\sigma_r}_s\|> K) = 0$, as desired.

Applying this with $\sigma_r = \tau_r$, once with $Y_t = N^i_t$ and once with $Y_t = \sum_{j=1}^m \int_0^t a_{ij}(0)dZ^j$, shows that the second term converges to $0$ in ucp.

For the first term (actually, I wonder if we can use the earlier argument instead of Lemma 4, but that's a different topic), we note that $\lambda_r|a_{ij}(X) - a_{ij}(0)| \leq \lambda_rKX^*_-$ by the difference property. Lemma 4 applies straightaway, since $(\lambda_r)$ is a convergent, hence bounded, sequence. It tells you that $\lambda_rX^{\tau_r} \to 0$ in ucp.
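In the spirit of the parenthetical remark above, here is one way one might avoid quoting Lemma 4, assuming instead the dominated convergence theorem for stochastic integrals from the same blog (this alternative is mine, so treat it as a sketch): for each fixed $(t,\omega)$ the integrands converge to zero, and they are dominated by a single $Z^j$-integrable process, since $$ \left|\lambda_r 1_{[0,\tau_r]}\big(a_{ij}(X) - a_{ij}(0)\big)_t\right| \leq \lambda_r K X^*_{t-} \longrightarrow 0, \qquad \left|\lambda_r 1_{[0,\tau_r]}\big(a_{ij}(X) - a_{ij}(0)\big)\right| \leq \Big(\sup_r \lambda_r\Big) K X^*_-, $$ and $X^*_-$ is left-continuous and adapted, hence locally bounded and $Z^j$-integrable. Dominated convergence then gives that the first term converges to $0$ in ucp.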

The existence of $\lim X_{\tau_r}$

We will now show that, provided $\tau<\infty$ has non-zero probability, $X_{\tau_r}$ has a limit with non-zero probability. We begin by writing out the definition of $X_{\tau_r}$. Note that $X_{\tau_r} = X^{r+1}_{\tau_r} = F(X^r)_{\tau_r}$. Writing this down, $$ X_{\tau_r} = N_{\tau_r} + \sum_{j=1}^m \int_0^{\tau_r} a_{ij}(X^r) dZ^j = N_{\tau_r} + \sum_{j=1}^m \int_0^{\tau-} 1_{[0,\tau_r]}a_{ij}(X^r)dZ^j $$
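For completeness, here is the justification of these two equalities, which were the sticking point discussed in the comments (it uses $\tau_r < \tau_{r+1}$, also discussed there): since $\tau_r < \tau_{r+1}$, the definition of $X$ gives $X_{\tau_r} = X^{r+1}_{\tau_r}$, and the second case in the definition of $X^{r+1}$, applied at $t = \tau_r$, gives $$ X^{r+1}_{\tau_r} = F(X^r)_{\tau_r}. $$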

On $[0,\tau)$, we will now apply dominated convergence in probability. Indeed, $1_{[0,\tau_r]}a_{ij}(X^r) \to 1_{[0,\tau)}a_{ij}(X)$ a.s. by an argument similar to an earlier one in this answer. However, we also have $$ |1_{[0,\tau_r]}a_{ij}(X^r)| \leq |a_{ij}(X^r) - a_{ij}(0)| + |a_{ij}(0)| $$ on $[0,\tau)$. The latter is integrable since $a_{ij}$ maps into $L^1(Z^j)$ for each $j$. The former is integrable since $|a_{ij}(X^r) - a_{ij}(0)| \leq K(X^r)^{*}_-$, which is a locally bounded, hence $Z^j$-integrable, process. It follows that $$ \lim_{r \to \infty} \left(N_{\tau_r} + \sum_{j=1}^m \int_0^{\tau-} 1_{[0,\tau_r]}a_{ij}(X^r)dZ^j\right) $$ exists, since $N_{\tau_r} \to N_{\tau-}$. It follows that $\lim_{r \to \infty} X_{\tau_r}$ exists a.s. on the event $\{\tau<\infty\}$; in particular, if $P(\tau<\infty)>0$, the limit exists with non-zero probability.

Contradiction

However, $\lim_{r \to \infty} X_{\tau_r}$ almost surely does not exist, since $$ X^r_{\tau_r} = F(X^{r-1})_{\tau_{r-1}} = X^r_{\tau_{r-1}} = X_{\tau_{r-1}} $$

(the first and second equalities follow from the definition of $X^r$, the third from the definition of $X$) and $$ X_{\tau_{r}} = X^{r+1}_{\tau_{r}} = F(X^{r})_{\tau_r} $$

Therefore $$ \|X_{\tau_r} - X_{\tau_{r-1}}\| = \|F(X^r)_{\tau_r} - X^r_{\tau_{r}}\| \geq \epsilon $$

by the definition of $\tau_r$. Thus, $X_{\tau_r}$ is almost nowhere convergent, contradicting the conclusion of the previous section. The only possibility is that $\tau=\infty$, as desired.

  • There is still one big problem I'm not able to sort out : why is it true that $\tau_{r+1}>\tau_{r}$? I tried figuring stuff out from the definition of $\tau$ but perhaps I've got stuck in the notation again. I think it's required in a couple of places, but there may be ways to get around it in any case. I'll move to the other problem : and I don't think it's required over here, because I didn't seem to use it? – Sarvesh Ravichandran Iyer Mar 12 '22 at 11:03
  • Great answer. A few questions. 1: I believe the stochastic integral in these notes, $\int_0^t X dZ$, is defined as $\int 1_{(0,t]} XdZ$. You use $\int 1_{[0,t]} XdZ$ instead. Does this not affect things in Answer 2? 2: Which argument are you referring to for $1_{[0,\tau_r]} a_{ij}(X^r)\to 1_{[0,\tau)}a_{ij}(X)$? – nomadicmathematician Mar 12 '22 at 23:14
  • I thought about the question $\tau_{r+1}>\tau_r$ as well. I think this also just follows from the construction of $X$ and $\tau_r$. The definition of $\tau_{r+1}$ gives the first time since $\tau_r$ that $X^{r+1}=F(X^r)_{\tau_r}$ and $F(X^{r+1})_{\tau_r}$ differ by at least $\epsilon$. But when we compare $F(X^r)_{\tau_r}$ and $F(X^{r+1})_{\tau_r}$, then the $N$ components are the same, and the stochastic integrals are also equal since $a_{ij}$ is a function that depends only on the times leading up to $\tau_r$ but not including $\tau_r$ (I think this is what George assumes) – nomadicmathematician Mar 13 '22 at 11:24
  • So since $X^{r+1}=X^r$ for $t<\tau_r$, the two stochastic integrals are equal, and at $\tau_r$, $X^{r+1}$ and $F(X^{r+1})$ cannot differ by more than $\epsilon$, and since they are both right continuous, either the set is empty (giving $\tau_{r+1}=\infty$) or $\tau_{r+1}>\tau_r$. I think either case works for us. – nomadicmathematician Mar 13 '22 at 11:26
  • @nomadicmathematician I'm sorry that I wasn't able to respond to your queries quickly. In the order that they were posted (1) I don't think it affects the argument, I went through it and felt it goes through. (2) I'm referring to the argument that shows that $X = X^{r+1}$ on $[0,t]$ for $ \tau_r\leq t < \tau_{r+1}$. An analogue should show that $X^r \to X$ almost surely on $[0,\tau)$ and therefore $1_{[0,\tau_r]}a_{ij}(X^r) \to a_{ij}(X)$ a.s. by the Lipschitz property of $a_{ij}$. Also, thanks for the additional comments around the $\tau_r$ monotonicity, I completely agree with what you say – Sarvesh Ravichandran Iyer Mar 25 '22 at 08:21
  • Perhaps $1_{[0,\tau_r]} a_{ij}(X^r) \to a_{ij}(X)$ is not true a.s. : but I think it's true if we treat it as convergence in probability, and that's enough for the stochastic integral to converge. In fact, looking back I don't think one gets a.s. convergence because the supremum process doesn't go to zero a.s. : but it does go to zero in probability which should be enough. That requires an argument which I will fulfill – Sarvesh Ravichandran Iyer Mar 25 '22 at 08:23
  • I'm sorry Sarvesh I don't really see how we get $1_{[0,\tau_r]} a_{ij}(X^r) \to a_{ij}(X)$ in probability at the moment. Could you add this argument to the proof if possible? – nomadicmathematician Mar 25 '22 at 11:19
  • @nomadicmathematician I'll do that, since it's important. I've also answered the other question some time ago, you can take a look. – Sarvesh Ravichandran Iyer Mar 25 '22 at 11:27
  • Thank you. Just went through the other answer it seems perfect! – nomadicmathematician Mar 25 '22 at 17:17