In the book Continuous-Time Markov Processes: An Introduction by Thomas M. Liggett, on page 22 (Section 1.7), the author introduces the notation $E^x$, corresponding to a probability measure $P^x$, which is the law of $x + B$, where $B$ is a standard Brownian motion.

I found a helpful discussion in this Math Stack Exchange post, but I am still confused about two points:

  1. Definition of $E^x(Y)$ for a general bounded random variable $Y$:

    • From the book, I see that $E^x(B(t))=E(x+B(t))$, which suggests that the expectation under $P^x$ simply shifts the process by $x$.
    • However, for a general bounded random variable $Y$, how do we define $E^x(Y)$? Is it always true that $E^x(Y) = E(Y + x)$? In particular, since $Y$ is not necessarily a function of the Brownian motion $B$, the expression $\int Y \, dP^x$ seems to be meaningless.
  2. Understanding the Markov property (Theorem 1.46) and the meaning of $E^{X(s)}(Y) $:

    • The theorem states a Markov property in terms of $E^{X(s)}(Y)$, but I am struggling to interpret it.
    • It seems that $E^{X(s)}(Y)$ modifies the expectation by shifting the starting point of the process, rather than matching the more typical formulation of the Markov property in terms of conditional expectation:
      $$ E(Y \mid \mathcal{F}_s) = E(Y \mid B(s)), $$ where $Y$ is a function of the Brownian motion from time $s$ onward.
    • How exactly should one interpret $E^{X(s)}(Y)$, and how does this formulation relate to the classical Markov property?

I would appreciate any clarifications on these points!

1 Answer


Question 1

The Math Stack Exchange post that you cite makes it very clear that the sample space $\Omega$ in this setup is the set of all possible sample paths of a Brownian motion: every $\omega \in \Omega$ represents one realization of the complete path. (Recall that these sample paths are almost surely continuous, so the sample space can be taken to be the space of all continuous functions of time.) Every random variable, being a (measurable) function of $\omega$, is therefore a (measurable) function of the Brownian path. Your worry that $Y$ is not necessarily a function of the Brownian motion $B$ is unfounded.

The same Math Stack Exchange post also explains that the shift operation starts by shifting all sample paths by $x$. Imagine a sample path plotted with time on the horizontal axis and the value of the Brownian motion on the vertical axis: the entire path is shifted vertically by $x$, so in particular the Brownian motion now starts at $x$ instead of $0$. The measure $P^x$ is then, by definition, the law of this shifted process.
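
To make the shift concrete, here is a minimal numerical sketch (my own illustration, not from the book; the values of $x$ and $t$ are arbitrary choices): simulate discretized standard Brownian paths, shift every path vertically by $x$, and check that the shifted process at time $t$ has the $N(x, t)$ distribution that $P^x$ prescribes.

```python
import numpy as np

rng = np.random.default_rng(0)

x, t, n_paths, n_steps = 1.5, 2.0, 100_000, 200
dt = t / n_steps

# Standard Brownian paths started at 0: cumulative sums of N(0, dt) increments.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

# P^x is the law of the vertically shifted path x + B.
shifted = x + paths

# Under P^x, B(t) ~ N(x, t): the sample mean and variance at time t confirm this.
print(shifted[:, -1].mean())  # ~ 1.5 (= x)
print(shifted[:, -1].var())   # ~ 2.0 (= t)
```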

$E^x$ is the expectation with respect to this measure. It is then immediate that $E^x(B(t)) = E(x + B(t))$. But if you had $Y = f(B(t))$, then $E^x(Y) = E^x(f(B(t))) = E(f(x + B(t)))$. More generally, if $Y = F(B)$ is any bounded (measurable) function of the whole path, then $E^x(Y) = E(F(x + B))$. In other words, you do not add $x$ to the expectation: you add $x$ to the sample path of the Brownian motion, evaluate the function on the shifted path, and then take the expectation.
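
A quick Monte Carlo check makes the distinction vivid (a sketch with the arbitrary choices $f(b) = b^2$, $x = 1.5$, $t = 2$, for which $E^x(f(B(t))) = t + x^2$ in closed form): adding $x$ inside $f$ and adding $x$ to the expectation give genuinely different numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
x, t, n = 1.5, 2.0, 1_000_000

b_t = rng.normal(0.0, np.sqrt(t), size=n)  # B(t) under P, i.e. started at 0
f = lambda b: b ** 2                       # a nonlinear test function

# Correct: shift the path, then apply f and take the expectation.
print(np.mean(f(x + b_t)))   # ~ t + x**2 = 4.25, the exact value of E^x(f(B(t)))

# Incorrect: apply f first and add x to the expectation afterwards.
print(np.mean(f(b_t)) + x)   # ~ t + x = 3.5, a different number
```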

Question 2

Recall that Brownian motion has independent increments, and the increment of the process from time $s$ to $s+\tau$ has the same distribution as the increment from time $0$ to $\tau$. So if we want to know what happens from time $s$ to $s+\tau$ given that the Brownian motion has the value $b = B(s)$ at time $s$, we can equivalently ask what happens from time $0$ to $\tau$ if the process starts at $b$ at time $0$. This is exactly the Markov property: once we know $B(s)$, the values of $B(t)$ prior to $s$ are irrelevant, so we may discard the entire past history and imagine the Brownian motion starting afresh at the value $b = B(s)$, which means using the shifted probability measure $P^b$ and the shifted expectation $E^b$. In symbols, if $Y$ is a bounded function of the path and $\theta_s$ denotes the time shift $(\theta_s \omega)(t) = \omega(s+t)$, the theorem says $E(Y \circ \theta_s \mid \mathcal{F}_s) = E^{X(s)}(Y)$. The right-hand side is a function of $X(s)$ alone, which recovers the classical statement that, given the present, the past does not matter.
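
The following sketch checks this numerically (my own illustration; the values of $s$, $\tau$, $b$, the test function $f(v) = \max(v, 0)$, and the width of the conditioning band are all arbitrary choices): the conditional expectation of $f(B(s+\tau))$ given $B(s) \approx b$, estimated by keeping only the paths whose value at time $s$ falls in a thin band around $b$, agrees with the expectation of $f$ under a fresh Brownian motion started at $b$, i.e. with $E^b(f(B(\tau)))$.

```python
import numpy as np

rng = np.random.default_rng(2)
s, tau, b, n = 1.0, 0.5, 0.7, 2_000_000
f = lambda v: np.maximum(v, 0.0)  # a test function of the future value

# Simulate B(s), then B(s + tau) = B(s) + an independent N(0, tau) increment.
b_s = rng.normal(0.0, np.sqrt(s), size=n)
b_st = b_s + rng.normal(0.0, np.sqrt(tau), size=n)

# Left side: E[f(B(s + tau)) | B(s) = b], approximated on a thin band around b.
mask = np.abs(b_s - b) < 0.01
print(np.mean(f(b_st[mask])))

# Right side: E^b(f(B(tau))) = E[f(b + B(tau))], a fresh motion started at b.
print(np.mean(f(b + rng.normal(0.0, np.sqrt(tau), size=n))))
```

The two printed values agree up to Monte Carlo error, illustrating that conditioning on the present value $b$ and restarting under $P^b$ are the same operation.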