
Suppose a discrete stochastic process $X_n(\omega)$ is defined on a common probability space $(\Omega,\mathcal{F},P)$ and takes only $N$ steps. The state space of each $X_n$ has $K$ elements, so there are at most $K^N$ distinct sample paths. Since each $\omega$ in the space $\Omega$ of elementary outcomes uniquely corresponds to a sample path, there will be $K^N$ different $\omega$ in $\Omega$, i.e., $|\Omega|=K^N$. But if we add another step to the process, $N\rightarrow N+1$, then by the same logic $|\Omega|=K^{N+1}$. This is what confuses me: why is $\Omega$ dependent on the number of steps of the process?
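The counting above is easy to check directly. A minimal sketch, assuming the small hypothetical values $K=2$ and $N=3$: each sample path is one element of the $N$-fold product of the state space, so adding a step multiplies the number of paths by $K$.

```python
from itertools import product

# Hypothetical small example: K states, N steps.
K, N = 2, 3
states = range(K)

# Each sample path is one element of the product space states^N.
paths_N = list(product(states, repeat=N))
assert len(paths_N) == K ** N  # K^N = 8 distinct paths

# Adding one more step multiplies the number of paths by K.
paths_N_plus_1 = list(product(states, repeat=N + 1))
assert len(paths_N_plus_1) == K ** (N + 1)  # K^(N+1) = 16 distinct paths
```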

Also, each $X_n(\omega)$ is a measurable function on $\Omega$. Assuming the $X_n$ are independent of each other, adding the extra $X_{N+1}(\omega)$ shouldn't change the measurability or the law of any preceding random variable. But we do see that $\Omega$ is expanded to accommodate $X_{N+1}(\omega)$, so how do we guarantee that $X_n$ ($n\le N$) is still measurable on the expanded $\Omega$? And since $\Omega$ is expanded, these $X_n$ are now defined on a new space, so they are not literally the same functions as before. Can we still say they are the same random variables?

  • It seems to me the question is essentially the same as: If we flip one coin then the sample space is $\Omega=\{H,T\}$. If we flip two coins then $\Omega = \{(H,H), (H,T), (T,H), (T,T)\}$, why? – Michael Jan 30 '20 at 05:23
  • @Michael. My confusion is that the second $\Omega$, with four outcomes, seems to be defined for the "process" but not for each component (each toss). For each toss, the probability space $\{H,T\}$ is more appropriate. However, in stochastic process theory, the random variable at each time step also has that expanded probability space with many redundant elements. – yixianshuiesuan Jan 30 '20 at 18:08
  • Yes. For example, if each outcome $\omega \in \Omega$ is a vector, then random variables $X(\omega)$ and $Y(\omega)$ might only depend on one component of the vector. We just use a sample space large enough to contain all the things we want. In the 2-coin toss example we must use a sample space at least as large as $\{(H,H), (H,T), (T,H), (T,T)\}$; it is not appropriate to define different sample spaces for different tosses, because we need to be able to measure the probability associated with the tosses in relation to each other (such as "what is the probability they are both tails?"). – Michael Jan 30 '20 at 21:48
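The point in the last comment can be made concrete in a short sketch (hypothetical names, assuming two fair coins): both tosses are coordinate projections on one four-element sample space, and that single space supports both the marginal law of each toss and joint events like "both tails".

```python
from fractions import Fraction

# Joint sample space for two fair coin flips, with the uniform measure.
omega_space = [(a, b) for a in "HT" for b in "HT"]
P = {w: Fraction(1, 4) for w in omega_space}

# Each toss is a random variable on the SAME space: a coordinate projection.
def X1(w):
    return w[0]

def X2(w):
    return w[1]

# The marginal of each toss recovers the one-coin law on {H, T}.
p_X1_heads = sum(p for w, p in P.items() if X1(w) == "H")
assert p_X1_heads == Fraction(1, 2)

# Joint events only make sense because both tosses live on one space.
p_both_tails = sum(p for w, p in P.items() if X1(w) == "T" and X2(w) == "T")
assert p_both_tails == Fraction(1, 4)
```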

1 Answer


"Since each $\omega$ in the space $\Omega$ of elementary outcomes uniquely corresponds to a sample path, there will be $K^N$ different $\omega$ in $\Omega$, i.e., $|\Omega|=K^N$..."

Assuming that every path can be followed, it is correct to state that there are at least $K^N$ different outcomes in $\Omega$, but it is wrong to state that there are exactly $K^N$ different outcomes in $\Omega$. This is first of all because the stochastic process at most imposes some constraints on $\Omega$, but certainly does not determine $\Omega$. On the contrary, I would say: in modeling the situation we have enormous freedom when it comes to the construction of a suitable probability space.

As an example (already mentioned in Michael's comment): if we flip a fair coin just once, then we can choose $\Omega=\{H,T\}$ equipped with the power set $\wp(\Omega)$ as $\sigma$-algebra and the probability measure determined by $P(\{H\})=0.5$.

But nothing stops us from choosing e.g. $\Omega=\mathbb R^{3}$ equipped with the $\sigma$-algebra $\mathcal B(\mathbb R^3)$, together with a random variable $X:\Omega\to\mathbb R$ prescribed by $(\omega_1,\omega_2,\omega_3)\mapsto\omega_1$. If we interpret a negative result as Tails and a non-negative result as Heads, then concerning the probability measure $P$ on $(\mathbb R^3,\mathcal B(\mathbb R^3))$ it is enough to demand that $P\left(X^{-1}([0,\infty))\right)=P([0,\infty)\times\mathbb R\times\mathbb R)=0.5$. Note that without encountering any problems we can go for $3$ fair coin flips by prescribing $Y,Z:\Omega\to\mathbb R$ by $\omega\mapsto\omega_2$ and $\omega\mapsto\omega_3$ respectively, with the extra demands $P(\mathbb R\times[0,\infty)\times\mathbb R)=P(\mathbb R\times\mathbb R\times[0,\infty))=0.5$. (If we also want the three flips to be independent, we should further demand that $P$ assigns probability $2^{-3}$ to each of the eight octants.)
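One concrete measure on $\mathbb R^3$ meeting the demands above is the product of three standard normals; this is one assumed choice, not the only one. A Monte Carlo sketch (hypothetical helper names) checks that the coordinate projections then behave like fair, independent coin flips:

```python
import random

random.seed(0)

# One assumed measure on R^3 satisfying the demands in the answer:
# three independent standard normals (each coordinate is >= 0 with prob 1/2).
def draw_omega():
    return (random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))

# Coordinate projections, interpreted as coin flips: >= 0 -> Heads.
def flip(coord, omega):
    return "H" if omega[coord] >= 0 else "T"

n = 100_000
samples = [draw_omega() for _ in range(n)]

# Marginal of X (first coordinate) should be close to a fair coin.
freq_X_heads = sum(flip(0, w) == "H" for w in samples) / n

# All three heads should occur with frequency close to (1/2)^3 = 0.125.
freq_all_heads = sum(all(flip(i, w) == "H" for i in range(3)) for w in samples) / n

assert abs(freq_X_heads - 0.5) < 0.01
assert abs(freq_all_heads - 0.125) < 0.01
```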

So if you are dealing with a discrete stochastic process having $N$ steps, then just think of an underlying probability space that makes it possible to do "extra" steps. Actually an "expansion of the probability space" is rarely needed, because we just start with one that has all the benefits we want.

Also have a look at this question and its answer.

drhab
  • Thanks. So for a continuous random process such as 1-D Brownian motion, we can always specify a maximal probability space, e.g., $\mathbb{R}^{\mathbb{R}^+}$, to contain all the sample paths, and then declare that those elements of $\mathbb{R}^{\mathbb{R}^+}$ which are not continuous are excluded from consideration (or together have measure zero)? – yixianshuiesuan Jan 30 '20 at 18:04
  • I must confess that I am not quite familiar with Brownian motion and/or the way a suitable underlying probability space is usually constructed in that situation. So it would be reckless of me to confirm or deny what you are saying in your comment. More or less I live in the good faith that a suitable probability space can always be constructed. I do not always know how, but that does not keep me from believing it. If you want to know more about Brownian motion in this context, you could consider asking a question about that, perhaps with a link to this question. – drhab Jan 30 '20 at 18:14