
There seems to be a subtle relationship between knowledge and measurability. If I have a stochastic process $(X_n)_n$, then, for example, a stopping time (other examples would be martingales, conditional expectations, etc.) is defined as a function $\tau: \Omega \rightarrow \mathbb{N}_0 \cup \{\infty\}$ such that $\{\omega: \tau(\omega) \le n\} \in \sigma(\{X_k; k \le n\})$. This is supposed to mean that, based on the outcomes of the first $n$ random variables, we are able to say whether we stop or not.

Well, assume that we stop for a particular set of $\omega$'s called $A$. How exactly do we get the idea that the first $n$ random variables tell us whether this event occurs and our stopping time is at most $n$? I do see that $A$ is somehow determined by what $X_1, \ldots, X_n$ are, but I don't see HOW exactly $X_1, \ldots, X_n$ tell us whether we need to stop or not. As I am apparently still missing a step in this interpretation, I would highly appreciate it if anybody could elaborate on this. Basically the question is: why does measurability mean that we can decide whether something is true or false?

  • You may find this helpful: http://math.stackexchange.com/questions/331410/why-is-stopping-time-defined-as-a-random-variable/331474#331474 –  Jul 08 '14 at 18:41

1 Answer


The condition means that, for every $n$, there exists some measurable set $A_n$ such that $$\tau\leqslant n\iff(X_1,\ldots,X_n)\in A_n.$$ Thus, every positive stopping time $\tau$ can be rewritten as $$\tau=\inf\{n\geqslant1\mid(X_1,\ldots,X_n)\in A_n\},$$ for some well-chosen sequence $(A_n)$.
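To make the equivalence $\tau\leqslant n\iff(X_1,\ldots,X_n)\in A_n$ concrete, here is a minimal sketch in Python. It assumes a specific toy example (not from the answer): a $\pm 1$ random walk stopped the first time its partial sums reach $1$. The point is that membership of the prefix $(x_1,\ldots,x_n)$ in $A_n$ can be checked from those $n$ values alone, and that this check agrees with $\tau\leqslant n$ for every $n$:

```python
import random

def tau(xs):
    """First time n at which the partial sums of xs reach >= 1,
    or None if that never happens along the given path."""
    s = 0
    for n, x in enumerate(xs, start=1):
        s += x
        if s >= 1:
            return n
    return None

def in_A_n(prefix):
    """Membership of (x_1, ..., x_n) in A_n.
    Here A_n = {(x_1, ..., x_n) : some partial sum >= 1};
    it looks only at the first n values, never at the future."""
    s = 0
    for x in prefix:
        s += x
        if s >= 1:
            return True
    return False

random.seed(0)
path = [random.choice([-1, 1]) for _ in range(20)]
t = tau(path)

# Check tau <= n  <=>  (X_1, ..., X_n) in A_n, for every n:
assert all(
    (t is not None and t <= n) == in_A_n(path[:n])
    for n in range(1, len(path) + 1)
)
```

This is exactly the sense in which "the first $n$ random variables tell us whether we stop": the decision $\{\tau \le n\}$ is a fixed, deterministic function of the observed prefix, with no dependence on $X_{n+1}, X_{n+2}, \ldots$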

Did