There seems to be a subtle relationship between knowledge and measurability. If I have a stochastic process $(X_n)_n$, then, for example, a stopping time (other examples would be martingales, conditional expectations, etc.) is defined as a function $\tau: \Omega \rightarrow \mathbb{N}_0 \cup \{\infty\}$ such that $\{\omega: \tau(\omega) \le n\} \in \sigma (\{X_k; k\le n\})$ for every $n$. This is supposed to mean that, based on the outcomes of the first $n$ random variables, we can say whether or not we have stopped by time $n$.
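To make my mental picture concrete, here is a small sketch (my own illustration; the threshold $c$ and the first-passage time are just an example, not from any text). It takes $\tau = \inf\{n : X_n \ge c\}$ and checks that the event $\{\tau \le n\}$ can be decided by a function that only looks at the first $n$ values of the path:

```python
import random

def tau(path, c=1.0):
    """First index n (1-based) with path[n-1] >= c; None if the path never reaches c."""
    for n, x in enumerate(path, start=1):
        if x >= c:
            return n
    return None

def stopped_by(prefix, c=1.0):
    """Decide the event {tau <= n} using ONLY the first n values X_1, ..., X_n."""
    return any(x >= c for x in prefix)

# Sanity check on a simulated path: the prefix-based decision always agrees
# with the decision made with full knowledge of the whole path.
random.seed(0)
path = [random.gauss(0, 1) for _ in range(20)]
t = tau(path)
for n in range(1, len(path) + 1):
    assert stopped_by(path[:n]) == (t is not None and t <= n)
```

By contrast, the *last* time the path exceeds $c$ cannot be decided from any prefix, since it depends on the future; as I understand it, that is exactly what the measurability condition rules out.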
Now suppose we stop for a particular set of $\omega$'s, call it $A$, so that $A = \{\tau \le n\}$. How exactly do we get the idea that the first $n$ random variables tell us whether this event occurs and our stopping time is at most $n$? I do see that $A$ is somehow determined by what $X_1,\dots,X_n$ are, but I don't see HOW exactly $X_1,\dots,X_n$ tell us whether we need to stop or not. As I am apparently still missing a step in this interpretation, I would highly appreciate it if anybody could elaborate. Basically the question is: why does measurability mean that we can decide whether something is true or false?