
Let me preface this by saying that basically the same question has been asked before on MathStackExchange. However, there is one small detail in an exercise that I cannot reconcile. The following question is Exercise 4.1 in "Probability with Martingales" by David Williams:


Exercise 4.1: Let $(\Omega, \mathcal{F}, P)$ be a probability triple. Let $\mathcal{I}_1$, $\mathcal{I}_2$, and $\mathcal{I}_3$ be three $\pi$-systems on $\Omega$ such that for $k=1,2,3$, $$\mathcal{I}_k\subseteq\mathcal{F}\quad \text{and} \quad\Omega\in \mathcal{I}_k.$$ Prove that if $$P(I_1\cap I_2\cap I_3) = P(I_1) \cdot P(I_2) \cdot P(I_3)$$ whenever $I_k\in \mathcal{I}_k$ ($k=1,2,3$), then $\sigma(\mathcal{I}_1)$, $\sigma(\mathcal{I}_2)$, and $\sigma(\mathcal{I}_3)$ are independent. Why did we require that $\Omega\in\mathcal{I}_k$?


It appears that one can simply adapt the proof of the lemma on page 39 of his book, where $k=2$. However, that lemma made no assumption that $\Omega\in\mathcal{I}_k$ for $k=1,2$. I am therefore confused by the assumption in Exercise 4.1 and by the final question in the exercise. Is this simply a carelessly worded exercise, or am I missing something? The proof I have is as follows (mimicking the earlier lemma in Williams):

Proof of Exercise 4.1: Fix $I_1\in\mathcal{I}_1$ and $I_2\in\mathcal{I}_2$. Consider the mappings $$ J_3 \mapsto P(I_1\cap I_2\cap J_3) \quad \text{and}\quad J_3\mapsto P(I_1)\cdot P(I_2)\cdot P(J_3),$$ for $J_3\in\sigma(\mathcal{I}_3)$. One can verify that these are measures on the measure space $(\Omega, \sigma(\mathcal{I}_3))$, that they both have a total mass of...

OH WAIT! As I was typing the "..." above, I realized something. Recall that Williams's earlier lemma used the result that if two finite measures agree on a $\pi$-system and have the same total mass, then they agree on the $\sigma$-algebra generated by the $\pi$-system. The technical issue here is that the total masses of the two measures above are $$P(I_1\cap I_2)\quad \text{and}\quad P(I_1)\cdot P(I_2),$$ respectively. If $\Omega\not\in \mathcal{I}_3$, these two numbers need not be equal. Repeating this approach, constructing pairs of measures two more times, runs into the same issue, which is why we also need $\Omega\in\mathcal{I}_k$ for $k=1,2$.
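To spell out how the hypothesis repairs this: writing $\mu_1(J_3) = P(I_1\cap I_2\cap J_3)$ and $\mu_2(J_3) = P(I_1)\cdot P(I_2)\cdot P(J_3)$, the assumption $\Omega\in\mathcal{I}_3$ lets us take $I_3=\Omega$ in the product hypothesis, which forces the two total masses to agree:
$$\mu_1(\Omega) = P(I_1\cap I_2\cap \Omega) = P(I_1)\,P(I_2)\,P(\Omega) = P(I_1)\,P(I_2) = \mu_2(\Omega).$$
With equal total masses, the uniqueness lemma applies and the two measures agree on all of $\sigma(\mathcal{I}_3)$.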

This raises the following question: Is the requirement that $\Omega\in \mathcal{I}_k$ only an issue for this particular proof? The same question in this post does not require this assumption. However, the proof there made use of the $\pi$–$\lambda$ theorem, which was not directly used in Williams's earlier lemma for the $k=2$ case (although recall that that proof did not require $\Omega\in\mathcal{I}_k$ for $k=1,2$).

Satana

2 Answers


What if $\mathcal I_3$ consists of just the empty set? Then the hypothesis holds for any $\mathcal I_1$ and $\mathcal I_2$; yet $\mathcal I_1$ and $\mathcal I_2$ need not be independent.
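As a sanity check, here is a small numerical sketch of this counterexample on a four-point space (the particular space and events are my own choices, not from the answer):

```python
from itertools import product
from fractions import Fraction

# Finite probability space: Omega = {0,1,2,3} with the uniform measure.
# An "event" is a frozenset of outcomes; P(A) = |A| / 4.
OMEGA = frozenset({0, 1, 2, 3})

def P(A):
    return Fraction(len(A), len(OMEGA))

# Two pi-systems whose generated sigma-algebras are NOT independent,
# and a third consisting of just the empty set.
I1 = [frozenset({0})]
I2 = [frozenset({0, 1})]
I3 = [frozenset()]

# The triple-product hypothesis holds for every choice of I_k in I_k:
# both sides are 0, because intersecting with the empty set gives the empty set.
for A, B, C in product(I1, I2, I3):
    assert P(A & B & C) == P(A) * P(B) * P(C)

# Yet the pairwise product formula fails for I1 and I2:
A, B = I1[0], I2[0]
print(P(A & B), P(A) * P(B))  # prints: 1/4 1/8
```

The triple-product condition is satisfied vacuously through $\mathcal I_3$, while the pairwise condition, and hence independence, fails.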

  • Thanks Kavi - you made me realize something. I added it to the end of my question. I also posed another question. Would you be able to answer the new question? Thank you. – Satana Aug 31 '18 at 00:37

Regarding your new question: The hypothesis $\Omega\in{\cal I}_k$ for each $k$ is necessary when we're talking about independence of sigma-algebras generated from more than two $\pi$-systems, for the reason that you discovered. (The other question that you linked should have required the hypothesis. Note that the accepted answer does assume that $\Omega$ is among the sets for which the independence holds.)

When there are only two $\pi$-systems, the assertion $P(\Omega\cap H)=P(\Omega)P(H)$ holds automatically (since $P(\Omega)=1$), so including the hypothesis would be unnecessary.
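Concretely, in the two-system case one fixes $G\in\mathcal{I}_1$ and compares the measures $H\mapsto P(G\cap H)$ and $H\mapsto P(G)P(H)$ on $\sigma(\mathcal{I}_2)$; their total masses agree with no extra assumption:
$$P(G\cap\Omega)=P(G)=P(G)\cdot 1=P(G)\,P(\Omega).$$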

If you look closely at the proof of Lemma 1.6 in Williams' Appendix A, you'll see that the requirement $\mu_1(\Omega)=\mu_2(\Omega)$ is necessary before we can apply Dynkin's lemma.

grand_chat