
Let $\{X_n\}_n$ be a sequence of i.i.d. (discrete) random variables with expected value $\mu$, and let $\{B_n\}_n$ be a sequence of bounded random variables. Assume that for all $n$, $B_n$ is independent of $X_n$ but not necessarily of $X_i$ for $i<n$. I am interested in whether the almost sure convergence $$\frac{B_1X_1+\cdots + B_nX_n}{B_1+\cdots + B_n}\to \mu$$ holds. Of course, when $B_n = 1$ for all $n$, the result holds by the strong law of large numbers, and when the $B_i$ are (possibly different) constants, there are some results in the literature on weighted sums.

I am interested in finding a reference for the more general case, where the weights are random variables. As a concrete example, think of $X_n$ as the result of a unit bet on red in roulette, so $X_n \in \{0, 2\}$, and of $B_n$ as the size of the bet at time $n$, chosen based on previous outcomes but independent of the current spin. When the bets are all equal to $1$ (or to some constant $b$), the sequence clearly converges to $36/37$ (the probability of winning times the payoff). It seems reasonable that any bounded betting strategy depending on the past should not change this limit, but I do not currently see a clear proof.
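For illustration, here is a minimal simulation sketch of this example (the capped doubling rule below is just one arbitrary bounded, past-dependent strategy chosen for concreteness), comparing the weighted average with $36/37 \approx 0.973$:

```python
import random

random.seed(0)

p_red = 18 / 37      # probability that a bet on red wins
mu = 2 * p_red       # E[X_n] = 36/37

n_spins = 200_000
bet = 1.0            # B_1; later bets depend only on past outcomes
num = 0.0            # running sum of B_k * X_k
den = 0.0            # running sum of B_k

for _ in range(n_spins):
    x = 2.0 if random.random() < p_red else 0.0  # X_n, independent of the current bet
    num += bet * x
    den += bet
    # past-dependent strategy: double after a loss, reset after a win,
    # capped so that 1 <= B_n <= 10 (bounded, and bounded away from 0)
    bet = 1.0 if x > 0 else min(2.0 * bet, 10.0)

print(num / den, mu)  # weighted average vs. 36/37
```

Of course, such a simulation is only a sanity check, not a proof.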

Does anyone know of a reference for this kind of result, or have an idea of how to prove it?

Lios

2 Answers


Note that $$ \frac{B_1X_1+\cdots+B_nX_n}{B_1+\cdots+B_n}=\mu+\frac{B_1(X_1-\mu)+\cdots+B_n(X_n-\mu)}{B_1+\cdots+B_n}. $$ The numerator of the fraction on the right is a martingale, to which the strong law of large numbers applies (https://www.ams.org/journals/tran/1968-131-01/S0002-9947-1968-0221562-X/S0002-9947-1968-0221562-X.pdf) provided $$ \sum_{n=1}^\infty\frac{E[B_n^2(X_n-\mu)^2]}{n^2}<\infty. $$ For this, all we need is that the $X_n$ are i.i.d. with finite variance (since the $B_n$ are assumed uniformly bounded). Thus, $$ \frac{B_1(X_1-\mu)+\cdots+B_n(X_n-\mu)}{n}\to0 $$ a.s., so for the desired result, namely $$ \frac{B_1X_1+\cdots+B_nX_n}{B_1+\cdots+B_n}\to\mu $$ a.s., it suffices to assume the fairly mild condition $$ \liminf_{n\to\infty}\frac{B_1+\cdots+B_n}{n}>0 $$ a.s. This holds if, for example, bet sizes are bounded away from $0$.
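To spell out the summability check: writing $C$ for a uniform bound on the $|B_n|$ and $\sigma^2$ for the common (finite) variance of the $X_n$, we have $B_n^2(X_n-\mu)^2 \le C^2(X_n-\mu)^2$ pointwise, so $$ \sum_{n=1}^\infty\frac{E[B_n^2(X_n-\mu)^2]}{n^2}\le C^2\sigma^2\sum_{n=1}^\infty\frac{1}{n^2}<\infty. $$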


This answer is not right, because $(\tilde{B}\cdot Y)_N$ is not a martingale, as @lonza leggiera's comment points out. I will still leave it up, though, since the part about convergence in probability is still correct.


Define $Y_n = X_n - \mu$ (so the $Y_n$ are the centered variables). Then $\sum_{n\leq N} Y_n$ is a martingale, and so is $$(\tilde{B}\cdot Y)_N = \frac{\sum_{n\leq N} B_nY_n}{\sum_{n\leq N} B_n},$$ and in particular it is a supermartingale. Assuming that the distribution of the $X_n$ has compact support, $(\tilde{B}\cdot Y)_N$ is uniformly bounded in $L^1(\mathbb{P})$, so Doob's forward convergence theorem applies. Hence it is sufficient to prove that $(\tilde{B}\cdot Y)_N \to 0$ to conclude.

Now the conditional Markov inequality gives, for a nonnegative r.v. $X$ and an $\mathcal{F}_{n-1}$-measurable r.v. $Y>0$ with $EY < \infty$, that $$ \mathbb{P}\left(X > Y \mid \mathcal{F}_{n-1}\right) \leq \frac{\mathbb{E}(X \mid \mathcal{F}_{n-1})}{Y}.$$

So, assuming that $B_n >0$ and $EB_n < \infty$, we have $$\displaylines{\mathbb{P}\left(\left\lvert \sum_{n\leq N}B_nY_n\right\rvert \geq \epsilon\sum_{n \leq N}B_n \;\middle|\; \mathcal{F}_{n-1} \right) \leq \frac{\mathbb{E}\left(\left(\sum_{n\leq N}B_nY_n\right)^2 \,\middle|\, \mathcal{F}_{n-1}\right)}{\epsilon^2\left(\sum_{n\leq N}B_n\right)^2} \\ = \frac{\sum_{n\leq N}\sum_{m\leq N}B_nB_m\mathbb{E}(Y_nY_m)}{\epsilon^2\left(\sum_{n\leq N}B_n\right)^2} \leq \frac{C\sum_{n\leq N}B_n^2}{\epsilon^2\left(\sum_{n\leq N}B_n\right)^2}.}$$

What we now need in order to conclude is that $$\mathbb{E}\bigg(\frac{\sum_{n\leq N}B_n^2}{\sum_{n \neq m \leq N}B_nB_m}\bigg) \to 0.$$

Since the quantity in the brackets is nonnegative (and, under the boundedness assumption below, dominated by a constant), it suffices that $$\frac{\sum_{n\leq N}B_n^2}{\sum_{n \neq m \leq N}B_nB_m} \to 0 \quad \text{a.s.}$$ This is the case, for example, if the $B_n$ are bounded away from $0$, i.e. $0 < c \leq B_n \leq C$ a.s.: then $$\frac{\sum_{n\leq N}B_n^2}{\sum_{n \neq m \leq N}B_nB_m} \leq \frac{C^2N}{c^2N(N-1)} = \frac{C^2}{c^2(N-1)} \to 0.$$

There are probably other conditions that can take the place of the 'bounded away from zero' assumption. However, there are (at least) two necessary conditions, namely that $\sup_n EB_n < \infty$ and $\sum_{n\geq 1} B_n = \infty$ a.s. Intuitively, the latter condition ensures that the tail is always large enough to overcome deviations at the beginning of the sequence, while the former ensures that there are no 'spikes' in the convergence behaviour of $(\tilde{B}\cdot Y)_N$. It is easy to construct counterexamples when either of these two conditions fails; one simple instance for the second condition is sketched below. Maybe I will update the post with more counterexamples later.
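As a minimal instance for the second condition (with deterministic weights, just for illustration): take $B_n = 2^{-n}$, which are bounded and trivially independent of the $X_n$, but satisfy $\sum_{n\geq 1} B_n = 1 < \infty$. Then $$\frac{\sum_{n\leq N}B_nX_n}{\sum_{n\leq N}B_n} \to \sum_{n=1}^{\infty} 2^{-n}X_n \quad \text{a.s.},$$ which is a non-degenerate random variable whenever the $X_n$ are non-degenerate, so the limit is not almost surely equal to $\mu$.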

Enforce
  • Thank you for the reply. I have also thought about similar reasoning, as in my other question, possibly with $\sum B_k(X_k-\mu) / \sum B_k$, which, if I am not mistaken, should be a martingale. However, as you mention, I fail to arrive at an explicit description of the limit. – Lios Apr 30 '25 at 07:10
  • @Lios see the edit to my answer :) – Enforce May 01 '25 at 13:20
  • @Lios I don't believe $\frac{\sum_{k=1}^nB_k\big(X_k-\mu\big)}{\sum_{k=1}^nB_k}$ is a martingale in general. If $B_j=1$ almost surely for all $j$, for instance, and $\mathscr{F}_r$ is the $\sigma$-algebra generated by $\frac{\sum_{k=1}^jB_k\big(X_k-\mu\big)}{\sum_{k=1}^jB_k}=\frac{\sum_{k=1}^j\big(X_k-\mu\big)}{j}$ for $j=1,2,\dots,r$, then $$\mathbb{E}\left(\frac{\sum_{k=1}^n\big(X_k-\mu\big)}{n}\,\middle|\,\mathscr{F}_{n-1}\right)=\left(\frac{n-1}{n}\right)\left(\frac{\sum_{k=1}^{n-1}\big(X_k-\mu\big)}{n-1}\right). $$ – lonza leggiera May 02 '25 at 07:44
  • @lonzaleggiera you are absolutely right, thanks. Enforce, thanks. I have to admit that I need some time to see through the part about convergence in probability. For example, in the formula for the conditional inequality, do you mean $E[X\mid \mathcal{F}_{n-1}]$? And why is it enough to work with conditional probabilities to conclude convergence in probability? Also, after you define $X_n$, should you use it instead of $Y_n$ in the rest of the discussion? And finally, I am not really seeing how you use the classical SLLN (if you are actually using it) to find the bound for the probability... – Lios May 06 '25 at 07:08