
Thanks in advance for the help.

I have $n$ dependent, identically distributed Bernoulli r.v.s $X_1, \dots, X_n$ with success probability $p$. Consider $X = \sum_{i=1}^n X_i$. I know its first moment $\mu = \mathbb{E}[X]$ and its second moment $\nu = \mathbb{E}[X^2]$.

What I'm looking for is an upper bound on the quantity $\text{Pr}\left[X \ge 1 \right]$ that is sharper than the one given by simple Markov's inequality (perhaps one that also involves $\nu$?). Something which does the "opposite job" of the second moment method.

More generally, are there any known lower bounds (involving the first and second moments) on the probabilities $\text{Pr}\left[X \le (1 - \epsilon) \mu \right]$ and $\text{Pr}\left[X \le \mu - \lambda \right]$, for $\epsilon \in (0,1)$ and $0 < \lambda < \mu$?

I know there exists a "reverse Chernoff bound" (here) for independent Bernoulli r.v.s, but is anything known when the variables are dependent and the only known quantities are $\mu$ and $\nu$?

I should mention that I suspect the variables are positively correlated, but this is very difficult to prove.

EDIT: possibly, the third moment $\mathbb{E}[X^3]$ can be computed too.

  • Even if the $X_i$ are independent, $X\sim \mathrm{Bin}(n,p)$ and hence, with probability close to $1$, $X\ge 1$ (in this independent case $\mathbb{P}(X\ge 1)=1-\mathbb{P}(X=0)=1-(1-p)^n\to 1$). Or are you looking for something else? – van der Wolf Mar 15 '23 at 16:19
  • I'm looking for anti-concentration bounds: lower bounds to the probability of being far from the mean – CuriousGuy Mar 22 '23 at 15:07
  • Did you look at this: https://en.wikipedia.org/wiki/Paley%E2%80%93Zygmund_inequality ? Set $Z=X_1+\dots+X_n$. – van der Wolf Mar 24 '23 at 10:07
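
For reference, the inequality mentioned in the last comment does give a dependence-free bound of the requested form. Since $X$ takes nonnegative integer values, $\{X \ge 1\} = \{X > 0\}$, and the Paley–Zygmund (Cauchy–Schwarz) argument yields
$$
\text{Pr}\left[X \ge 1\right] = \text{Pr}\left[X > 0\right] \ge \frac{(\mathbb{E}[X])^2}{\mathbb{E}[X^2]} = \frac{\mu^2}{\nu},
$$
valid for any joint distribution of the $X_i$ with these two moments.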

1 Answer


Let us consider a couple of scenarios; in all cases the $X_i$ are Bernoulli($p$).

Suppose that the joint pmf of $(X_1,X_2)$ is given by the table $$\begin{array}{c|cc} X_1 \backslash X_2 & 0 & 1 \\ \hline 0 & a & b \\ 1 & c & d \end{array}$$ so that $a,b,c,d\ge 0$ and $a+b+c+d=1$. In order to have $X_i\sim \mathrm{Bernoulli}(p)$ we also demand that $a+b=1-p$, $a+c=1-p$, $b+d=p$, $c+d=p$.
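
As a side note (not in the original answer), these constraints are easy to solve explicitly: $a+b=a+c=1-p$ forces $b=c=1-p-a$, and then $c+d=p$ gives $d=2p-1+a$. Nonnegativity of all four entries holds exactly when $\max(0,\,1-2p)\le a\le 1-p$, so the extremal choice $a=0$ below is admissible precisely when $p\ge 1/2$.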

First consider the case $p\ge 1/2$, and let \begin{align} a&=0\\ b=c&=1-p\\ d&=2p-1 \end{align} In this case, $\mathbb{P}(X_1=0,X_2=0)=a=0$, and since $\{X=0\}\subseteq\{X_1=X_2=0\}$, it follows that $$ \mathbb{P}(X\ge 1)=1 $$ as long as $n\ge 2$.

In general, if $p\ge 1/k$ for some positive integer $k$, assume that the joint distribution of $(X_1,\dots,X_k)$ is such that each configuration with exactly one $1$ (i.e. a permutation of $(0,0,\dots,0,1)$) has probability $\frac{1-p}{k-1}$ and $\mathbb{P}((X_1,\dots,X_k)=(1,1,\dots,1))=\frac{kp-1}{k-1}\ge 0$. (The marginals are indeed Bernoulli($p$): $\mathbb{P}(X_i=1)=\frac{1-p}{k-1}+\frac{kp-1}{k-1}=p$.) For this distribution $\mathbb{P}(X_1+X_2+\dots+X_k=0)=\mathbb{P}(X_1=X_2=\dots=X_k=0)=0$ and hence $$ \mathbb{P}(X\ge 1)=1 $$ as long as $n\ge k$.
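
Not part of the original argument, but here is a minimal Python sketch (function and variable names are my own) that enumerates this joint distribution for a given $k$ and $p$ and checks the claims above: the masses sum to $1$, each marginal is Bernoulli($p$), and the all-zero configuration has probability $0$.

```python
from fractions import Fraction
from itertools import product

def joint_pmf(k, p):
    """Joint pmf on {0,1}^k: mass (1-p)/(k-1) on each configuration
    with exactly one 1, mass (k*p-1)/(k-1) on the all-ones configuration,
    and mass 0 elsewhere (in particular on the all-zero configuration)."""
    assert k >= 2 and p >= Fraction(1, k), "construction requires p >= 1/k"
    pmf = {}
    for x in product((0, 1), repeat=k):
        if sum(x) == 1:                      # a permutation of (0,...,0,1)
            pmf[x] = (1 - p) / (k - 1)
        elif sum(x) == k:                    # (1,1,...,1)
            pmf[x] = (k * p - 1) / (k - 1)
        else:
            pmf[x] = Fraction(0)
    return pmf

k, p = 4, Fraction(1, 3)                     # illustrative values with p >= 1/k
pmf = joint_pmf(k, p)

assert sum(pmf.values()) == 1                # total mass is 1
for i in range(k):                           # each X_i is Bernoulli(p)
    assert sum(q for x, q in pmf.items() if x[i] == 1) == p
assert pmf[(0,) * k] == 0                    # P(X_1 = ... = X_k = 0) = 0
print("all checks passed")
```

With $k=2$ and $p\ge 1/2$ this reproduces the first construction above ($a=0$, $b=c=1-p$, $d=2p-1$).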

van der Wolf