
Consider a probability distribution $D$ over $n$-bit strings. Let $U$ denote the uniform distribution over $n$-bit strings and $U_{n}$ the uniform distribution over the integers $\{1, 2, \ldots, n\}$.

Consider the following two statements, which are equivalent by Yao's theorem:

  1. There is no uniform polynomial time next bit predictor $A$ such that $$ \Pr_{\substack{X \sim D \\ M \sim U_{n}}}[A(X_1 X_2 \cdots X_{M-1})=X_M] \geq \frac{1}{2} + \frac{1}{\operatorname{poly}(n)}. $$
  2. There is no uniform polynomial time distinguisher $B$ such that $$ \left|\Pr_{X \sim D}[B(X)=1] - \Pr_{Y \sim U}[B(Y)=1]\right| \geq \frac{1}{\operatorname{poly}(n)}. $$
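
For concreteness, here is a minimal sketch (Python, purely illustrative; the helper names are mine, not from any library) of the easy direction of this equivalence: a next bit predictor $A$ with advantage $\varepsilon$ immediately yields a distinguisher $B$ with the same advantage, since under $U$ the bit $X_M$ is independent of the prefix and any guess is correct with probability exactly $1/2$.

```python
import random

def predictor_to_distinguisher(A, n):
    """Hypothetical wrapper: turn a next-bit predictor A into a distinguisher B.

    A(prefix) returns a guess in {0, 1} for the bit following `prefix`.
    B(x) picks a uniform position m and outputs 1 iff A predicts x[m]
    correctly from x[:m] (0-indexed here; the math above is 1-indexed).
    """
    def B(x):
        m = random.randrange(n)                 # M ~ U_n
        return 1 if A(x[:m]) == x[m] else 0
    return B

# Toy check: D = "n copies of one uniform bit", so repeating the last seen bit
# predicts perfectly whenever m >= 1.  Under U the distinguisher outputs 1
# with probability exactly 1/2, so the gap is roughly 1/2 - 1/(2n).
n = 8
A = lambda prefix: prefix[-1] if prefix else random.randrange(2)
B = predictor_to_distinguisher(A, n)
sample_D = lambda: [random.randrange(2)] * n
sample_U = lambda: [random.randrange(2) for _ in range(n)]
trials = 20_000
gap = (sum(B(sample_D()) for _ in range(trials))
       - sum(B(sample_U()) for _ in range(trials))) / trials
print(gap)  # ≈ 0.44 for n = 8
```

The converse direction is the standard hybrid argument, and is where the uniform distribution is used more heavily: the distinguisher's advantage is split over the $n$ hybrids $X_1 \cdots X_i Y_{i+1} \cdots Y_n$, and the predictor guesses whichever bit makes the distinguisher more likely to accept.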

I was thinking about the centrality of the uniform distribution in the reductions. Will some form of these statements work when we replace the uniform distribution by a known and efficiently samplable distribution $D_2$? Let's say we can also efficiently sample from every marginal of $D_2$.

In other words, consider Statement 3 as follows.

Statement 3: There is no uniform polynomial time distinguisher $B$ such that $$ \left|\Pr_{X \sim D}[B(X)=1] - \Pr_{Y \sim D_2}[B(Y)=1]\right| \geq \frac{1}{\operatorname{poly}(n)}. $$

Does Statement 3 imply, and/or is it implied by, a Statement 4 like the following (or some variant of it)?

Statement 4: There is no uniform polynomial time next bit predictor $A$ such that $$ \Pr_{\substack{X \sim D \\ M \sim U_{n}}}[A(X_1 X_2 \cdots X_{M-1})=X_M] \geq c, $$ where $c$ depends on the distribution $D_2$ and on the distinguishing advantage from Statement 3.

If so, can we have an explicit form for $c$?
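
Purely to make the question concrete, and not claiming this is the right formulation: since $D_2$ and its marginals are efficiently samplable, one natural candidate for $c$ is the predictor's own next-bit success rate on $D_2$ plus $1/\operatorname{poly}(n)$, which can be estimated empirically. A hypothetical estimator (illustrative only, nothing from the literature):

```python
import random

def next_bit_success(A, sampler, n, trials=50_000):
    """Estimate Pr[A(X_1..X_{M-1}) = X_M] for X drawn from `sampler`
    and M uniform (0-indexed below).  `sampler` is assumed to return a
    length-n list of bits; this is an illustrative helper only."""
    hits = 0
    for _ in range(trials):
        x = sampler()
        m = random.randrange(n)
        hits += (A(x[:m]) == x[m])
    return hits / trials

# e.g., with sample_D2() an (assumed) sampler for D_2 and A a candidate
# predictor, next_bit_success(A, sample_D2, n) estimates the baseline
# that c could be anchored to.
```

With $B$ built from $A$ exactly as in the sketch under Statements 1 and 2, $\Pr_{X \sim D}[B(X)=1] - \Pr_{Y \sim D_2}[B(Y)=1]$ is precisely the difference between $A$'s next-bit success on $D$ and on $D_2$, so a predictor beating the $D_2$-baseline by $1/\operatorname{poly}(n)$ would contradict Statement 3; whether the converse also holds is what I am asking.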

BlackHat18

1 Answer


Nice question!

This seems to have been addressed in a 1990 conference paper by Schrift and Shamir:

A.W. Schrift, A. Shamir, On the universality of the next bit test, Conference on the Theory and Application of Cryptography, 1990.

There is a later journal version in Journal of Cryptology as well.

To summarize, they consider a source of biased but independent bits and how to devise a distinguisher for it. Note that, without loss of generality, they assume the probability of a $1$ bit is $b \in (1/2, 1)$, but they call the quantity $b$ itself the bias, which is a bit counterintuitive compared to the traditional use of "bias" for the quantity $b - \frac{1}{2}$.
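
To see why the usual threshold of $1/2$ in the next bit test is uninformative for such a source, note that a trivial predictor that ignores its input and always guesses $1$ already succeeds with probability $b > 1/2$, even on samples from the biased source itself. A minimal sketch (Python, purely illustrative):

```python
import random

b, n, trials = 0.7, 16, 50_000
hits = 0
for _ in range(trials):
    x = [1 if random.random() < b else 0 for _ in range(n)]  # biased source
    m = random.randrange(n)                                  # position to predict
    hits += (1 == x[m])                                      # constant predictor: always guess 1
print(hits / trials)  # ≈ b = 0.7, far above 1/2
```

As I understand it, this is exactly the issue their modified test is meant to address.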

In particular, they define a weighted success rate of any non-constant PPT algorithm $A:\{0,1\}^n \rightarrow \{0,1\}$ in predicting the $i^{th}$ bit of a biased source; see the paper for the precise definition.

Note: The notation $f < O(\nu(n))$ is used for any function that vanishes faster than any inverse polynomial, i.e., a negligible function.

There are some other alternative tests proposed in the paper.

This paper is cited quite a lot, but mostly by papers that apply it to various sources. In fact, it seems that the same authors had already used this technique to prove hardness results for every bit, including the zero-biased most significant bit, of discrete logarithms modulo a composite number.

kodlu