
According to Regev's paper, p. 15:

Correctness. Note that if not for the error in the LWE samples, $b-\langle a, s\rangle$ would be either $0$ or $\lfloor q/2 \rfloor$ depending on the encrypted bit, and decryption would always be correct. Hence we see that a decryption error occurs only if the sum of the error terms over the set $S$ is greater than $q/4$. Since we are summing at most $m$ normal error terms, each with standard deviation $\alpha q$, the standard deviation of the sum is at most $\sqrt{m}\,\alpha q \le q/\log n$; a standard calculation shows that the probability that such a normal variable is greater than $q/4$ is negligible.
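To see where the $q/4$ threshold in that argument comes from, here is a toy Python sketch (my own simplification, not Regev's full scheme): after subtracting $\langle a, s\rangle$, the decryptor is left with $e + b\lfloor q/2\rfloor \bmod q$ and outputs whichever of $0$ and $\lfloor q/2\rfloor$ is closer, which recovers $b$ exactly when $|e| < q/4$.

```python
# Toy model of Regev-style decryption (a simplification, not the full scheme):
# after subtracting <a, s>, the decryptor sees e + b*floor(q/2) mod q
# and outputs 0 iff that residue lies closer to 0 than to floor(q/2).

q = 131          # same modulus as in the question's session
half = q // 2

def decrypt_residual(residual):
    """Decode e + b*floor(q/2) mod q back to the bit b."""
    d = residual % q
    dist_to_zero = min(d, q - d)     # cyclic distance to 0
    dist_to_half = abs(d - half)     # distance to floor(q/2)
    return 0 if dist_to_zero < dist_to_half else 1

# Decryption is correct for every error magnitude up to q/4 ...
for b in (0, 1):
    for e in range(-(q // 4), q // 4 + 1):
        assert decrypt_residual(e + b * half) == b

# ... but an error somewhat above q/4 can flip the bit:
print(decrypt_residual(40))  # bit 0 with e = 40 > q/4 decodes as 1
```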

I did a little experiment in Mathematica and the resulting probability of decryption error is around 0.28. Can you point out where I went wrong?

Firstly, I built the following variables as in the paper:

In[1]:= n=10
        q=RandomPrime[{n^2,2n^2}]
        m=1.1*n*Log[q]
        α=1/(Sqrt[n]*Log[n]^2)
        σ=α * q
        Floor[q/2]
        q/4
Out[1]= 10
Out[2]= 131
Out[3]= 53.6272
Out[4]= 1/(Sqrt[10]*Log[10]^2)
Out[5]= 131/(Sqrt[10]*Log[10]^2)
Out[6]= 65
Out[7]= 131/4

Then I calculate the probability that the sum of the error terms is greater than q/4 by evaluating the cumulative distribution function (CDF) of the sum at x = -q/4:

In[323]:= (*Probability that sum is greater than q/4*)
          CDF[NormalDistribution[0,Sqrt[m]*α*q],-q/4]
Out[323]= 0.279785
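The same computation can be cross-checked in, e.g., Python (using `statistics.NormalDist` from the standard library in place of Mathematica's `CDF`; the prime 131 is hard-coded since `RandomPrime` is random):

```python
import math
from statistics import NormalDist

n = 10
q = 131                                 # the prime the session happened to draw
m = 1.1 * n * math.log(q)               # number of samples, as in the question
alpha = 1 / (math.sqrt(n) * math.log(n) ** 2)
sigma = math.sqrt(m) * alpha * q        # std. dev. of the summed error

# P(sum > q/4) = P(N(0, sigma) < -q/4) by symmetry
p_err = NormalDist(0, sigma).cdf(-q / 4)
print(p_err)                            # roughly 0.28
```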

As you can see, this is far too large to count as "negligible", so I think something is wrong with my calculation.

Here are the PDFs for a single error $e$ and for the sum of errors:

[Plot: PDF of a single error $e$]

[Plot: PDF of the sum of errors]


2 Answers


The probability of error is negligible "as a function of $n$", meaning that the probability of error will decrease (quickly) as $n$ grows. Increasing $n$ should solve your issue.
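As a quick numerical illustration of this answer, one can repeat the question's computation for growing $n$ (a Python sketch; as an assumption, the first prime $\ge n^2$ stands in deterministically for `RandomPrime`):

```python
import math
from statistics import NormalDist

def first_prime_at_least(k):
    """Smallest prime >= k (trial division; fine for these sizes)."""
    def is_prime(x):
        return x >= 2 and all(x % d for d in range(2, int(x**0.5) + 1))
    while not is_prime(k):
        k += 1
    return k

def error_probability(n):
    """P(sum of errors > q/4) under the question's parameter choices."""
    q = first_prime_at_least(n * n)
    m = 1.1 * n * math.log(q)
    alpha = 1 / (math.sqrt(n) * math.log(n) ** 2)
    sigma = math.sqrt(m) * alpha * q     # std. dev. of the summed error
    return NormalDist(0, sigma).cdf(-q / 4)

for n in (10, 100, 1000):
    print(n, error_probability(n))
```

The probability drops from roughly 0.28 at $n = 10$ to well below 1% already at $n = 1000$.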

– LeoDucas (edited by Maarten Bodewes)

Denote by $X$ the random variable given by the sum of the error terms over the set $S$. As mentioned, this is a Gaussian of standard deviation at most $\sqrt{m}\,r$ with $r = \alpha q$. Hence, by properties of the (sub-)Gaussian distribution, you have that

$$\operatorname{Pr}\left[|X| > t\right]\leq 2\exp\left(\frac{-\pi t^2}{r^2m}\right)$$

so, for $t = \frac{q}{2}$ you have

$$\operatorname{Pr}\left[|X| > \frac{q}{2}\right]\leq 2\underbrace{\exp\left(\frac{-\pi \left(\frac{q}{2}\right)^2}{r^2m}\right)}_{\varepsilon(n)}$$

From there you can see that by choosing $q(n)$ appropriately (e.g. with the parameters proposed in the paper), you can make $\varepsilon(n)$ a negligible function. This does not necessarily mean that, once you fix a particular $n$, this probability is 'small'. What it means intuitively is that as $n$ grows this probability gets smaller and smaller at a good rate, while the parameters stay 'practical' (i.e. polynomial).
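To see the bound in action, here is a small Python sketch (same assumption as for the question's setup: the first prime $\ge n^2$ stands in for `RandomPrime`) evaluating the bound $2\exp\left(-\pi(q/2)^2 / (r^2 m)\right)$ for a few values of $n$:

```python
import math

def first_prime_at_least(k):
    """Smallest prime >= k (trial division; fine for these sizes)."""
    def is_prime(x):
        return x >= 2 and all(x % d for d in range(2, int(x**0.5) + 1))
    while not is_prime(k):
        k += 1
    return k

def subgaussian_bound(n):
    """The tail bound 2*exp(-pi*t^2/(r^2*m)) at t = q/2."""
    q = first_prime_at_least(n * n)
    m = 1.1 * n * math.log(q)
    r = q / (math.sqrt(n) * math.log(n) ** 2)   # r = alpha * q
    t = q / 2
    return 2 * math.exp(-math.pi * t * t / (r * r * m))

for n in (10, 100, 1000):
    print(n, subgaussian_bound(n))
```

Already at $n = 100$ the bound is astronomically small, which is the asymptotic behaviour the answer describes.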

– Daniel