
In the decisional LWE problem, we are asked to distinguish the LWE sample $(b = a \cdot s + e, a)$, where $s$ and $e$ are drawn from Gaussian distributions, from $(u, a)$ for a uniformly random $u$.

Can you distinguish between $(b, a, h)$ and $(u, a, h)$ given the additional information that $s + t = h$, for $t$ drawn from a Gaussian distribution?

Maarten Bodewes
oracle

1 Answer


In this case you have an LWE problem with different parameters, which is certainly no harder than the original problem. The hardness of LWE problems is likely to depend on the parameters used (we know that there are trivially weak parameterisations, and we believe that there are strong parameterisations). Your new "hint" variation can be transformed into the original setting by having the adversary simply ignore $h$, and hence the hinted problem is no harder than the original.

Probably the best way to analyse and compare your hint system is to treat it as an LWE problem with the same dimensions, but with $s$ drawn from a different probability distribution. Specifically, $s$ should be treated as being drawn from the conditional distribution given $h$. More concretely, if we have independent random variables $S\sim\mathcal N(0,\sigma_S^2)$ and $T\sim\mathcal N(0,\sigma_T^2)$ and set $H=S+T$, we can use the continuous version of Bayes' theorem to write down the probability density function of $S$ given that $H=h$: \begin{eqnarray*}f_{S|H=h}(s)&=&\frac{f_{H|S=s}(h)f_S(s)}{f_H(h)}\\ &=&\frac{f_{T|S=s}(h-s)f_S(s)}{f_H(h)}\\ &=&\frac1{\sqrt{2\pi}}\sqrt{\frac{\sigma_S^2+\sigma_T^2}{\sigma_S^2\sigma_T^2}}\exp\left(-\frac{(h-s)^2}{2\sigma_T^2}-\frac{s^2}{2\sigma_S^2}+\frac{h^2}{2(\sigma_S^2+\sigma_T^2)}\right) \end{eqnarray*}

How this might affect the difficulty will depend on the relative size of $\sigma_T$. Intuitively, if $\sigma_T$ is small then we expect $t$ to be very small and $h$ to be a very good approximation to $s$, and indeed as $\sigma_T\to 0$ we see that $f_{S|H=h}(s)$ concentrates very strongly around $s=h$, giving us a much weaker LWE problem. Conversely, if $\sigma_T$ is large then we expect $t$ to vary greatly, so that $h$ provides little information about $s$, and indeed as $\sigma_T\to\infty$ we see $f_{S|H=h}(s)\to f_S$, moving us back towards the original LWE problem.
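As a quick sanity check of both the formula and these limits, here is a minimal numerical sketch (the parameter values, sample count and conditioning window are arbitrary illustrative choices, and numpy is assumed): completing the square in the density above gives $S\mid H=h\sim\mathcal N\!\left(\frac{\sigma_S^2}{\sigma_S^2+\sigma_T^2}h,\ \frac{\sigma_S^2\sigma_T^2}{\sigma_S^2+\sigma_T^2}\right)$, which we can compare against samples of $S$ conditioned on $S+T$ landing near $h$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_S, h = 1.0, 1.5
N = 2_000_000
window = 0.01  # keep draws whose hint lands within +/- window of h

for sigma_T in (0.1, 1.0, 10.0):
    s = rng.normal(0.0, sigma_S, N)
    t = rng.normal(0.0, sigma_T, N)
    kept = s[np.abs(s + t - h) < window]   # empirical draw from S | H ~ h

    # Completing the square in the density above gives
    #   S | H=h  ~  N( sigma_S^2/(sigma_S^2+sigma_T^2) * h ,
    #                  sigma_S^2*sigma_T^2/(sigma_S^2+sigma_T^2) )
    mean = sigma_S**2 / (sigma_S**2 + sigma_T**2) * h
    std = np.sqrt(sigma_S**2 * sigma_T**2 / (sigma_S**2 + sigma_T**2))
    print(f"sigma_T={sigma_T:5.1f}: empirical mean/std {kept.mean():6.3f}/{kept.std():.3f}, "
          f"predicted {mean:6.3f}/{std:.3f}")
```

For small $\sigma_T$ the conditional mean sits essentially at $h$ with tiny standard deviation, while for large $\sigma_T$ it drifts back towards $0$ with standard deviation close to $\sigma_S$, matching the two limits described above.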

ETA: Per the follow-on question in the comments, in the case $\sigma_T=\sigma_S$ the expression simplifies to $$f_{S|H=h}(s)=\frac1{\sqrt{\pi\sigma_S^2}}\exp\left(\frac{-(s-h/2)^2}{\sigma_S^2}\right)$$ which is the density function of an $N(h/2,\sigma_S^2/2)$ distribution. By constructing the samples $(b-a\cdot h/2,a)$ or $(u-a\cdot h/2,a)$ (note that $b-a\cdot h/2=a\cdot(s-h/2)+e$), we can transfer to an LWE instance where the error follows a Gaussian identical to that of the original problem and the secret follows a Gaussian with half the variance of the original problem, which is therefore easier (according to current methods). The problem is certainly no harder, as we could perturb by a further Gaussian to restore the original distribution.

More generally, if $\sigma_T^2=\theta\sigma_S^2$ for some $\theta\ge 0$, we obtain a normal with mean $h/(1+\theta)$ and variance $\frac\theta{1+\theta}\sigma_S^2<\sigma_S^2$, so that by constructing $(b-a\cdot h/(1+\theta),a)$ we can always convert to an LWE instance with a smaller variance for the secret. This is never harder, and is easier according to current costing methods.
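To make this conversion concrete, here is a small real-valued sketch (no modular reduction; the dimension, sample count and $\sigma$ values are arbitrary toy choices, and numpy is assumed). It publishes a hint $h=s+t$ with $\sigma_T^2=\theta\sigma_S^2$, recentres the samples by $a\cdot h/(1+\theta)$, and checks that the residual secret $s-h/(1+\theta)$ has the predicted standard deviation; setting $\theta=1$ recovers the half-variance case above.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4096, 100            # toy dimension and sample count (arbitrary)
sigma_S, sigma_e = 3.2, 3.2
theta = 1.0                 # sigma_T^2 = theta * sigma_S^2; theta = 1 is the previous case

s = rng.normal(0.0, sigma_S, n)                    # LWE secret
t = rng.normal(0.0, np.sqrt(theta) * sigma_S, n)   # hint noise
h = s + t                                          # published hint

A = rng.normal(0.0, 1.0, (m, n))                   # sample vectors (real-valued toy)
e = rng.normal(0.0, sigma_e, m)
b = A @ s + e                                      # LWE samples

# Recentre: b - A @ (h/(1+theta)) = A @ s_prime + e with s_prime = s - h/(1+theta),
# which should be Gaussian with variance theta/(1+theta) * sigma_S^2 per coordinate.
b_prime = b - A @ (h / (1 + theta))
s_prime = s - h / (1 + theta)

print("residual secret std :", s_prime.std())
print("predicted std       :", np.sqrt(theta / (1 + theta)) * sigma_S)
print("recentred samples ok:", np.allclose(b_prime, A @ s_prime + e))
```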

ETA 20250410: I should further add that things are even more extreme if a fresh $t$ and $h$ are generated for each sample. In this case, if we average all of the hints $h$, the central limit theorem tells us that the average will be very close to $s$ (for $k$ hints the error has standard deviation $\sigma_T/\sqrt k$). In the case of a single hint, I shall detail possible approaches in an answer to your follow-up question.
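A minimal sketch of this multi-hint situation (scalar secret and arbitrary parameters for illustration, numpy assumed): averaging $k$ fresh hints $h_i=s+t_i$ estimates $s$ with error of typical size $\sigma_T/\sqrt k$, so with enough hints the secret is essentially revealed.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_S, sigma_T = 3.2, 3.2
s = rng.normal(0.0, sigma_S)                  # a single fixed secret coordinate

for k in (1, 100, 10_000):
    hints = s + rng.normal(0.0, sigma_T, k)   # k fresh hints h_i = s + t_i
    err = abs(hints.mean() - s)               # averaging error ~ sigma_T / sqrt(k)
    print(f"k={k:6d}: |mean(h) - s| = {err:.4f}  (typical size {sigma_T/np.sqrt(k):.4f})")
```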

Daniel S