
I'd like to discuss the message bit layout in the IND-CPA encryption schemes of Saber and $KYBER$ (details of both schemes are given in the appendix below). From my understanding, both Saber and $KYBER$ place the secret message in the highest bit of the elements of a vector:

  1. in Saber, the message bits $m$ are placed in the highest bit of the reconciliation information $c_m$, i.e. $c_m$'s $\epsilon_p$-th bit, hidden by $v^\prime$;
  2. in $KYBER$, the secret bits are placed in the vector $v$, hidden by $t^Tr$.

In this process, both schemes use rounding-like operations to extract the right bit segments: Saber uses the rounding of LWR, while $KYBER$ uses Compression-Decompression.
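To make the two rounding mechanisms concrete, here is a minimal sketch (not the reference implementations) of Kyber-style `Compress`/`Decompress` and Saber-style power-of-two rounding on single coefficients. The Kyber modulus $q = 3329$ is real; the Saber bit widths used below ($\epsilon_q = 13$, $\epsilon_p = 10$) are illustrative parameter choices.

```python
Q = 3329  # Kyber modulus (prime, not a power of two)

def compress(x, d):
    """Kyber Compress_q: map Z_q -> Z_{2^d} by scaling and rounding."""
    return round((x * (1 << d)) / Q) % (1 << d)

def decompress(y, d):
    """Kyber Decompress_q: approximate inverse of compress."""
    return round((y * Q) / (1 << d))

def saber_round(x, eps_p, eps_q=13):
    """Saber-style rounding: drop the low (eps_q - eps_p) bits.
    Because the moduli are powers of two, this is just a bit shift --
    the deterministic 'LWR noise' with no sampled error term."""
    return (x >> (eps_q - eps_p)) % (1 << eps_p)

# Round-tripping a coefficient through compress/decompress introduces a
# small deterministic error |x - decompress(compress(x, d), d)|,
# bounded by ceil(q / 2^(d+1)).
x, d = 1234, 10
err = (x - decompress(compress(x, d), d)) % Q
err = min(err, Q - err)  # centered absolute error
assert err <= (Q + (1 << (d + 1)) - 1) // (1 << (d + 1))
```

The key structural difference is visible here: Saber's power-of-two moduli let rounding be a plain bit shift, whereas Kyber's prime modulus requires scaling before rounding.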

To me, Saber looks smarter because it avoids generating random errors in the rounding process, whereas $KYBER$ seems to add "redundant" error on top of its compression and decompression. But the results of the NIST process suggest I might be misunderstanding this. Can $KYBER$'s error addition really be considered redundant? If so, what does this redundancy mean for security, or does it actually give $KYBER$ an advantage over Saber?

As a student quite new to post-quantum cryptography, I really appreciate any intuitions and corrections, thanks so much!

Appendix

Here are the two IND-CPA encryption schemes:

  1. Saber: Saber IND-CPA encryption

  2. $KYBER$: Kyber IND-CPA enc

R_Jalaei

1 Answer


The rationale for the Kyber design is to view the "add error then round" process as gaining security from the addition of $e_2$ while gaining bandwidth efficiency from the rounding, in a manner that strictly does not reduce security. In their round 3 specification document, section 1.5 ("Different noise values $\eta_1$ and $\eta_2$"), they describe the noise introduced by rounding as "deterministic". They go on to say:

unlike in (Ring/Module)-LWR schemes, where the security completely relies on the noise that’s deterministically generated by rounding, our dependence on the deterministic noise is much smaller [...] without the LWR assumption, our parameter set for Kyber512 has 112 bits of core-SVP hardness – more specifically, the public keys are protected with 118 bits, and the ciphertexts with 112; with a weak version of the LWR assumption, it has 118-bit security everywhere

In other words, they argue that if they rely solely on the hardness of learning with errors and make no assumption about the hardness of learning with rounding, the "redundant" noise means they still have "112 bits of core-SVP hardness"; whereas if learning with rounding is also assumed to be hard, they get "118-bit security everywhere".
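The distinction the specification draws can be sketched in a few lines: the rounding error is a deterministic function of the input coefficient, while $e_2$ is sampled fresh from a centered binomial distribution (Kyber uses $\eta_2 = 2$). The `Compress`/`Decompress` formulas below follow the Kyber definition with $q = 3329$; the parameter choices in the demo are illustrative.

```python
import random

Q = 3329  # Kyber modulus

def compress(x, d):
    return round((x * (1 << d)) / Q) % (1 << d)

def decompress(y, d):
    return round((y * Q) / (1 << d))

def rounding_error(x, d):
    """Error introduced by a compress/decompress round trip,
    centered around 0. Deterministic in x."""
    e = (decompress(compress(x, d), d) - x) % Q
    return e if e <= Q // 2 else e - Q

def cbd(eta=2):
    """Centered binomial distribution, as used for e_2 in Kyber."""
    return sum(random.getrandbits(1) for _ in range(eta)) - \
           sum(random.getrandbits(1) for _ in range(eta))

x = 2024
# Deterministic: the same coefficient always yields the same rounding error,
# which is why security "completely relies" on it in a pure LWR scheme.
assert rounding_error(x, 4) == rounding_error(x, 4)

# Fresh: e_2 is new randomness per encryption, so repeated encryptions of
# the same coefficient differ (with overwhelming probability over 1000 draws).
noisy = [(x + cbd()) % Q for _ in range(1000)]
assert len(set(noisy)) > 1
```

So the "redundant" $e_2$ is exactly what lets Kyber's security argument stand on MLWE alone, with the deterministic rounding noise counted only as a bonus.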

For Saber, the claims about security in the event that the hardness of LWE and LWR differ are less explicit.

In its final report on the 3rd round of the process, NIST explicitly mentions the distinction between the LWE and LWR problems as part of its decision-making. In section 4.1.1 (Kyber, Overall assessment), we read:

There is arguably more evidence to support the MLWE problem (which KYBER is based upon) than the MLWR [assumption]... which Saber [relies upon].

and in section 4.3.4 (Saber, Overall assessment), we read:

NIST determined that there was no compelling reason to standardize multiple different structured lattice KEMs and chose KYBER instead of Saber. One factor that led to this decision was NIST’s assessment that the MLWE problem, which accounts for most of the security of KYBER, is better studied than the MLWR problem on which the security of Saber is entirely based. While it did not seem particularly likely that the use of MLWR as opposed to MLWE would result in a significant loss of security, KYBER and Saber were similar enough in security and performance profile that factors such as this could determine the decision.

Daniel S