
My understanding of the Fiat-Shamir with Aborts signature scheme is as follows. We compute the signature $z = cs + y$, with $s$ being the secret key and $c$ the challenge. We need $y$ to hide $cs$, so that the distribution of $z$ is indistinguishable from uniform. If both $cs$ and $y$ come from some finite group $G$ (say $\mathbb{Z}_{N}$), then no rejection step is needed: we simply sample $y$ uniformly at random from $G$, and then $cs + y$ is also uniformly distributed over $G$ (to anyone without knowledge of $s$).
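
A toy check of this fact, with made-up values of $N$, $s$, and $c$ (this is of course not a real scheme, just an illustration that a uniform mask makes $z$ uniform):

```python
import secrets
from collections import Counter

N = 17          # toy modulus
s = 5           # "secret key"
c = 3           # "challenge"

counts = Counter()
for _ in range(170_000):
    y = secrets.randbelow(N)     # uniform mask over Z_N
    z = (c * s + y) % N          # the response / signature component
    counts[z] += 1

# Every residue class appears roughly 10,000 times: z leaks nothing about s.
print(sorted(counts.items()))
```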

The paper that introduces the paradigm then explains that this uniform sampling of $y$ cannot be done in the lattice setting. This is what I do not understand yet. In the lattice-based signature scheme, $s$, $c$, and $y$ are all taken from (subsets of) the ring $R = \mathbb{Z}_p[x] / \langle x^n+1\rangle$. But it should be possible to sample uniformly at random from that ring, correct?

The paper also explains that $y$ needs to be sampled from a small range, since picking $y$ from a large range "would require us to make a much stronger complexity assumption which would significantly decrease the efficiency of the protocol".

Why would picking $y$ from a larger range influence the complexity assumption? Is it because, for a larger $y$, it is easier to find a pre-image of the hash function $h(y) = \langle a,y\rangle$?

Mark Schultz-Wu

2 Answers


As stated in the paper mentioned, in the context of the lattice-based ID scheme (Figure 3), the prover picks a random $\tilde y \in D^m_y$ and commits to it by sending $Y=h(\tilde y)$ to the verifier. The response $\tilde z=\tilde sc+\tilde y$ must fall within a specific range $G^m$ to be accepted, or the prover aborts.
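
A rough sketch of that commit-challenge-response flow with the abort check, using toy integer vectors in place of the ring elements of Figure 3 (the bounds $B_y$, $B_z$ below are hypothetical placeholders, not the paper's parameters):

```python
import secrets

m = 8                  # vector length (toy)
B_y, B_z = 100, 90     # mask range [-B_y, B_y], acceptance range [-B_z, B_z]
s = [secrets.randbelow(3) - 1 for _ in range(m)]   # small secret, coefficients in {-1, 0, 1}

def respond(c: int):
    """One prover attempt: mask, respond, abort if the response falls out of range."""
    y = [secrets.randbelow(2 * B_y + 1) - B_y for _ in range(m)]   # small random mask
    z = [si * c + yi for si, yi in zip(s, y)]
    # Accept only if every coefficient of z lies in the acceptance range.
    # Because B_z <= B_y - max|s_i * c|, the accepted z is uniform on that
    # range and therefore reveals nothing about s.
    if all(abs(zi) <= B_z for zi in z):
        return z
    return None   # abort: the prover restarts with a fresh y

c = 5   # toy challenge
z = None
while z is None:
    z = respond(c)
print(z)
```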

The key issue with using a uniform distribution over a large range (as is done in number-theoretic schemes such as Girault's) is highlighted in Section 1.3. For lattice-based schemes, sampling $y$ uniformly from a large range would require a much stronger complexity assumption: namely, the hardness of finding a super-polynomial approximation of the shortest vector (SVP), rather than the $O(n^2)$ approximation the scheme otherwise relies on.

This stronger assumption would significantly decrease the efficiency of the protocol, as it would necessitate larger parameters and more computational overhead to maintain security.

R_Jalaei

You can. See On Removing Rejection Conditions in Practical Lattice-based Signatures. I quote:

We show both positive and negative results on removing the rejection conditions in Fiat-Shamir based lattice signatures. Out of the two rejection conditions used both in Dilithium and qTESLA, we show that removing one of the rejection conditions is possible. As a result, we provide a variant of Lyubashevsky’s signature with one rejection condition. The variant of Fiat-Shamir based lattice signature we propose can be instantiated with comparable parameters with Dilithium and qTESLA in terms of security, public-key and signature sizes, and rejection rate. The key difference to the previous schemes is that the secret key and masking terms are sampled uniformly random over the base ring.

More concretely, below is Figure 1 of the linked paper:

[Figure 1: a "Fiat-Shamir with Aborts" signature scheme with a uniform "masking" term, from eprint 2021/924]

In particular, note that during signing, $\mathbf{r}^t \stackrel{\$}{\gets} R_q^l$ is sampled uniformly and $\mathbf{z} := \mathbf{r} + c\mathbf{s} \pmod q$, i.e. this is precisely the term you are asking about.
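
To make the uniform masking concrete, here is a toy sketch (with illustrative, insecure parameters of my own choosing) of $\mathbf{z} := \mathbf{r} + c\mathbf{s} \pmod q$ over a single copy of $R_q = \mathbb{Z}_q[x]/\langle x^n + 1\rangle$; since $\mathbf{r}$ is uniform over $R_q$, so is $\mathbf{z}$:

```python
import secrets

q, n = 257, 8   # toy modulus and ring degree; far too small for real security

def ring_mul(a, b):
    """Multiply two polynomials in Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] = (res[i + j] + ai * bj) % q
            else:
                res[i + j - n] = (res[i + j - n] - ai * bj) % q   # x^n = -1
    return res

s = [secrets.randbelow(5) - 2 for _ in range(n)]   # small secret coefficients
c = [secrets.randbelow(3) - 1 for _ in range(n)]   # small challenge polynomial

r = [secrets.randbelow(q) for _ in range(n)]       # r uniform over R_q
z = [(ri + ci) % q for ri, ci in zip(r, ring_mul(c, s))]   # z = r + c*s (mod q)
print(z)   # uniform over R_q, independent of s
```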

Mark Schultz-Wu