
Building Intuition for Lattice-Based Signatures – Part 1: Trapdoor Signatures

24 July 2023

By Elena Bakos Lang

Introduction

Since the first lattice-based cryptography results in [Ajtai96], lattices have become a central building block in quantum-resistant cryptosystems. Lattice-based cryptography starts from systems of linear equations and adds size constraints or error terms to them, turning them into one-way or trapdoor functions that are believed to resist quantum computers. Since the first theoretical cryptosystems of the 1990s and early 2000s, lattice-based cryptography has been a very active area of research, resulting in ever more practical signature and encryption schemes, and yielding many advances in areas such as fully homomorphic encryption.

This two-part blog series aims to provide some intuition on the main building blocks that are used in the construction of the two lattice-based signature schemes selected for standardization by the National Institute of Standards and Technology (NIST), Dilithium and Falcon, and showcases the techniques used in many other lattice-based constructions. This first part will describe a construction using lattice-based trapdoor functions and the hash-and-sign paradigm, which is at the core of the signature scheme Falcon. The second part will describe a construction based on the Fiat-Shamir paradigm, which is at the core of the signature scheme Dilithium.


Lattice Background

Before diving into signature constructions, we must first introduce a few concepts about lattices and lattice-based hard problems.

At a high level, lattices can simply be thought of as the restriction of vector spaces to a discrete subgroup. In particular, a lattice is defined as the set of integer linear combinations of a set of basis vectors B = \{\vec{b}_1, \dots, \vec{b}_n\} \subseteq \mathbb{R}^n. For simplicity, we often restrict ourselves to integer lattices, i.e. lattices with basis vectors chosen from \mathbb{Z}^n.

Similarly to vector spaces, a lattice can be defined by an infinite number of equivalent bases. Two bases B_1 and B_2 define the same lattice if every point in the lattice generated by B_1 can also be generated as an integer linear combination of the basis B_2, and vice versa (see footnote 1). For example, the two-dimensional lattice \Lambda generated by B_1 = \left\{\begin{bmatrix}10\\7\end{bmatrix}, \begin{bmatrix}9\\6\end{bmatrix}\right\} can instead be generated from the basis B_2 = \left\{\begin{bmatrix}2\\-1\end{bmatrix}, \begin{bmatrix}1\\1\end{bmatrix}\right\}, as depicted below.

Note that, unlike standard vector spaces, not all linearly independent sets of n lattice vectors form a basis for a given lattice. For example, the set \left\{\begin{bmatrix}2\\-1\end{bmatrix}, \begin{bmatrix}2\\2\end{bmatrix}\right\} is not a basis for the lattice \Lambda, as there is no integer linear combination of the basis vectors that generates the vector \begin{bmatrix}3\\0\end{bmatrix}, while we can write \begin{bmatrix}3\\0\end{bmatrix} = \begin{bmatrix}2\\-1\end{bmatrix} + \begin{bmatrix}1\\1\end{bmatrix} = -6\begin{bmatrix}10\\7\end{bmatrix} + 7\begin{bmatrix}9\\6\end{bmatrix}.

Each basis B naturally corresponds to a space known as the fundamental parallelepiped, defined as the set \mathcal{P}(B) = [0,1)^n B = \{\vec{x}: \vec{x} = \sum_{i = 1}^n a_i\vec{b_i} \text{, such that } a_i \in [0,1) \text{ for all } i\}. Graphically, this corresponds to the n-dimensional parallelepiped with sides equal to the basis vectors, and defines a natural tiling for the space underlying the lattice. While a fundamental parallelepiped is closely tied to the basis that generated it, and is thus not unique for a given lattice, the volume enclosed by a fundamental parallelepiped is identical regardless of which lattice basis is chosen, and is thus a lattice invariant.
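Both of these facts are easy to check numerically. The following sketch (Python with numpy, chosen purely for illustration) verifies that B_1 and B_2 above generate the same lattice, by exhibiting a unimodular change-of-basis matrix (see footnote 1), and that the volumes of their fundamental parallelepipeds agree:

```python
import numpy as np

# Basis vectors are the rows of the matrices below.
B1 = np.array([[10, 7], [9, 6]])
B2 = np.array([[2, -1], [1, 1]])

# Change-of-basis matrix expressing B1 in terms of B2: B1 = U @ B2.
U = np.rint(B1 @ np.linalg.inv(B2)).astype(int)

assert np.array_equal(U @ B2, B1)              # U really maps B2 to B1
assert abs(round(np.linalg.det(U))) == 1       # U is unimodular, so both bases generate the same lattice

# The volume |det(B)| of the fundamental parallelepiped is a lattice invariant.
print(abs(np.linalg.det(B1)), abs(np.linalg.det(B2)))   # both approximately 3
```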

Some of the (conjectured) hardest computational problems over lattices are finding short vectors in an arbitrary lattice, known as the Shortest Vector Problem (SVP), and finding the lattice point closest to a target \vec{t} \in \mathbb{R}^n, known as the Closest Vector Problem (CVP). We can also define approximation versions of each, SVP_\gamma and CVP_\gamma, which ask to find a lattice vector of length at most \gamma \geq 1 times the length of the shortest vector, or a lattice vector at distance at most \gamma \geq 1 times the distance of the closest lattice vector from the target, respectively.

The effort required to solve each of the problems increases with the dimension n, and as the approximation factor gets closer to \gamma = 1. In particular, while there exist efficient algorithms for solving the SVP and the CVP for low dimension such as n=2 or for exponential approximation factors \gamma = 2^{O(n)}, the problems are NP-hard for large dimensions n and low approximation factors. Modern lattice cryptography chooses underlying problems with dimension around n = 512 and approximation factors around \widetilde{O}(n), although the exact choices of parameters for the constructions we present will be omitted from this blog post for simplicity.

Note:
Lattices can be defined over various algebraic structures. They can be defined over the real numbers, as in the examples above, but are often defined over \mathbb{Z}_q, rings or modules, as these result in more efficient implementations due to the presence of additional structure within the lattice. Whether this extra structure affects the hardness of lattice problems is an open question, but to the best of the community's knowledge it is not exploitable in cryptographic settings.

In both of our examples in this blog post series, we will use one particular family of lattices known as the q-ary lattices, which are defined as follows, for some A \in \mathbb{Z}_q^{n \times m}:

\Lambda(A) = \{y \in \mathbb{Z}^m: y \equiv A^Ts\mod{q} \text{ for some } s \in \mathbb{Z}^n\}
\Lambda^\perp(A) = \{e \in \mathbb{Z}^m: Ae \equiv 0 \mod{q}\}.

These lattices are mutually orthogonal modulo q, as every vector in one is orthogonal modulo q to every vector in the other. This property is particularly useful for checking membership of a vector in a lattice: if the rows of B form a basis for \Lambda^\perp(A), then checking whether x \in \Lambda(A) can be done by checking whether Bx \equiv 0 \mod{q}, and similarly checking whether y \in \Lambda^\perp(A) can be done by the check Ay \equiv 0 \mod{q}.
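As a small illustration (a Python/numpy sketch with arbitrarily chosen toy parameters; the matrix A is built in the convenient form [I_n | A'] so that elements of \Lambda^\perp(A) are easy to write down), both membership checks reduce to matrix-vector products modulo q:

```python
import numpy as np

q = 17
n, m = 2, 4
rng = np.random.default_rng(0)

# Construct A = [I_n | A'] so that elements of Lambda^perp(A) are easy to write down.
A_prime = rng.integers(0, q, size=(n, m - n))
A = np.hstack([np.eye(n, dtype=int), A_prime])

def in_lambda_perp(A, e, q):
    """e lies in Lambda^perp(A) iff A e = 0 (mod q)."""
    return np.all((A @ e) % q == 0)

# For any x, the vector e = ((-A' x) mod q, x) satisfies A e = 0 (mod q).
x = np.array([2, 5])
e = np.concatenate([(-A_prime @ x) % q, x])
assert in_lambda_perp(A, e, q)

# A vector of Lambda(A): y = A^T s mod q. It is orthogonal (mod q) to every e in Lambda^perp(A).
s = np.array([3, 7])
y = (A.T @ s) % q
assert (y @ e) % q == 0
```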

The more practical constructions, including the NIST submissions Falcon and Dilithium, typically instantiate these orthogonal lattices over rings or modules. For the rest of this blog post, we will generally not go into the details of the specific rings or modules used, for simplicity, but we will mention which lattice constructions are used in practice.

Constructing Signatures Using Hash-and-Sign and CVP

The first construction for lattice-based signature schemes uses the hash-and-sign paradigm, and relies on the hardness of the CVP problem. The hash-and-sign paradigm was first introduced by Bellare and Rogaway [BR96] to construct the RSA Full Domain Hash (FDH) signature scheme, and relies on a secret trapdoor function for the construction of signatures.

The basic idea is simple: Suppose we have a trapdoor function f such that f is efficiently computable, but very hard to invert (i.e. f^{-1} is difficult to compute) without additional information. To sign a message m, we can hash m to a point t = H(m) in the range of the function f, and use the secret f^{-1} to compute the signature \sigma = f^{-1}(t). To verify a signature (m, \sigma), simply recompute t = H(m) and check whether f(\sigma) = t.

In particular, choosing the trapdoor permutation function f(\sigma) = \sigma^e \mod(N) and inverse f^{-1}(t) = t^d \mod(N) in the above set-up recovers the RSA FDH scheme.
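As a toy illustration of the paradigm (a Python sketch with deliberately tiny, insecure RSA parameters, and a full-domain hash simplified to a single SHA-256 output reduced modulo N), the trapdoor here is the secret exponent d:

```python
import hashlib

# Toy RSA parameters, far too small to be secure; for illustration only.
p, q = 61, 53
N = p * q                                  # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))          # 2753, the secret trapdoor

def H(message: bytes) -> int:
    """Hash the message to a point in the range of f (simplified full-domain hash)."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    t = H(message)
    return pow(t, d, N)                    # sigma = f^{-1}(t) = t^d mod N

def verify(message: bytes, sigma: int) -> bool:
    return pow(sigma, e, N) == H(message)  # check f(sigma) = H(m)

sigma = sign(b"hello lattice world")
print(verify(b"hello lattice world", sigma))   # True
```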

The Hard Problem

To see how to construct a hash-and-sign trapdoor function using lattice-based primitives, consider the closest vector problem. The CVP_\gamma is a hard problem to solve for a random lattice (see footnote 2), but it is easy to verify a given solution: given a target \vec{t} and a candidate solution (i.e. candidate close lattice vector) \vec{v}, it is easy to check that \vec{v} is in a given lattice, by checking whether it can be written as an integer linear combination of the basis vectors, and whether \vec{v} is within the specified distance of the target by computing \|\vec{v} - \vec{t}\|.

However, in order to use the CVP_\gamma to construct a trapdoor function, we need to find a way to tie the hardness of the CVP_\gamma to some secret data, i.e. ensure that a close vector can easily be found using the secret data, but is very hard to find without it. The central idea used here is the observation that not all lattice bases are created equal, and that some bases allow us to solve hard lattice problems such as CVP_\gamma more efficiently than others. Crucially, while any basis can be used to verify the correctness of a CVP_\gamma solution (as they all define the same lattice, and thus can all be used to check a candidate solution’s membership in the lattice), the quality of a CVP_\gamma solution one can find (i.e. the distance from the target, measured by the size of the \gamma factor) depends on the basis one started off with. To see why, consider the following intuitive algorithm for solving CVP_\gamma, called Babai’s round-off algorithm:

Given a basis B and a target point \vec{t}, one can use the known basis to round to a nearby lattice point as follows: write the target point as a linear combination of the basis vectors, \vec{t} = \sum_{i = 1}^n a_i \vec{b_i}. Then, round each coefficient a_i to the nearest integer, to obtain \vec{v} = \sum_{i = 1}^n \lfloor a_i \rceil \vec{b_i}. Writing the basis vectors as the rows of a matrix B, these operations can be expressed as \vec{v} = \lfloor \vec{t}B^{-1}\rceil B. Since any integer linear combination of lattice vectors is a lattice vector, the result \vec{v} is a lattice vector near \vec{t}, which corresponds to the nearest corner of the fundamental parallelepiped translate containing \vec{t}.

Intuitively, shorter (and more orthogonal) bases lead to more reliable results from Babai’s rounding algorithm. This can be formalized by observing that the maximum distance from the target contributed at step i is given by \|\frac{1}{2}\vec{b_i}\| (since the solution found will be at one of the corners of the containing fundamental parallelepiped), and hence by the triangle inequality the distance \|\vec{t} - \vec{v}\| is bounded by \frac{1}{2}\sum_{i=1}^n \|\vec{b_i}\|.

This can be seen in practice by finding instances where two different bases yield different solutions to the CVP_\gamma, such as when the nearest lattice point does not lie in the fundamental parallelepiped translate containing the target point. Two examples of differing solutions can be seen in the following figure:
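The same effect is easy to reproduce numerically. The following sketch (Python/numpy, using the two bases from earlier; the target point is an arbitrary choice) implements Babai's round-off algorithm and shows how the quality of the answer depends on the basis:

```python
import numpy as np

def babai_round_off(B, t):
    """Babai's round-off: write t in the basis (rows of B) and round the coefficients."""
    coeffs = t @ np.linalg.inv(B)           # t = coeffs @ B
    return np.rint(coeffs) @ B              # a nearby lattice vector

B_good = np.array([[2, -1], [1, 1]])        # short, fairly orthogonal basis
B_bad  = np.array([[10, 7], [9, 6]])        # long, skewed basis of the same lattice

t = np.array([3.4, 1.7])                    # arbitrary (non-lattice) target

v_good = babai_round_off(B_good, t)
v_bad  = babai_round_off(B_bad, t)

print("good basis:", v_good, "distance", np.linalg.norm(t - v_good))   # (4, 1),   distance ~0.92
print("bad basis: ", v_bad,  "distance", np.linalg.norm(t - v_bad))    # (-2, -2), distance ~6.55
```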

Thus, to instantiate our trapdoor algorithm, one needs to find a “good” basis that allows solving CVP_\gamma to within a certain bound, as well as a “bad” basis that does not allow solving CVP_\gamma without significant computational cost, but still allows one to verify solutions.

Note:
One method for generating this pair of bases is to choose a “good” basis, and apply a transformation to it to obtain a “bad” basis. A common choice of “bad” basis is the Hermite Normal Form (HNF) of the lattice, as it is in a sense the worst possible basis: the same HNF can be generated from any basis of a given lattice, and thus the HNF reveals no information about the basis it was generated from.

A First Attempt

A signature scheme based on this idea was first proposed in 1997 by Goldreich, Goldwasser and Halevi, and is known as the GGH signature scheme [GGH]. At its core, the GGH signature scheme chooses a “good” basis for its secret key, and computes a matching “bad” public basis to use for verifying.

  • To sign a message m using GGH, one maps the message to a random target point \vec{t} in the underlying space, and uses the secret (“good”) basis to find a solution \vec{v} to the CVP_\gamma with target \vec{t} using Babai’s rounding algorithm. The signature is then \vec{\sigma} = \vec{t} - \vec{v}.
  • To verify the signature \vec{\sigma}, one recomputes \vec{t} from the message m, checks that \vec{t} - \vec{\sigma} is a lattice vector using the public basis, and that \vec{\sigma} is sufficiently short (by checking that \|\vec{\sigma}\| is below some publicly known bound, chosen as part of the signature scheme parameters).

(One could equivalently define the signature as the value \vec{\sigma} = \vec{v}, and check that \vec{t} - \vec{\sigma} is short and that \vec{\sigma} is a lattice vector during verification).

The GGH signature scheme was chosen as a foundation for the original NTRUSign signature scheme, by instantiating it with a lattice defined over a special class of rings (the NTRU lattice) which allows for a much more compact representation of the underlying lattice, leading to better efficiency and smaller keys.

Breaking the GGH/NTRUSign Signature Scheme

Unfortunately, the GGH signature construction leaks information about the secret basis with every new signature. Indeed, if a secret basis B (represented as a matrix whose rows are the basis vectors) is used to generate GGH signatures, each signature can be mapped to a point in the fundamental parallelepiped defined by the secret basis B, and thus leaks information about it.

Indeed, if m is mapped to the target point \vec{t}, the nearby lattice point found is \vec{v} and the corresponding signature is \vec{\sigma}, then we can rewrite \vec{v} = \lfloor \vec{t}B^{-1}\rceil B = \left(\vec{t}B^{-1} + \vec{e}\right)B = \vec{t} + \vec{e}B, for \vec{e} \in [-1/2,1/2]^n by choice of \vec{v} and the definition of Babai’s rounding algorithm, and hence

\vec{\sigma} = \vec{t} - \vec{v} = -\vec{e}B \in [-1/2,1/2]^nB = \{\vec{x}B: \vec{x} \in [-1/2,1/2]^n \}.

This can be seen graphically in the following figure: each new message-signature pair corresponds to a point in the fundamental parallelepiped defined by the basis used to generate it. The figure plots the \vec{\sigma} = \vec{t} - \vec{v} values obtained from signatures generated using the two bases defined above:
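The same leak can be reproduced numerically. The following minimal simulation (a Python/numpy sketch; uniformly random targets stand in for hashed messages, and the parameters are arbitrary) signs many targets with the short basis from earlier and checks that every resulting \vec{\sigma} lies in the (centered) fundamental parallelepiped of that secret basis:

```python
import numpy as np

rng = np.random.default_rng(42)

B_secret = np.array([[2, -1], [1, 1]])      # the "good" signing basis (rows)

def ggh_sign(t):
    """Round to a nearby lattice point with the secret basis; the signature is t - v."""
    v = np.rint(t @ np.linalg.inv(B_secret)) @ B_secret
    return t - v

# Uniformly random targets stand in for hashed messages.
sigmas = np.array([ggh_sign(rng.uniform(-50, 50, size=2)) for _ in range(1000)])

# Every sigma equals x @ B_secret for some coefficient vector x in [-1/2, 1/2]^2,
# i.e. the signatures fill out the secret basis's centered fundamental parallelepiped.
coeffs = sigmas @ np.linalg.inv(B_secret)
print(coeffs.min(axis=0), coeffs.max(axis=0))   # approximately [-0.5, -0.5] and [0.5, 0.5]
```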

Nguyen and Regev [NR06] showed that this method can be used to recover the secret basis with a few hundred message-signature pairs, by using standard optimization techniques to recover the basis from these points in the fundamental parallelepiped.

Note:
The Nearest Planes Algorithm is very similar to Babai's round-off algorithm for solving CVP_\gamma: it rounds based on the Gram-Schmidt vectors of the basis instead of the basis vectors themselves, and has very similar approximation guarantees. While the GGH construction chose to use Babai's rounding algorithm to solve CVP_\gamma, the Nearest Planes Algorithm could equivalently be used instead.

A Secure Signature Scheme Based on CVP – GPV08 Signatures

Despite the flaws in the GGH construction, the high-level idea of using a short basis as a trapdoor can still be made to work. In 2008, Gentry, Peikert and Vaikuntanathan [GPV08] showed how this lattice trapdoor framework can be adapted to create provably secure signatures.

The fundamental idea is simple: given a set of signatures \vec{\sigma}_m, we wish the distribution of these signatures to leak no information about the trapdoor function used to generate them. In particular, if the distribution of the signatures (over some fixed domain) is independent of the secret values used, no information can be leaked from the signatures using this method. Note that this was not the case with the GGH signature scheme, as the domain of the distribution was closely related to the geometry of the secret values.

Getting this to work in the lattice setting requires a slight generalization of the usual definition of trapdoor functions. A trapdoor permutation f^{-1}(t) = t^d \mod(N), used for instance in RSA FDH, defines a unique inverse for each element of the range. Choosing elements of the range of f uniformly (which could be done for instance by hashing a message to a random element of the range) thus results in a uniform distribution of signatures over the domain of f (or range of f^{-1}), and prevents the leak of any information about the secret exponent d from the distribution of the signatures.

However, if we want to base our lattice trapdoors on the hardness of CVP_\gamma, there are multiple lattice points within a fixed, relatively short distance from the target, and hence each element of the range has multiple possible preimages. One must thus ensure that the distribution over these preimages obtained during the signing process leaks no information about the inversion function. That is, we want to define a trapdoor function f: D\to R that can only be inverted efficiently using some secret data, and such that the domain D and distribution P(D) over the domain obtained by choosing a uniformly random element of the range R (e.g. by hashing a message m to an element of R) and inverting the trapdoor function (computing f^{-1}) are independent of the secret data used to compute f^{-1}.

Thus, the trapdoor inversion function f^{-1} must guarantee that the output is both correct and that it follows the correct distribution, i.e. that if \vec{\sigma}_m = f^{-1}(H(m)), we must have f( \vec{\sigma}_m) = H(m) and that the distribution of all \vec{\sigma}_m values is exactly P(D). This can be formalized using conditional distributions, i.e. by requiring that \vec{\sigma}_m is sampled from the distribution P(D), conditioned on the fact that f( \vec{\sigma}_m) = H(m). This generalized definition was formalized in [GPV08], in which the authors called functions that satisfy these properties “Preimage Sampleable Functions”.

Given such a preimage sampleable (trapdoor) function f and its inverse f^{-1}, one can define a trapdoor signature scheme in the usual way:

Sign(m):

1. Compute \vec{t} = H(m)
2. Output \vec{\sigma}_m = f^{-1}(\vec{t})

Verify(m, \vec{\sigma}_m):

1. Check that \vec{\sigma}_m is in the domain D
2. Check that f( \vec{\sigma}_m) = H(m).

Note that this definition avoids the problems with the GGH signature scheme. Indeed, if the domain D and the distribution of signatures P(D) over the domain are independent of the secret values, signatures chosen from P(D) cannot leak any information about the secret values used to compute them.

Preimage Sampleable Trapdoor Functions from Gaussians

However, it is not immediately clear that such preimage sampleable functions even exist, or how to compute them. In [GPV08], the authors showed that a Gaussian distribution can be used to define a family of preimage sampleable functions, due to some nice properties of Gaussian distributions over lattices.

The basic intuition as to why Gaussian distributions are particularly useful in this case is that a Gaussian of sufficient width overlaid on a lattice can easily be mapped to a uniform distribution over the underlying space, and is thus a great candidate for instantiating a preimage sampleable function. Indeed, consider sampling repeatedly from a Gaussian distribution centered at the origin, and reducing modulo the fundamental parallelepiped. As depicted in the following figure, the distribution that results from this process tends to the uniform distribution (over the fundamental parallelepiped) as the width of the Gaussian increases, and relatively small widths are sufficient to get close to a uniform distribution.

Thus, we can define the distribution P(D) as a (truncated) Gaussian distribution \rho_s(D) of sufficient width s, with the domain D \subset \mathbb{R}^n chosen to be an area that contains all but a negligible fraction of the Gaussian distribution \rho_s(\mathbb{R}^n) (see footnote 3). By the properties of the Gaussian, if \mathcal{P}(\Lambda) is the fundamental parallelepiped defined by the public basis for the lattice \Lambda, the distribution of f(\vec{x}) = \vec{x} \mod \mathcal{P}(\Lambda) will be uniform, for \vec{x} distributed as \rho_s(D).
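This smoothing effect is easy to observe numerically. The following sketch (Python/numpy; the width and sample count are arbitrary choices, and the basis is the example lattice from earlier) reduces continuous Gaussian samples modulo the fundamental parallelepiped and checks that the resulting coefficients look uniform on [0,1)^2:

```python
import numpy as np

rng = np.random.default_rng(7)

B = np.array([[2, -1], [1, 1]])     # basis of the example lattice (rows)
s = 10.0                            # generous Gaussian width, in the exp(-||x||^2/s^2) convention

# Sample from a centered Gaussian and reduce modulo the fundamental parallelepiped:
# keep only the fractional part of the coefficients with respect to B.
x = rng.normal(scale=s / np.sqrt(2), size=(50_000, 2))   # sigma = s / sqrt(2) in this convention
coeffs = (x @ np.linalg.inv(B)) % 1.0                    # x mod P(Lambda), in coefficient form

# A uniform distribution on [0,1) has mean 0.5 and variance 1/12 (about 0.0833).
print(coeffs.mean(axis=0))          # approximately [0.5, 0.5]
print(coeffs.var(axis=0))           # approximately [0.083, 0.083]
```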

To show that f is a preimage sampleable function and use it to instantiate a signature scheme, it remains to find a method to compute f^{-1}(\vec{t}) efficiently (given some secret values), i.e. define a method for sampling \vec{\sigma} from \rho_s(D), conditioned on f(\vec{\sigma})= \vec{t}, for uniformly random targets \vec{t} \in \mathcal{P}(\Lambda). To define this sampling method, note that all vectors \vec{x} with f(\vec{x}) = \vec{x} \mod{\mathcal{P}(\Lambda)} = \vec{t} are exactly the elements of the shifted lattice \vec{t} + \Lambda. Thus, sampling from \rho_s, conditioned on f(\vec{\sigma}) = \vec{t}, is equivalent to sampling from the distribution \rho_s restricted to \vec{t} + \Lambda. In practice, this is done by sampling a lattice vector \vec{v} from the appropriate offset distribution \rho_{s,-\vec{t}\:}(\Lambda) (see footnote 4), and outputting \vec{t} + \vec{v}. Alternatively, we can sample \vec{w} \sim \rho_{s,\vec{t}\:}(\Lambda) and output \vec{t} - \vec{w}, since both the Gaussian distribution and the lattice \Lambda are invariant under reflection.

The final piece of the puzzle is to figure out how to sample from \rho_{s,\vec{t}}(\Lambda) efficiently. At the core of [GPV08] was a new efficient Gaussian sampler over arbitrary lattices, based on the observation that one can sample from the desired Gaussian distribution over a lattice if one knows a “good” quality basis (chosen as the secret trapdoor for this scheme; see footnote 5). The resultant algorithm can be thought of as a randomized version of the nearest planes algorithm, where instead of selecting the nearest plane at each step, one selects a nearby plane, according to the (discrete) Gaussian distribution over the candidate planes. In [Pei10], Peikert showed that Babai's round-off algorithm can be similarly randomized coordinate-by-coordinate, at the cost of an extra perturbation technique to account for the skew introduced by the fact that the basis vectors are generally not an orthogonal set.
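Below is a minimal sketch of such a randomized nearest-planes sampler (Python/numpy; the helper names sample_z and sample_lattice_gaussian are our own, the parameters are toy values, and no attention is paid to the precise width bounds of [GPV08] or to side channels):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_z(c, s):
    """Naive rejection sampler for the discrete Gaussian over Z, centered at c, width s
    (using the rho_{s,c}(x) = exp(-|x - c|^2 / s^2) convention of footnote 4)."""
    lo, hi = int(np.floor(c - 10 * s)), int(np.ceil(c + 10 * s))
    while True:
        z = rng.integers(lo, hi + 1)
        if rng.random() < np.exp(-((z - c) ** 2) / s ** 2):
            return z

def sample_lattice_gaussian(B, t, s):
    """Randomized nearest-planes sampling: returns a lattice point (an integer combination
    of the rows of B) distributed roughly as a discrete Gaussian of width s centered at t,
    provided s is large enough relative to the Gram-Schmidt norms of B."""
    n = B.shape[0]
    # Gram-Schmidt orthogonalization of the rows of B.
    Bstar = np.array(B, dtype=float)
    for i in range(n):
        for j in range(i):
            Bstar[i] -= (Bstar[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
    c = np.array(t, dtype=float)
    w = np.zeros(B.shape[1])
    for i in reversed(range(n)):
        ci = (c @ Bstar[i]) / (Bstar[i] @ Bstar[i])        # coefficient of c along b*_i
        zi = sample_z(ci, s / np.linalg.norm(Bstar[i]))    # randomized choice of a nearby plane
        c -= zi * B[i]
        w += zi * B[i]
    return w

# Example: several samples near the same target, using the short ("good") basis from earlier.
B_good = np.array([[2, -1], [1, 1]])
t = np.array([3.4, 1.7])
print([sample_lattice_gaussian(B_good, t, s=4.0) for _ in range(5)])
# Unlike deterministic rounding, different calls return different nearby lattice points.
```

Replacing sample_z with deterministic rounding of the center would recover the deterministic nearest-planes behaviour; the randomization is what makes the output distribution depend only on the width s rather than on which basis was used, provided s is large enough for that basis.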

The output of this sampling algorithm is effectively chosen at random from the solutions to the CVP_\gamma problem, and since the resultant signatures follow a Gaussian distribution that is independent of the secret basis, no information about the geometry of the secret basis used by the sampler is leaked. This can be formalized by noting that, by definition, we have \vec{t} - \vec{w} \in D, and hence \vec{w} \in \Lambda is at distance at most \|\vec{t} - \vec{w}\| \leq \max_{\vec{x} \in D}\|\vec{x}\| from the target \vec{t}. Choosing a sufficiently small domain D as part of the definition of the signature scheme ensures \vec{w} is a solution to the CVP_\gamma problem with target \vec{t}.

Putting this all together, we can define a CVP_\gamma-based trapdoor signature scheme as follows:

Sign(m):
1. Compute \vec{t} = H(m) to be a uniformly random point in \mathcal{P}(\Lambda)
2. Compute \vec{\sigma}_m = f^{-1}(\vec{t}):
    1. Sample \vec{w} \sim \rho_{s,\vec{t}\:}(\Lambda), using the secret basis for \Lambda.
    2. Set \vec{\sigma}_m = \vec{t} - \vec{w}. Note that \vec{\sigma}_m is distributed as \rho_s(D) conditioned on f(\vec{\sigma}_m) = \vec{t}.
3. Output the signature \vec{\sigma}_m
Verify(m, \vec{\sigma}_m):
1. Check that \vec{\sigma}_m is in the domain D
2. Recompute \vec{t} = H(m), and check that f(\vec{\sigma}_m) = \vec{\sigma}_m \mod{\mathcal{P}(\Lambda)} = \vec{t} (or, equivalently, check that \vec{t} - \vec{\sigma}_m is a lattice vector, since \vec{x} \equiv \vec{0} \mod{\mathcal{P}(\Lambda)} if and only if \vec{x} \in \Lambda).

The authors of [GPV08] used this construction to define an efficient trapdoor signature scheme based on the CVP_\gamma problem in lattices. Their particular instantiation is defined over a family of lattices that allows particularly efficient operations, and thus yields an efficient signature scheme. The following section goes into more detail about their construction, which follows the same high-level approach as covered here. However, it is a little more technical and can be skipped if only the high-level intuition is desired.

As a nice bonus, this lattice-based trapdoor scheme has strong provable security properties. It can be shown that the (average-case) security of this scheme reduces to the worst-case hardness of the well-studied Shortest Independent Vectors Problem (SIVP_\gamma). This is particularly useful, as the worst-case hardness of a problem is often easier to analyze than the average-case hardness (i.e. the hardness of a randomly selected instance, such as when keys are chosen at random).

The secure trapdoor construction of [GPV08] was eventually adapted in the design of the NIST candidate Falcon, which consists of [GPV08] trapdoor signatures instantiated over a special class of compact lattices called the NTRU lattices. Falcon is the smallest of the NIST PQC finalists, when comparing the total size of public key and signature, and has very fast verification. However, the implementation of Falcon has a lot of complexity – in particular, implementing the discrete Gaussian sampler securely and in constant-time is tricky, as the reference implementation uses floating point numbers for this computation.

Details of the [GPV08] Trapdoor Signature Construction

At a high level, the [GPV08] construction instantiates the above approach over a discrete domain and range, in order to allow for more efficient computations. In particular, the construction is defined over the q-ary lattices \Lambda(A) and \Lambda^\perp(A). This is done for two reasons: first, this allows for more efficient membership verification, as checking whether x is in the q-ary lattice \Lambda^\perp(A) only requires checking whether Ax \equiv 0 \mod{q}. Second, restricting ourselves to a discrete range and domain simplifies many computations, and allows us to better formalize what it means for target points to be distributed uniformly over the range, as it is easier to map the outputs of a hash function to a discrete domain than to a continuous one.

However, working with discrete q-ary lattices requires modifying the definitions and distributions to work over this new domain and range. In particular, we define the discrete Gaussian distribution over a lattice as the discrete analogue of the Gaussian distribution, which preserves these nice uniformity properties over the chosen discrete domain in a natural way. Specifically, a smoothing discrete Gaussian distribution can be defined over a superlattice, i.e. a denser lattice \Lambda^\prime such that \Lambda \subseteq \Lambda^\prime. The smoothing discrete Gaussian distribution over the superlattice, D_{\Lambda^\prime}, can then be defined in such a way that it results in a uniform distribution when reduced modulo the fundamental parallelepiped. Note that the range of this mapping, i.e. the set of possible values of \Lambda^\prime \mod \mathcal{P}(\Lambda), corresponds exactly to the set of cosets \Lambda^\prime / \Lambda.

In particular, in the case of q-ary lattices, we can choose \Lambda^\prime = \mathbb{Z}^m and \Lambda = \Lambda^\perp(A). From the definition, we get that a sufficiently wide discrete Gaussian distribution D_{\mathbb{Z}^m} that is smoothing over the lattice \Lambda^\perp will result in a uniform distribution over the set of cosets \mathbb{Z}^m / \Lambda^\perp when reduced modulo the fundamental parallelepiped \mathcal{P}(\Lambda^\perp). Additionally, we can use a correspondence between the set of cosets \mathbb{Z}^m / \Lambda^\perp and the set of syndromes (see footnote 6) U = \{\vec{u}: \vec{u} = A\vec{x} \mod{q} \text{ for some } \vec{x} \in \mathbb{Z}^m\} to show that if we sample \vec{x} \sim D_{\mathbb{Z}^m}, choosing the width so that D_{\mathbb{Z}^m} is smoothing over the lattice \Lambda^\perp, then the distribution of A\vec{x} \mod{q} will be uniform over the set of syndromes U.
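This regularity can also be observed numerically. In the sketch below (Python/numpy; toy parameters, a deliberately generous Gaussian width, and the same naive sample_z helper as in the earlier sketch, repeated so the snippet runs on its own), the coordinates of \vec{x} are drawn from a discrete Gaussian over \mathbb{Z} and the resulting syndromes A\vec{x} \mod q are tallied; the empirical distribution over \mathbb{Z}_q^n comes out close to uniform:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

q, n, m = 5, 2, 6
A = rng.integers(0, q, size=(n, m))            # a random parity-check matrix
s = 10.0                                       # generous width for this toy setting

def sample_z(c, s):
    """Naive rejection sampler for the discrete Gaussian over Z, centered at c, width s."""
    lo, hi = int(np.floor(c - 10 * s)), int(np.ceil(c + 10 * s))
    while True:
        z = rng.integers(lo, hi + 1)
        if rng.random() < np.exp(-((z - c) ** 2) / s ** 2):
            return z

counts = Counter()
for _ in range(10_000):
    x = np.array([sample_z(0, s) for _ in range(m)])
    u = tuple((A @ x) % q)                     # the syndrome of x
    counts[u] += 1

# There are q^n = 25 possible syndromes, so a uniform distribution gives about 400 hits each.
print(len(counts), min(counts.values()), max(counts.values()))
```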

Thus, to instantiate the secure trapdoor construction of [GPV08] with q-ary lattices, messages are mapped uniformly to the set of syndromes U, and signatures \vec{\sigma} are sampled from the (smoothing) distribution D_{\mathbb{Z}^m}, conditioned on A\vec{\sigma} being equal to the syndrome corresponding to a particular message. Finally, we can choose parameters such that we can guarantee that for almost all choices of A, U = \mathbb{Z}_q^n. Thus, messages only need to be mapped uniformly to \mathbb{Z}_q^n, which can be done in a straightforward manner using an appropriately defined hash function.

As before, sampling from the (smoothing) distribution D_{\mathbb{Z}^m}, conditioned on A\vec{\sigma} = \vec{u} can be done by mapping the syndrome back to the corresponding coset \vec{t}, sampling a lattice vector \vec{w} \in \Lambda^\perp from the shifted distribution D_{\Lambda^\perp, \vec{t}} (corresponding to sampling from D_{\mathbb{Z}^m}(\vec{t} + \Lambda^\perp)), and outputting \vec{t} - \vec{w}. Note that the vector \vec{w} is a solution to the CVP problem with lattice \Lambda^\perp and target \vec{t}. Putting everything together, we can choose f(\vec{e})  = A\vec{e} \mod{q}, and instantiate a trapdoor signature scheme as follows:

Sign(m):
1. Choose \vec{u} = H(m) \in \mathbb{Z}_q^n to be a uniformly random syndrome from U = \mathbb{Z}_q^n
2. Compute \vec{\sigma}_m = f^{-1}(\vec{u}):
    1. Choose \vec{t} \in \mathbb{Z}^m, an arbitrary preimage such that f(\vec{t}) = A\vec{t} = \vec{u} (this can be done via standard linear algebra)
    2. Sample \vec{w} \sim D_{\Lambda^\perp, \vec{t}}, the discrete Gaussian distribution over \Lambda^\perp centered at \vec{t}.
    3. Let \vec{\sigma}_m = \vec{t} - \vec{w}. Note that \vec{\sigma}_m is distributed as D_{\mathbb{Z}^m}, conditioned on A\vec{\sigma}_m = \vec{u}.
3. Output the signature \vec{\sigma}_m.
Verify(m, \vec{\sigma}_m):
1. Check whether \vec{\sigma}_m is contained in the domain D (in practice, this amounts to checking whether \|\vec{\sigma}_m\| is sufficiently small).
2. Check whether A\vec{\sigma}_m = H(m) \mod{q}. Note that A\vec{\sigma}_m = A(\vec{t} - \vec{w}) = A\vec{t} - A\vec{w} = \vec{u} - \vec{0} = \vec{u}, by choice of \vec{t} and definition of \Lambda^\perp.
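To make step 2.1 of Sign and the verification identity of step 2 of Verify concrete, the following sketch (Python/numpy; toy parameters, and A deliberately chosen of the form [I_n | A'] so that a preimage can be read off directly rather than computed with modular linear algebra) computes an arbitrary preimage \vec{t} of a syndrome \vec{u} and checks that subtracting any element of \Lambda^\perp(A) leaves the syndrome unchanged. The discrete Gaussian sampling of step 2.2, which requires the secret trapdoor basis, is omitted here:

```python
import numpy as np

rng = np.random.default_rng(9)

q, n, m = 17, 3, 8
# Choosing A = [I_n | A'] makes finding preimages trivial; for a uniformly random A
# one would instead solve A t = u (mod q) with standard modular linear algebra.
A_prime = rng.integers(0, q, size=(n, m - n))
A = np.hstack([np.eye(n, dtype=int), A_prime])

u = rng.integers(0, q, size=n)                 # stands in for the hashed message H(m)

# Step 2.1: an arbitrary preimage t with A t = u (mod q).
t = np.concatenate([u, np.zeros(m - n, dtype=int)])
assert np.array_equal((A @ t) % q, u)

# Any w in Lambda^perp(A) satisfies A w = 0 (mod q), so sigma = t - w has the same
# syndrome as t; in the real scheme, w is produced by the discrete Gaussian sampler.
x = rng.integers(-2, 3, size=m - n)            # stand-in for the sampler's output
w = np.concatenate([(-A_prime @ x) % q, x])
assert np.all((A @ w) % q == 0)

sigma = t - w
assert np.array_equal((A @ sigma) % q, u)      # the verification equation: A sigma = H(m) (mod q)
```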

Conclusion

Lattice-based [GPV08]-style trapdoor signatures are a generalization of the classical hash-and-sign signature paradigm that randomizes the signing process, in order to account for the existence of multiple preimages and to avoid leaking information about the geometry of the secret basis. This approach allows the resultant signatures to be very short, at the cost of some implementation complexity.

While the construction may seem intimidating at first, we hope this write-up has made this lattice-based construction a little more approachable. Stay tuned for the second part of this blog post series, which will describe an alternate construction for lattice-based signatures, based on the Fiat-Shamir paradigm.

Acknowledgements


I’d like to thank Paul Bottinelli, Giacomo Pope and Thomas Pornin for their valuable feedback on earlier drafts of this blog post. Any remaining errors are mine alone.

Footnotes

1: Formally, two bases B_1 and B_2 define the same lattice if and only if there is a unimodular matrix U (a square, integer matrix with determinant \pm 1, or, equivalently, an integer matrix which is invertible over the integers) such that B_1 = UB_2.

2: For an appropriately chosen instance of the CVP_\gamma problem. Concretely, one usually chooses values around n = 512 and \gamma = \widetilde{O}(n) for cryptographic applications.

3: The exact width needed can be formalized using a quantity known as the smoothing parameter, which relates the width of a Gaussian distribution \rho_s to the distance from uniform of the reduced distribution of \rho_s \mod(\Lambda). It can be shown that relatively narrow Gaussians are sufficient to obtain a negligible distance from uniform – for instance, the lattice of integers \mathbb{Z} has a smoothing parameter of \approx 5 for \varepsilon = 2^{-128} distance from uniform. The domain D can simply be defined as all points within a certain distance from the origin, with the distance chosen as a small multiple of the width s, since the exponential decay of the Gaussian function means almost all of the weight is given to points near the origin.

4: The offset distribution \rho_{s, \vec{c}} is simply a Gaussian distribution of width s that is centered at the point \vec{c}, and can be defined as \rho_{s, \vec{c}}(\vec{x})  = e^{-\|\vec{x} - \vec{c}\|^2/s^2}

5: Any basis can be used to implement this sampler, but one can only sample from Gaussian distributions that are sufficiently wider than the longest vector in a known basis. One can thus choose a width such that the Gaussian still maps to the uniform distribution under f, but such that it is infeasible to sample from the Gaussian distribution without knowledge of the secret basis to instantiate the signature scheme.

6: The term syndrome comes from the terminology used for error correcting codes, due to similarities between the q-ary lattices and the error syndrome, which can be used to locate errors in linear codes. Similarly, the matrix A is sometimes called the parity-check matrix for the lattice \Lambda^\perp(A), in analogy to the parity check matrix of linear error correcting codes.

References

[Ajtai96]: M. Ajtai, Generating Hard Instances of Lattice Problems, 1996, https://dl.acm.org/doi/10.1145/237814.237838.

[NR06]: P. Nguyen and O. Regev, Learning a Parallelepiped: Cryptanalysis of GGH and NTRU Signatures, 2006, https://cims.nyu.edu/~regev/papers/gghattack.pdf.

[GPV08]: C. Gentry et al., How to Use a Short Basis: Trapdoors for Hard Lattices and New Cryptographic Constructions, 2008, https://eprint.iacr.org/2007/432.pdf.

[Falcon]: P. Fouque et al., Falcon: Fast-Fourier Lattice-based Compact Signatures over NTRU, 2020, https://falcon-sign.info/falcon.pdf.

[Dilithium]: S. Bai et al., CRYSTALS-Dilithium Algorithm Specifications and Supporting Documentation (Version 3.1), 2021, https://pq-crystals.org/dilithium/data/dilithium-specification-round3-20210208.pdf.

[BR96]: M. Bellare and P. Rogaway, The Exact Security of Digital Signatures – How to Sign with RSA and Rabin, 1996, https://www.cs.ucdavis.edu/~rogaway/papers/exact.pdf.

[GGH]: O. Goldreich et al., Public-Key Cryptosystems from Lattice Reduction Problems, 1997, https://www.wisdom.weizmann.ac.il/~oded/PSX/pkcs.pdf.

[Pei10]: C. Peikert, An Efficient and Parallel Gaussian Sampler for Lattices, 2010, https://eprint.iacr.org/2010/088.pdf.