Editor’s note: since the publication of this blog post, an expanded and more technical discussion of the implementation process has been written, and is available on eprint: https://eprint.iacr.org/2022/1283.
Introduction
Last weekend (July 30th) a truly incredible piece of mathematical/cryptanalysis research was put onto eprint. Wouter Castryck and Thomas Decru of KU Leuven published a paper “An efficient key recovery attack on SIDH (preliminary version)” describing a new attack on the Supersingular Isogeny Diffie-Hellman (SIDH) protocol together with a corresponding proof-of-concept implementation.
SIDH is at the core of the Post-Quantum key encapsulation mechanism SIKE, which was expected to continue to round four of the NIST Post-Quantum Project for consideration of standardisation. The paper says that their proof-of-concept code can break the proposed NIST level 1 parameters (supposedly approximating security on par with AES-128) in an hour of single-core computation, and the strongest parameter set in less than 24 hours.
However, the proof of concept code published has been written using the computer algebra software system Magma. Magma is a very efficient and powerful piece of software, but it is difficult for people to obtain access to. This meant that despite being able to run the attack over a lunch break, most of the community was unable to verify the result at all.
Motivated by a beautiful attack and a love of open-source software, a plan was made to read the attack and implementation and then reimplement it in SageMath; a free, open-source mathematics software system. This was not only a great opportunity to learn exactly how the attack came together, but the effort should also then open up the research to the cryptographic community, who could verify the attack themselves. There’s nothing more convincing than seeing the secret key appear before your very eyes!
This blog post is about the attack, but it’s mainly a story about how the code was reimplemented and the help which was received from collaborators along the way. It’s been a wild week and there’s a lot to learn in more detail, but for those eager to break some isogeny based crypto protocols, the implementation is now available on a public GitHub repository. Thanks to some additional performance enhancements that we’ll talk about along the way, you can break the SIKE NIST level 1 parameter set with your laptop, a fresh download of SageMath and only 10 minutes of your time.
Approximate Running Time | $IKEp217 | SIKEp434 | SIKEp503 | SIKEp610 | SIKEp751
---|---|---|---|---|---
Paper implementation (Magma) | 6 minutes | 62 minutes | 2h19m | 8h15m | 20h37m
Our implementation (SageMath) | 2 minutes | 10 minutes | 15 minutes | 25 minutes | 1-2 hours
The search for quantum-safe cryptography
To understand the importance of the attack, it helps to put it in context. In 2016, NIST announced the Post-Quantum Cryptography Project. The aim was to call on cryptographers to submit algorithms split between two categories: key encapsulation mechanisms (KEMs) and digital signatures. The motivation is that the asymmetric cryptography currently in place — Diffie-Hellman key exchanges using elliptic curves for a KEM and ECDSA/EdDSA for digital signatures — can be efficiently broken by an attacker with access to a sophisticated quantum computer using Shor’s algorithm.
Although the construction of such a quantum computer has not been achieved, history tells us that the uptake of new algorithms is slow (we still see 3DES and MD5 in the wild, for example). So NIST believe the best plan is to act preemptively and to start working on getting new, quantum-safe algorithms out there as soon as possible.
Constructing new cryptographic algorithms is complicated. Furthermore, for asymmetric algorithms, we rely on the existence of some trapdoor function which is easy to perform one way and hard to undo the other. Typically, mathematics is used to create these functions (multiplication/factoring for RSA or exponentiation/discrete logarithms for elliptic curves). These mathematical trapdoors always come with some associated structure. The hope is that we understand the structure enough that we can confidently assume certain problems are hard to solve. In a quantum setting, it is the Abelian group structure of the ring of integers modulo N and the group of points on an elliptic curve which results in the break of RSA and ECC.
The balancing act of structure and cryptographically hard problems is at the heart of why projects such as the NIST PQCrypto Project take so long, with multiple rounds and iterative algorithm design. Cryptographic protocols can be designed and studied for years only to break after one very clever idea. This happened recently when Ward Beullens published Breaking Rainbow Takes a Weekend on a Laptop in June 2022, effectively knocking Rainbow out of the PQC project.
Last month, NIST announced the end of round three of the project and with it, their first selection of algorithms to be standardised for cryptographic applications:
- CRYSTALS-Kyber (KEM).
- CRYSTALS-Dilithium, Falcon, and SPHINCS+ (Digital signatures).
To ensure diversity of trapdoor functions, NIST are starting round four. The hope is to find new KEM algorithms which have different hardness assumptions to Kyber, increasing the chances of having a long-lasting, quantum-safe KEM. A recent blog post by Thomas Pornin discusses in more detail the round three selections and a history of the NIST PQC project.
SIKE: Supersingular Isogeny Key Exchange
One of the candidates selected for round four is SIKE (Supersingular Isogeny Key Encapsulation), an isogeny based KEM which uses SIDH to perform the key exchange. This blog post won’t be a precise discussion of isogeny based cryptography, but for those who are interested here are some links to click through for a great first introduction:
- Supersingular isogeny key exchange for beginners by Craig Costello.
- Mathematics of Isogeny Based Cryptography by Luca De Feo.
- Are Isogenies for Real? [Video] by Luca De Feo.
To give some intuition though, we give an inaccurate but morally correct overview of what’s happening by first making a stop past something more familiar.
In an elliptic curve key exchange, a shared secret is found in the following way. Alice and Bob both start with a fixed point G and, using a secret number, they "move" from this point to their new points A and B, which are made public. They send these to each other and then Alice (Bob) moves from B (A) as they did before, using the same secret number on the new point. By doing this, they end up at the same "place", a point S, and this is used to derive a key for the rest of their communication.
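As a tiny illustration, here is a sketch of this exchange in SageMath. The curve is Curve25519 written via its Weierstrass coefficients and the secret sizes are toy values chosen only for the example:

# A minimal elliptic curve Diffie-Hellman sketch in SageMath
p = 2^255 - 19
E = EllipticCurve(GF(p), [0, 486662, 0, 1, 0])   # Curve25519's equation y^2 = x^3 + 486662x^2 + x
G = E.lift_x(9)                                  # the standard base point has x = 9
a = randint(2, 2^128)                            # Alice's secret number
b = randint(2, 2^128)                            # Bob's secret number
A = a*G                                          # Alice "moves" from G to A and publishes it
B = b*G                                          # Bob "moves" from G to B and publishes it
S_alice = a*B                                    # Alice repeats her move from Bob's point
S_bob = b*A                                      # Bob repeats his move from Alice's point
assert S_alice == S_bob                          # both arrive at the same shared point S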
In SIDH a very similar thing happens. Alice and Bob both start from the same place, but now instead of the start being a point on a curve, it is an elliptic curve itself. For reasons that aren’t necessary when so many other details are missing, not any old curve will do here. A special type of curve is used, which mathematicians know as a supersingular elliptic curve.
Alice and Bob then "move" from a public starting curve to some new curve, which will be part of the public data. This "movement" between curves is performed by creating a secret isogeny, which is a clever map which takes Alice from one curve to some other supersingular curve (while also preserving the group structure of the curve). The isogeny can be generated efficiently because of the clever parameters SIDH uses, and for this post it's enough to know that Alice creates her secret isogeny by generating a secret integer. This integer is mixed with some fixed elliptic curve points which are defined by the SIKE parameters, and the resulting secret point is what is used to generate the secret isogeny. The takeaway is: if an attacker can recover this secret integer, the whole protocol is broken.
To perform a key exchange, Alice and Bob both generate random numbers and use these to create secret isogenies. They use these to move to some new curves E_A and E_B and they share these curves with each other. The isogeny path problem is that given two elliptic curves, it is generally very hard to determine the isogeny which links them. If you want a visual picture, the isogenies linking supersingular curves make a very messy graph and it's easy to get lost. This is similar in feeling to how, given two points in an elliptic curve key exchange, the discrete log problem is that it's assumed to be hard to recover the integer which relates them.
In SIDH things aren’t quite as simple as the elliptic curve example. Given each other’s public curves, if Alice and Bob both naively use their isogenies again to try and move to the same place, they do not end up on a shared curve. All is not lost though, SIDH fixes this by including additional information in the exchange. Not only does Alice (Bob) send Bob (Alice) their public curve, they also use their isogeny and use it to map a pair of public points from the starting curve to their new curve. These extra points are known as the torsion, or auxiliary points. Sending a package of the mapped curve with the pair of mapped points is enough to ensure Alice and Bob end up on a shared secret curve (technically, up to isomorphism, but if this doesn’t make sense, forget you read it) and this can be used to derive keys.
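To make this concrete, here is a toy SIDH key exchange in SageMath with tiny, insecure parameters (p = 2^4 * 3^3 - 1 = 431 and the supersingular curve y^2 = x^3 + x). This is a sketch of the protocol flow using SageMath built-ins (including the factored-isogeny feature mentioned later in this post), not SIKE's actual parameters or implementation:

# A toy SIDH key exchange in SageMath (tiny, insecure parameters)
a, b = 4, 3
p = 2^a * 3^b - 1                      # p = 431
x = polygen(GF(p))
Fp2.<i> = GF(p^2, modulus=x^2 + 1)
E0 = EllipticCurve(Fp2, [1, 0])        # supersingular starting curve y^2 = x^3 + x
assert E0.is_supersingular()

def torsion_basis(E, n):
    # scale generators of E(Fp2) down to a basis of the n-torsion
    cofactor = (p + 1) // n
    P, Q = (cofactor * G for G in E.gens())
    assert P.order() == n and Q.order() == n
    return P, Q

P2, Q2 = torsion_basis(E0, 2^a)        # public 2^a-torsion basis
P3, Q3 = torsion_basis(E0, 3^b)        # public 3^b-torsion basis

# Alice: secret integer -> secret kernel point -> secret 2^a-isogeny,
# plus the images of Bob's torsion basis (the auxiliary points)
skA = randint(0, 2^a - 1)
phiA = E0.isogeny(P2 + skA*Q2, algorithm="factored")
EA, phiA_P3, phiA_Q3 = phiA.codomain(), phiA(P3), phiA(Q3)

# Bob does the same with a secret 3^b-isogeny
skB = randint(0, 3^b - 1)
phiB = E0.isogeny(P3 + skB*Q3, algorithm="factored")
EB, phiB_P2, phiB_Q2 = phiB.codomain(), phiB(P2), phiB(Q2)

# Each party repeats their walk from the other's public curve using the mapped points
EBA = EB.isogeny(phiB_P2 + skA*phiB_Q2, algorithm="factored").codomain()
EAB = EA.isogeny(phiA_P3 + skB*phiA_Q3, algorithm="factored").codomain()

# The shared secret is the final curve, up to isomorphism (equal j-invariants)
assert EBA.j_invariant() == EAB.j_invariant()

Recovering skB given only the public data E0, EB and the mapped points is exactly what the Castryck-Decru attack manages to do.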
SIKE builds on the SIDH protocol with fixed parameters and key encapsulation. For the purposes of this attack, breaking SIDH also breaks all parameter sets of SIKE.
Since SIDH was first proposed, the inclusion of additional information by sending the image of the torsion points has worried researchers. The concern was that the isogeny path finding problem could remain hard while the potentially easier problem, known as the Supersingular Decision Diffie-Hellman problem, could be broken through some information leaked out by how the secret isogeny acts on these auxiliary points.
This is the problem which Wouter Castryck and Thomas Decru have shown is easy! It turns out, the structure which is currently used in SIDH to make a sensible key-exchange mechanism leaked too much information about secret values. Through some genius mathematics and a deep understanding of the protocol, the Castryck-Decru attack recovers Bob’s secret isogeny in polynomial time.
This blog post is a celebration of this attack, and to talk about it, we talk about its implementation. The proof-of-concept code that Castryck and Decru shared with their paper was written to run in a special computer algebra software package called Magma. So, what’s Magma?
Computer algebra software
Cryptographic attacks which rely on advanced mathematics are often written using specialised mathematical software. For cases when the code needs to be hyper-optimised, the code is then usually translated to a more performant language after a proof of concept is developed. However, more often than not, these pieces of software are advanced enough to do what the researchers need. Let’s review two of the most commonly used software systems.
- Magma is a computer algebra software package maintained by the University of Sydney. It is known for having expansive coverage, with efficient implementations of computational algorithms from algebra, number theory and algebraic geometry. It is also closed source, expensive and only available to people associated with institutions who maintain licence distribution.
- SageMath is a free, open-source mathematics software system licensed under the GPL. Its mission is to “Create a viable free open source alternative to Magma, Maple, Mathematica and Matlab.” SageMath is built on top of Python, but many algorithms come from other open-source packages and are accessed through wrappers and interfaces, allowing high performance computation.
Because of the barriers to getting hold of Magma, many people active in the cryptographic research community don’t have access. However, if you are interested and want to run snippets of code, the Magma calculator allows cloud based computations (albeit with a two-minute run time limit).
In contrast, everyone with a computer has access to SageMath. Personally, I have used it extensively to learn about cryptography, build and deliver cryptography challenges for CryptoHack, and even occasionally to implement maths papers for fun! It’s an incredible piece of software.
Overview of the attack
Another disclaimer before starting this section: this blog post does not aim to give a comprehensive discussion of how Castryck and Decru have broken SIDH. The mathematics is very advanced, and requires a deep understanding of how SIDH works, as well as the more esoteric theory of abelian surfaces and Richelot isogenies.
The hope is only to give enough context that the rest of the post is motivated and enjoyable to read. So before starting, here are some great resources from the community discussing the result which the interested reader can browse through:
- The paper itself! It’s a nicely written paper and if you know a little about isogenies already there’s a lot to learn from this document.
- Breaking supersingular isogeny Diffie-Hellman (SIDH) by Steven Galbraith gives a great FAQ on the attack, with clear answers.
- A detailed and mathematical Twitter thread by Peter Kutas.
- A geometry first discussion of the attack by Rémy Oudompheng.
- An illustration of what “gluing” elliptic curves via a genus 2 correspondence could look like by Rémy Oudompheng.
- Explicit Endomorphisms and Correspondences: the final sections of Benjamin Smith's PhD thesis cover Richelot correspondences and (2,2)-isogenies.
Attacking the structure of SIDH
Note: if you're happy just accepting that there's some clever mathematics which makes this attack work, you can skip the next two sections!
To be successful, the attack uses several properties of the SIDH protocol and SIKE parameters. Whether all of these conditions are necessary for the attack to work is part of ongoing research, but to set the scene, let's look at what is used.
The public key contains the image of the torsion points
- These mapped points are totally vital for the attack. They are also totally vital for SIDH to be a sensible key exchange protocol, so in its current form, SIDH cannot avoid this part of the attack.
- Knowing how the secret isogeny acts on the torsion points has been worrying for a long time. In Improved torsion-point attacks on SIDH variants, the authors of the paper heavily reduced the parameter space for SIDH, but the attack did not extend to the SIKE parameters.
- In section 8.1 of the paper, arbitrary torsion points are discussed. It is thought that changing the form of the prime (and hence changing the torsion points) should inherently change nothing about the attack.
The secret isogenies have a fixed degree
- Alice computes an isogeny of degree 2^a and Bob computes an isogeny of degree 3^b. This is the core of how the algorithm works, but it is also core to how the attack works.
The above two properties are the most important and are also special to SIDH. For this reason, people believe the attack cannot be generalised to other isogeny based schemes such as CSIDH or SQISign, which do not have isogenies of a fixed degree or additional torsion points.
The starting curve has a known endomorphism ring
- Generating supersingular curves randomly is hard. Of the roughly p^2 possible elliptic curves, only about p/12 are supersingular. For a cryptographically large p, finding a supersingular curve by guessing is not reasonable (a quick SageMath check of such a special curve appears after this list).
- Luckily, we have a way to write down a special supersingular curve for a given prime (which comes from the theory of complex multiplication). However, this method also gives us more structure: we learn the endomorphism ring of the curve.
- One could use one of these curves, then generate an isogeny to walk to some random curve. However, knowing the starting curve and the isogeny used to get to the new curve also leaks the endomorphism ring of the new curve.
- In sections 8.2 and 8.3, Castryck and Decru outline that the attack should weaken the security of SIKE parameters, even when the endomorphism ring of the starting curve is unknown.
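Here is that quick check, using the baby SIDH prime that appears later in this post. For any prime p ≡ 3 (mod 4), the curve y^2 = x^3 + x is the classic example of a supersingular curve we can simply write down:

p = 2^33 * 3^19 - 1                   # the baby SIDH prime used later; p ≡ 3 (mod 4)
x = polygen(GF(p))
F.<i> = GF(p^2, modulus=x^2 + 1)
E = EllipticCurve(F, [1, 0])          # y^2 = x^3 + x, the classic CM curve with j = 1728
assert E.is_supersingular()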
The Glue-and-Split oracle
From a high level, Castryck and Decru’s attack recovers the secret integer (in base 3) which is used to generate the secret isogeny; it does not directly compute the secret isogeny itself. The algorithm works by taking a step along the unknown path and asking the oracle if the step was correct. Depending on the return value, a new step can be guessed, or it can continue down the path to discover the next secret digit.
Walking down Bob's secret path, there are only three possible directions to take after each step. This means that for each step that is taken, at most two calls to the oracle are needed. This is what makes the attack so efficient. Every step (except for the first few, depending on the parameter choices) can be validated one by one and the secret integer is recovered digit by digit.
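The overall shape of the recovery loop is simple. Here is a sketch in SageMath/Python with the Glue-and-Split oracle replaced by a hypothetical stand-in that just checks a guessed prefix against a known key, purely to show the at-most-two-calls-per-digit logic:

def recover_ternary_key(num_digits, oracle):
    digits = []
    for _ in range(num_digits):
        for guess in (0, 1):              # at most two oracle calls per digit...
            if oracle(digits + [guess]):
                digits.append(guess)
                break
        else:
            digits.append(2)              # ...since the third option is implied for free
    return digits

# hypothetical stand-in for the real oracle: "is this prefix of Bob's key correct?"
secret = [1, 0, 2, 2, 1, 0, 1, 2]
oracle = lambda prefix: prefix == secret[:len(prefix)]
assert recover_ternary_key(len(secret), oracle) == secret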
The genius of the attack was finding a method to validate whether the step taken is on the right path. As the constructed oracle only requires public data, the SIDH protocol as currently implemented is totally broken. Due to the efficiency of the attack, the common defense of increasing the bit-size of the parameter space is not suitable.
The oracle begins with the collected public data. A cleverly constructed isogeny allows the creation of a new curve C from the starting curve E_start. Very loosely, the oracle takes these two curves and makes a new object from their product, which can be seen as a higher-dimensional abelian surface. The Glue-and-Split oracle then takes a pair of points on each of these curves and represents them as points on this higher-dimensional object (these are points on the Jacobian of a hyperelliptic curve).
This hyperelliptic curve and pair of new points are mapped through a chain of isogenies (known as Richelot isogenies). At the end of this chain, if the hyperelliptic curve can be decomposed back into a product of elliptic curves, then the correct digit must have been guessed. The reason this all works is because of a theorem by Kani (1997) and the ability to construct the auxiliary isogeny from E_start to C (which in the current implementation abuses the known endomorphism ring of the starting curve).
Implementing the attack
The following discussion is a fairly informal write-up of the 24-hour period starting from an empty repository and ending with an efficient implementation of the attack. The hope is that this not only helps to give a good review of the pieces that come together for the attack to work, but also gives an impression of the problems which arise when implementing mathematical algorithms (and other issues introduced by rushing fingers a little too excited to type precisely).
Learning Magma: or how I came to love syntax bugs
The first step of converting Magma to SageMath was understanding how to translate the syntax. Some changes, like variable declaration with a := 1; rather than a = 1, were simple to fix up.
Additionally, many of the higher-level mathematical objects such as EllipticCurve() or PolynomialRing() had almost identical representations. For anything I didn't recognise, it was usually enough to find the function in the Magma Documentation, read the expected behaviour and find the relevant function in the SageMath Documentation. In some cases Magma had support for structures which SageMath didn't perfectly mirror.
One example of this was that Magma can work with multivariate function fields:
// magma
Uff:= FunctionField(Fp2, 4);
However, when trying to define this in SageMath, it was found that only univariate function fields could be constructed directly. The workaround for this was found in the community support forum where it was explained that you could create a suitable object by first defining a multivariate polynomial ring and then creating the fraction field from it:
# SageMath
Uff_poly.<x1, x2, x3, x4> = PolynomialRing(Fp2, 4)
Uff = Uff_poly.fraction_field()
Mathematics aside, the difference which caused the most bugs during conversion was very simple. Magma accesses elements in arrays using 1-indexing, and when looping through a range, it is inclusive of the upper bound. In contrast, SageMath is 0-indexed and does not include the upper bound. In this sense, Magma behaves in the "old style" similar to Fortran or Pascal, whereas (via Python) SageMath follows the 0-index convention started with C (or rather its predecessor B).
As an example: printing out integers from an array in both languages would be achieved as:
// Magma
my_array := [2,3,5,7,11];
for i in [1..5] do
    print my_array[i];
end for;
// output: 2,3,5,7,11
# SageMath
my_array = [2,3,5,7,11]
for i in range(0,5):
    print(my_array[i])
# output: 2,3,5,7,11
This meant that careless copy-pasting and tidying could easily introduce off-by-one errors throughout the code. This is exactly what happened, and correcting these syntax typos continued right up until the code worked!
Getting organised
The first goal was to reimplement the SIKE_challenge.m file, an implementation of the attack which was said to have solved Microsoft's $IKEp217 challenge (this was announced last year, with a cash bounty of $50,000 for the first team to crack it). The prime used has only half the bits of the NIST level 1 parameters (SIKEp434) and the attack supposedly ran in approximately 5 minutes using the Magma script (too long for the free Magma calculator, sadly…). As such, it was the perfect place to start.
The work to reimplement the attack was split between fairly easy but busy work translating SIKE_challenge.m into valid SageMath, and more careful and mathematical work reimplementing the functions in the helper file richelot_aux.m. If this attack worked, it would then be a case of changing a handful of lines for the attack on SIKEp434, which was said to take about one hour to complete when running the Magma files.
Opening up richelot_aux.m, the first thing to do was to read through the functions and get an idea of the work ahead:
- Does22ChainSplit()
  - The main oracle, which given the necessary curves and points returns True when the correct digit is guessed.
  - First thought: not too hard.
- FromProdToJac() and FromJacToJac()
  - Helper functions for Does22ChainSplit() which take us from points on an elliptic curve to points on the Jacobian of a hyperelliptic curve and then perform the Richelot isogenies.
  - First thought: very long, with a couple of scary lines which SageMath might have trouble with.
- Pushing3Chain()
  - Computes a chain of isogenies given a curve, quotienting out a point of order 3^i.
  - First thought: easy. Very similar to code I've written before.
  - In fact, thanks to a recent update to SageMath by Lorenz Panny, this could probably be swapped out for E.isogeny(K, algorithm="factored"). However, to align the code with the PoC, it was decided to reimplement the function as it appeared in the Magma code.
- Pushing9Chain() and OddCyclicSumOfSquares()
  - Obsolete code, which could simply be ignored.
  - OddCyclicSumOfSquares() is almost certainly the code which was used to precompute the values u,v in uvtable.m. As there's no need to recompute this array, the function is not needed.
As the function is short, here's the Magma, then SageMath, version of Pushing3Chain(). This is a fair representation of how similar code written in Magma and SageMath is:
// Magma
function Pushing3Chain(E, P, i)
    // compute chain of isogenies quotienting out a point P of order 3^i
    Fp2 := BaseField(E);
    R<x> := PolynomialRing(Fp2);
    chain := [];
    C := E;
    remainingker := P;
    for j in [1..i] do
        kerpol := x - (3^(i-j)*remainingker)[1];
        C, comp := IsogenyFromKernel(C, kerpol);
        remainingker := comp(remainingker);
        chain cat:= [comp];
    end for;
    return C, chain;
end function;
# SageMath
def Pushing3Chain(E, P, i):
    # Compute chain of isogenies quotienting out a point P of order 3^i
    Fp2 = E.base()
    R.<x> = PolynomialRing(Fp2)
    chain = []
    C = E
    remainingker = P
    for j in range(1, i+1):
        kerpol = x - (3^(i-j)*remainingker)[0]
        comp = EllipticCurveIsogeny(C, kerpol)
        C = comp.codomain()
        remainingker = comp(remainingker)
        chain.append(comp)
    return C, chain
Aside from worrying about the helper functions, Does22ChainSplit() was just as simple to reimplement. SIKE_challenge.m itself was about 300 lines of syntax changes (switching out loops, populating arrays with integers). There was a bit of work composing some isogenies, computing Weil pairings and doing some elliptic curve arithmetic, but thanks to previous experience in writing similar code, the conversion went fairly smoothly.
Two functions to go, this was going to be done by lunch!
Warning: falling back to very slow toy implementation
The first major difficulty came while reimplementing FromProdToJac(). At a high level, this function takes a pair of points on one elliptic curve and a pair of points on a second elliptic curve and computes the image of these points on the Jacobian of a hyperelliptic curve. Hmm, ok, maybe that's not such a high level.
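To get a feel for the kind of object being targeted, here is a toy SageMath example of a genus-2 hyperelliptic curve, its Jacobian and the group law on it. The curve is chosen purely for illustration and has nothing to do with the one built by the attack:

R.<x> = PolynomialRing(GF(431))
H = HyperellipticCurve(x^5 - x)        # a genus-2 curve y^2 = x^5 - x
J = H.jacobian()(GF(431))              # its Jacobian, a group we can compute in
D1 = J(H(0, 0))                        # the divisor class of a point on the curve
D2 = J(H(1, 0))
D3 = D1 + D2                           # addition of divisor classes (Mumford representation)
print(D3)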
Brushing aside what it does, let’s talk about how it tries to do this.
First, five multivariate equations in four variables are defined. Although the lines which do this look dense, the similarity between Magma and SageMath meant not much work was needed at all. The goal is to find a solution to all five equations, which can then be used to construct the necessary points on the Jacobian of the target hyperelliptic curve. Details of this process are described in section 6.1 of the paper.
The standard method to solve systems of equations like this is to first build a Gröbner basis from the equations. Magma comes with GrobnerBasis(), which is very efficient and works with a wide range of polynomial rings. The following code snippet doesn't obviously use GrobnerBasis(); instead, a scheme is created from an affine space and the set of equations. Calling Points(V) on the scheme finds the set of points, which are equivalently the set of solutions to the polynomials! Points(V) does this by (in part) calling GrobnerBasis() under the hood.
A4 := AffineSpace(Fp2, 4);
V := Scheme(A4, [eq1, eq2, eq3, eq4, eq5]);
// point with zero coordinates probably correspond to "extra" solutions,
// we should be left with 4 sols (code may fail over small fields)
realsols := [];
for D in Points(V) do
    Dseq := Eltseq(D);
    if not 0 in Dseq then
        realsols cat:= [Dseq];
    end if;
end for;
Rewriting this in SageMath, we get something that looks very similar:
A4.<x1, x2, x3, x4> = AffineSpace(Fp2, 4)
V = A4.subscheme([eq1, eq2, eq3, eq4, eq5])
# point with zero coordinates probably correspond to "extra" solutions,
# we should be left with 4 sols (code may fail over small fields)
realsols = []
for D in V.rational_points():
    Dseq = list(D)
    if not 0 in Dseq:
        realsols.append(Dseq)
Again, like Magma, this calls grobner_basis() under the hood to find the set of points. However, running this code, we get the following message from SageMath:
verbose 0 (3848: multi_polynomial_ideal.py, groebner_basis) Warning: falling back to very slow toy implementation.
Uh oh… Just how slow is very slow? When running the attack, FromProdToJac() would be called for each oracle request. This meant it would be called a few hundred times for the easiest $IKEp217 challenge and an order of magnitude more for the hardest parameter set.
To see how slow very slow was, the code was left running while some fresh coffee was brewed. Coming back to the terminal, a second warning was now showing:
verbose 0 (3848: multi_polynomial_ideal.py, groebner_basis) Warning: falling back to very slow toy implementation.
verbose 0 (1081: multi_polynomial_ideal.py, dimension) Warning: falling back to very slow toy implementation.
Okay, so technically this is progress, but considering the Magma file was totally finished within five minutes, we would need a smarter way to solve this problem if there was any hope to have this script recover the secret key.
When in doubt, make things easier
The usual method when solving problems like this using SageMath is to go crawling through the documentation. This wasn’t the first time a problem like this had come up while implementing algorithms and so the hope was some new ideas would start jumping out if enough documentation was read through.
Years of research experience quickly suggested that the first thing to do was to simply reduce the complexity of the problem. The file SIKE_challenge.sage was rewritten as the new baby_SIDH.sage, which shared a very similar structure, but now with a much smaller, 64-bit prime. The hope was to find something which worked reasonably on this smaller problem and then worry later about making it more efficient after it was confirmed that the attack worked.
To create baby SIDH SIKEp64, first a prime of the form p = 2^a * 3^b - 1 was found such that p is approximately 64 bits:
# Baby SIKEp64 parameters
a = 33
b = 19
p = 2^a*3^b - 1
Then reusing some old code from other isogeny projects, fresh public torsion points were generated as well:
def get_l_torsion_basis(E, l):
    n = (p+1) // l
    return (n*G for G in E.gens())

P2, Q2 = get_l_torsion_basis(E_start, 2^a)
P3, Q3 = get_l_torsion_basis(E_start, 3^b)

# Make sure Torsion points are
# generated correctly
assert 2^(a-1)*P2 != infty
assert 3^(b-1)*P3 != infty
assert P2.weil_pairing(Q2, 2^a)^(2^(a-1)) != 1
assert P3.weil_pairing(Q3, 3^b)^(3^(b-1)) != 1
Using the baby parameters with a SIDH key generation, the public data could be pushed back into the attack and…
verbose 0 (3848: multi_polynomial_ideal.py, groebner_basis) Warning: falling back to very slow toy implementation.
verbose 0 (1081: multi_polynomial_ideal.py, dimension) Warning: falling back to very slow toy implementation.
After letting this run for about 30 minutes the program was exited. It was obvious that this method was the wrong avenue for the SageMath implementation. Back to the drawing board.
The things you can do with friends by your SIDH
While fishing around with more and more specific searches such as "sagemath fast groebner basis multivariate polynomial ring" (the page that seems like it should contain the solution, but nothing ever quite worked), I additionally asked my CryptoHack friends for advice and made a tweet explaining the problem.
SageMath is used by a lot of people in the crypto community, and often people find clever tricks when solving puzzles and CTF challenges which come in handy in times like this. The hope was that someone had solved a similar problem before and could point toward the correct way to construct the solution.
Pretty quickly after reaching out to people, some really cool suggestions were offered. The power of the internet!
- Lorenz Panny suggested to Weil restrict the equations by introducing an extra variable and move from working in GF(p^2) to GF(p).
  - This meant the toy implementation of Gröbner was not needed anymore as the polynomial ring would be defined over a simpler field, but now the system of equations was so complicated that the code was still too slow.
- Tony Shaska suggested that instead of computing the Gröbner bases, a faster method could be to use resultants to remove variables from the equations one at a time, then solve the resulting univariate equation and work backwards (a toy sketch of this idea appears after this list).
  - This is a clever idea, but SageMath was still too inefficient. The only method available to compute resultants for this polynomial ring was via the determinant of the Sylvester matrix, and this is very slow.
- Bryan Gillespie suggested to try and use the Macaulay2 interface to compute the Gröbner basis. This doesn't come with SageMath by default, but it is free and open-source and can be included pretty easily. It's also known for being pretty fast.
  - This might work, but the current SageMath interface doesn't allow for sending extension fields to Macaulay2 (it's not even certain from the documentation that Macaulay2 can do this, but the SageMath interface certainly can't). For this to have a chance at working, one would first have to rewrite the interface.
The solution came from Rémy Oudompheng, who saw a way to avoid the problem altogether:
> Are you trying to lift a pair (P, Pc) to the Jacobian? I wonder if it's easier to lift (P, 0) to a divisor on H, lift (0, Pc) to a divisor and add them? I may be confused but it feels like it gives the answer without solving any equation
Rémy joined me in the CryptoHack Discord and we started chatting more about how this could solve the problem. His novel solution to the lifting described in section 6.1 seemed to be working, and what's more, the same ideas would carry over to the FromJacToJac() function. Only days after the initial attack, it was wonderful seeing new perspectives on how to efficiently solve the problem.
This is a beautiful result and I’m really happy to have had the time working with Rémy on this. I never would have had the above insight to dodge the slow toy implementation and it gave me an opportunity to learn more about hyperelliptic curves.
Piecing it all together
While Rémy worked on his novel implementation of FromProdToJac() and FromJacToJac(), the remaining work was to go through the rest of the Magma code and convert it to valid SageMath. With these last two functions finished and the rest of the attack all scripted, it was time to see whether the algorithm could recover Bob's private key given only public data generated from the baby SIDH parameters.
Before the gratification of a successful run, as is usual with late night coding, some additional off by one errors were introduced and then removed while Rémy pushed the new hyperelliptic lifting and Richelot isogeny code. We both ran our script, which failed dramatically in the last few lines when constructing the private key thanks to more syntax errors:
key = sum(skB[i]*3^(i-1) for i in range(1..b-2))
TypeError: 'generator' object cannot be interpreted as an integer
Off by one, and a remaining .. in the range!! Fixing this to what should have been written all along:
# Magma
# key := +[skB[i]*3^(i-1) : i in [1..b-3]];
# SageMath
key = sum(skB[i]*3^i for i in range(b-3))
the following output appeared in the terminal:
Bridging last gap took: 0.1307520866394043
Bob's secret key revealed as: 15002860
In ternary, this is: [1, 1, 1, 1, 0, 0, 0, 2, 0, 0, 2, 0, 1, 0, 0, 1]
Altogether this took 43.73990249633789 seconds.
It worked!! The attack successfully recovered Bob’s private key in less than a minute, all thanks to some brilliant mathematics. It was so exciting to see Wouter Castryck and Thomas Decru’s attack run in real time on a laptop.
However, a 64-bit prime wasn’t close to being secure from previously known attacks. So, with confidence that the code was correct, the next test was to see whether the implementation was efficient enough to recover private keys on serious SIDH instances. Could it keep up with the Magma implementation?
Monkeying around with cache optimisation
Running the same attack on the SIKE challenge, the code worked but it was incredibly slow. Something along the way became inefficient as the prime grew, so the new task was to try and identify the sluggish code and clean it up. Profiling the script, most of the run time was spent in FromJacToJac(). This isn't surprising, as it's run approximately 100 times for each oracle call, so it is expected to dominate the profile. However, the recorded slow-down from the baby SIDH parameters seemed much more significant than one would expect from approximately tripling the bit-size of the prime.
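For reference, the profiling itself is nothing exotic; Python's standard profiler works fine on Sage code. In the sketch below, attack() is a hypothetical placeholder standing in for whatever top-level function drives the key recovery:

import cProfile

def attack():
    # placeholder workload standing in for the real key recovery routine
    return sum(n^2 for n in range(10^5))

pr = cProfile.Profile()
pr.enable()
attack()
pr.disable()
pr.print_stats(sort="cumtime")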
With some further analysis, the cause of the dramatic slowdown was identified. The root of the problem was a SageMath performance issue: rather than caching FiniteField objects, they are reconstructed every time they are called. The function FromJacToJac() is particularly affected due to the heavy use of the group law for the Jacobian of a hyperelliptic curve.
When testing equality of points, the code invokes GF(p^k)(...) for all coefficients. The constructor of the FiniteField includes a primality test of p for every call. As this is called on every coefficient of every point when performing arithmetic operations, we're constructing objects and performing primality tests thousands of times. The larger the prime, the more expensive this construction becomes.
Rémy decided to fix this issue by patching SageMath itself, modifying sage.categories.fields so that the vector space is cached:
from sage.misc.cachefunc import cached_method

@cached_method
def vector_space(self, *args, **kwds):
    ...
This ensures that each distinct vector space is constructed only once.
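As a tiny standalone illustration of what cached_method does (a toy class, not the Sage internals):

from sage.misc.cachefunc import cached_method

class Demo:
    @cached_method
    def square(self, n):
        print("computing...")
        return n^2

d = Demo()
d.square(10)    # prints "computing..." and returns 100
d.square(10)    # returns the cached 100 without recomputing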
With this fix, the implementation broke the $IKEp217 challenge in only 15 minutes. Not bad when compared to the purported Magma time of approximately 5 minutes.
Bridging last gap took: 6.489821910858154
Bob's secret key revealed as: 5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx2
In ternary, this is: [0, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, 1]
Altogether this took 795.5218527317047 seconds.
sage SIKE_challenge.sage 785.42s user 22.78s system 101% cpu 13:19.19 total
Overnight, Rémy also ran the attack on the SIKEp434 parameter set. The secret key was recovered in only an hour and a half, an amazing result considering the Magma implementation took approximately one hour!
Bridging last gap took: 14.06521987915039
Bob's secret key revealed as: 107365402940497059258054462948684901858655170389077481076399249199
In ternary, this is: [2, 1, 2, 1, 1, 0, 1, 2, 0, 2, 1, 0, 1, 2, 2, 2, 1, 0, 2, 1, 0, 2, 1, 2, 2, 2, 1, 1, 1, 0, 0, 2, 2, 0, 1, 1, 2, 2, 2, 0, 2, 1, 0, 1, 0, 0, 0, 0, 0, 0, 2, 2, 0, 2, 0, 0, 2, 1, 1, 1, 0, 0, 2, 1, 2, 1, 0, 2, 1, 2, 1, 0, 1, 1, 0, 2, 1, 0, 2, 1, 0, 0, 1, 1, 0, 0, 2, 2, 2, 0, 2, 2, 0, 1, 1, 1, 0, 0, 0, 2, 1, 0, 2, 0, 1, 0, 1, 0, 1, 2, 1, 2, 2, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0, 0, 0, 1, 1, 2, 0, 1, 1, 1, 0, 1, 1]
Altogether this took 5233.838165044785 seconds.
The next morning, inspired by these results, the goal was to find a way to have this same performance without directly patching SageMath. The motivation for this reimplementation was to allow people to run the code themselves and it was important to make this as easy as possible.
A gentler fix was to set the flag proof.arithmetic(False) in the code. This globally tells SageMath to use (among many other things) a much faster, probabilistic primality test. We're not worried about the false positives this could (very rarely) introduce, as we are working with a known, fixed prime. As an example of how dramatic this speed up is, a primality test of a 1024-bit integer is more than 1000 times as fast:
sage: p = random_prime(2^1024)
sage: time is_prime(p)
CPU times: user 2.83 s, sys: 13.4 ms, total: 2.85 s
Wall time: 2.86 s
True
sage: proof.arithmetic(False)
sage: time is_prime(p)
CPU times: user 2.1 ms, sys: 0 ns, total: 2.1 ms
Wall time: 2.11 ms
True
This doesn't address the construction of the vector space again and again, but by dropping the expensive primality test on every call, the hope was that it would be fast enough (or at least a good start).
By including the proof flag in the script, SIKE_challenge.sage broke the $IKEp217 challenge in 30 minutes without any additional patches:
Bridging last gap took: 9.461672067642212
Bob's secret key revealed as: 5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx2
In ternary, this is: [0, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, 1]
Altogether this took 1799.0663061141968 seconds.
However, while this code was running, Robin Jadoul found a way to achieve the same result as Rémy's SageMath patch with the following in-line monkey patch:
Fp2.<i> = GF(p^2, modulus=x^2+1)
type(Fp2).vector_space = sage.misc.cachefunc.cached_method(type(Fp2).vector_space)
This ensures the vector space is cached as in Rémy's patch, but the fix can be applied at run time. This allows all users of the script to get the speed-up without modifying the SageMath source. This is a really important fix, so huge thanks to Robin. Without it, the hardest parameter sets would have been out of reach without hard-patching SageMath.
Optimising using mathematics
The next set of performance enhancements is thanks to Rémy's optimisations of the function FromJacToJac(). This is the obvious place to focus, as it's where the attack spends most of its time. The optimisations can be viewed in the following pull requests:
- Performance improvements for Richelot isogenies #7
  - All Gröbner basis computations can be replaced by eliminating variables with simple substitutions.
- Faster chains of Richelot isogenies #8
  - The previous code repeatedly took fixed powers of a point on the Jacobian. These powers can be pre-computed, resulting in a 2x-3x faster execution time.
- Make Richelot isogeny chains subquadratic #9
  - The number of squaring operations is reduced from O(n^2) to O(n log n), providing a modest 10-15% speedup for larger parameter sets.
Cumulatively, these performance enhancements are fantastic and we see more than a 3x speed-up of the code, pushing the SageMath implementation to be more performant than the Magma implementation!
Optimising using better SageMath practices
The last speed-ups came from running profilers on our code and adjusting how objects were called, to avoid slowdown from how Python was constructing and manipulating polynomials, points, Jacobians and isogenies. A non-exhaustive list of tricks we used:
- Hard-code dimension of curve #10
  - When constructing the Jacobian of a hyperelliptic curve, an expensive, redundant computation of the curve's dimension is performed (a curve's dimension is always 1). We applied a monkey patch to address this.
- Twice faster Jacobian quotients #13
  - Using points rather than polynomials for the isogeny kernel means Vélu's formulas are used rather than Kohel's formulas, which is faster (a toy comparison of the two kernel formats appears after this list).
  - Elliptic isogenies were made more performant by passing the known degree as an optional argument and removing the internal validity check.
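Below is a toy comparison of the two kernel formats, generic SageMath usage on a small supersingular curve rather than the code from the pull request: the same degree-3 isogeny is built once from a kernel point (Vélu's formulas) and once from a kernel polynomial (Kohel's algorithm), with the known degree passed explicitly and the validity check disabled.

p = 2^33 * 3^19 - 1
E = EllipticCurve(GF(p), [1, 0])            # supersingular, so #E(GF(p)) = p + 1 = 2^33 * 3^19
while True:
    K = ((p + 1) // 3) * E.random_point()   # scale a random point down to the 3-torsion
    if K != E(0):
        break
x = polygen(GF(p))
kerpoly = x - K[0]                          # kernel polynomial: the x-coordinate of +/-K
phi_velu = E.isogeny(K, check=False)                     # point kernel -> Vélu's formulas
phi_kohel = E.isogeny(kerpoly, degree=3, check=False)    # polynomial kernel -> Kohel's algorithm
assert phi_velu.codomain().j_invariant() == phi_kohel.codomain().j_invariant()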
Breaking SIDH on a laptop
To celebrate, let's look at the recorded runtimes of our implementation across all parameter sets.
Parameters | Vanilla | No Proof | Monkey Patch | Current Version
---|---|---|---|---
Baby SIDH (SIKEp64) | 1 minute | 1 minute | 1 minute | 5 seconds
$IKEp217 Challenge | – | 30 minutes | 15 minutes | 2 minutes
SIKEp434 | – | – | 1.5 hours | 10 minutes
SIKEp503 | – | – | 3.5 hours | 15 minutes
SIKEp610 | – | – | – | 25 minutes
SIKEp751 | – | – | – | 1-2 hours
Although most digits of the key can be recovered one by one, the first set of digits must be collected together. For SIKEp64, $IKEp217 and SIKEp434 only the first two digits need to be collected together, with a worst case of 3^2 - 1 = 8 calls to the oracle to recover the values.
However:
- For SIKEp503 the worst case is we make 3^4 - 1 = 80 calls for the first 4 digits.
- For SIKEp610 the worst case is we make 3^5 - 1 = 242 calls for the first 5 digits.
- For SIKEp751 the worst case is we make 3^6 - 1 = 728 calls for the first 6 digits.
This means that in the worst case, when attacking SIKEp751 more than half of the computation time is spent collecting the first 6 of the 239 digits!
Estimating the running time
We can estimate an average running time from the expected number of calls to the oracle Does22ChainSplit(). This still won't be totally accurate, but it gives some rough estimates which seem to agree with our recorded values.

- For the first β digits, at most 3^β - 1 calls to Does22ChainSplit() are needed, and half of this on average.
- For each of the remaining digits, Does22ChainSplit() is called only once when the digit is 0 and twice when it is 1 or 2. On average, then, we expect to call the oracle once for a third of these digits and twice for the remaining two thirds.

Expressing the approximate time cost of a single call of Does22ChainSplit() as c, and writing n for the number of ternary digits in Bob's secret key and β for the number of initial digits which must be guessed together, the estimated total cost can be expressed as:

Total cost ≈ c * ( (3^β - 1)/2 + (1*(1/3) + 2*(2/3)) * (n - β) )

Which looks slightly cleaner written as:

Total cost ≈ c * ( (3^β - 1)/2 + (5/3) * (n - β) )
Parameters | c | β | Average Cost | Worst Case Start
---|---|---|---|---
SIKEp64 | 0.2s | 2 | 5 seconds | 7 seconds
$IKEp217 | 1s | 2 | 2 minutes | 2 minutes
SIKEp434 | 3.4s | 2 | 13 minutes | 13 minutes
SIKEp503 | 4.5s | 4 | 22 minutes | 25 minutes
SIKEp610 | 6s | 5 | 43 minutes | 1 hour
SIKEp751 | 8.4s | 6 | 1.75 hours | 2.6 hours
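As a quick sanity check, these estimates can be reproduced directly from the formula above; for example, for SIKEp751 with c = 8.4s, n = 239 ternary digits and β = 6:

c, n, beta = 8.4, 239, 6
average = c * ((3^beta - 1)/2 + 5/3 * (n - beta))
worst   = c * ((3^beta - 1)   + 5/3 * (n - beta))
print(average / 3600, worst / 3600)    # roughly 1.75 hours and 2.6 hours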
Conclusions
As a community, the plan is to keep working on our implementation, attempting to make the code more readable and performant. The end-goal is for this implementation to be a valuable resource that students and researchers can use to learn about this truly beautiful attack.
Furthermore, the community should continue to work together on SageMath. It’s an incredible resource, and the hope is that this blog post is an indication of how versatile and powerful it can be at implementing very high-level mathematics.
Some of the problems we encountered along the way have already been submitted to be taken into the next release of SageMath. In particular, Lorenz Panny has removed the need for including the monkey patch for the finite field caching.
The performance enhancements that we have included in our implementation just show how much more room there is to develop this attack and our understanding of the relationship between elliptic curves and higher dimensional Jacobians in cryptanalysis.
Congratulations to Wouter Castryck and Thomas Decru!
What’s next for isogenies?
- With only a preliminary version of the attack on eprint, it will take the full paper, and community research time to really understand how far the Castryck-Decru attack can be generalised.
- Unlike some historic attacks, simply increasing key sizes for SIKE is not sufficient. If SIDH is to reappear as a cryptographic protocol, it will need a design overhaul to remove all structure which this attack relies on for key recovery.
- As this blog post was being edited, a proposal to modify SIDH to protect against this attack, Masked-degree SIDH by Tomoki Moriya, appeared on eprint.
- Other isogeny protocols such as CSIDH and SQISign seem to be safe from this attack. This is because neither of these protocols have a known secret isogeny degree or the image of the torsion points in public data.
- More generally, the appearance of such a brilliant attack on a cryptosystem that stood unbroken for a decade will shake confidence in constructing protocols using isogeny problems. From the experts, the message is that there’s a lot of research to do now, which can only lead to more exciting results.
Acknowledgements
Many thanks to Rémy Oudompheng for collaborating with me on this project and teaching me so much about higher-genus isogenies. My additional thanks to Rémy Oudompheng and Lorenz Panny for feedback on my description of the attack, and my colleagues at NCC Group: Paul Bottinelli, Kevin Henry, Elena Bakos Lang and Thomas Pornin for their valuable feedback on an earlier draft of this blog post.