
Real World Cryptography Conference 2023 – Part I

10 May 2023

By Kevin Henry

The annual Real World Cryptography Conference organized by the IACR recently took place in Tokyo, Japan. On top of 3 days of excellent talks, RWC was preceded by the 2nd annual FHE.org Conference and the Real World Post-Quantum Cryptography Workshop and followed by the High Assurance Crypto Software Workshop.

Nearly all of NCC Group’s Cryptography Services team was in attendance this year, with those that could not make it attending remotely. Several of our members also participated in the co-located FHE.org, RWPQC, and HACS events. Some of our favorite talks and takeaways are summarized here, with more forthcoming in a future post.

  1. Real World Post-Quantum Cryptography (RWPQC)
  2. TLS-Anvil: Adapting Combinatorial Testing for TLS Libraries
  3. How We Broke a Fifth-Order Masked Kyber Implementation by Copy-Paste
  4. WhatsApp End-to-End Encrypted Backups
  5. tlock: Practical Timelock Encryption Based on Threshold BLS
  6. Ask Your Cryptographer if Context-Committing AEAD Is Right for You

Real World Post-Quantum Cryptography (RWPQC)

The RWPQC workshop was a co-located event held the day before Real World Crypto in Tokyo. The workshop consisted of a mix of invited talks and roundtable discussions on various real-world challenges facing post-quantum cryptography.

The first invited talk was an update from NIST on the PQC standardization process, which discussed the timeline for the draft standards as well as round 4 of the PQC process and the new on-ramp for signature schemes. This was followed by short “Lessons Learned” talks from each of the four selected candidates. Several of these talks discussed problems with the incentive structure of the NIST PQC process and possible ways of improving the consistency of the cryptanalysis of proposed schemes in the future. Vadim Lyubashevsky ended his short discussion on Dilithium by mentioning the “bystander effect” and the complications of producing the NIST submissions with tight deadlines and big teams, and speculated on the effectiveness of small teams.

In the next invited talk, Douglas Stebila provided an update on the PQC standardization efforts at the IETF, describing the varied efforts to integrate the soon-to-be-standardized algorithms into the various protocols maintained by the IETF, and highlighting the amount of work still left before PQC is practically usable in many protocols. There was also an update from ANSSI, the French national agency responsible for information systems security, which provided an overview of their migration recommendations, based on their experiences with industry vendors. For ANSSI, the focus of PQ integration is on the proper development of hybrid cryptography, and given their firm stance on the subject, it seems likely that many other organizations will follow suit. In the final invited talk of the day, Vadim Lyubashevsky wrapped up the workshop with a talk about the strategies and current state of creating efficient lattice-based zero-knowledge proofs.

The invited talks were interspersed with three roundtable discussions, which provided more free-form conversations on various PQC-related topics. The first, on “Implementation and side channels”, brought experts from industry and academia together to discuss the challenges of side-channel proofing PQC algorithms. This panel discussed the need for side-channel protections at both the hardware and software level, and highlighted the importance of hardware and software experts working together to ensure that one side’s hard work cannot be bypassed by a security flaw on the other. The second, an “Industry side discussion”, featured members of various companies who discussed the challenges of the PQC migration in their respective fields. This discussion tied into the invited talks on the migration efforts at the IETF and ANSSI, and focused on the large amount of work still left to do in the PQC migration. Additionally, the discussion noted that novel classical primitives are still being invented today, only adding to the pile of work necessary for the PQC migration, and highlighted the need for more flexible quantum-safe constructions.

The last panel focused on the “Current state of cryptanalysis” and brought together experts from all fields of post-quantum cryptography to discuss the current and future state of cryptanalysis and the security of various post-quantum primitives. In particular, the panel members included Ward Beullens, who spoke about breaking Rainbow and the future of multivariate cryptography, and Chloe Martindale, who discussed the recent break of SIKE and future work in isogeny cryptography and cryptanalysis. The other panel members discussed lattice-, code-, and hash-based primitives, which were generally believed to remain secure, although it was highlighted that care must be taken when instantiating them, as (accidental or otherwise) deviations from best practices can be disastrous.

The first RWPQC workshop was a great success and generated many useful discussions across industry and academia. While a lot of work is still needed before quantum-safe cryptography can be used everywhere, this workshop highlighted the many people currently working towards this goal. We look forward to seeing the progress at future RWPQC workshops!

– Elena Bakos Lang and Giacomo Pope

TLS-Anvil: Adapting Combinatorial Testing for TLS Libraries

In this talk, Marcel Maehren presented TLS-Anvil, a TLS testing tool developed by researchers from Ruhr University Bochum and Paderborn University in Germany (the full list of authors is available on the research paper) and recently presented at USENIX Security 2022.

The size and complexity of the TLS protocol have made it difficult to systematically test implementations and validate their conformance to the many governing RFCs. In particular, the wide range of parameters that can be negotiated strongly impacts the ability to test implementations. Some requirements only apply to certain ciphersuites, while others are invariant across all negotiated parameters. One example Marcel gave of the latter is the following requirement:

The receiver MUST check [the] padding and MUST use the bad_record_mac alert to indicate padding errors.

This requirement should be met irrespective of the key exchange algorithm, signature algorithm, or specific block cipher choice.

However, the total number of ciphersuites and parameter combinations makes it close to impossible to specify tests for each of these individual requirements. Other factors further complicate systematic testing: TLS implementations are not required to support all algorithms, and some parameter values are not allowed to be combined (such as a ciphersuite using RSA for signatures with an ECDSA server certificate). TLS-Anvil uses a combinatorial approach called t-way testing, which makes it possible to efficiently test many combinations of TLS parameters. In total, the tool defines 408 test templates and was evaluated on 13 commonly used TLS implementations, such as OpenSSL, mbed TLS, and NSS.
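To make the combinatorial idea concrete, below is a minimal, self-contained sketch of greedy pairwise (2-way) test generation over a toy parameter model. The parameter names and values are illustrative assumptions, not TLS-Anvil’s actual input model, which additionally encodes constraints such as the RSA/ECDSA incompatibility mentioned above:

```python
from itertools import combinations, product

# Toy TLS parameter model (illustrative only).
parameters = {
    "version": ["TLS 1.2", "TLS 1.3"],
    "key_exchange": ["ECDHE", "DHE", "RSA"],
    "cipher": ["AES-128-GCM", "AES-256-CBC", "CHACHA20-POLY1305"],
    "signature": ["RSA-PSS", "ECDSA"],
}

def pairwise_tests(params):
    """Greedy 2-way covering array: every pair of parameter values
    appears together in at least one generated test case."""
    names = list(params)
    # Enumerate all (parameter, value) pairs that must be covered together.
    uncovered = set()
    for n1, n2 in combinations(names, 2):
        for v1 in params[n1]:
            for v2 in params[n2]:
                uncovered.add(((n1, v1), (n2, v2)))
    tests = []
    while uncovered:
        # Pick the full assignment that covers the most uncovered pairs.
        best, best_covered = None, set()
        for values in product(*params.values()):
            case = dict(zip(names, values))
            covered = {p for p in uncovered
                       if case[p[0][0]] == p[0][1] and case[p[1][0]] == p[1][1]}
            if len(covered) > len(best_covered):
                best, best_covered = case, covered
        tests.append(best)
        uncovered -= best_covered
    return tests

for test in pairwise_tests(parameters):
    print(test)
```

For this toy model, exhaustive testing would require 2 × 3 × 3 × 2 = 36 test cases, while the greedy pairwise set covers every value pair with around 10; the savings grow dramatically as the number of parameters increases.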

The tool uncovered over 200 RFC violations in these different libraries, including 3 exploitable vulnerabilities:

  • a padding oracle in the MatrixSSL client, caused by a segmentation fault triggered when using HMAC-SHA256 with CBC ciphersuites, due to an incorrect initialization of SHA-256;
  • a DoS in the MatrixSSL client, triggered by Server Hello messages with contradicting length fields;
  • and an authentication bypass for wolfSSL in TLS 1.3, where an empty Certificate message resulted in wolfSSL ignoring the subsequent CertificateVerify message.

Marcel concluded his presentation by highlighting some interesting future work, such as using a similar approach to test other protocols (like QUIC).

– Paul Bottinelli

How We Broke a Fifth-Order Masked Kyber Implementation by Copy-Paste

In the first presentation of the PQC track, Elena Dubrova presented work done in collaboration with Kalle Ngo and Joel Gartner involving power side-channel attacks on the CRYSTALS-Kyber algorithm using deep learning techniques. This algorithm was recently selected by NIST for standardization for post-quantum public-key encryption and key-establishment. Here, masking refers to a countermeasure that splits a secret into multiple randomized shares to obfuscate the underlying arithmetic behavior of the cryptographic algorithm; fifth-order masking splits the secret into six shares.
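As a point of reference for what masking looks like, here is a minimal sketch of splitting a value into arithmetic shares modulo Kyber’s prime q = 3329 (the share-splitting step only; the attacked implementation masks the full decapsulation logic this way):

```python
import secrets

Q = 3329  # Kyber's prime modulus

def mask(secret: int, order: int = 5) -> list[int]:
    """Split `secret` into order + 1 arithmetic shares mod Q. Any subset
    of `order` shares is uniformly random and reveals nothing on its own."""
    shares = [secrets.randbelow(Q) for _ in range(order)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def unmask(shares: list[int]) -> int:
    """Recombine all shares to recover the secret."""
    return sum(shares) % Q

secret = 1234
shares = mask(secret)        # six shares for fifth-order masking
assert unmask(shares) == secret
```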

The central idea of the work, performed with power measurement traces from an ARM Cortex-M4 MCU, involved a new neural network training technique called recursive learning (colloquially: copy-paste). This technique involves copying weights from an existing working model targeting less masking into a new model targeting more masking. Thus, a first-order solution (which was presented in 2021) is used to bootstrap a second-order solution, and so forth. This is a particularly intriguing use of transfer learning.
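The weight-copying step itself is straightforward in a framework like PyTorch; the sketch below is a generic illustration of the technique, with an architecture and sizes chosen arbitrarily rather than taken from the paper:

```python
import torch.nn as nn

def make_model(trace_len: int) -> nn.Sequential:
    # Both models share one architecture so their weights are compatible.
    return nn.Sequential(
        nn.Linear(trace_len, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 2),  # e.g. classify one bit of the message
    )

first_order = make_model(trace_len=1000)
# ... train first_order on power traces of the first-order masked target ...

second_order = make_model(trace_len=1000)
# The "copy-paste": start the higher-order model from the trained weights,
# then fine-tune it on traces from the more heavily masked target.
second_order.load_state_dict(first_order.state_dict())
```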

Deep learning is able to utilize very noisy traces for training, and requires surprisingly few traces for the actual attack. In this work, 30k training traces were used (on code compiled with -O3), and weights were copy-pasted from one model to the next. In the end, the probability of recovering a message was over 99% with only 20 attack traces. When constrained to 4 attack traces, the probability remained above 96%. One reason this talk was so interesting is that there seem to be no simple, low-cost, and effective countermeasures that can realistically be taken to prevent these power side-channel attacks.

– Eric Schorn

WhatsApp End-to-End Encrypted Backups

Kevin Lewi discussed the implementation of WhatsApp End-to-End Encrypted Backups in the Real World Crypto 2023 session on “Building and Breaking Secure Systems”.

Kevin first explained the motivation for this service. When a message is end-to-end encrypted (E2EE), only the intended sender and recipient of the message can decrypt it. Backing up cleartext messages to the cloud was at odds with this goal: Kevin noted that cloud providers can, and in the past did, access users’ plaintext message backups.

Users previously faced a difficult choice: enable backups and accept weaker security assurances, or disable them and risk losing messages (for instance, if they misplaced their phones). Kevin described this as a “natural tension between usability and privacy”.

Kevin then described the solution implemented by WhatsApp, progressively rolled out to users starting at the end of 2021, to address this problem. Users can either write down their backup encryption key somewhere safe, or set a memorable password that gates access to their backup encryption key, with the system enforcing a maximum number of password entry attempts. The former option is probably targeted at more tech-savvy users; the presentation focused on the latter, which aims to strike a balance between security and usability for most users. As of December 2022, 100 million users had enabled encrypted backups, according to WhatsApp.

Kevin explained that there are three parties involved in running the WhatsApp End-to-End Encrypted Backups protocol: WhatsApp users, the cloud providers storing the users’ encrypted backups, and the key vaults managed by WhatsApp. The details of the solution integrating these parties are complex and should appeal to audiences with varied interests. Briefly, to enumerate some of its salient aspects: the architecture is underpinned by hardware security modules (HSMs) that protect users’ keys, employs OPAQUE, a password-authenticated key exchange protocol, and uses Merkle trees to protect the integrity of sensitive data. Kevin’s presentation provides a good overview of these components and how they interact with each other.
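As a rough mental model of the guess-limiting behavior only (the real design relies on OPAQUE running inside HSMs, so the vault never sees the password itself), consider the following toy sketch, where a salted password hash stands in for the OPAQUE verification step and the attempt limit is an assumed value:

```python
import hashlib
import hmac
import os

MAX_ATTEMPTS = 10  # assumed limit for illustration; the real one lives in the HSM

class ToyKeyVault:
    """Toy model of an HSM-enforced guess limit: the vault releases the
    backup key only on successful verification, and destroys the key
    after too many failed attempts."""

    def __init__(self, password: bytes, backup_key: bytes):
        self.salt = os.urandom(16)
        self.verifier = hashlib.scrypt(password, salt=self.salt, n=2**14, r=8, p=1)
        self.backup_key = backup_key
        self.failures = 0

    def retrieve(self, password: bytes) -> bytes:
        if self.backup_key is None:
            raise RuntimeError("backup key destroyed after too many attempts")
        attempt = hashlib.scrypt(password, salt=self.salt, n=2**14, r=8, p=1)
        if not hmac.compare_digest(attempt, self.verifier):
            self.failures += 1
            if self.failures >= MAX_ATTEMPTS:
                self.backup_key = None  # brute force is cut off permanently
            raise PermissionError("incorrect password")
        self.failures = 0
        return self.backup_key

vault = ToyKeyVault(b"correct horse", os.urandom(32))
key = vault.retrieve(b"correct horse")
```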

For those who want to know more, WhatsApp published a security white paper on this topic in 2021. NCC Group’s Cryptography Services team is no stranger to the subject at hand; the team performed a security assessment of several aspects of the solution in 2021 and published reports detailing our findings on both OPAQUE and its use in WhatsApp’s encrypted backup solution.

– Gérald Doussot

tlock: Practical Timelock Encryption Based on Threshold BLS

Yolan Romailler, from the drand project, opened the Advanced Encryption session with a presentation on time-lock encryption, its use cases as a building block, and a demo. A time-lock encrypted ciphertext can only be decrypted at a future time that is specified during encryption. As Yolan put it, instead of using the usual decryption key, the decryptor uses the passage of time to decrypt the ciphertext. Time-lock encryption, as a mechanism to withhold information until a specified time, has many applications: besides sealed-bid auctions and responsible disclosure, or even ransomware that threatens to release data if payment is not received before a deadline (which would be much nicer than the ransomware we have right now, as Yolan said!), it can be a solution to Maximal Extractable Value (MEV) and front-running issues in the blockchain space.

Currently, there are two approaches to time-lock encryption. The first, a puzzle-based approach, requires a chain of computations that delays decryption by at least the desired time difference. Such schemes have proven fragile against a motivated attacker who invests in application-specific integrated circuits. The second, an agent-based approach, relies on a trusted agent that releases the decryption key at the specified time. This work takes the latter approach, building an agent-based, publicly decryptable time-lock encryption scheme, and replaces the trusted agent with a network that runs threshold BLS at constant intervals (more accurately referred to as rounds). Being an unchained time-lock scheme, it also allows a ciphertext to be created and published before the decryption key is released at the specified time, which is more aligned with the use cases mentioned above.
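The round abstraction is what makes “encrypting to the future” well-defined: since the network signs round numbers at a fixed cadence, any future timestamp maps deterministically to the round whose signature will act as the decryption key. A minimal sketch of that mapping follows; the genesis time and period below are assumptions for illustration, not authoritative drand parameters:

```python
import time

GENESIS = 1595431050  # assumed network genesis timestamp (Unix time)
PERIOD = 30           # assumed round period in seconds

def round_for_time(unlock_time: int) -> int:
    """Return the round number whose BLS signature will be published at
    (or just after) `unlock_time`, i.e. the identity to encrypt to."""
    if unlock_time <= GENESIS:
        return 1
    return (unlock_time - GENESIS) // PERIOD + 1

# Encrypt "to" the round occurring roughly one hour from now.
print(round_for_time(int(time.time()) + 3600))
```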

The referenced pre-print paper reformulates Boneh and Franklin’s Identity-Based Encryption (IBE) to construct a time-lock encryption scheme. Perhaps this quote from the abstract summarizes it best: “At present this threshold network broadcasts BLS signatures over each round number, equivalent to the current time interval, and as such can be considered a decentralised key holder periodically publishing private keys for the BF-IBE where identities are the round numbers”. At decryption time, this signature (more accurately, the IBE private key) is used to decrypt the symmetric key under which the plaintext was encrypted, and that key in turn decrypts the ciphertext.

Towards the end of his presentation, Yolan mentioned that the implementation is open source and utilizes age, an open-source tool, to encrypt the plaintext (of any size) under a symmetric key, which is then wrapped and encrypted with IBE. The slides also included a QR code linking to a live webpage where anyone can encrypt their messages to the future they desire; it even includes a vulnerability report tab that makes responsible disclosure easier than ever!

– Parnian Alimi

Ask Your Cryptographer if Context-Committing AEAD Is Right for You

Sanketh Menda closed out the technical presentations with his talk Ask Your Cryptographer if Context-Committing AEAD Is Right for You (slides). This work is a natural extension of previous efforts by some of the same authors investigating the implications of the fact that commonly used Authenticated Encryption with Associated Data (AEAD) schemes do not provide key commitment. In short, an attacker can produce a ciphertext and authentication tag such that the ciphertext decrypts correctly under two different keys known to the attacker. Without key commitment, correct decryption under AES-GCM, for example, does not guarantee that you possess the only key that can decrypt the ciphertext without error. The lack of key commitment has been used to attack real-world deployments that implicitly rely on such an assumption. Sanketh’s presentation argued that key commitment alone may not be a sufficient property, and that the community should aim to provide modes of operation that are context committing.

Attacks on key commitment focus on finding two keys such that a fixed ciphertext, associated data, and nonce will decrypt correctly under both keys. However, more general attacks may apply when one or both of the associated data and nonce are not required to be fixed. A scheme that provides key commitment may still be vulnerable to an attack where the nonce or associated data is controlled by the attacker and a second valid tuple of ciphertext data can be produced. Therefore, one must commit to the entire “context” of the AEAD, and not just the key and authentication tag. Generic workarounds for key commitment exist, such as expanding the ciphertext to include a hash of the context, or adding fixed-value padding to plaintexts, but these solutions are not sufficient for context commitment without further modification.
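As an illustration of the hash-based workaround, one can bind the full context by shipping a hash of (key, nonce, associated data) alongside an AES-GCM ciphertext and checking it before decryption. The encoding below is an assumption for the sketch, not a standardized construction, and it glosses over subtleties (such as commitment hiding) that a real design must address:

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def commitment(key: bytes, nonce: bytes, aad: bytes) -> bytes:
    # Commits to the whole AEAD context, not just the key; the key and
    # nonce are fixed-length, so the concatenation is unambiguous.
    return hashlib.sha256(b"commit-v1" + key + nonce + aad).digest()

def seal(key: bytes, nonce: bytes, aad: bytes, plaintext: bytes) -> bytes:
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, aad)
    return commitment(key, nonce, aad) + ciphertext  # 32-byte expansion

def open_sealed(key: bytes, nonce: bytes, aad: bytes, blob: bytes) -> bytes:
    tag, ciphertext = blob[:32], blob[32:]
    if not hmac.compare_digest(tag, commitment(key, nonce, aad)):
        raise ValueError("context commitment check failed")
    return AESGCM(key).decrypt(nonce, ciphertext, aad)

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
blob = seal(key, nonce, b"header", b"attack at dawn")
assert open_sealed(key, nonce, b"header", blob) == b"attack at dawn"
```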

Attacks targeting a lack of context commitment are theoretical at this point, but the presentation argued they are worth considering when designing new block cipher modes of operation. To this end, the talk concluded with a brief overview of the newly proposed OCH mode of operation, a variant of OCB that has been modified to provide context commitment. The approach provides a ciphertext of optimal length and is maximally parallelizable, with all the features expected of a modern AEAD mode of operation. Given the minimal performance overhead of providing full context commitment over key commitment alone, there is a compelling argument for targeting this stronger property when standardizing new modes of operation. I look forward to the full description of OCH mode, which should appear on ePrint soon.

– Kevin Henry