After a brief interlude, filled with several articles from the Cryptography Services team, we’re back with our final thoughts from this year’s Real World Cryptography Conference. In case you missed it, check out Part I for more insights.
- Interoperability in E2EE Messaging
- Threshold ECDSA Towards Deployment
- The Path to Real World FHE: Navigating the Ciphertext Space
- High-Assurance Go Cryptography in Practice
Interoperability in E2EE Messaging
A specter is haunting Europe – the specter of platform interoperability. The EU signed the Digital Markets Act (DMA) into law in September of last year, mandating that chat platforms support interoperable communications. This requirement takes effect in March 2024, an aggressive timeline requiring fast action from cryptographic (and other) engineers.

Interoperability has clear advantages: it lets users communicate with friends and family across platforms, and it lets developers build applications that work across platforms. It could partially mitigate the network effects behind platform lock-in, leading to more competition and greater freedom of choice for end users. However, interoperability requires shared standards, and standards tend to imply compromise. This is a particularly severe challenge for secure chat apps, which aim to provide their users with high levels of security. Introducing hastily designed, legislatively mandated components into these systems is a high-risk change which, in the worst case, could introduce weaknesses that would be difficult to fix (due to the effects of lock-in and the corresponding level of coordination and engineering effort required). This is further complicated by the heterogeneity of the end-to-end encrypted chat landscape: E2EE protocols vary by ciphersuite, level and form of authentication, personal identifier (email, phone number, username/password, etc.), and more. Any standardized design for interoperability would need to manage all of this complexity.

This presentation, on work by Ghosh, Grubbs, Len, and Rösler, discussed one effort at such a standard for interoperability between E2EE chat apps, focused on extending existing components of widely used E2EE apps. This is appropriate, as these apps are the most likely to be designated “gatekeeper” apps to which the upcoming regulations apply in force. The proposed solution uses server-to-server interoperability, in which each end user only communicates directly with their own messaging provider. Three main components of messaging affected by the DMA are identified: identity systems, E2EE protocols, and abuse prevention.
- For the first of these items, a system for running encrypted queries against remote providers’ identity systems is proposed; this allows user identities to be associated with keys in such a way that the actual identity data is abstracted, and thus could be an email, a phone number, or an arbitrary string.
- For the second issue, the E2EE protocols themselves, a number of simple solutions are considered and rejected; the final proposal has several parts. Sender-anonymous wrappers, using a variant of Signal’s Sealed Sender protocol, hide sender metadata; for encryption in transit, non-gatekeeper apps can use an encapsulated implementation of a gatekeeper app’s E2EE through a client bridge. This provides both confidentiality and authenticity, while minimizing metadata leakage.
- For the third issue, abuse prevention, a number of options (including “doing nothing”) are again considered and rejected. The final design is somewhat nonobvious, and consists of server-side spam filtering, user reporting (via asymmetric message franking, one of the more cryptographically fresh and interesting parts of the system; a sketch of the basic franking idea follows this list), and blocklisting (which requires a small data leakage, in that the initiator would need to share the blocked party’s user ID with their own server).
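Since message franking may be unfamiliar: the talk proposes an asymmetric franking scheme, but the underlying commitment idea is easiest to see in the symmetric setting deployed by Facebook Messenger, where the sender commits to the plaintext and the recipient can later open that commitment to the provider when reporting abuse. Below is a minimal sketch of that basic idea (the function names are ours, not from the paper):

```go
package franking

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
)

// Frank commits to msg under a fresh one-time key. The tag travels in
// the clear alongside the E2EE ciphertext (so the provider can store it);
// the key travels inside the ciphertext, visible only to the recipient.
func Frank(msg []byte) (key, tag []byte, err error) {
	key = make([]byte, 32)
	if _, err = rand.Read(key); err != nil {
		return nil, nil, err
	}
	mac := hmac.New(sha256.New, key)
	mac.Write(msg)
	return key, mac.Sum(nil), nil
}

// VerifyReport lets the provider check an abuse report: the recipient
// reveals msg and key, and the provider recomputes the tag it stored
// when relaying the message.
func VerifyReport(msg, key, tag []byte) bool {
	mac := hmac.New(sha256.New, key)
	mac.Write(msg)
	return hmac.Equal(mac.Sum(nil), tag)
}
```

The asymmetric variant is designed for sender-anonymous settings such as the sealed-sender wrappers described above, where this simple shared-key approach no longer suffices.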
Several open problems were also identified (these points are quoted directly from the slides):
- How do we improve the privacy of interoperable E2EE by reducing metadata leakage?
- How do we extend other protocols used in E2EE messaging, like key transparency, into the interoperability setting?
- How do we extend our framework and analyses to group chats and encrypted calls?
This is important and timely work on a topic which has the potential to result in big wins for user privacy and user experience; however, this best-case scenario will only play out if players in industry and academia can keep pace with the timeline set by the DMA. This requires quick work to design, implement, and review these protocols, and we look forward to seeing how these systems take shape in the months to come.
– Eli Sohl
Threshold ECDSA Towards Deployment
A Threshold Signature Scheme (TSS) allows any sufficiently large subset of signers to cryptographically sign a message. There has been a flurry of research in this area over the last 10 years, driven partly by financial institutions’ need to secure crypto wallets and partly by academic interest in the area from the Multiparty Computation (MPC) perspective. Some signature schemes are more amenable to “thresholding” than others. For example, due to the linearity of “classical” Schnorr signatures, Schnorr is more amenable to thresholding than ECDSA (see the FROST protocol in this context, and the worked comparison after the paper list below). As for thresholding ECDSA, there are tradeoffs as well: if one allows additively homomorphic cryptosystems such as Paillier’s, the overall protocol complexity drops, but speed suffers and extra security-model assumptions are needed. The DKLS papers, listed below, aim for competitive speeds while minimizing the necessary security assumptions:
- DKLS18: “Secure Two-party Threshold ECDSA from ECDSA Assumptions”, Jack Doerner, Yashvanth Kondi, Eysa Lee, abhi shelat
- DKLS19: “Secure Multi-party Threshold ECDSA from ECDSA Assumptions”, Jack Doerner, Yashvanth Kondi, Eysa Lee, abhi shelat
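As an aside, the linearity point is easy to see in a simplified two-party sketch with additive shares (this illustrates the general principle, not the FROST or DKLS protocols themselves):

```latex
\text{Shares: } x = x_1 + x_2 \ \text{(key)}, \qquad k = k_1 + k_2 \ \text{(nonce)}.

\text{Schnorr: } s = k + e\,x
  = \underbrace{(k_1 + e\,x_1)}_{s_1\ \text{(local)}}
  + \underbrace{(k_2 + e\,x_2)}_{s_2\ \text{(local)}}

\text{ECDSA: } s = k^{-1}\bigl(H(m) + r\,x\bigr) \bmod q
```

Each Schnorr signer can compute its share locally and the shares simply add up. In ECDSA, the inverse of the nonce and the cross-term between the two secrets are nonlinear in the shares, so the parties must securely multiply secret-shared values, which is exactly the job of the OT-based multiplication machinery described below.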
The first paper proposes a 2-out-of-n ECDSA scheme, whereas the second extends it to the t-out-of-n case. The DKLS two-party multiplication algorithm is based on Oblivious Transfer (OT), together with a number of optimization techniques: the OT is batched, then further sped up by an OT extension (a way to reduce a large number of OTs to a smaller number of OTs using symmetric-key cryptography), and finally used to perform the multiplications needed for threshold signing. An optimized variant of this multiplication algorithm is used in DKLS19 as well. The talk aimed to share the challenges that occur in practical development and deployment of the DKLS19 scheme, including:
- The final protocol essentially requires three messages; however, the authors found they could pipeline the messages when signing multiple messages.
- Key refreshing can be done efficiently, where refreshing means replacing the shares and leaving the actual key unchanged.
- Round complexity was reduced to a 5-round protocol, reducing the time cost, especially over a WAN.
- One bug identified by Roy in OT extensions turned out not to apply to the DKLS implementations; however, the authors are still taking precautions and moving to SoftSpokenOT.
- An error-handling mistake was found in the implementation by Riva, where an OT failure error was not propagated to the top level.
- A number of bug bounties around improper use of the Fiat-Shamir transform have been seen recently. If the protocol needs a programmable random oracle, every (sub)protocol instance needs a different random oracle, which can be achieved using unique hash prefixes (see the sketch after this list).
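On that last point, domain separation is mechanical but easy to get wrong. Here is a minimal Go sketch of deriving a Fiat-Shamir challenge with a unique per-instance prefix; the function and parameter names are illustrative, not taken from any DKLS implementation:

```go
package fiatshamir

import (
	"crypto/sha256"
	"encoding/binary"
	"hash"
)

// writeLenPrefixed keeps the hash input injective: without length
// prefixes, ("ab", "c") and ("a", "bc") would hash identically.
func writeLenPrefixed(h hash.Hash, data []byte) {
	var n [8]byte
	binary.BigEndian.PutUint64(n[:], uint64(len(data)))
	h.Write(n[:])
	h.Write(data)
}

// Challenge derives a Fiat-Shamir challenge bound to one protocol and
// one session: the label and session ID act as a unique prefix, so two
// protocol (sub)instances never share a random oracle, and a proof from
// one instance cannot be replayed in another.
func Challenge(label string, sessionID, transcript []byte) [32]byte {
	h := sha256.New()
	writeLenPrefixed(h, []byte(label)) // protocol-specific domain separator
	writeLenPrefixed(h, sessionID)     // unique per protocol instance
	writeLenPrefixed(h, transcript)    // the actual proof transcript
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}
```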
The talk also discussed other gaps between theory and practice around session establishment, e.g., whether O(n²) QR code scans are required to set up the participant set.
– Aleksandar Kircanski
The Path to Real World FHE: Navigating the Ciphertext Space
Shruthi Gorantala from Google presented The Path to Real World FHE: Navigating the Ciphertext Space. There was also an FHE workshop prior to RWC where a number of advances in the field were presented. Fully Homomorphic Encryption (FHE) allows functions to be evaluated directly on ciphertexts, producing an encrypted result that matches what would have been obtained by running the same function on the plaintext. This would be a major shift in the relationship between data privacy and data processing, as previously an application would need to decrypt the data first; FHE removes the decryption and re-encryption steps. This helps preserve end-to-end privacy and gives users additional guarantees, such as cloud providers never having access to their data (a toy sketch of computing on ciphertexts follows the list below). However, performance is a major concern: computations on FHE-encrypted data remain significantly slower than the same computations on plaintext. Key challenges for FHE include:
- Data size expansion,
- Speed, and
- Usability.
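As a concrete taste of computing on ciphertexts, the toy Go program below implements textbook Paillier encryption, which is only additively homomorphic; real FHE schemes (e.g., BGV, BFV, CKKS) also support multiplication, which is what makes them “fully” homomorphic and also far more expensive. This is a sketch for intuition only, not anything resembling a production library:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

type pub struct{ n, n2 *big.Int }

type priv struct {
	pub
	lambda, mu *big.Int // lambda = lcm(p-1, q-1), mu = lambda^{-1} mod n
}

func keygen(bits int) priv {
	p, _ := rand.Prime(rand.Reader, bits)
	q, _ := rand.Prime(rand.Reader, bits)
	one := big.NewInt(1)
	n := new(big.Int).Mul(p, q)
	n2 := new(big.Int).Mul(n, n)
	pm1 := new(big.Int).Sub(p, one)
	qm1 := new(big.Int).Sub(q, one)
	gcd := new(big.Int).GCD(nil, nil, pm1, qm1)
	lambda := new(big.Int).Div(new(big.Int).Mul(pm1, qm1), gcd)
	mu := new(big.Int).ModInverse(lambda, n) // nil only with negligible probability
	return priv{pub{n, n2}, lambda, mu}
}

// encrypt computes c = (n+1)^m * r^n mod n^2 for a plaintext m in [0, n).
func (k pub) encrypt(m int64) *big.Int {
	r, _ := rand.Int(rand.Reader, k.n) // coprime to n with overwhelming probability
	g := new(big.Int).Add(k.n, big.NewInt(1))
	c := new(big.Int).Exp(g, big.NewInt(m), k.n2)
	c.Mul(c, new(big.Int).Exp(r, k.n, k.n2))
	return c.Mod(c, k.n2)
}

// decrypt computes m = L(c^lambda mod n^2) * mu mod n, where L(u) = (u-1)/n.
func (k priv) decrypt(c *big.Int) *big.Int {
	u := new(big.Int).Exp(c, k.lambda, k.n2)
	l := u.Sub(u, big.NewInt(1)).Div(u, k.n)
	return l.Mul(l, k.mu).Mod(l, k.n)
}

func main() {
	k := keygen(512)
	c1, c2 := k.encrypt(17), k.encrypt(25)
	// Multiplying ciphertexts adds the underlying plaintexts.
	sum := new(big.Int).Mul(c1, c2)
	sum.Mod(sum, k.n2)
	fmt.Println(k.decrypt(sum)) // prints 42
}
```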
The focus of the presentation was a model of an FHE hierarchy of needs, comprising both deficiency needs and growth needs. The FHE deficiency needs are the following:
- FHE Instruction Set, which focuses on data manipulation and ciphertext maintenance.
- FHE Compilers and Transpilers, which focus on parameter selection, optimizers, and schedulers.
- FHE Application Development, which focuses on development speed, debugging, and interoperability.
The next phase would be FHE growth needs:
- FHE Systems Integration and Privacy Engineering, which includes threat modeling.
- FHE used as a critical component of privacy-enhancing technologies (PETs).
A key current goal for FHE is to reduce the computational overhead for an entire application, to demonstrate FHE’s usefulness in practical real-world settings.
– Javed Samuel
High-Assurance Go Cryptography in Practice
Filippo Valsorda, the maintainer of the cryptographic code in the Go language since 2018, presented the principles at work behind that maintenance effort. The title above is from the RWC program, but the first presentation slide contained an updated title which might be clearer: “Go cryptography without bugs”. Indeed, the core principle is that Filippo has a well-defined notion of what he is trying to achieve, which he expressed in slightly more words as follows: “secure, safe, practical, modern, in this order”. This talk was all about very boring cryptography, with no complex mathematics; at most, some signatures or key exchanges, like we already did in the 1990s. But such operations are what actually gets used by applications most of the time, and it is of great practical importance that these primitives operate correctly, and that common applications do not misuse them through a difficult API. The talk went over these principles in a bit more detail, specifically:
- Memory safety: use of a programming language that at least ensures that buffer overflows and use-after-free conditions cannot happen (e.g., Go itself, or Rust).
- Tests: many test vectors, to try to exercise edge cases and other tricky conditions. In particular, negative test vectors are important, i.e., verifying that invalid data is properly detected and rejected (many test vector frameworks are only functional and check that the implementation runs correctly under normal conditions, but this is cryptography, and in cryptography there is an attacker who is intent on making the conditions very abnormal). A small illustrative sketch of such negative tests follows this list.
- Fuzzing: more tests designed by the computer trying to find unforeseen edge cases. Fuzzing helps because handcrafted negative test vectors can only check for edge conditions that the developer thought about; the really dangerous ones are the cases that the developer did not think about, and fuzzing can find some of them.
- Safe APIs: APIs should be hard to misuse and should hide all details that are not needed. For instance, when dealing with elliptic curves, points and scalars and signatures should be just arrays of bytes; it is not useful for applications to see modular integers and finite field elements and point coordinates. Sometimes it is, when building new primitives with more complex properties; but for 95% of applications (at least), using a low-level mathematics-heavy API is just more ways to get things wrong.
- Code generation: for some tasks, the computer is better at writing code than the human. The main example here is the implementation of finite fields, in particular integers modulo a big prime; much sorrow has ensued from ill-controlled propagation of carries. The fiat-crypto project automatically generates proven correct (and efficient) code for that, and Go uses said code.
- Low complexity: things should be simple. The more functions an API offers, the higher the probability that an application calls the wrong one. Old APIs and primitives, that should normally no longer be used in practice, are deprecated; not fully removed, because backward compatibility with existing source code is an important feature, but still duly marked as improper to use unless a very good reason to do so is offered. Who needs to use plain DSA signatures or the XTEA block cipher nowadays? Some people do! But most are better off not trying.
- Readability: everything should be easy to read. Complex operations should be broken down into simpler components. Readability is what makes it possible to do all of the above. If code is unreadable, it might be correct, but you cannot know it (and usually it means that it is not correct in some specific ways, and you won’t know it, but some attacker might).
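As promised above, here is a small table-driven Go test illustrating the negative-vector point; the corruption cases are ours, not taken from Go’s actual test suite:

```go
package ed25519test

import (
	"crypto/ed25519"
	"testing"
)

// TestRejectInvalidSignatures checks that corrupted inputs are rejected:
// each case takes a valid (message, signature) pair and breaks it in a
// different way. Every one of these MUST fail verification.
func TestRejectInvalidSignatures(t *testing.T) {
	pub, priv, err := ed25519.GenerateKey(nil) // nil means crypto/rand
	if err != nil {
		t.Fatal(err)
	}
	msg := []byte("real world cryptography")
	sig := ed25519.Sign(priv, msg)

	cases := []struct {
		name string
		msg  []byte
		sig  []byte
	}{
		{"flipped signature bit", msg, flipBit(sig, 0)},
		{"flipped message bit", flipBit(msg, 0), sig},
		{"truncated signature", msg, sig[:len(sig)-1]},
		{"all-zero signature", msg, make([]byte, ed25519.SignatureSize)},
	}
	for _, tc := range cases {
		if ed25519.Verify(pub, tc.msg, tc.sig) {
			t.Errorf("%s: invalid signature accepted", tc.name)
		}
	}
}

// flipBit returns a copy of b with bit i inverted.
func flipBit(b []byte, i int) []byte {
	out := append([]byte(nil), b...)
	out[i/8] ^= 1 << (i % 8)
	return out
}
```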
An important point here is that performance is not a paramount goal. In “secure, safe, practical, modern”, the word “fast” does not appear. Cryptographic implementations have to be somewhat efficient, because “practical” implies it (if an implementation is too slow, to the point that it is unusable, then it is not practical), but the quest for saving a few extra clock cycles is pointless for most applications. It does not matter whether you can do 10,000 or 20,000 Ed25519 signatures per second on your Web server! Even if that server is very busy, you’ll need at most a couple dozen per second. Extreme optimization of code is an intellectually interesting challenge, and in some very specialized applications it might even matter (especially in small embedded systems with severe operational constraints), but in most applications that you could conceivably develop in Go and run on large computers, safety and practicality are the important features, not speed of an isolated cryptographic primitive.
– Thomas Pornin
Real World Cryptography 2024
NCC Group’s Cryptography Services team boasts a strong Canadian contingent, so we were excited to learn that RWC 2024 will take place in Toronto, Canada on March 25–27, 2024. We look forward to catching up with everyone next year!