
Session Date/Time: 17 Mar 2026 03:30

Stanislav Smyshlyaev: Hi everyone. This is CFRG. We don't have too many people in the room. We really hope that someone will join us, but we have to start. So, this is CFRG. Together with Alexey and Nick, who is attending remotely, I will chair this session. And if you are here for CFRG, please stay. If you wanted something else, please find your room. So, we are ready to start. And first of all, the trivia. The session is being recorded. We have a minute-taker. Thanks a lot to Dan York, who will take notes. And we have a Jabber room relay, and if you forgot to scan the QR code and register at the meeting, please do so. We have the Note Well, and for all new participants, if you haven't read it yet, please do this. And all sessions, including this session, are being recorded and live streamed. We'd like to underline that there is a Code of Conduct in the IETF and IRTF, and you can read about this in these two RFCs to know more. We've got two sessions. The first one has just started and, as we can see, most of the participants are attending online, and we have a second session that will take place on Thursday. It will be a two-hour session and I hope that we'll be able to discuss everything we have planned. The agenda and the slides have been uploaded to the Datatracker. Please find them using these links. We'd like to remind you that the CFRG is not an IETF working group but an IRTF research group, which means that we conduct research; it is not some kind of standards development organization. We provide guidance and recommendations for everyone who seeks them. And we will talk a little more about this during our second presentation, related to post-quantum KEMs. We'll start with research group document status. We had many new RFCs in the autumn, but we don't have any new RFCs since November. We have three documents in the RFC Editor's queue or in IRSG review, including the CPace draft, which is the winner of our PAKE selection contest.
Together with the RSA guidance, it's in IRSG review, and AEGIS is in the RFC Editor's queue. We've got two documents in research group last call: the AD limits draft and DNHPKE. And we need some opinions on both of them, especially on AD limits, because we really want to finish that last call and we don't want to extend it once more. So please, if you think that you can add something valuable to the discussion, or if you are just able to read the document, please do so and express your opinion on the list. We had a last call for the previous version of this document. Some changes were made, and then we started another round of the same last call, and we want to have some conclusion, so please support the document if you want to support it, or please send your comments or concerns if you want something to be changed. AD limits is a very important document for us because we want to deal with all generic questions about AEADs before thinking about adoption of new AEAD documents, and we have some requests, so it's important for the group to move on. We have 13 active drafts. All of them are important. All of them are really active. And since that's an enormous number of active documents, we don't really want to add new work before we deal with something in our current deck, but of course we are open to all discussions. If you want to participate in the process with any of the documents, please contact the authors or the chairs, because help is always needed, especially for all the post-quantum or combiner documents: we have a design team, we have a lot of people working on them, and we always need your help or your opinion. Some of the documents are active now, but they had been expired for a long time, especially the pairing-friendly curves document, which was on pause for a long time and is now in active discussions again. Richard, do you want to say something? Please.

Richard Barnes: Yeah, just-- thanks. You mentioned the offer of or the need for help with some of the hybrid KEM documents especially. We published new versions just before this IETF, which I believe have nice fresh new proofs cited in them. So more review would be very appreciated on those if folks could take a look at some of the eprints that we're linking to that lay out the new proofs of the various security properties of the constructions we have. That would help us get these done and get them out to where people can use them. Thanks.

Stanislav Smyshlyaev: Thanks a lot, Richard. And yes, please, if you want to review any of the documents, including this one, please do so. We always need your opinions. We have some documents about general questions, like guidelines for writing cryptography specifications, and we want a new process for some general stuff regarding the post-quantum KEMs; we'll talk about it a little later. Another important topic is our Crypto Review Panel. It is a very significant and important part of CFRG. This panel of experts has been working since September 2016; we have had three rotations of the panel and we are ready to make another rotation in April. This is the panel of experts who help the chairs understand the current status of the documents regarding security assessment. We always ask them to provide reviews before starting a research group last call, and we don't do last calls before getting reviews from the panel. And the IESG and IETF working groups can always ask for help from the panel to review their documents when it is needed. And we thank the current nine members of the panel, and if someone wants to join, we will be happy for you to nominate yourself or someone else. We'll send a message to the group with a call for nominations in April. So if you want to join, please be ready for that. And I think that's all for the chairs' slides. Any questions, comments? Okay, then we can start with the second presentation. Nick Sullivan will tell us about our vision for PQ KEMs and our path forward on them. We had a lot of discussions and we think that we know how we can move on. Please, Nick.

Nick Sullivan: Okay, can you hear me? In the room?

Stanislav Smyshlyaev: Yes.

Nick Sullivan: Okay, it's a bit echoey. So we've talked about this a lot. There have been very active discussions on the mailing list about PQ KEMs beyond ML-KEM, and including ML-KEM. There's been discussion from different groups, different protocols, that have different requirements. We had a poll in the room at IETF 123 to ask the group what we should do regarding specifying different PQ KEMs. There have been presentations about NTRU, Classic McEliece, FrodoKEM, NTRU Prime; people have mentioned HQC. The IETF needs guidance as to whether it wants to make a choice between these. And so rather than specify them in the CFRG, the audience poll suggested that we first produce a KEM security requirements document. We're moving forward with that suggestion from the group right now. So here's the proposed approach. Currently there are no requirements documents; every IETF group incorporating ML-KEM or anything else needs to write its own security requirements document. Luckily, we have one individual draft, by Scott Fluhrer and others, that covers a wide array of security considerations you might have for ML-KEM. But we don't have one for any other KEM that people are interested in. So the path forward that we're proposing here is to adopt a group of documents under the topic of PQ KEM security considerations. And if anybody who's a proponent of NTRU, Classic McEliece, any of these, wants to put forward a security requirements document that is as comprehensive as Scott's or more, then we will consider that. And we're going to give the group six weeks to come together. We are not going to be accepting PQ KEMs that have not gone through extensive public cryptanalysis via an open process. So just keep that in mind: if you have your own magic PQ KEM that no one's ever seen except for you, we're probably not going to include it in this process. So here are the sort of questions for the group. Security considerations are vague.
There's not really, across the IETF, a very strict list of things that we want to cover here. I think we should be more comprehensive within the CFRG. But what that criteria is needs to be discussed and can be discussed. And so the questions here are, and these are for discussion on the list I think, unless somebody wants to come to the mic: do we need a design team to come up with a common evaluation criteria for PQ KEMs? It could go either way. Is a mailing list discussion enough, where someone lists what is the bare minimum to be covered in a security considerations document for a PQ KEM? Again, this depends on how much discussion happens on the list. And the third question is the most important: is there anybody who's willing to write a security evaluation document for any PQ KEM other than ML-KEM? Without this, there won't be a security considerations recommendation from the CFRG to any IETF groups for any PQ KEM other than ML-KEM. So if people have other suggestions, people have other preferred KEMs, this is your time: you have six weeks to come forward with a security considerations document. Rowen?

Rowen May: Just a stupid question. What's the proposed output of this process?

Nick Sullivan: Right. The proposed output of this process is an adoption of a group of documents, each of which provides security considerations for IETF protocols for using an already specified PQ KEM.

Eric Rescorla: Yeah, so I guess, as a consumer of this output, and by consumer I mean an IETF working group like TLS or LAMPS or something, how am I supposed to interpret that output? So you know, you do one for HQC, and I'm supposed to interpret that how? That HQC is cool, totally fine to use? Like, what am I supposed to read into this?

Nick Sullivan: Yeah, I would recommend taking a look at the ML-KEM document for an example of what types of things are relevant, and you can think of anything from which underlying mathematical property it depends on, to sizes, to side-channel considerations, to dependencies on other documents. There's quite a lot. And this is why I wanted to have this as a mailing list discussion. And TLS, for example, is a group in which there are a lot of experts. We have many, many groups within the IETF who don't necessarily have the expertise in-house to do this. So this is not targeted towards the very crypto-heavy groups, but more towards the proliferation of groups who want to incorporate a KEM. We would like them to have security considerations that are well-vetted.

Eric Rescorla: Right. So I guess, you say TLS is crypto-heavy and that's true, but I think it would actually be quite helpful for CFRG to, well, "recommendations" would be too strong a term, although I'd actually prefer that, but I think it'd be good to hear, especially for the KEMs that have kind of comparable performance properties, which ones the CFRG has the most confidence in and thinks we should adopt. It's very hard to see ML-KEM not popping out of this, given that we're already doing it across the IETF, but generally speaking, if you want to do ML-KEM and then you want to do another one, I think it'd be helpful to understand what the view of CFRG is. Now, without getting too far over your skis, which I understand you don't want to do.

Nick Sullivan: Sure. Yeah, we don't have any considerations other than ML-KEM at this point, so there's nothing to compare. But deciding on whether this is simply a set of recommendations-- a set of guidelines versus an actual strong recommendation, that's to be determined and can be discussed during adoption calls. Next speaker. Deirdre Connolly.

Deirdre Connolly: Uh, yeah. I've been helping with the hybrid KEM stuff. Maybe mailing list discussion is sufficient. For the hybrid KEM stuff, we discussed what we wanted to evaluate for all of these hybrid KEM combiners. We talked about things beyond IND-CCA, like binding properties. We have done a lot of work on those. We're not finding them very relevant to existing protocols in the IETF. Basically, if you get IND-CCA and you're using one of these new PQ KEMs, either as a component or by itself, they usually use the same-ish recipes to take a public key encryption scheme, based on some hardness property which may vary, into a KEM, so they have similar-ish properties in general. And where they differ seems not to matter that much. So I don't know if we need to go much further; maybe just take the lessons learned out of the hybrid KEM stuff and some of the other documents, like Scott's document, and distill that into this criteria, but without a design team. In terms of who will write evaluations for these other PQ KEMs, we do have a FrodoKEM document that's floating in the group. I think that's more of a specification. It is, but maybe we can either cannibalize that document into a security considerations document or ask the editors on that to do that work. So there's that. And then in terms of which ones, and whether this is an endorsement or recommendation versus just us writing up considerations: I think anything the research group picks to do work on will be a little bit of a de facto "we think this is important enough to talk about." So maybe we don't even need to pick the specific words we're using, because even if we do the lowest bar of "we're just talking about the considerations of this one and this one and this one, but not that one," maybe that's enough of a signal.

Nick Sullivan: Sure.

Stanislav Smyshlyaev: We have Guofei.

Guofei Gu: Hi, this is Guofei from the Bouncy Castle crypto library. Yeah, I have a question about ML-KEM, that draft standard. On the encapsulation and decapsulation key checks, the draft mentions that a library may combine these with core operations. As implementers, is our safest bet just following the validation steps in FIPS 203 to the letter, or is there some implementation wisdom on how to integrate those checks efficiently without accidentally breaking IND-CCA security? Thank you.

Nick Sullivan: Okay. We're happy that you read the draft, and Scott can comment quickly, but we're not analyzing these drafts at this point; we're discussing process.

Scott Fluhrer: Yeah. Scott Fluhrer, Cisco Systems. When I wrote my original ML-KEM draft, I was not considering the question of whether ML-KEM is secure. What I was doing is answering the question: how do you use ML-KEM without shooting yourselves in the foot? The obvious question is: what is the general expected target for other drafts? Is it analysis of whether FrodoKEM is secure, or is it, assuming it's secure, how to use it securely?
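[For background on Guofei's question above: FIPS 203's input validation on an ML-KEM encapsulation key is a "modulus check": every packed 12-bit coefficient must already be reduced mod q = 3329, equivalently decode-then-re-encode must reproduce the input bytes; the decapsulation key gets a separate hash check. The sketch below is illustrative only; function names are not from any library and the draft's advice about combining checks with core operations is not reproduced here.]

```python
# Illustrative sketch of the FIPS 203 "modulus check" on an ML-KEM
# encapsulation key (ek). Function names are hypothetical, not from
# any library; a real implementation works per the standard's encodings.

Q = 3329  # ML-KEM modulus

def decode12(b: bytes) -> list[int]:
    """Unpack 12-bit little-endian coefficients (3 bytes -> 2 coefficients)."""
    out = []
    for i in range(0, len(b), 3):
        x = b[i] | (b[i + 1] << 8) | (b[i + 2] << 16)
        out.extend((x & 0xFFF, x >> 12))
    return out

def ek_modulus_check(ek: bytes) -> bool:
    """Accept ek only if its polynomial part is canonically encoded,
    i.e. every coefficient is already reduced mod Q."""
    body = ek[:-32]                      # last 32 bytes are the seed rho
    if not body or len(body) % 384 != 0:  # 384 bytes per degree-256 polynomial
        return False
    return all(c < Q for c in decode12(body))

# A canonical key body (all-zero coefficients here) passes:
assert ek_modulus_check(bytes(384) + bytes(32))
# A key smuggling an unreduced coefficient (4000 >= Q) is rejected:
assert not ek_modulus_check(bytes([0xA0, 0x0F, 0x00]) + bytes(381) + bytes(32))
```

The point of the check is canonicality: skipping it lets a peer hand you non-canonical keys, which is one of the ways implementers can "shoot themselves in the foot" even when the underlying scheme is secure.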

Nick Sullivan: That's a good question. At this point, we're looking for security evaluations for KEMs. The final criteria for what these drafts look like is still TBD; this is, I imagine, something the design team or mailing list discussion would cover. Russ? Scott already... Russ?

Russ Housley: Hi. I find this, you know, perplexing. As the chair of the LAMPS working group, we have in our charter that we'll only use NIST-approved or CFRG-approved algorithms for the PQ work. And this isn't a step toward answering whether the CFRG approves an algorithm. So I just think it's a waste of time that doesn't get to what the IETF really needs.

Nick Sullivan: Okay. Well, CFRG, as far as I know, doesn't approve algorithms. So happy to have that discussion. Valery?

Valery Smyslov: Valery Smyslov, ELVIS-PLUS. Naive question: are only KEMs in scope? Do you think about post-quantum signatures? These evaluation documents...

Nick Sullivan: This is exclusively about KEMs. Signatures, I imagine if there's enough interest, could go through a similar process and have a similar set of security recommendations if there's energy from folks who want to write those documents and there's need for IETF groups to have security considerations on signatures. This is the one that has had the most attention and conversation so far, not to say that signatures are not important. And it looks like the queue is locked and I'll hand it back to you Stanislav, unless there's something else.

Stanislav Smyshlyaev: I am not sure that we have time because we are behind schedule now, and I propose to move on because we really need to get back to the schedule. We can move it to the list. Thank you, Nick, for this, and of course we can continue the discussion on the list. So our third presentation is about Sigma protocols and Michele, Katie, one of you please start. Please take the clicker. It will be working in a minute. Yes, it does. Please start.

Michele Orrù: Hello everyone. This is an update on the Sigma protocol and Fiat-Shamir IRTF CFRG drafts. We are trying to inform and specify how two components that are very popular in the zero-knowledge literature should be done. One is Sigma protocols, which are zero-knowledge interactive proofs of knowledge for generally simple relations; so simple, in fact, that there are only three messages, and one of them is just a random challenge, a public coin. The other is Fiat-Shamir, which allows you to turn this interactive protocol into a non-interactive one by means of a cryptographic hash function. So together, they allow you to create non-interactive zero-knowledge proofs that can later be used also by other documents. These documents have now been part of the CFRG for a while. Sigma protocols are in particular useful for very simple relations, for example for simple, lightweight anonymous credentials. That's where we found most adoption, but we expect them to be used in other places as well. And specifically, we expect the Fiat-Shamir transformation to survive as we move forward with the post-quantum agenda, because it's such an important component that we expect it to be useful in the future and to be used in other zero-knowledge proofs that are not Sigma protocols. Since the last IETF, we have been working mostly on collaborating with two other drafts that have been using these specifications, and this has mostly been a useful, constructive, collaborative effort with other people that I would like to thank explicitly: Armando, Chris, Chris Wood, Jonathan, Michael, Sam, and Vishruti. And in addition to that, mostly thanks to that effort, we now have test vectors that are part of the specification.
We have cipher suites and test vectors integrated, and we are hopefully stabilizing these test vectors so that in the future they can be used as a reference for what the inputs and outputs of these cryptographic protocols should be. We are also kicking off the formal verification of the Fiat-Shamir transformation, which is, you know, a long effort that will take a while, but we have set the basis so that this can be done and later used by adopters of these specifications. In addition to that, thanks to Lindsay, Victor, and Ian, we built a library that can be used to prototype some of these systems. Given that they've been used a lot for anonymous credentials, we created a Rust library that can be used, for instance, to specify credentials with attributes and how they can be redeemed. And these are just two examples that have been popular in the IETF, for rate limiting or for pseudonym authentication. There is a wide range of credentials; there were big discussions, and there will be other discussions happening at the next CFRG meeting. These credentials are particularly easy to specify and they can also be verified in one millisecond or less, so we expect them to be useful now, even though, as has been discussed for a while, in their basic form they do not provide post-quantum security from a soundness perspective, only from an anonymity perspective. So we built this whole stack of implementation, and the two things that you see colored there are the ones that have been targeted by the specification effort and that we have test vectors for. And all the things on top are things that people can build on top. And as we move forward with the standardization of this primitive, the question is also about other specifications that are part of the CFRG or part of the larger community.
There are things like per-verifier linkability within BBS, or anonymous authentication, or Longfellow, that are all using some of the components here, and so the question for the community is how do we converge so that we all base everything on one simple specification for the components that are to be reused. So this is all I wanted to say. Again, wrapping up: these two specifications are about simple zero-knowledge proofs. One is about interactive protocols that deal with simple linear relations over elliptic curves, and the Fiat-Shamir transformation is about how we make them non-interactive so they can be used in practice. Thanks.
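[To make the two components above concrete, here is a toy sketch of a Schnorr Sigma protocol (proving knowledge of a discrete log) made non-interactive via Fiat-Shamir. It uses a tiny multiplicative group and an ad hoc hash-to-challenge, not the drafts' actual cipher suites, curves, or challenge derivation.]

```python
import hashlib

# Toy subgroup: p = 2q + 1 with q prime; g generates the order-q subgroup.
# Real deployments use elliptic curves and the drafts' own challenge derivation.
p, q, g = 23, 11, 4

def challenge(*parts: int) -> int:
    """Fiat-Shamir: the verifier's random public coin is replaced by a
    hash of the statement and the prover's first message."""
    h = hashlib.sha256()
    for x in parts:
        h.update(x.to_bytes(8, "big"))
    return int.from_bytes(h.digest(), "big") % q

def prove(x: int, nonce: int):
    """Prove knowledge of x such that y = g^x mod p (Schnorr)."""
    y = pow(g, x, p)
    t = pow(g, nonce, p)        # Sigma message 1: commitment
    c = challenge(y, t)         # Sigma message 2: challenge, now a hash
    s = (nonce + c * x) % q     # Sigma message 3: response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=7, nonce=5)   # the nonce must be fresh and secret in practice
assert verify(y, t, s)
assert not verify(y, t, (s + 1) % q)   # a tampered response is rejected
```

Note that the challenge hash binds the statement y as well as the commitment t; "weak" Fiat-Shamir variants that omit the statement admit known attacks, which is part of what a careful specification of the transformation pins down.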

Stanislav Smyshlyaev: Thanks a lot. Any comments, questions? No? Thank you. And let's move on, trying to get back to the schedule. Mallory, you're welcome to start.

Mallory Knodel: Thanks. Hi everyone, I'm Mallory Knodel. I'm going to make this quick. Also wanted to just start by thanking the chairs for letting me present today. I'm just unable to be here on your Thursday session, which is I think when you would have a talk like this. So I'm not going to be talking about any research group document. I wanted to bring in some considerations from research that I have conducted with colleagues at both NYU and Cornell about the use of AI and data-driven features in end-to-end encrypted messaging, video, audio applications. Just trying to bring some practice to the theory that you're all worried about because I think there's a trend here that we really need to address. So folks might have seen this if you're using WhatsApp now. It's been out for about a year. There's also examples in Samsung phones that are using Google Messages. And lastly, Apple Intelligence got a lot of attention for its announcement last year. Some analysis was out pretty quickly by Matt Green and others and this paper came out around the same time. So what is it that we're looking at? I mean, in some ways, it's a problem of endness, right? That I think Chelsea Komlo and Brita Hale had presented on a few years ago. This idea that your phone, which has these apps on it, is, you know, potentially working with some of these other features. Feature-rich messaging is really popular. But what happens when you've gone through all this effort to, you know, enable strong cryptography for the purposes of confidentiality and privacy and then you start introducing features that basically just, you know, run-- run roughshod all over that, right? So I want you to look at these two screenshots. If you can read them from where you're sitting. Because we-- this narrative has not changed. 
That on the left, you have Meta AI in your WhatsApp chats, and it says your personal messages stay private, and it gives you some indication of how that happens even if you invoke Meta AI in your chats in a variety of different ways, right? For example, summarizing the transcript of a group chat: somehow your personal messages stay private. Okay, how is that achieved? We wanted to actually analyze this, because these claims are very consequential and we were not actually convinced. So we analyzed the available public documentation for these things and came out with some recommendations for how to actually achieve, or sorry, maintain, let's say, the security, privacy, and confidentiality guarantees of end-to-end encryption if you're using AI features. So this was the main research question: can you process message content, can you create derivatives of encrypted message content, in a way that's compatible with end-to-end encryption? Maybe there are some narrow ways, but what are they, and what definitely should you not be doing? So we had four recommendations: about how AI training works and how models trained on end-to-end encrypted content should or shouldn't claim to have kept that data secure, and about in situ use, which is what we would consider the processing of that content. That's where I'm going to focus my talk, because that's the security stuff. The last two are mostly legal. They're in the slides; I'm not going to deal with them today, and if you want the talky-talk version of this talk, there's a version available online and you can also, of course, read the paper. But disclosure and consent is basically: can you actually continue to say things like "your messages are still private," "this is still end-to-end encryption"? Is that actually appropriate? Okay. So let's talk about AI just for a brief moment, so we can get clear on what we mean when we use these different terms.
So, to interpret, I think one of the motivations for using AI in messaging applications is: one, people like it; you know, maybe it's fun to be able to put party hats on your friend's pictures. Maybe another reason is that people imagine there's a lot of this already happening in the form of: on my endpoint, on my device, I'm copying content, pasting it into AI chats, and then pasting it back into the end-to-end encrypted app. Maybe you should just make that all one interface; that could be a form of harm reduction. It could also just be easier. And then, you know, maybe another reason is that messaging conversations are novel sources of content that can then improve models, right? Gives you rarer tokens and all of that. But you know, at what price, right? So we treat training as something really specific: a moment where you might be using some of this content. And then there's the process of inference; this is when you've asked the trained model to do certain things for you, to generate content, to generate images, whatever. And then, you know, the data collection is actually part of the inference, so it feeds back into the training. So how does this happen? I mean, there's a couple of ways to do this. If you're going to introduce these features, you could just do it all on the same endpoint, and that would be kind of the automated equivalent of me copying some content, pasting it into my robot, taking the output, and putting it back into the chat. You've just made that easier. But that's not really what's happening, because our phones are not really powerful enough to do that. You could do another option where you sort of treat the bot like another endpoint. So you get around having to convince people that my robot is my endpoint, and just give the robot its own endpoint.
That would be, you know, like the ghost proposal, if people remember that, except instead of a ghost, the bots are in the chat. Obviously, from a user interface perspective, people don't like that, but you maybe notice when you join Zoom meetings that sometimes there's a note-taker. That'd be the sort of other option. What's actually happening is on the left. All three of the proposals we evaluated are using trusted execution environments in the cloud to do the processing. The data, the content, is leaving the user's endpoint device and is now being processed in the cloud. And there's a really massive narrative push to convince folks that this is equivalent to end-to-end encryption. So we started small but obvious. If you train a shared AI model, meaning other people have access to that model, on content that was originally end-to-end encrypted, that is not compatible with end-to-end encryption. And we talk a little bit more in the paper about the specifics of what the harms would be, right? You can recover data from a shared model; we treat it as derivative content, which also shouldn't be considered private content anymore. So that's our first very strong recommendation. This would not be cool. The second recommendation takes it a little bit further and talks about the processing on TEEs. And I just want to say, I'm sure for a lot of you folks in the room this is really obvious, but the goals for end-to-end encryption and the goals for trusted hardware may look like they have similar outcomes, may feel the same, but they have absolutely different goals. They do totally different things and they aren't the same. This is one of the core points that we make in the paper. With end-to-end encryption, you're going for confidentiality of the communication between endpoints. With trusted hardware, you're trying to create confidentiality of the compute.
The machine state between the user and the cloud is then the thing that you are creating confidentiality for. Another really obvious one is that end-to-end encryption's security rests on mathematical hardness, while trusted hardware's security goal rests on the hardware design and its specification. So again, it's not the same thing. We came up with a framework, as a lot of good academic papers do, for how to evaluate this, and not just for this case; I think in general it could be expanded to look at a variety of different features for consumer messaging, voice, and video apps that are using encryption. But for this, we can just show you in a pretty straightforward way what we think would be cool. Compatible with end-to-end encryption would be on-device models. Again, probably not achievable for most devices today, but still something that's possible. The other thing that we also point out in the paper is that fully homomorphic encryption, again without fine-tuning, would also be compatible. So we give those two things as caveats. Jumping to the end, the most vulnerable thing would be server-side plaintext inference without fine-tuning: definitely don't do that. For what it's worth, I don't think any of the proposals we evaluated are actually proposing that, right? But the things that fall in the middle are kind of what folks are proposing. So you know, it's not compatible with end-to-end encryption to be doing all this processing off-device, and we really, really want to impart that on the folks who are deploying this and the folks who are working with deployments to improve them, to guide them, to suggest guardrails around that. So that's, I hope, the outcome of this paper. This is, if you're interested, the text of the second recommendation. We want to encourage endpoint-local processing.
If you can't do that, then you should make sure that there isn't a third party that can see the content, or derivatives of the content; we're talking about message summaries or things like that. And the end-to-end encrypted content should be used exclusively to fulfill the user's requests and nothing else; for example, you're not trying to obliquely figure out preferences or things that could be used for advertising. There may be other equities and incentives involved, and we want to go ahead and say that that would contravene the promises of end-to-end encryption, if it wasn't already obvious. Like I said, I was going to speed through this, but the paper includes a great deal about the legal questions around consent and disclosure, because these are being framed as privacy-preserving or end-to-end encrypted messaging apps, even if we feel that they don't actually deliver that anymore with these AI features. And one thing I just wanted to point out is that, before all this AI stuff, Zoom got actioned by the FTC because they were calling their product end-to-end encrypted but it wasn't on by default, and that was the basis for the FTC telling them that they could no longer market Zoom as E2EE, for folks who remember that. So that's one of the things we highlight. Skipping ahead: if you're going to be doing this, you need to give users disclosures, you need to give them opportunities to opt out, and we do see that, I think, in the apps and things that we've looked at. I will leave it here. I know we don't have a lot of time today, but I did want to say that we just updated this paper last week, so it's fresh. It's got a little bit of discussion in there about a variety of new developments in the space, and so maybe folks will find that useful. Thanks.

Stanislav Smyshlyaev: Thank you very much, Mallory. Please join.

Jon: Yeah, first thing, I hope the use of advanced crypto in phones doesn't turn out to be privacy theater. We have seen a leaked FBI report before showing that Meta's WhatsApp and iMessage actually give the police a lot of information, even, like, if you store it in iCloud, a lot of the past... My question, on slide 15: you said that FHE, fully homomorphic encryption, is compatible. Wouldn't you need some multi-party computation? My understanding is that with fully homomorphic encryption, you need the private key to actually extract any data.

Mallory Knodel: Yeah. So just to be clear, it wouldn't be for training, it'd be for processing. And I think what we were trying to say is that there are some conditions in which you could create derivatives of encrypted content, like summaries or the ability to do keyword search. There's a bunch of stuff there that, if you were to use fully homomorphic encryption, would still preserve the encryptedness, right? You would never have to decrypt, you would never be using the plaintext in order to get that output, and so therefore it preserves end-to-end encryption. But it's the same problem: it's very expensive on-device, it's unlikely to be implemented at scale. So...
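To make the "compute on ciphertexts without decrypting" point concrete, here is a toy example in Python using textbook Paillier encryption, which is additively homomorphic. This is emphatically not FHE (FHE supports arbitrary computation, not just addition), it is not what any messaging deployment uses, and the parameters are deliberately tiny and insecure; it only illustrates that a party holding ciphertexts can derive a useful output without ever seeing a plaintext or the private key.

```python
import math

# Toy Paillier cryptosystem with deliberately tiny, insecure parameters.
# Additively homomorphic only -- NOT fully homomorphic encryption -- but it
# shows the property under discussion: ciphertexts can be combined into a
# meaningful result without decrypting the inputs.
p, q = 11, 13
n, n2 = p * q, (p * q) ** 2          # public modulus and its square
g = n + 1                            # standard simplified choice of g
lam = math.lcm(p - 1, q - 1)         # private key
mu = pow(lam, -1, n)                 # since g = n+1, L(g^lam mod n^2) = lam

def encrypt(m, r):                   # r must be coprime to n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def L(x):
    return (x - 1) // n

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(5, 2), encrypt(7, 3)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
print(decrypt((c1 * c2) % n2))  # 12, computed without decrypting c1 or c2
```

Note that the party multiplying the ciphertexts learns nothing: extracting the result still requires `lam`, which only the key holder has, matching Jon's point above.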

Jon: Can you get any output from fully homomorphic encryption?

Mallory Knodel: That I... I mean, I honestly just don't know. I know Deirdre has talked about this in the past too, so folks in the room, or Deirdre, come on up and talk about the specifics of it. We just wanted to mention it as something that, you know, doesn't get decrypted, it's not plaintext, so we feel like that would be compatible.

Stanislav Smyshlyaev: Okay, thanks. And I'm seeing that we don't have any time for new comments. Maybe you should also ask for some guidance in SAAG because CFRG can think about challenges here, but we don't have any guidance for you just now. So maybe you should go to SAAG and maybe they will have something more. Thanks again. And our last presentation, Nick Sullivan. Please start.

Nick Sullivan: Okay, can folks hear? I'm hearing some echoes, but I was told I was quiet before. Yes? No? I'm assuming everything's good. Okay, so for this my chair hat is off. This is an individual draft that I put together because of a series of confusions that we've had across the IETF around something we have not had to do for a very long time: bringing new primitives into the IETF more broadly. So I put together a document, structured as a BCP, that hopes to answer the question: how do we standardize cryptography at the IETF? What pathways exist within the existing relationships between groups, what has worked in the past and what hasn't, and what lessons we can learn from that going forward. And from a very high level, there is the fundamental cryptographic analysis, which oftentimes is done at the CFRG, or even NIST or other external places. And then there are the concrete decisions about protocol profiles: things on the wire, IANA specifications, etc. So this is not a "here's how we should do things" document. It's a "here's a list of ways" that we can get ahead of decisions, or get ahead of issues, that happen pretty frequently and have happened frequently even at this IETF. So in particular, what's broken today? Oftentimes, working groups without cryptographic specialization have used primitives that don't necessarily have IETF approval or even a review from experts within the IETF. And we mentioned that the crypto panel is going to be renewed right now. As a reminder, the crypto panel is a general resource for the entire IETF. Any working group chair or area director who would like a comprehensive cryptographic review of, maybe not an entire protocol, but at the very least the security considerations of a protocol, can come to the CFRG chairs and we can try to delegate this sort of thing.
So sometimes you end up in situations where different groups implement and standardize things that are just slightly different from each other, and that is a very bad situation for us going forward because it has negative impacts on library developers. Problem two, and this is very salient: crypto issues surface at last call that can take what is a generally uncontroversial draft and cause it to be torpedoed, or discussed in all sorts of different ways with new things brought up. So when protocol documents do reach last call -- IETF last call, IESG review, or even working group last call -- there can be gaps in the crypto that people step into there. And then it's very, very much too late to go back and start a CFRG document, which, as we know, takes years, in order to relieve that; even crypto panel reviews are scoped in weeks, right, not in days. So sometimes fixing design issues like this creates compromises under time pressure, and documents that are valid or helpful may get stuck one way or the other. And the review done at last call in one working group does not have the memory of the reviews done at working group last calls in other groups. So this is the meat of where this came from and the inspiration for this draft. If you want a flowchart -- flowcharts are great if you're doing process analysis -- effectively what this document proposes is a two-lane model. This text is small, I apologize; the slides are available. There's the cryptographic foundation side, which is the mechanism specification and security considerations. This does not include wire formats, does not include code points, no IANA registries, but it does include test vectors. And then you have the standards layer, right, which is the protocol profiles.
This is wire formats, code points, IANA registries, normative references -- a common shared vocabulary across working groups. And so this pathway has a number of decisions here. It starts with a working group or an individual needing a cryptographic primitive for their protocol. The first question to ask is: is there a CFRG primitive available, or a primitive available for which the crypto panel has done a review? And this is not to say that every crypto decision within the IETF has to go through CFRG -- we are definitely not recommending that. What we're saying is that in deciding whether to introduce new cryptographic primitives into protocols, it's worth seeing what the rest of the IETF has done, and in particular the central analysis point for cryptography at the IETF, which is the role CFRG has played, whether they've specified a primitive or not. So the way this worked for Privacy Pass is pretty straightforward. Privacy Pass wanted a primitive; they introduced this new capability based on blind RSA or VOPRF. This did not exist. So step number two: do other working groups need it? The answer was no, and I think it's probably still no for blinded RSA and VOPRF. And so this leads you down the path to the final one right here, where a working group or individual drives parallel work with the CFRG. This is what we did with Privacy Pass, and it was extremely smooth -- well, maybe not extremely smooth, but it worked out in the end. The protocols were developed in the IETF layer -- all the code points, all the exact on-the-wire recommendations, IANA code points were developed in the Privacy Pass group -- while hash-to-curve, VOPRF, and blinded RSA, because they were not externally specified by an open international process, were specified by CFRG. So that's what you'd call a happy path that we've done before.
And if you apply this framework from scratch, that's one path that you can get. There are times in which different working groups need existing primitives that are not necessarily in the best shape to share beyond an informational document. HPKE was an example here; this is an ongoing process that's happening. So it goes: the working group needs a cryptographic primitive -- let's take the TLS group for ECH here. HPKE was originally developed for MLS, or in conjunction with MLS, but is there a CFRG primitive? Yes. Is it on the standards track? No, it was an informational document. And the question was raised: should we promote this to a standards-track document so that it can be referenced without downrefs by everybody, rather than just from a research document? And this led to a dedicated working group that republished the profiles. So this is another happy path that we're trying and that is currently being worked through. The idea is that these dedicated working groups are tightly scoped and focused on delivering outcomes, to enable a wide variety of working groups to use a primitive in a way that's been analyzed and looked at from an IETF usage perspective. That's HPKE. It was mentioned earlier in this conversation about PQ signatures. So there are current debates in different groups -- massive debates; I was just in JOSE previous to this -- about PQ signatures: which ones to use and how. If you were to follow the process here, two years ago when the combiners or signatures were originally being explored for LAMPS, it could have followed this path: the working group needs a cryptographic primitive, CFRG doesn't have anything on PQ signatures, do other working groups need it? Yes. And the conclusion here is potentially to form a dedicated working group within the IETF targeted on PQ signatures.
And this would work in partnership with CFRG, which could provide reviews on the security considerations, or even a full security considerations document on the PQ signatures that would come forward through that. So this is kind of the overall view of what this process could look like for introducing and standardizing new primitives without having bikeshedding happen in 17 different working groups all around the world. Let's focus it all in one place: in one working group if it's small enough, or on the one person who's developing that draft if it's very, very small, working in consultation with the CFRG to ensure that no security issues happen. So we don't have a lot of time here, but I want to leave this up for discussion. This is an individual draft from my perspective. I've been sharing it around to get folks' opinions on whether or not this could be a helpful way to view the IETF processes that are available to folks to end up with a positive outcome going forward. So there are some discussion questions here, but I'd rather go and take questions from the audience if we have any.

Stanislav Smyshlyaev: I would like to add that we will start the Thursday session with 10 minutes allocated specifically for this discussion. So we'll update the agenda, and on Thursday the first 10 minutes of our session will be dedicated to this discussion. So if anyone wants to say something, please do it at the beginning of our session on Thursday. That's probably better, because if we were to do a poll, very few people would have read this yet. And now we are out of time. So thanks everyone for attending. We are waiting for you on Thursday, and have a nice lunch.


Session Date/Time: 19 Mar 2026 01:00

This is a complete verbatim transcript of the CFRG session at IETF 125.

Stanislav Smyshlyaev: All right, good morning everyone. This is CFRG. We have the second session today. We've got two hours, but we've got a really busy agenda, so let's start. And we will start with the discussion of the last presentation of the Tuesday session, about the two-lane model for publications. Nick did the presentation, and there were some requests to discuss this further, and of course we are happy to allocate some time for this. So we have 10 minutes, and if anyone wants to comment, please do that.

While we are waiting, we have a minute taker, even two minute takers. Thanks a lot to Dan York and Usama, and all other information about the meeting is in the Chairs' slides that were on Tuesday. So is there anyone wanting to comment on Nick's slides on the two-lane model, or we can move on?

Nick Sullivan: Yeah, just as a quick reminder for folks who weren't here at the first session: I have a personal draft, structured as a BCP, that describes different ways through which primitives can be specified at the IETF in conjunction with the CFRG. And the goal of this is to collect some of the pathways that have worked well and to provide them in a way that gives working group chairs, or anybody who needs a new cryptographic primitive, a pathway to having the entire IETF adopt and agree on how to use such a primitive.

Stanislav Smyshlyaev: Okay, so we've got two people in the queue. Michael, please.

Michael: Hi, Michael from NCSC. Nick, thank you for writing this draft. We spoke about this in person. But I had a quick question on the forming a dedicated working group point. So I think it's a really good idea to put all the discussion and the bike-shedding in one place. I guess it wasn't clear to me that when you have competing requirements or different needs from different working groups, how that's going to work coming from kind of one document from one working group. Could you expand on that a little bit?

Nick Sullivan: Sure. So the idea here is that if there are three or more working groups who want to adopt a primitive and implement it in their protocols, the advocates for it can come together and hash out and debate what the best parameterizations would be. The goal here is not to have one document that dictates things to different groups, but to have a slate of parallel documents that provide code points for all the protocols that might need them.

So, for example, we have LAMPS, and we have the JOSE working group, and we have MLS, who are all interested in PQ hybrids, and they're all interested in—they have made different choices as to which parameterization to use: which version of ML-KEM, which version of the elliptic curve. There's good arguments to be made, but these arguments currently are being made in sort of silos where one working group does not see the arguments of other working groups for and against different combinations. So the idea here would be that at the very least these conversations would happen on the same mailing list so that, you know, it's in the light. So if one group does not have the same people, the same expertise, they would benefit from being in the same room as folks who are looking at implementing this primitive from a different perspective or within a different protocol. That's the idea.

Stanislav Smyshlyaev: Thank you. Next question, Eric.

Eric Rescorla: So I guess I'd like to just walk through an example; perhaps that would help. You know, we've had a lot of people who want to adopt, say, post-quantum. Forget about the hybrids for a second, because the post-quantum algorithms, like ML-KEM specifically, are to some extent a drop-in for Diffie-Hellman -- or we wish they were, but we can sort of pretend they are. People have been pretending it is, even though it of course is not, as you know. So NIST standardized, you know, FIPS 203, and then everyone's like, "Cool, we want ML-KEM." What now?

Nick Sullivan: Yeah, let's walk through the flowchart, right? So let's just say it's three different groups, and they want a KEM that's PQ. They think they want ML-KEM, but that's really something that is worth discussing, right? I mean, it doesn't come out of the box that a group would definitely want ML-KEM; they may want an alternative. But in any case, ML-KEM comes in at the very top of this chart: "Is there a primitive available defined by the CFRG or reviewed by the CFRG?" And the answer currently is no.

Go to the left: "Do other working groups need it?" Yes, there's a group of three or more groups who are looking for PQ KEMs. And this leads you to the "Form a dedicated working group" in which case they would consult with the CFRG in order to take a look at whether or not this ML-KEM has security considerations, right? Or the selection of all the different potential groups would come to the CFRG and say, "Hey, we want to use PQ KEM. What are the sharp edges for it?" and work with the CFRG to produce a security considerations document much like the ML-KEM doc that Scott put together. And for all the working groups there would be registering of code points within this dedicated working group. So you'd get a TLS code point doc, you'd get an MLS code point doc, you'd get a PGP code point doc.

Eric Rescorla: So I think maybe the situation like actually—and that's a good illustration, thank you. So from the perspective as Bas was saying in the chat, like we've already been shoving ML-KEM in everything, right? And I think you sort of phrased this as having a PQ considerations doc. But my sense is that what people actually want to know is, you know, is ML-KEM okay? Now, I think people by and large have just assumed that it came out of the NIST process and so it's basically okay, right?

But if it was potentially Classic McEliece or, you know, NTRU, people might feel differently, right? And so I guess I sort of want to distinguish the question of assessing whether a primitive is good from giving guidance on how to use it. And I feel like the thing that we definitely need is the second, even if we bypass it in the case where it's been through some SDO. But I think we badly need a capability to say: there's interest in using, say, NTRU in SSH; is that a practice which CFRG feels comfortable with? Because, if I understand these security considerations documents, there's the question of what the external shape of the thing is, right? And there's the question of whether it's internally okay. And it could be the case that you have two things with the same external shape, but actually one is perfectly broken, right? And it's not clear to me that a security considerations document resolves that question. Maybe you think it does.

Nick Sullivan: Uh, yeah, that's a good question. I think with regard to working groups who want to make the choice of NTRU, for example, on their own, the path would be similar. You'd go: "This working group needs a primitive," CFRG hasn't said anything about the primitive, and then it would go down the other path, where this individual will drive either CFRG adopting an analysis of multiple things, or, if that's not on the table and there's not enough interest for comparing comparable PQ things, we would potentially get the Crypto Panel to just review the security considerations. And if you wanted to use NTRU, part of your security considerations would be "NTRU is safe for use in this protocol." So that would be part of the question that would be asked of the Crypto Panel.

Eric Rescorla: Okay, thank you. I think we are going to have to have a fair amount more discussion here, but thank you, that helps clarify quite a bit.

Stanislav Smyshlyaev: Thank you, and Jonathan.

Jonathan Hoyland: Hi, Jonathan Hoyland. Is the main goal of this to reduce load on CFRG, or is it to just avoid the incoherence of CFRG writing standards-track documents, or something else? Or both?

Nick Sullivan: The second part is not intentional, that's sort of a consequence of it. The main goal here is to effectively centralize and provide a pathway that's reliable and that can be accountable to timelines for providing primitives in a standards—inside of IETF standards. That's the goal. Currently there's—if you want to introduce some new primitive into your protocol, there's not necessarily an obvious path for how to get that into a—whether it's a specification written down or otherwise. And if it's something that hasn't been reviewed by an international open committee, that's even more challenging, right? So the goal is to centralize the conversations so that they're not happening in a lot of different places, allow working groups to make their own decisions but also force them to talk to each other so there's less diversity in terms of what people choose.

Jonathan Hoyland: So as a sort of strawman, would it be equivalent or the same if we just said, "Okay, CFRG is going to meet four times at each IETF instead of twice each IETF"?

Nick Sullivan: I think no. I mean the danger here is to have an all-encompassing working group that standardizes cryptography, and that'll be a bucket where everybody just throws everything in. I think the point of this is to have focused groups based on primitives and use cases and have them have strict schedules and end quickly, with HPKE being the first kind of, I guess, the more definitive cross-working group example.

Jonathan Hoyland: Okay, that clears things up, thank you.

Stanislav Smyshlyaev: Let's move it to the list, and let's start with our second presentation, Longfellow ZK. Abhi, I've passed slides control to you. Please, you can start.

Abhi Shelat: Hello, so my name is Abhi Shelat, and this is again representing a group with Matteo; we've added two co-authors to this presentation, Tim and David, as well. So let us identify the problem that we're trying to solve here. There's a desire for zero-knowledge schemes for small identity statements -- statements about your digital identity. That's primarily the biggest use case: proving things like age over 18 from a state-issued identity, or other things like my address. Now, there are other solutions to this, like SD-JWT, which another working group has specified, or the mDL protocol, which the ISO has specified.

But both of those have fundamental privacy problems that prevent them from being deployed at scale. Essentially, those other mechanisms leak information; in particular, they leak these super-cookies, which allow people to track you. Either websites that collude can track you if the scheme is run improperly, or an issuer and a website can essentially track you. The only way to actually solve this properly, with all the soundness considerations, is to use some sort of zero-knowledge scheme that provably doesn't release anything about the credential and, more importantly, also doesn't release any metadata about, for example, the proof. And here I'll point out that the other major scheme for identity credentials, which is called BBS, is not suitable for these small identity statements either, because it unfortunately leaks things like the index of the attribute that you're trying to release.

For example, if you have a BBS credential and some party decides to put "age over 18" in slot one for Alice and puts it into slot two for Bob, even though the proof system is zero-knowledge, the proof system does not consider the metadata of the proof, like for example, which index is revealed, and that breaks unlinkability. So far, the only scheme that works in these applications is a zero-knowledge scheme that's fully zero-knowledge and that doesn't have any metadata leaks, and that's the Longfellow scheme so far.

So the SD-JWT people have actually said that they have general interest in zero-knowledge techniques that are post-quantum secure, and that they would be supportive of these types of schemes. So that's one example of a working group that's expressed interest in this type of work. We've had a lot of feedback from the EU and from organizations such as Apple and Google that nobody wants to deploy any new schemes that are based on elliptic curve cryptography, and the preference is to choose post-quantum schemes when efficiency allows. That said, the whole point of that set of slides is that we believe there is an urgent market need for this type of post-quantum zero-knowledge scheme that's fully zero-knowledge and that works for the type of identity statements that occur in these SD-JWT or identity applications that I've mentioned.

Okay, so that brings us to the Longfellow scheme. We follow one of the oldest recipes for producing zero-knowledge. We can come back to this slide if anybody's interested in how we produced this, but we have two components in it. We have a paper about this, and it's open-source. Again, let me highlight that it's one of the few zero-knowledge systems that is practically deployable and that only relies on SHA-256 as a hardness assumption. And so far we have not found any scheme that fits these criteria that works better in the regime of 10,000-gate to 50-million-gate circuits. And that includes the latest work; I'll talk about some of those recent results.

What has happened since the last IETF meeting? That's the main point of this presentation. There's been a second group, this ISRG group, that has completely independently produced a second implementation that is bit-compatible with our open-source implementation. There's also been some work on integrating this into wallets and into the European age verification system; several companies have been able to integrate this into their products. There have been three independent security reviews of this, with no issues raised with respect to the zero-knowledge property. We've improved the performance, and here I'd like to give a few ideas. As before, you can prove knowledge of an ECDSA signature; this used to be a very complex statement, but now we can do it in essentially milliseconds. More interesting, though, is the ML-DSA-44 signature scheme. We're just about to release a paper on this, but we have essentially produced a zero-knowledge proof of a signature — sorry, the RS is not correct here, that's a typo — but essentially, if you look at the internal verify function that's specified in the FIPS spec for ML-DSA-44, we can prove that we hold a Sig and a Mu for a PK that satisfy that particular function. And this is, I believe, the first zero-knowledge proof of possession for a post-quantum lattice signature scheme, and we can do it in about 850 milliseconds on an Apple device. We're hoping to reduce that even further. Now, as you can see, it's substantially slower than for an ECDSA signature, which is about 16 milliseconds, so it's still about, I don't know, 40 to 50 times slower. But again, ML-DSA itself is quite a bit slower and larger than ECDSA.

We also have another example now in our repo, essentially a post-quantum Bitcoin application. Notice that in Bitcoin, when you publish the address of a particular account, you actually apply a hash function to the elliptic curve point, two hash functions in fact. So in particular, you can harden Bitcoin with Longfellow by giving a proof that you know a secret key x such that the RIPEMD of the SHA of the compressed g^x point is your particular Bitcoin address. And that we can do very, very efficiently: about 87 milliseconds to produce a proof and 60 milliseconds to verify it. And that's also notably 10 to 15 times faster than anybody else is able to do that sort of thing. Okay, I just want to report on one other thing: there has been a little bit of state-of-the-art work here. Vega is a late-2024 paper that compares to Longfellow. They use some very cutting-edge techniques and claim essentially a factor-two improvement in prover time. Let me put that into perspective: they require elliptic curve commitments, so they're no longer post-quantum secure. They also don't have a traditional security argument, in the sense that they use this technique called folding, which doesn't allow for a traditional polynomial-time extraction algorithm for the statements they have in mind. So the security guarantees are weaker and, you know, more complexity assumptions are necessary. And their implementation uses threads, so it's not clear exactly how much faster it is, but their benchmark basically is about twice as fast as Longfellow. So that's the tradeoff: essentially 500 milliseconds versus 250 milliseconds for the type of proof that we use on the phone.

Okay, so that's what I wanted to cover. Most importantly, we have written a specification of this scheme. It's an old scheme, it uses a very old recipe, it's been security reviewed, and it has a second, bit-compatible implementation of roughly the same performance, in a different language, Rust, by a totally different group. So that's the status, that's how far we've come along, and these benchmarks are two new benchmarks that show the utility of the thing. So I'll leave it there; I'll leave the last five minutes for questions, if anybody has any questions or comments or feedback on what we should do with this.

Stanislav Smyshlyaev: Thank you so much. Please, Bas.

Bas: Thank you so much for this. We really appreciate the hard work at standardization and actually getting things to the end point, right? It's easy to propose something, so this is great work. So my question — don't take it the wrong way — is whether you could compare how this would perform against VOLE-in-the-head. I mean, that's the obvious contender.

Abhi Shelat: Oh yes, so we have some—so the problem with VOLE-in-the-head is that you have to commit to—so the size of the commitment that you have to make with VOLE-in-the-head is related to the size of the VOLE, and VOLE essentially uses one bit or something like that per gate. So even for very small circuits, it's not clear—

Bas: VOLE uses one bit per witness, not per gate.

Abhi Shelat: Ah, per witness, yeah. So unless you apply a sumcheck technique like ours, you essentially require one witness element per multiplication gate in VOLE-in-the-head. And that is what prevents it from scaling to very large circuits like the ones we're handling. The sumcheck technique in our work reduces the size of the witness that you need to something very, very close to the input size of the witness, not the circuit size. And with VOLE-in-the-head techniques, unless you combine them with sumcheck, you essentially need very large fields in order to get the soundness; it's not clear. So I've had several discussions with people who work on VOLE-in-the-head stuff, but I have not found any evidence that it can be performant in the same regime. I'm happy to take that up offline, and I think, now that enough people are asking, I'll create a write-up for why I think that technique can't scale.
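For readers unfamiliar with the sumcheck protocol being referenced, here is a minimal, generic Python sketch: a textbook sumcheck check for a multilinear polynomial, with an honest prover folded into the verifier loop and fixed challenges for determinism. It is not Longfellow's actual protocol or parameters; it only illustrates the shape of the technique, where the verifier's work and messages scale with the number of variables rather than the number of terms being summed.

```python
from itertools import product

P = 97  # toy prime field

def sumcheck_verify(f, n, claimed_sum, challenges):
    """Check a claimed sum of multilinear f over the boolean cube {0,1}^n.

    The honest prover's round message g_i(t) is computed inline by brute
    force; a real prover computes it far more cleverly. `challenges` stands
    in for the verifier's randomness (fixed here for determinism)."""
    fixed, claim = [], claimed_sum % P
    for i in range(n):
        # g_i(t) = sum of f over the remaining boolean variables.
        g0 = sum(f(fixed + [0] + list(b)) for b in product((0, 1), repeat=n - i - 1)) % P
        g1 = sum(f(fixed + [1] + list(b)) for b in product((0, 1), repeat=n - i - 1)) % P
        if (g0 + g1) % P != claim:        # round consistency check
            return False
        r = challenges[i] % P
        claim = (g0 + (g1 - g0) * r) % P  # g_i(r): g_i is linear, interpolate
        fixed.append(r)
    return f(fixed) % P == claim          # single final evaluation of f

# Multilinear example: f(x) = x0*x1 + 3*x2; its sum over {0,1}^3 is 14.
f = lambda x: (x[0] * x[1] + 3 * x[2]) % P
print(sumcheck_verify(f, 3, 14, [5, 11, 23]))   # True
print(sumcheck_verify(f, 3, 15, [5, 11, 23]))   # False: wrong claimed sum
```

The verifier evaluates f itself only once, at a random point; everything else is a few field operations per variable, which is the kind of property sumcheck-based systems exploit to keep the committed witness near the input size rather than the circuit size.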

Stanislav Smyshlyaev: Thank you. John.

John: Yeah, thank you. I'm happy to see this; there are a lot of non-privacy use cases for zero-knowledge proofs as well. And I strongly agree with this: it doesn't make sense to deploy anything new that's quantum-vulnerable at this point in time; that's just a waste of resources. So I'm very happy to see this. I'd like to keep this discussion going, and I would like to see more discussion of the different schemes, of what the performance gains and the properties are that you get. So I very much welcome the discussion you had with Bas and a follow-up on that, so we can see what is possible.

Abhi Shelat: Yeah, thank you for that. I think I'll follow up with something on the email—the email list with my comments on that.

Stanislav Smyshlyaev: Thank you. Eric.

Eric Rescorla: Yeah, thank you for the presentation. This is very nice work. I think this is easily fast enough for most applications I'm familiar with, in particular all the age assurance applications: even if you had to absorb a full second for the age assurance, it really wouldn't be that big a deal. It's always nice to have things faster, but I wouldn't spend a lot of time trying to do that except for bragging reasons. As I understand it, this just takes an arbitrary circuit, right? So for practical applications, we need some mechanism for defining the circuits which we actually intend to prove and verify, such that they prove the thing they claim to prove. And I was interested in your thoughts on how we get to an end-to-end system that is usable in the real world and takes advantage of this. For instance, in the over-18 example you gave: how do we get to the point where, if what I want to do is prove "over 18", I can go read some document that tells me how to do that, and I don't have to figure it out for myself? Obviously we could have one circuit where you plug in the 18 parameter, but you could imagine more complicated predicates, like "over 18 and under 21". How do I get a generic system that I can deploy straightforwardly just by reading the documentation?

Abhi Shelat: This is a great question, Eric. I'll tell you what the status quo is. Of course you could define some arbitrarily complex programming language, but what we're doing is working with standards. There is something called DCQL in the OpenID spec that allows you to ask for certain attributes, just equality of certain attributes, and that is the level of the interface we support. Our circuit essentially follows the DCQL query format and lets you ask for three well-defined attributes; if you only want one, you just ask for the same one three times. And that's a very good place to start, because it's simple enough and it handles many use cases. It does not let you handle arbitrary predicates like "I was born on a Monday", which we could prove, but it's a good starting point for bootstrapping the practical use cases of this.

Eric Rescorla: I agree, and for age assurance that's plenty good enough. Thank you.

Stanislav Smyshlyaev: Thank you. Brent.

Brent: First, I want to say I support this work; I'm a big fan of Longfellow. One minor quibble: calling it ZK-Lib seems to indicate that it will have more to do with other things than just Longfellow. So maybe consider naming it something less general if you're not going to include Vega and BBS and all of the other ZK techniques that people know and love.

Abhi Shelat: Okay, great. Thanks.

Stanislav Smyshlyaev: Thank you so much, Abhi. Tania, we have already locked the queue, so we are moving on now. Next is NTRU-based Public Key Encryption, and I am passing you slide control. Just take it; it will work in a moment. Yes, please start.

Yizhen: Good morning, everyone. I'm Yizhen, and this talk is joint work with Xinhua, about NTRU-based public key encryption. The main question of this talk is: as the post-quantum migration becomes real, do we really need a more compact alternative? Let's start with some broader context. ML-KEM is already being deployed in the post-quantum migration: in TLS, Google enabled ML-KEM by default in Chrome for TLS 1.3, and Cloudflare supports hybrid post-quantum key agreement in TLS. Cloud services and messaging also use ML-KEM in practice. So the first message is that the post-quantum migration is not a future topic, and ML-KEM deployment is in progress.

Well, in reality there is some friction, especially around protocol fragmentation. We use the sizes of ML-KEM as a simple example, at NIST security levels 3 and 5. For ML-KEM-768, the public key and ciphertext both exceed 1000 bytes, and for ML-KEM-1024 they even exceed 1500 bytes. By comparison, Dawn-1024, the latest NTRU-based PKE, has a public key and ciphertext of only about 1000 bytes, about 30% smaller than ML-KEM.

What is the impact of size in practice? First, directly in TLS, the ClientHello or ServerHello may split across two packets. And in IKEv2, ML-KEM can exceed the MTU and require extra IP fragments; this is from a previous draft. Why does it matter? More packets mean more tail latency, more loss sensitivity, and higher overhead for the whole communication, especially for resource-constrained links and devices. The point is not that ML-KEM is bad; the point is that for some situations, we can look for a more compact alternative. So is there such a solution?

Yes, we introduce NTRU as a natural solution. Let's start with a brief explanation of why NTRU can be more compact; the reason is the underlying structure. Ring- or module-LWE public key encryption like ML-KEM usually requires two ring elements for the ciphertext, shown here as C1 and C2, while NTRU public key encryption needs only one ring element in the ciphertext. This doesn't directly cut the size in half, but it does decrease it by a lot. So NTRU looks like a good candidate for more compact public key encryption, and the takeaway is that this comes from the underlying structure.

Then a glance at the development of NTRU. The story begins in 1996 with the original NTRU, and this family has continued to evolve both in practice and in theory, with more compact constructions appearing in recent years. The original NTRU had 1860 bytes for the sum of public key and ciphertext, while for Dawn, the latest NTRU public key encryption, that sum is only about 900 bytes. All the data in this slide is at NIST security level 1, and the performance is measured with AVX2. The point is that the NTRU family has a long story of about 30 years of evolution, and it keeps exploring a better tradeoff between compactness and efficiency while remaining secure.

Okay, now some possible concerns. For ML-KEM the underlying assumption is the Module-LWE assumption, while NTRU needs another assumption, the NTRU assumption. So the first concern is: is it safe enough? We show that there is a structural relation between NTRU and LWE. On the left is the NTRU relation, h * f - g = 0 (mod q), and on the right is ring-LWE, a * s + e = b. So NTRU can be viewed as a special ring-LWE sample with b = 0; they do have a relation, with NTRU as ring-LWE with a special structure. For theoretical hardness, there exists a reduction: NTRU can be reduced to hard problems such as GapSVP on ideal lattices, which backs up the security of NTRU. For concrete security, there are attacks specific to NTRU, like dense sublattice attacks, but they apply only to specific parameter settings with a much larger modulus; in the public key encryption context, where q is comparable to n, we need not worry about them. So the message is that for public key encryption, NTRU has a credible security foundation: in its roughly 30-year history, no attack has weakened its security by a lot. Okay.
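The structural relation above can be illustrated with a toy example. The sketch below is not secure (tiny parameters, a simple cyclic ring, brute-force inversion) and is only meant to show the algebra: the NTRU public key h = g * f^{-1} satisfies h*f - g = 0 in the ring, which has the shape of a ring-LWE sample a*s + e = b with b = 0.

```python
# Toy illustration (NOT secure): NTRU as a ring-LWE sample with b = 0.
# Ring R_q = Z_q[x] / (x^n - 1), with tiny parameters for readability.
from itertools import product

n, q = 4, 7

def mul(a, b):
    # Cyclic convolution: polynomial multiplication mod (x^n - 1, q).
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

def invert(f):
    # Brute-force inverse in the tiny ring (only viable at toy sizes).
    one = [1] + [0] * (n - 1)
    for cand in product(range(q), repeat=n):
        if mul(f, list(cand)) == one:
            return list(cand)
    return None

# Small secrets f, g with coefficients in {-1, 0, 1} (represented mod q).
f = [1, 1, 0, q - 1]   # 1 + x - x^3
g = [q - 1, 0, 1, 0]   # -1 + x^2
h = mul(g, invert(f))  # public key h = g * f^{-1}

# NTRU relation h*f - g = 0 in R_q: the shape of a ring-LWE sample
# a*s + e = b with a = h, s = f, e = -g, and b = 0.
lhs = [(x - y) % q for x, y in zip(mul(h, f), g)]
print(lhs)  # [0, 0, 0, 0]
```

Real NTRU parameters use much larger n and q, specific rings, and efficient inversion algorithms; only the algebraic relation carries over.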

Then we show a concrete comparison of post-quantum KEMs in size, cost, and security, especially NTRU-family KEMs versus ML-KEM. For security we use two columns: the first concerns recovering the secret key or the plaintext, which is security in the CPA model, and the second is the DFR, the decryption failure rate, which matters for security in the CCA model. Several NTRU-family KEMs show different advantages: NTRU-HPS, NTRU-HRSS, and NTRU Prime have no decryption failures and higher concrete security levels, while Dawn and Knife, the latest NTRU constructions, have smaller sizes and higher efficiency than ML-KEM. We highlight Dawn and ML-KEM as a pair to compare, since Dawn has about the same security as ML-KEM, is about 33% smaller, and is even faster. All the data shown here is with AVX2; with a standard C implementation the gap would be larger.

Okay, to visualize it, we show a size breakdown of public key and ciphertext: Dawn is clearly the most compact among these post-quantum KEMs. And this is the cost breakdown: Dawn, Knife, NTRU, and ML-KEM all have high efficiency.

Finally, the main question: do we need an additional NTRU-style option for specific deployments? To clarify our position first: we view Dawn as a size-oriented NTRU-family alternative alongside the previous drafts on NTRU-HPS and NTRU-HRSS proposed in CFRG. Our target is specific situations: packet-budgeted, bandwidth-sensitive, and resource-constrained deployments. So the question is whether the size advantage is large enough to justify an additional option. Since ML-KEM and NTRU-HPS/HRSS have already been proposed, we wonder whether industry really needs a more compact alternative, and we have done some warm-up work on Dawn. Okay, thank you.

Stanislav Smyshlyaev: Thank you so much. Please questions, Bas.

Bas: Okay, from a performance angle, this is great; this is a real improvement on ML-KEM. But we cannot just adopt this, right? We need to paint a target on it and then have years of analysis before we can trust it in deployments, and that should start now. At the moment NIST and others are focused on signatures, so it's nice to create a parallel thread here if we can. But this is years away from the point where we can even start trusting it. That's the process side. Now a more direct question: you define a 512 and a 1024 variant. Is it possible to have a 768 level? Because the 1024 level is not really an improvement; technically it is, but not really. The 512 is clearly an improvement, but if 512 slips, is there a 768 in between?

Yizhen: You mean the NIST level 3 security level? Okay, so currently Dawn only has parameter sets for NIST security levels 1 and 5. But as we showed, even at NIST security level 5, Dawn-1024 is still smaller than ML-KEM at NIST security level 3, and it already...

Bas: Yeah, yeah, it's smaller, but the difference is not worth it for a migration, right? So the question is: is there a 768 variant of Dawn possible?

Yizhen: For now I'd say no. It is hard to choose NIST level 3 security parameters for Dawn or for an NTRU PKE like Knife.

Stanislav Smyshlyaev: Okay thanks, John.

John: Yeah, thank you. Very happy to see this focus on key exchange sizes. The main problem with ML-KEM for very constrained IoT radio is the size of the public key and the ciphertext. Answering your question of whether we need this: I don't think we need the original NTRU, which has too little advantage, but I do think we need this. And I agree with Bas that this needs more evaluation. I hope you will submit this to the Chinese crypto competition, if that competition includes key exchange with smaller sizes. I think it will have global relevance, not just for China. Thank you.

Yizhen: Yeah, thank you.

Stanislav Smyshlyaev: Please, Eric.

Eric Rescorla: Thank you for the presentation. I think as you've heard, this probably needs some bake time. I guess the question I was interested in is, are you able to offer an argument that if I liked one of the other variants, I would like this as well, like a reduction to one of these other variants, or am I just stuck analyzing it from scratch?

Yizhen: Sorry, do you mean some reduction?

Eric Rescorla: Yeah, I was saying for instance, could you—yes, if you could reduce Dawn down to like Classic NTRU or something like that?

Yizhen: Oh yes. Dawn can be reduced to the NTRU assumption and the ring-LWE assumption; the underlying assumptions are the same as in the classic NTRU family.

Eric Rescorla: Thank you.

Stanislav Smyshlyaev: Okay, thank you. Nick, do you have anything to add?

Nick Sullivan: Yeah, just question to the presenter. At the last CFRG meeting, we announced that we're seeking security considerations for different KEMs. And you listed an individual draft here that was not adopted as something that this could join onto. But my question is: are you willing to write a security considerations draft? Is this part of the plan?

Yizhen: Sure, yes, we are willing to write a security considerations document, especially for Dawn or other recent KEMs.

Nick Sullivan: Okay, thank you.

Yizhen: Okay.

Stanislav Smyshlyaev: Thank you so much. And let's move on to the next presentation, Recent Advances in Fully Homomorphic Encryption. We haven't got any ongoing work on fully homomorphic encryption in CFRG and we have questions about this, so we'll be happy to hear a presentation. Please, you can start.

Senhui: Okay, thank you. Hello everyone, I'm Senhui from China. Today I will introduce some recent advances in fully homomorphic encryption, in three parts. First, a very brief history of FHE; then some progress toward practical FHE; and finally, standardization considerations.

First of all, for a very long period of time the internet mainly focused on the security of communication. But in the last decade or two, and in the future, we need to consider security in computation more and more. For this task, traditional symmetric and public-key encryption is not powerful enough; we need more powerful primitives, and homomorphic encryption is that kind of new, powerful primitive. Given the encryptions of m1 to mn, the target function can be computed directly on the ciphertexts, producing a new ciphertext whose corresponding plaintext is exactly the plaintext we wanted, f(m1, ..., mn). That's a very brief definition of FHE.
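The definition above can be made concrete with a toy somewhat-homomorphic scheme in the style of integer-based constructions: a ciphertext is the bit plus small even noise plus a multiple of a secret modulus. This sketch is utterly insecure at these sizes and is only meant to show ciphertext addition computing XOR and multiplication computing AND; the parameter choices are illustrative.

```python
# Toy somewhat-homomorphic encryption over the integers (symmetric,
# NOT secure): c = m + 2r + p*s, Dec(c) = (c mod p) mod 2.
import secrets

p = 1000003  # secret odd modulus (toy size)

def enc(bit):
    r = secrets.randbelow(8)       # small even-noise term 2r
    s = secrets.randbelow(2**20)   # random multiple of p
    return bit + 2 * r + p * s

def dec(c):
    return (c % p) % 2

a, b = 1, 0
ca, cb = enc(a), enc(b)
# Adding ciphertexts computes XOR; multiplying computes AND, as long as
# the accumulated noise stays well below p.
assert dec(ca + cb) == a ^ b
assert dec(ca * cb) == a & b
```

Real FHE schemes add bootstrapping to refresh the noise so that arbitrarily deep circuits can be evaluated; this toy breaks down once the noise grows past p.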

In fact, FHE is not a new primitive. The notion was proposed nearly 48 years ago, and for the first 31 years it remained one of the biggest open problems in cryptography. Then in 2009 the first framework was proposed, and it took another eight years to design several classical algorithms such as BGV, BFV, CKKS, and CGGI. In the last decade or so, the standardization of FHE has been considered in several communities such as ISO/IEC, and many companies have published open-source FHE libraries. At the same time, practical aspects of FHE continue to progress, such as transciphering and compiling. And today I will focus on...

Okay, when we mention FHE, most people may think it is very slow and very expensive. In the early years that was true: for example, in 2009 the first FHE algorithm took more than 30 minutes to do one bit of bootstrapping (bootstrapping is one of the basic operations in FHE). Now it is much better: gate bootstrapping is 400,000 times faster than the 2009 record, and circuit bootstrapping is 100,000 times faster. So the computational performance of FHE is now much better.

Very similar to AI, the performance of FHE also benefits from progress in hardware such as SIMD instructions. For example, with AVX instructions we get nearly a 10x speedup for several implementations, and with GPUs we can get much faster still. The most recent progress is the ASIC designed in the DARPA project (I believe it is Intel's chip), which can provide a 5,000x speedup over a general implementation on a personal computer.

Okay, next I will introduce two topics I think will interest the IETF community. The first is transciphering, a technique that transforms a symmetric-key ciphertext into an FHE ciphertext. With transciphering, we can make FHE communication-friendly: for communication and even storage we can just use symmetric-key encryption, and when we need to compute, we translate it to an FHE ciphertext and do the evaluation.
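The transciphering flow can be sketched with toy stand-ins. Below, the "symmetric cipher" is a hash-based stream cipher and the "FHE" layer is a toy XOR-homomorphic masking scheme (insecure, illustrative only; the key names are made up): the evaluator holds an encryption of the symmetric keystream and homomorphically XORs in the public symmetric ciphertext, so the symmetric layer cancels and an "FHE" encryption of the plaintext remains.

```python
# Toy transciphering sketch (NOT secure): strip a stream-cipher layer
# homomorphically, leaving the message under the homomorphic layer.
import hashlib

def keystream(key, n):
    # Simple counter-mode keystream from SHA-256 (toy PRF).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"transcipher me"
sym_key, fhe_key = b"sym-key", b"fhe-key"   # illustrative keys

# 1. Cheap symmetric encryption, done by the client or device.
c_sym = xor(msg, keystream(sym_key, len(msg)))

# 2. The evaluator holds an "FHE" encryption of the symmetric keystream
#    (toy XOR-homomorphic scheme: E(x) = x XOR pad(fhe_key)).
pad = keystream(fhe_key, len(msg))
e_ks = xor(keystream(sym_key, len(msg)), pad)

# 3. Transciphering: homomorphically XOR the public symmetric ciphertext
#    into E(keystream); the symmetric layer cancels, leaving E(msg).
e_msg = xor(c_sym, e_ks)

# 4. Only the holder of the "FHE" secret can recover the plaintext.
assert xor(e_msg, pad) == msg
```

In real transciphering, step 3 homomorphically evaluates the full decryption circuit of AES, SNOW, or ZUC inside the FHE scheme, which is where the milliseconds-to-seconds costs quoted above come from.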

In the past decade, the performance of transciphering has sped up enormously. For example, the first transciphering algorithm, proposed in 2012, took more than one week to handle a 128-bit block of AES; nowadays we need only 14 milliseconds to 11 seconds, depending on the algorithm. It is much faster.

For most of the symmetric-key algorithms used nowadays, for example SNOW, ZUC, AES, and other ISO and NIST standards, we tested the transciphering performance. Many of them take less than 100 milliseconds, and this can also be sped up by hardware.

The second tool is the homomorphic processor, or what we call the virtual processor. Using homomorphic encryption is very different from using traditional symmetric encryption: for traditional symmetric encryption, an engineer only needs to know three functions, key generation, encryption, and decryption, and can then use the algorithm easily. But for homomorphic encryption we need to do very complex evaluations, and even with the help of an open-source library, homomorphic programming is still hard work. To make this easier, the current solution is to design a homomorphic assembly instruction set and build a virtual processor on top of it; then engineers can easily use this processor for their development. Today some of the instructions, for example addition and multiplication, take about 36 to 200 milliseconds, and can also be sped up by hardware.
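The virtual-processor idea can be sketched as a tiny interpreter whose programs are register-based instruction lists and whose backend supplies the arithmetic. Everything here is invented for illustration (instruction names, register names); the plaintext backend below would be replaced by FHE ciphertext operations with the same interface.

```python
# Toy "homomorphic assembly" interpreter: programs are instruction lists
# over named registers; the backend supplies (possibly homomorphic) ops.
def run(program, regs, backend):
    for op, dst, a, b in program:
        regs[dst] = backend[op](regs[a], regs[b])
    return regs

# Plaintext backend; a real deployment would plug in FHE ADD/MUL on
# ciphertexts behind the same two-operand interface.
backend = {"ADD": lambda x, y: x + y, "MUL": lambda x, y: x * y}

# r2 = r0 * r1; r3 = r2 + r0  (computes x*y + x)
prog = [("MUL", "r2", "r0", "r1"), ("ADD", "r3", "r2", "r0")]
out = run(prog, {"r0": 3, "r1": 4}, backend)
print(out["r3"])  # 15
```

The point of the abstraction is exactly what the talk describes: the engineer writes against the instruction set and never touches ciphertext internals.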

Finally, an example application in neural network inference, such as facial recognition and speaker verification. In these scenarios we only need very simple neural networks, and the performance is currently only about 0.2 to 0.6 seconds for these light neural networks.

FHE is already being considered for standardization in ISO, and maybe in the next one or two years there will be an ISO standard for FHE algorithms. Okay, finally, the forward path of FHE in the IETF. I have three questions. First, does the Internet plus AI need FHE? I saw in a previous session that some experts already mentioned this question. Second, do we need to build a team to discuss FHE in the IETF? And third, do we need to prepare some FHE documents for research in CFRG or other groups? Thank you.

Stanislav Smyshlyaev: Thank you so much. It's a great topic to discuss now. First of all, we had a presentation on AI versus end-to-end encryption, and this is definitely a connected topic. And we have some questions about the future of FHE in the IETF, the IRTF, and specifically CFRG, so maybe we will hear some opinions from the room. The idea of adopting the topic, or discussing whether we want to adopt it, is very applicable here. So please take it to the list and ask whether CFRG wants to adopt the topic of working on FHE, ideally with some specific questions we should answer, because when we adopt a topic, we first want to understand the intended result of our work. If some people, for example Mallory from Tuesday's session, have requests about FHE or specific mechanisms, that would be great. First of all we would like to understand what people want from us; then, if we have questions, we can try to find the answers. And it's great that you're with us, because I'm sure FHE must be assessed and addressed in CFRG sooner or later, but first we really need to understand what specific tasks we should solve.

And I don't think that we have questions from the room, so I really appreciate your presentation. Oh, there's a question from Lee Luan. Lee, please come to the mic. We have a couple of minutes.

Lee: Yes, thank you, Dr. Lou, for the presentation. I don't know if you have seen it, but some friends and I had a hackathon project specifically on AI inference with FHE: we tried to use FHE to recognize pictures, such as cats or dogs, and we see some potential. Regarding documents and potential work in the IRTF or IETF (I also talked a little bit with Paul about this): my thinking is that we have a protocol gap in processing ciphertext at the protocol level. With HTTPS or IPsec, we usually decrypt the payload and process the entire payload; but with FHE, we try to compute directly on the ciphertext. For this ciphertext to be processed correctly, maybe we need additional indications at the protocol level. So maybe we can do some extension work in the IRTF or IETF, but we definitely need more time to discuss and think about the potential work. Those are my thoughts.

Stanislav Smyshlyaev: Okay, thank you so much. A few comments from me. We have some comments in the chat about whether CFRG should or should not do anything with FHE now, given that we don't have any specific demand from the IETF, and about whether immediate standardization is even possible, because FHE is great but very system-specific. That's why we'd like you to ask on the list whether there are demands, requests, or desire. Of course CFRG can start research without a specific IETF demand, but we have our priorities, and we want to do work that is needed by the IETF in the mid-term, or where we believe we understand what specific tasks we should solve. So please take it to the list. And I would like to remind you and your colleagues that in a month we will have a nomination call for the Crypto Review Panel, and I'm sure we really need people with expertise in fully homomorphic encryption, and maybe the other areas you cover. So please look at the call for nominations, and if you are ready, please nominate yourself. This goes for you and all your colleagues. Thank you so much for the presentation.

Senhui: Thank you, thank you.

Stanislav Smyshlyaev: Let's move on. The next presentation is from a remote presenter: Luma, a low-latency PQ mutual authentication architecture. Xingshu, please start.

Xingshu: Hi everyone, thank you for having me. I'm Xingshu from the University of Edinburgh; I mainly work with Mitch and Colin. Today I'm going to talk about Luma, a low-latency post-quantum mutual authentication architecture, mainly for cloud or data center environments. This work was recently accepted to NDSS. It is an authentication design that could potentially fit into TLS, but we are not here to propose anything about standardization yet. Before going in that direction, we would really appreciate any feedback and thoughts on the soundness of the crypto composition in this design; that's why we are here, to get input from the CFRG community.

Stepping back a bit, why are we even looking at this problem? Fast authentication matters in the cloud mainly for two reasons. First, in the cloud many calls finish very quickly, so most of the time the crypto operations take a large part of the end-to-end communication cost. Second, because the cloud uses a high-bandwidth, low-latency fabric, the round-trip time in the cloud is very small, around 10 to 50 microseconds, unlike the internet, where it is usually tens of milliseconds. So crypto operations can often take longer than the networking itself.

And here is what a cloud application looks like. As we can see from the figure, a cloud application is built from multiple microservices or serverless functions, and one user request can trigger multiple internal calls in the cloud. The microservices are ephemeral instances; they come and go, they open new connections very frequently, and each connection requires authentication. And in the cloud, mutual authentication is mandatory, where both parties do signing and verification. That doubles the authentication cost, especially compared to the open internet, where only the server is authenticated.

More importantly, the post-quantum migration amplifies this problem. In our experiments, we found that when a post-quantum signature is used, mutual authentication takes 50 to 70 percent of the overall handshake crypto overhead. This table shows more details: compared to ECDSA, Dilithium 2 signing is about 2.5 times slower, and Falcon signing is about 9 times slower. There is existing work on accelerating post-quantum authentication, for example KEM-TLS, a design that mainly reduces the authentication bandwidth cost by replacing the post-quantum signature algorithm with KEM operations.

We think KEM-TLS is quite good and helpful for the open internet, where round-trip time is the dominant latency, but Luma focuses on accelerating the on-path mutual authentication. The core idea is very simple: we split the work into two parts, so we can do the expensive work ahead of time and keep the online operations fast and light.

This is quite different from the normal signature workflow, where the signer creates a full signature in real time once a message is ready, and the verifier validates it with the public key: the expensive post-quantum signing and verification sit directly on the critical path. To get the heavy part off the critical path, we use the online-offline signature paradigm. This paradigm allows the signer to pre-generate keys and prepare for signing, and allows the verifier to prepare for verification, so that during the online phase both signing and verification are fast. The important point is that this paradigm only works when the verifier has the right verification key ahead of time, before the authentication process starts. That is quite difficult on the open internet, where the server and client are strangers and it's hard for them to exchange crypto state, but for a data center it is feasible, because a data center is a constrained network and services usually talk to a small and relatively stable set of peers, as the cloud application figure showed.

Now let's look at the construction, how we build this post-quantum online-offline signature. We mainly considered two construction methods: the first uses a one-time signature online, and the second uses a trapdoor hash online. We picked the first because it relies only on hash-based primitives, which are widely used in TLS, so we don't introduce any new primitives.

This EGM-style (Even, Goldreich, Micali) paradigm allows us to combine a slow post-quantum signature with a fast one-time signature performed online. In our instantiation, we combine Dilithium 2 with WOTS+. Here's how WOTS+ works: the secret keys are random values, we hash them several times to get the public key, and for signature generation we hash a different number of times to get the signature itself. We chose WOTS+ mainly because it has been widely adopted in practice: both SPHINCS+ and XMSS use WOTS+ signatures as their leaf elements, so it is widely adopted, widely studied, and more or less standardized.
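The hash-chain mechanics described above can be sketched for a single digit. This toy omits the multi-digit message encoding and, crucially, the checksum chains that real WOTS+ needs to prevent forgery by advancing the chain; it only shows why signing and verifying are a handful of hashes.

```python
# Toy Winternitz-style one-time signature for ONE w-bit digit.
# Real WOTS+ signs many digits plus a checksum; this is only the core chain.
import hashlib

W = 16  # chain length: can sign one digit in [0, W-1]

def H(x):
    return hashlib.sha256(x).digest()

def chain(x, steps):
    # Apply the hash function `steps` times.
    for _ in range(steps):
        x = H(x)
    return x

sk = b"\x42" * 32       # one-time secret (random in practice)
pk = chain(sk, W)       # public key: hash the secret W times (done offline)

digit = 9               # the digit to sign
sig = chain(sk, digit)  # online signing: a partial chain, a few hashes

# Verification: completing the chain (W - digit more hashes) must land on pk.
assert chain(sig, W - digit) == pk
```

This is also why the online phase is so cheap: the W offline hashes build the key pair, while online signing and verifying together cost at most W hash calls.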

Now let's look at how Luma fits into the TLS 1.3 process. We split the work into two planes. The background plane handles the offline tasks asynchronously: we introduce a KeyDist service to pre-distribute keys; each endpoint pre-generates WOTS+ key pairs, uploads the public keys to the KeyDist service, and fetches its peers' public keys from it. These keys are organized in batches using a Merkle tree. The foreground plane handles the handshake itself: the client sends a ClientHello advertising that Luma is supported; the server, on receiving it, quickly generates a Luma signature and sends it back; the client verifies it with the pre-fetched verification key; then the client authenticates itself in the same fast way, and once both parties are authenticated, they can start happily chatting with each other.

Now let's look at performance. We isolate the signing and verification operations of the different signature algorithms here. Compared to Dilithium 2, Luma achieves very fast online signing, less than one microsecond, and online verification within seven microseconds. This table shows the end-to-end TLS handshake latency for one client talking to one server: for the mTLS case on the right, compared to Dilithium 2, Luma reduces P50 latency by 34% and P99 latency by 48%. We also tested a concurrency setting with many clients talking to one server: on the left, as the client-side load increases, Luma (the dark blue line) stays the lowest, and on the right, Luma sustains high throughput, showing good scalability.

Now let's wrap it up. Overall, we found that online-offline signatures can reduce the post-quantum authentication overhead significantly, especially in the cloud setting, and we think Luma is a general approach that is compatible with different post-quantum signature algorithms. Our questions for the CFRG community: we would really appreciate any feedback on the soundness of our design; are there any subtle attacks we may have missed; and are there better constructions we should consider? That's all, thank you.

Stanislav Smyshlyaev: Thank you. Okay, we have time for one question, please Andre.

Andre: Andre Popov, Microsoft. One thing I would like to clarify. If this scheme relies on a key distribution service, right, why not just use pre-shared keys with TLS instead of this?

Xingshu: TLS pre-shared keys are a different case. We are targeting the first, full handshake; a TLS pre-shared key is for the resumption handshake, which means the client and server have already built a TLS connection once, so they know each other, both save tickets, and in following handshakes they can use a ticket for the encrypted communication.

Andre: What you're describing is a possible flow in TLS, but there's also the option to pre-share keys using an out-of-band mechanism, just like in your scheme. And that avoids the entire cost of the, you know—

Xingshu: You mean the TLS PSK can be pre-shared out-of-band, the externally provisioned PSK, yes. Why would you do that? Because for TLS resumption, they basically exchange that pre-shared key during the TLS handshake itself.

Andre: It's not resumption. You can use a PSK that's not from resumption; it's an externally provisioned PSK. So, just a question.

Stanislav Smyshlyaev: Thank you, Nick. Any ideas how to conclude?

Nick Sullivan: Thanks for the presentation. Let's send an email to the list and we can continue the conversation there. Thank you so much.

Xingshu: Thank you.

Stanislav Smyshlyaev: Thank you so much. Then next presentation, HMAC Based Key Combiners for Multiple Keys. Guilin Wang, please Guilin, you can start. Thank you.

Guilin Wang: Yeah, thank you. Okay, so this is Guilin. I'm going to present the HMAC-based key combiner for multiple keys. We gave a presentation at the last IETF meeting in Montreal; this is the second one.

The beginning is just basic information about the draft. We propose a provably secure key combiner for multiple keys based on HMAC, called HKCV1 and HKCV2. The main feature of our construction is its decoupling property: to generate a pseudorandom master key from multiple original keys, which can be obtained by different methods, we don't need the public key or the ciphertext as input. That makes it more efficient. The next feature is that we hope our security proofs are more rigorous and more useful.

Here is a quick look at what our scheme looks like, and on the right side a comparison with the currently available solutions from standards and IETF drafts. The first variant we call concatenation; it is useful when multiple keys are available simultaneously. In that case we concatenate them and, following the extraction paradigm of HMAC, extract the entropy and then smooth it. We apply HMAC twice so that the randomness is distributed uniformly over the output key. Finally, we truncate to the length we need for the pseudorandom master key.
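The concatenate, extract, smooth, truncate flow described above can be sketched roughly as follows. This is a toy illustration only; the function name, salt handling, and smoothing label are hypothetical stand-ins, not the HKCV1 specification.

```python
import hmac
import hashlib

def combine_keys_concat(keys: list[bytes], salt: bytes, out_len: int) -> bytes:
    """Toy sketch of a concatenation-style HMAC combiner:
    concatenate all input keys, apply HMAC twice (extract, then
    smooth), and truncate to the desired length."""
    ikm = b"".join(keys)  # keys available simultaneously
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()        # first HMAC: extract
    okm = hmac.new(prk, b"smooth", hashlib.sha256).digest()   # second HMAC: smooth
    return okm[:out_len]                                      # truncate

master = combine_keys_concat([b"ecdh-secret", b"mlkem-secret"], b"salt", 16)
assert len(master) == 16
```

Any change to any input key changes the whole output, since all keys feed the extraction step.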

The second variant covers another scenario, where the keys become available one by one. During this process, the protocol may need to generate intermediate keys to protect subsequent information even before the next key is available. We don't give the details here.

This part compares efficiency and security, as I just mentioned. The efficiency gain comes from the decoupling feature: our key combiner does not require the public key or the ciphertext, which matters especially in the post-quantum KEM scenario, where the public key and the ciphertext can be very long, unlike the shared secret itself or perhaps the context information.

On security: our first scheme has the better security proof, because it can be proved secure under a slightly weaker, standard assumption on the hash function, collision resistance. The second one is not as good; it is only proved secure in the random oracle model, treating the hash function as an ideal function.

Similarly, the right side compares existing schemes. One we'd like to mention is NIST SP 800-133. That scheme has an input called "data", but the data is optional. Later we will present some new results from our team: we implemented all of these schemes, five or six in total, to compare the concrete performance differences between them. I will present that part next.

Okay, here is the table; there are a lot of numbers. Basically we have two sets of experimental data. In the first we use the P-256 curve combined with ML-KEM at three security levels. The main numbers are the percentages, because the experimental data show that SP 800-133 from NIST is the most efficient scheme. But as far as we know, no formal security proof has been published for it; if anyone knows of such a result, please let us know.

Then we compare the efficiency of the other schemes. Our schemes come in at maybe number two or three, sometimes number four. By comparison, the cascade KDF from ETSI and the one from the IETF draft are less efficient, normally the last two here. That is the most important information; the rest is less important.

Okay, here is the other set, where we combined X25519 with ML-KEM in the same way. The results are similar: our two schemes, together with SP 800-133, can be considered the fastest.

Okay, a short summary: our schemes are maybe two of the top three in efficiency, a little slower than the NIST standard. Finally, we have some implementation notes, but I won't go into much detail. We have a dedicated engineer for this part, Ms. Li Xinlei, and we thank her for her contribution. If you would like more information, you can ask us, or we will send the related information to the mailing list a little later. Okay, that's all. Maybe some questions? Thank you.

Stanislav Smyshlyaev: Thank you so much. Any comments, questions? Maybe a comment from Nick regarding the current activity of the KEM combiners team. Nick, can you hear me? We don't hear you; maybe your mic is off, at the bottom of the screen.

Nick Sullivan: Yeah, we have Deirdre in the queue who can speak more authoritatively on this, but combining things with HMAC as well as with other XOFs are approaches that have been explored, and there are some recent academic results proving their usefulness in the PQ context. That's about all I can say right now. Go ahead, Scott.

Scott: Yeah, I was just wondering why you're using HMAC as opposed to, say, SHAKE. The nice thing about SHAKE is that if you need 137 bytes of output for whatever purpose, you can squeeze out exactly that much. With HMAC you have to use a bizarre counter-based design, which is a pain. I'm just advocating that maybe you should consider SHAKE. Thank you.
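Scott's point, that SHAKE emits arbitrary-length output directly while HMAC needs a counter-based expansion loop, can be illustrated with a short sketch. The `hmac_expand` helper below is a simplified HKDF-Expand-style loop, not any draft's exact construction.

```python
import hashlib
import hmac

# SHAKE: squeeze any number of bytes directly.
out_shake = hashlib.shake_128(b"input key material").digest(137)

# HMAC-SHA-256 emits 32 bytes at a time, so longer outputs need a
# counter-based expansion loop (simplified HKDF-Expand shape).
def hmac_expand(prk: bytes, info: bytes, length: int) -> bytes:
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

out_hmac = hmac_expand(b"prk", b"info", 137)
assert len(out_shake) == len(out_hmac) == 137
```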

Guilin Wang: Oh, thank you, thanks for the suggestion. Maybe we can consider it later. The basic reason we use HMAC is that HMAC is popular in the IETF and in protocol applications. Yeah. Okay, thank you.

Stanislav Smyshlyaev: Thank you, Deirdre.

Deirdre: Hi, I just read your draft, and the paper that has your proofs is not publicly available. I don't know if you want to put it on ePrint before Springer publishes it as part of Inscrypt or wherever it's being published. Second, it's not very clear from your draft which security properties you're trying to achieve with these constructions. I had to go all the way to the bottom to see what you're aiming for, and it seems you are claiming that V2, the extractor, is IND-CCA, but you're not fully claiming the property you're actually trying to achieve there. And for V1, you claim that it is a secure randomness extractor, not an IND-CCA KEM combiner or whatever you're trying to achieve. I'm not fully clear what you're trying to achieve with these constructions from a security-property perspective, whether you achieve it, or whether you're actually claiming to achieve it. Clarity there would be most appreciated.

Guilin Wang: Okay, thanks for your comments. On the security part, maybe we can discuss later by email. On the first part, I will update the draft later; actually the paper is already published and available on the internet. Okay, thank you, I will let you know. Thanks.

Stanislav Smyshlyaev: Okay, thanks John.

John: Yeah, thank you. I'm very happy to see this focus on key exchange sizes. The main problem with ML-KEM for very constrained IoT radios is the size of the public key and the ciphertext. Answering your question of whether we need this: I don't think we need the original NTRU, which has too little advantage, but I do think we need this. And I agree with Bas that it needs more evaluation. I hope you will submit this to the Chinese crypto competition, if that competition includes key exchange with smaller sizes; I think it would have global relevance, not just for China. Thank you.

Guilin Wang: Yeah, thank you.

Stanislav Smyshlyaev: Thank you, and let's move on to the next presentation, Hybrid Digital Signatures with Strong Unforgeability. Lucas Prebell, and I'm sharing your slides. Yes please, you can start.

Lucas Prebell: Okay, hi everyone. I will give an overview and an update of our draft on hybrid digital signatures with strong unforgeability. It was updated earlier this month, and the work proposes two hybrid constructions, combining post-quantum and traditional components, aiming at strong unforgeability.

I will start by recalling the motivation and the problem we try to solve. The usual security model for signature schemes is existential unforgeability, but strong unforgeability, SUF-CMA, is stronger: an adversary who has seen a valid signature cannot produce a different valid signature for the same message. Several IETF drafts currently rely on hybrid signature schemes which only achieve EUF-CMA security. However, in several real-world applications we may need the stronger notion, for example to prevent replay attacks. So the challenge this draft tries to solve is how to achieve strong unforgeability even if one component is only EUF-CMA secure. In this work we define two hybrid signature constructions which offer strong unforgeability under different assumptions, and in the next slides I will present the two constructions defined in the draft.

The first one is a black-box construction, which means the two underlying signature schemes are treated as independent modules. The idea is quite simple: the hybrid signature is composed of a signature from the traditional scheme, the first component, and a second signature computed not over the message alone, but over the concatenation of the message and the first signature. This binding ensures that a valid hybrid signature cannot be modified without breaking the SUF-CMA security of the second component. As a result, the hybrid construction remains SUF-CMA secure even if the first component is only existentially unforgeable, as long as the second component is SUF-CMA secure. However, because of this binding, the signing process is sequential: you first sign traditionally and then sign with your post-quantum component. This is a difference compared to the composite approach, for example, which can be executed in parallel.
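The shape of the black-box construction can be sketched as follows. This is a toy: HMAC stands in for both component signing algorithms purely to show the message flow, and the names are illustrative; real instantiations would pair, for example, an EUF-CMA traditional scheme with a SUF-CMA post-quantum scheme.

```python
import hmac
import hashlib

def sign_component(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a signature algorithm (deterministic MAC).
    return hmac.new(key, data, hashlib.sha256).digest()

def hybrid_sign(k1: bytes, k2: bytes, msg: bytes) -> tuple[bytes, bytes]:
    sig1 = sign_component(k1, msg)          # traditional component over msg
    sig2 = sign_component(k2, msg + sig1)   # second component binds msg AND sig1
    return sig1, sig2                       # signing is inherently sequential

def hybrid_verify(k1: bytes, k2: bytes, msg: bytes,
                  sig1: bytes, sig2: bytes) -> bool:
    return (hmac.compare_digest(sig1, sign_component(k1, msg)) and
            hmac.compare_digest(sig2, sign_component(k2, msg + sig1)))

s1, s2 = hybrid_sign(b"key-1", b"key-2", b"hello")
assert hybrid_verify(b"key-1", b"key-2", b"hello", s1, s2)
```

Because `sig2` covers `sig1`, any mauling of the first signature invalidates the hybrid as a whole, which is where the strong unforgeability comes from when the second component is SUF-CMA.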

That was our first construction. The second construction in the draft is a little different: it is a non-black-box construction, designed specifically for the case where the first component is built from the Fiat-Shamir paradigm; the second component can be any signature scheme. It is composed of an identification scheme and a signature scheme, and it is a little more compact, because the resulting signature consists of the response from the traditional identification scheme and the second signature. More significantly, the main difference from the first construction is that it is SUF-CMA secure as long as either component is SUF-CMA secure, which was not the case for the first construction, where the security came from the SUF-CMA security of the second component. Similarly, the signing process is sequential here too.

We implemented both constructions, and there is not a big difference in performance between the non-black-box and black-box constructions, so performance shouldn't be the deciding factor between them; security will be the main motivation for choosing one over the other.

In the last update we mostly revised the security section, adding some clarifications and discussing more security properties, and we also made some editorial modifications. The questions we would like to ask the research group are: is there any need for these hybrid constructions in real-world applications, is there any preference between black-box and non-black-box, and is there any gap that needs to be filled? In particular, looking further ahead, we would like more people to read the draft and send us feedback. So thank you very much.

Stanislav Smyshlyaev: Thank you so much. Please, Scott.

Scott: To answer the question about preferences, we have a strong preference for the black-box construction. Basically, we don't want anyone opening up ML-DSA, inserting some stuff, and hoping they haven't broken anything. Thank you very much.

Lucas Prebell: Okay, thanks for your comment.

Stanislav Smyshlyaev: Any other comments, questions? Okay then, if you want people to share opinions about this, you can raise it on the list, and if people want, they will give you their opinions. Thank you so much, thank you Lucas. And now we have a presentation from Emil Lundberg on the ARKG algorithm. Please, just a moment. Yes, I've passed slides control to you.

Emil Lundberg: Thank you, okay. There, thank you. So yeah, I'm here to present a draft called the Asynchronous Remote Key Generation algorithm, ARKG. I should have noted on the slide that I'm affiliated with Yubico; my co-author John Bradley is also here as a remote attendee.

I'll go quite quickly because I don't have a lot of time. In summary, ARKG is a key generation algorithm that lets you delegate the creation of public keys without giving access to the private keys. For example, you can have some kind of hardware security device which outputs a seed to client software, and the client software can then use that seed to derive a key handle and a public key as a pair, without any additional call to the secure hardware. You can, for example, use that public key to register with some kind of certificate authority. Then, when you want to authenticate or sign something, do something with the private key, you send the key handle back to the hardware device, and the device can use the private key. The interesting part is that the step where you generate key handles and public keys can be repeated any number of times without additional calls to the external hardware. That's what makes this scheme interesting and useful.

Very briefly, how it works: the pairing step at the beginning, where one party outputs a seed, is an exchange of a KEM public key and a base public key for some public-key scheme that supports key blinding. When you generate the key handles and public keys, you blind the public key and encapsulate the blinding factor using the KEM. The encapsulated blinding factor becomes the key handle, which you later send back to the original party; it decapsulates the blinding factor, blinds the secret key, and arrives at a matching key pair.
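The data flow described above can be sketched in a toy group: modular exponentiation stands in for the elliptic curve and a plain Diffie-Hellman encapsulation stands in for the KEM. The parameters and names are illustrative only, not secure and not the draft's instantiation.

```python
import hashlib
import secrets

P = 2**89 - 1  # small Mersenne prime: toy group, NOT secure
G = 3
Q = P - 1      # toy scalar modulus

def rand_scalar() -> int:
    return secrets.randbelow(Q - 1) + 1

def h_to_scalar(x: int) -> int:
    # Hash a group element down to a blinding scalar.
    return int.from_bytes(hashlib.sha256(x.to_bytes(12, "big")).digest(), "big") % Q

# Pairing step (on the hardware device): base keypair + "KEM" keypair.
sk, kem_sk = rand_scalar(), rand_scalar()
pk, kem_pk = pow(G, sk, P), pow(G, kem_sk, P)   # the "seed" given to the client

# Delegated generation (client, no device access): blind the public key
# and encapsulate the blinding factor; the ciphertext is the key handle.
r = rand_scalar()
key_handle = pow(G, r, P)                # DH "ciphertext"
tau = h_to_scalar(pow(kem_pk, r, P))     # blinding factor from shared secret
derived_pk = (pk * pow(G, tau, P)) % P   # blinded public key

# Recovery (back on the device): decapsulate, then blind the secret key.
tau2 = h_to_scalar(pow(key_handle, kem_sk, P))
derived_sk = (sk + tau2) % Q
assert pow(G, derived_sk, P) == derived_pk   # matching pair recovered
```

The client can repeat the delegated-generation step arbitrarily many times from the same seed, which is the property that makes the scheme useful.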

There is some academic background to this. It goes all the way back to 2012 in Bitcoin, where this kind of technique is used for so-called stealth addresses. That later inspired a proposal from a colleague and myself on using the technique for backup keys in WebAuthn. At the time we could not find any formal security proofs of the key generation technique on its own, only of the whole BIP32 scheme in Bitcoin. So we worked with a team of researchers, and they developed a formal security proof for the technique. Later, in 2023, at least three different papers also proposed post-quantum extensions. In this draft we have adopted the construction by Wilson, because it is modular and carries the security properties of the original proof; that is what the draft is based on. The other concurrent publications are linked in the draft, but I don't have time to cover them here.

We have some concrete use cases for this: in general, any case where efficient hardware binding is useful in an application that needs many unrelated public keys. The use case that motivated this work was EUDI, where the idea was to use this for single-use, batch-issued keys, so that the public keys don't become super-cookies. It may not be as relevant anymore with Longfellow, but this approach still works. You can also use it for backup solutions, as we originally proposed in 2019.

As for implementation status, this will be a tech-preview feature in YubiKey 5.8, and we are also working on using the technique in WW Wallet, which is a EUDI pilot, and in Cyprus ID, which is based on it. Right now we are using it for ECDSA-based credentials, but we are likely to look into integrating with Longfellow later as well: the idea is to use ARKG to generate ECDSA keys and then layer Longfellow on top of ECDSA.

The contribution of this draft is a concrete specification, a concrete instantiation of ARKG that is ready to implement as a general-purpose key generation primitive. Currently we define instances for the NIST curves and X25519, because that is what we are concretely interested in using in practice at the moment. We adopt the modular construction from Wilson's paper, which is ready for post-quantum instances; you "just" need to define those instances, but the framework should support them. I would like to move towards adoption of this by the group, if that seems like a good idea. I'm new to this group, so I'm not sure what the process is, but I would like to move in that direction, so this can be a standard that others can reference to promote interoperability. That's my presentation; please give me your questions.

Stanislav Smyshlyaev: Thank you Emil, any comments, questions? Any support, or maybe concerns? Okay then, we can take it to the list if you want, Emil. Thank you so much for the presentation, and let's move to the last one. After the presentation from Usama we'll have an additional announcement from Deirdre Connolly. So, Usama, you can start; I'll share your slides in just a moment. Yes, and you can take slides control.

Usama: Do you hear me well?

Stanislav Smyshlyaev: Yes, we hear you, and I'm sharing your slides now. You can take slides control. Please, you can start.

Usama: Okay, good, I see the slides. Yeah, welcome everyone, thanks for being here. I'm going to talk about relay attacks on intra-handshake attestation and a proposed solution in the post-handshake. I have already posted a couple of links to the mailing list. I'll talk first about the relay attacks, then give a very quick overview of RFC 9261 and our post-handshake attestation solution, and finally about the formal analysis we are doing. The work is mainly being done in the SEED working group; why I'm here is basically to ask CFRG for recommendations and guidance. What that means: in ProVerif we make perfect cryptography assumptions, as is typical with all symbolic tools, and we want to know whether, from a cryptographic perspective, these perfect assumptions make us miss anything or could lead to problems. That's the main purpose. As I said, I made two mailing list posts already: one on the relay attacks, and one on the client handshake traffic secret versus the application traffic secret.

To give you a bit of context: from a cryptographic perspective, remote attestation is nothing but a signature. These are the three main classes of attestation; what is shown here is when you produce the signature relative to the TLS handshake, temporally. If you do it before, that's pre; in between is intra; and after is post. I'm going to talk mainly about intra in the first part, and as I said, I use ProVerif for the formal analysis.

So this is the main question we have: how do I bind the signed evidence to the specific connection? I describe here three different secrets which are unique to a specific connection. The first is the Diffie-Hellman secret g^xy: from the server's perspective, it combines what it received (g^x) in the ClientHello message with what it holds (g^y) for that specific connection; these two make up the secret shared between client and server. The second is the client handshake traffic secret. Per Section 7.1 of RFC 8446, it is derived over the complete ClientHello and ServerHello, which means the first secret is completely included in it, so it is strictly stronger than the first. The third level is the client application traffic secret, which covers the transcript all the way up to the server Finished, so it is in turn strictly stronger than the second.
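The three binding levels map onto the RFC 8446 key schedule. The following is a condensed sketch of the relevant HKDF derivations (SHA-256, labels per RFC 8446 Section 7.1); the transcript bytes are placeholders, not real handshake messages.

```python
import hmac
import hashlib

HASH_LEN = 32

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret: bytes, label: bytes, context: bytes, length: int) -> bytes:
    # HkdfLabel per RFC 8446 Section 7.1 (single-block output suffices here).
    full = b"tls13 " + label
    info = (length.to_bytes(2, "big") + bytes([len(full)]) + full
            + bytes([len(context)]) + context)
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

def derive_secret(secret: bytes, label: bytes, transcript: bytes) -> bytes:
    return hkdf_expand_label(secret, label, hashlib.sha256(transcript).digest(), HASH_LEN)

# Level 1: the raw (EC)DHE shared secret feeds the handshake secret.
ecdhe = b"\x11" * 32                                    # placeholder g^xy
early = hkdf_extract(b"\x00" * HASH_LEN, b"\x00" * HASH_LEN)
hs_secret = hkdf_extract(derive_secret(early, b"derived", b""), ecdhe)

# Level 2: bound to ClientHello..ServerHello only.
ch_sh = b"ClientHello|ServerHello"                      # placeholder transcript
c_hs_traffic = derive_secret(hs_secret, b"c hs traffic", ch_sh)

# Level 3: bound to the full transcript up to server Finished.
master = hkdf_extract(derive_secret(hs_secret, b"derived", b""), b"\x00" * HASH_LEN)
ch_sfin = ch_sh + b"|...|server Finished"
c_ap_traffic = derive_secret(master, b"c ap traffic", ch_sfin)
assert len(c_hs_traffic) == len(c_ap_traffic) == HASH_LEN
```

Each level's input transcript strictly contains the previous one, which is the sense in which each secret is "strictly stronger" as a binder.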

The question we had in mind for the research group was whether level 2 is sufficient; that came up at chartering time of the SEED working group. There are two specific reasons bearing on whether binding to the client handshake traffic secret is sufficient. The first is that the handshake traffic secret is actually meant to encrypt the handshake messages and is irrelevant to the security goals; that is what EKR argued at chartering time of the SEED working group. I didn't believe it at the time and was curious why this would not be sufficient. As follow-up work, in ProVerif I removed all the encryption of the handshake messages: everything from EncryptedExtensions all the way down to the Finished message. Everything stays the same; all the properties are still satisfied, exactly as with encryption. So the handshake encryption has nothing to do with the security goals at all; it is relevant to the privacy goals, not the security goals.

The second point from our study is that the server is not yet authenticated at the point where we generate the evidence, that is, at the Certificate message. That is the only place in the intra-handshake, without making any changes to the handshake protocol, where we can carry the evidence: as an extension to the Certificate message, shown as 2A. To do that, the evidence has to be generated before 2A, and all I can bind into it is the handshake traffic secret, or some derivative of it, to create that correlation. But the server is not yet authenticated at that point; as you can see, the authentication part only starts with the Certificate message. So if I put the evidence there, I have no way to authenticate that specific evidence at all.

With these two points, we believe level 2 is not sufficient. The application traffic secret, as is apparent, protects the real secret the client wants to send at the end of the protocol, which is what it really wants to protect, so our results agree with what EKR said at chartering time.

What we did next was to explore the intra-handshake space exhaustively, as instructed by the AD, Paul. We took the use case from the SEED working group, the AI agent use case. It is basically just a TLS server acting as the AI agent; nothing is different from a protocol design perspective, so wherever you see "genuine AI agent" you can simply substitute "TLS server".

What happens here is that if you use the TLS client nonce for the binding, the adversary can simply mount a relay attack: it relays the TLS nonce to the genuine TLS server, obtains the generated evidence, and relays it to the verifying relying party, in this case the TLS client. The TLS client accepts the evidence, because it genuinely comes from the genuine TLS server, and establishes a connection it believes goes to the genuine AI agent, but which actually goes to the adversary.

The second binding option is the server's public key, which we also believe is insufficient, because the key can be leaked in various ways. One is a vulnerability in the code; such vulnerabilities occur regularly, and there is a lot of evidence from practical exploits. Another is the provisioning path: the key provisioner, the entity provisioning the private key to the genuine AI agent, actually holds a copy of the key, so we cannot prove the property that the genuine AI agent is the only entity in the world with access to it. The provisioning protocol between the provisioner and the genuine AI agent can be buggy as well. If the key is leaked, as the figure shows, the adversary can obtain the evidence and certificate from the channel shown between the adversary and the genuine AI agent, and using that certificate and evidence it can do everything the genuine AI agent could have done. So it is a single point of failure: if this key leaks, everything breaks down, and the whole security argument rests on it. We reported the attack to various implementers; specifically, COCO-CIS is one that uses a combination of an attestation nonce and the server's public key as the binder between the two protocols, remote attestation and TLS.
They replicated what we proposed and the attack we showed them from ProVerif; apart from some small nits we disagree on, they have in essence acknowledged all the attacks.

The second part is a quick overview of what we propose as a possible solution, or what the cryptographic mechanisms for a secure design could be. As a recap, RFC 9261 builds on the standard handshake for a one-way authenticated channel, with the server authenticated by default per RFC 8446. On top of that, if the server is the attester, the client can send an authenticator request, which is essentially a copy of the CertificateRequest defined in RFC 8446, but in the other direction: the client generates the message rather than the server. The authenticator is then generated by the server and consists of familiar messages, the same as the TLS handshake messages: Certificate, CertificateVerify, Finished. They are all well analyzed and have nice properties that we can utilize and be happy with.

What post-handshake attestation does on top of this: the client's certificate request carries a certificate_request_context, in which we can put the attestation nonce; and the Certificate message, as in the TLS handshake, is extensible, so there we can put the evidence, which gives us a way to transport it to the client. That is all the functionality we actually need. To summarize: the client generates an attestation request and sends it over; the server generates an authenticator that includes the evidence and sends it back; and the client validates the authenticator and appraises the evidence. The green parts are the changes compared to RFC 9261.

Then I want to come to the final part, the ProVerif model, and the thing I want to discuss here: have we missed something from an RFC 9261 perspective, or in general from a cryptographic perspective? Specifically, in 2B the signature is done with the ephemeral key priv-EK, and in 3C, after the TLS handshake has been established, the signature in the post-handshake CertificateVerify is done with priv-LTK. That ensures the exchange terminates within the TEE and that it holds the specific key we need, so it provides proof of possession of both keys, and we believe it is a secure solution. Of course, this is up for discussion, and we are still working on the formal analysis.

Another thing I want to mention quickly: the two new messages, the authenticator request and the authenticator. For the first, as I said, the certificate request will simply carry the attestation nonce. For the authenticator, there are the three messages: Certificate, CertificateVerify, and Finished. For all of those you see equations 2, 4, and 6, which are the exported keying materials, the standard TLS exporters. We make no changes to the key schedule; it is the standard API, and all we change are the values we invoke it with, meaning the context, label, and length. Each has a different context, so they cannot be mixed and matched. You also see AT1, AT-hat-1, and AT-hat-2, which means the integrity of all the messages is also protected.
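Invoking the standard exporter interface with distinct labels can be sketched per RFC 8446 Section 7.5. The attestation labels shown are hypothetical stand-ins for whatever the draft registers; the point is only that distinct labels yield domain-separated, connection-bound values.

```python
import hmac
import hashlib

HASH_LEN = 32

def hkdf_expand_label(secret: bytes, label: bytes, context: bytes, length: int) -> bytes:
    full = b"tls13 " + label
    info = (length.to_bytes(2, "big") + bytes([len(full)]) + full
            + bytes([len(context)]) + context)
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

def tls_exporter(exporter_secret: bytes, label: bytes, context: bytes, length: int) -> bytes:
    # RFC 8446 Section 7.5: derive a label-specific secret over the empty
    # transcript, then expand over Hash(context).
    derived = hkdf_expand_label(exporter_secret, label,
                                hashlib.sha256(b"").digest(), HASH_LEN)
    return hkdf_expand_label(derived, b"exporter",
                             hashlib.sha256(context).digest(), length)

sec = b"\x42" * HASH_LEN  # placeholder exporter_master_secret
# Hypothetical labels: each exported value gets its own label so the
# values cannot be mixed and matched across messages.
k1 = tls_exporter(sec, b"EXPORTER-attest-nonce", b"", 32)
k2 = tls_exporter(sec, b"EXPORTER-attest-evidence", b"", 32)
assert k1 != k2 and len(k1) == 32
```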

With that, the final idea: the exporters ensure everything is bound to the connection; the evidence is bound to a fully authenticated server, all the way up to the server Finished; and there is a running hash, as I showed with hat-1 and hat-2. Coming from RFC 9261, we just invoke the exporters with new labels; nothing else is changed. We use correspondence properties in ProVerif to verify this. The specific question I have for the research group: would you recommend a computational proof for this, for example with CryptoVerif or similar tools, or do you believe ProVerif is good enough? That's pretty much it. There are some links and references you can see afterwards. Thanks very much; I look forward to your feedback and any questions.

Stanislav Smyshlyaev: Thank you, Usama. We would ask you to make some explicit requests for guidance from CFRG. I saw a couple of questions, but I'm not sure they were explicit enough. So if you want guidance on any specific matters, you can ask on the list. Or do we have anyone willing to comment now? Because it's fine to ask CFRG for guidance, but you really have to be very explicit with your questions. I don't see anyone willing to comment now. And as I understand it, we have an any-other-business item from Deirdre. Deirdre, please.

Deirdre Connolly: Hi, just a quick announcement. I've uploaded a security considerations document for ML-DSA as a first draft, inspired by the discussion of other security considerations documents for KEMs yesterday. It's heavily inspired by Scott Fluhrer's security considerations for ML-KEM and by discussions with Sophie Schmieg. Please take a look.

Nick Sullivan: Oh, thank you, great. I'd like to add that if there are other signature schemes being actively considered within the IETF, consider bringing those to the CFRG as well. And is the intention, Deirdre, that this could be along the same lines as what was announced on Tuesday: a set of documents for the CFRG to give recommendations to the IETF regarding which PQ KEMs to use and how to use them safely?

Deirdre Connolly: Sure. Just generally, for example, FIPS 204 specifies a Hash-ML-DSA variant and regular ML-DSA, and talks about things like: well, you can also do external mu, which gives you exactly what Hash-ML-DSA seems to provide on the tin, but with none of the caveats. That is especially relevant for people who care about using ML-DSA in a very FIPS-specific way that won't get in their way: yes, you can use external mu, calculating it in one cryptographic module, exporting it, and then calculating the signature over it in another cryptographic module. There are also things about sourcing the randomness for signing, hedged versus deterministic signing, specific things like that. So the idea is: here are some choices that are allowed by FIPS 204, and some well-motivated recommendations about which choices to make and why, with a clear perspective that if you like ML-DSA you probably want to be FIPS compatible (not always, but probably), and linking to all the places where FIPS has published guidance, to be able to say, "Yes, you can do this, it's fine. You might not want to do this even though it is fine according to NIST." Stuff like that. And then if CFRG wants to treat that as a sort of implicit "we like this thing, that's why we're giving you so much information about it," sure, why not.
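The external-mu split described above can be sketched as follows. This is only an illustration of the message-representative computation from FIPS 204 (for the pure ML-DSA variant with an empty or short context string): mu = SHAKE256(tr || 0x00 || len(ctx) || ctx || M, 64), where tr = SHAKE256(pk, 64). The key bytes below are placeholders, not a real ML-DSA public key.

```python
import hashlib


def ml_dsa_external_mu(pk: bytes, msg: bytes, ctx: bytes = b"") -> bytes:
    """Compute the ML-DSA message representative mu outside the signing module.

    Per FIPS 204: tr = SHAKE256(pk, 64), and for the pure (non-pre-hashed)
    variant the signed representative is
        mu = SHAKE256(tr || 0x00 || len(ctx) || ctx || msg, 64).
    The 0x00 domain-separation byte distinguishes pure ML-DSA from
    HashML-DSA (which uses 0x01).
    """
    if len(ctx) > 255:
        raise ValueError("context string must be at most 255 bytes")
    tr = hashlib.shake_256(pk).digest(64)
    m_prime = bytes([0x00, len(ctx)]) + ctx + msg
    return hashlib.shake_256(tr + m_prime).digest(64)


# One module (e.g. an application outside the FIPS boundary) computes mu,
# then hands the 64-byte value to a separate cryptographic module that
# runs the signing algorithm directly on mu (the "external mu" interface).
mu = ml_dsa_external_mu(b"placeholder-public-key", b"message to sign")
assert len(mu) == 64
```

The point of the split is that only the module holding the private key needs to see mu, not the (possibly large) message itself, which is what makes the cross-module workflow in the discussion above possible.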

Nick Sullivan: Okay, yeah, that sounds really valuable, especially where there is ambiguity and there are options in documents published by other SDOs. So this is probably a really good use of the CFRG's time.

Stanislav Smyshlyaev: Okay, thank you. And now we have to finish our meeting. I would like to repeat that in a couple of weeks we are going to send out a call for nominations for the Crypto Review Panel. We will have a Crypto Panel rotation and we really need good candidates, because we've got a lot of work to do. So if you want to self-nominate, or you want to nominate someone, please do that after we make the announcement. Thanks to everyone for attending the meeting. Have a nice day. Thank you.