
Session Date/Time: 16 Mar 2026 03:30

Joe Salowey: All right. Um, we'll get started here. This is the TLS working group session at IETF 125. Uh, I'm Joe Salowey, and Deirdre is in the remote room co-chairing. Uh, Sean is in transit to the meeting because his flight was canceled and then rebooked and all that fun stuff. So, uh, this is the first of two TLS sessions. And since it's one of the earlier sessions, I'll, uh, hang out on this Note Well slide that kind of covers the conduct, uh, of the IETF and what you agree to, uh, when you sign up with respect to intellectual property, anti-harassment, and other topics. So, please make sure you understand the obligations in the Note Well, um, and the requirements. Um, I'll move on now.

One thing is, uh, for people who are on-site, please join through the on-site MeetEcho tool. Uh, this allows us to manage the queue, um, and probably in the remote room, people should join through that as well if they're not joined through the full tool. Um, and then please keep your audio and video off unless you are chairing or presenting. Uh, hopefully this session goes a little smoother than the dispatch session I was attending earlier, um, but please be patient with us if there are audio-visual issues.

Um, let's see. We have a pretty full agenda today. We'll have a brief update from the chairs, then we'll talk about some working group drafts: [draft-ietf-tls-mldsa], [draft-ietf-tls-pake], and [draft-ietf-tls-mlkem]. Um, and we'll talk about one non-working-group draft as well, on PQ continuity. Uh, we also have a longer session on Friday where we will be covering more topics. Um, we did have some requests that didn't make it onto the agenda because there was not enough discussion on the list for those particular topics, and we kind of ran out of room on the agenda. Um, any agenda bashing for today's agenda?

Um, I'll just briefly show a preview of what's on Friday. So... All right, I'm going to change the slides here so that we can bring up the other ones.

Deirdre Connolly: Hello?

Joe Salowey: Yaron?

Deirdre Connolly: Yeah. Hey, Joe. Uh, Yaron hasn't uploaded the slides, and it's quite early for him, so I don't know if he'll be able to make it to the meeting for the TLS continuity draft.

Joe Salowey: Okay. When we get to—I think the slides are already uploaded.

Deirdre Connolly: Okay. He just pinged me some time back, so we'll check—I'll check with him.

Joe Salowey: Yeah, and if we need to upload new ones, uh, ping us in the chat and we can, uh, do that. Sure. Okay.

All right. Um, here's just a brief update. Um, over the past several months, we've had a number of complaints and appeals, um, particularly having to do with MLKEM and the process therein. The appeal was on the MLKEM adoption and was unsuccessful, so this ID is still an adopted working group draft based on the result of that appeal. Uh, are you guys not seeing slides? Okay. I think we're good. But in any case, there were some moderation issues. We did moderate a working group participant during this process, um, and it was determined to be in conformance with RFC 3934. However, we did make an error in removing the moderation, so the moderation stayed active for four or five days, maybe even closer to a week, longer than it should have, um, because we failed to follow the process correctly. Um, since then, we've rectified that situation.

Um, next, uh, we have published several RFCs: the IANA registry updates, TLS Encrypted Client Hello, bootstrapping TLS Encrypted Client Hello with DNS service bindings, and the DTLS RFC have all been published. Congratulations and thank you for all your hard work. I know this is no small feat.

Um, we have quite a few drafts in the RFC Editor queue. We have two working group last calls that have completed, one for the jumbo record limit. Um, we are still uncertain as to the status of the implementations here. We know there are two implementations, uh, but I think Hannes was involved with both of them. I don't know, Hannes, if you want to say anything about implementation at this point? Because we do like to have implementations for the documents we're publishing.

Hannes Tschofenig: Hi Joe, hi everyone. Yeah, um, I've worked on the implementations, and no one else has, so, um, yeah. What should I say?

Joe Salowey: Okay. Yeah, I think we'll have to discuss and see if that is sufficient implementation experience, or if we need to wait for additional implementations. Um, next is MLKEM. We have a whole presentation on this, so we'll defer any more discussion on that till then. Um, we have a couple of drafts still kind of hanging out in the working group waiting for more experience, which are [draft-ietf-tls-extended-key-update], [draft-ietf-tls-tlsflags], and [draft-ietf-tls-wkech]. And that's all I have for now. So now we'll jump into our first presentation, which will be MLDSA.

Tim Hollebeek: Hello. Do you want to request slide control, or do you want me to click it for you? Why don't you go ahead and do it? There's a grand total of about two slides anyway. Okay, I'll do it.

All right. So we're talking about [draft-ietf-tls-mldsa] in TLS 1.3. Uh, this one's been out there for a little bit and it's not too long. Next slide. So this one doesn't actually do a ton of things. It just registers the three MLDSA sizes as code points. Uh, one thing it does say is that you have to be using TLS 1.3; it prohibits use in TLS 1.2, consistent with our policy that we're not going back and making TLS 1.2 quantum-safe. Uh, it's really short, it's really straightforward, but it's very important for all the people who are trying to do this in the real world. Uh, and before anybody gets caught in any ratholes: it doesn't say you can, should, or shouldn't use hybrid. Hybrid is basically out of scope. This is how you use bare MLDSA; if you want to discuss something else, go discuss a different draft. And the same is true for anything about certificate management or all the other silly things that might be associated. This draft is very short and simple. Uh, next slide. Oh, and actually, yeah, next slide as well.

So what we're doing, uh, Draft 1's been out there for a while. Uh, people have reviewed it. There aren't any open issues on it. Uh, FIPS 204 has been stable for a while now. Uh, MLDSA is also already starting to get deployed in the ecosystem, and there's a bunch of deadlines approaching. It takes a long time for people to do these sorts of transitions, so uh, I think this is a good time to get this one out the door. Let's not be the people that are holding up, uh, everything else. Uh, so we're—the authors are asking for working group last call. Thank you.

Richard Barnes: The code points for this one are registered. The names tell you what to do. Um, you just use the appropriate MLDSA things. What is this draft adding at this point? Is there any point to publishing the draft or, now that the code points are there and pretty self-explanatory, are we done?

Tim Hollebeek: Uh, well, publishing the draft basically does say that we're done. Uh, we already adopted the draft, so I think the discussion of whether we need a draft or not is in the past. Uh, and for these things, it actually makes a significant difference whether there is an RFC or just code points, especially for people who don't follow the IETF as closely as the people who are listening very closely right now. Uh, if you tell people to go read an RFC, they can do that. If you tell them to go look for a bunch of code points, you're going to have a different conversation.

Richard Barnes: Well, my point is that the reference in the IANA registry could just as well be to the FIPS specification for MLDSA. I don't think the draft is specifying anything technical on top of what the FIPS specifies. So just update the reference to that. And then the people who want a nice reference have a nice government-sponsored reference, even, which is arguably stronger than an RFC.

Tim Hollebeek: I mean, you know, like we—I thought we had this discussion during the adoption call, so...

Victor Vasiliev: Uh, yes, just briefly to Richard's point: at least the draft gets to say it's not for TLS 1.2, which FIPS doesn't say. And if—and if people have other things they want it to say, you know, uh, feel free to suggest those things. Um, one, uh, comment on TLS 1.2: presumably the prohibition does not apply to certificate chains, right? So if one had an end-entity key that was still RSA but the CAs above it were all MLDSA, that's still fine in TLS 1.2, right? Nobody's stopping TLS 1.2 from verifying MLDSA certificates.

Tim Hollebeek: Oh, that's an interesting point for the implementations whether that's going to work or not. I hadn't thought about trying those things with TLS 1.2. Uh...

Victor Vasiliev: It should work, and it probably does. But it should be said in the draft that the prohibition is really about the TLS protocol and not about certificate chains outside of TLS...

Tim Hollebeek: Yeah, no, that's an interesting point. Thank you.

John Gray: Uh, yeah. I recently reviewed this. I think it's perfect and ready for working group last call. Just ship it.

Ousama: Yeah, I agree with Richard. I think there is no need for this work, and I disagree with the answer that I got from Tim. Adopting a work item does not necessarily lead to publication. It's not a good argument to say, "Hey, we have already passed these discussions and so on." We shouldn't take up these kinds of work items, which just waste the working group's energy.

Tim Hollebeek: Thank you for your comment.

David Benjamin: Uh, yes, we should ship this. It is true that this document is fairly minimal, but there are actually a few things to say. There is the TLS 1.2 thing. There's also that MLDSA has a context string, and we need to know what to put in it. Empty string is sort of obvious, but it needs to be written down somewhere. The code points interact with X.509 signature algorithms, and we need to say which one we want. There's an obvious one, but still, you know, all these things need to be said. Do we use the pre-hash form or not the pre-hash form? Um, and so that means we need to have some document here. It is true that a draft version is just as stable, uh, but that doesn't signal to TLS implementers who aren't following this closely, or to the folks who use the stuff the TLS implementers produce, that the working group is done with this and isn't going to make another draft next week saying, "Oh, actually we did want to use the context string, because hey, this prefix, we could have put it in there," or whatever else. So rather than making it harder for us to get stuff done, I think we should just go and ship this without any drama.
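[Editor's note: the context-string and signing-input details David mentions can be illustrated with the TLS 1.3 CertificateVerify construction from RFC 8446, Section 4.4.3. This is a sketch; the assumption that ML-DSA's own `ctx` parameter is the empty string is exactly the kind of choice the draft has to write down.]

```python
import hashlib

def certificate_verify_content(transcript_hash: bytes, server: bool = True) -> bytes:
    """Build the octets that get signed in a TLS 1.3 CertificateVerify
    (RFC 8446, Section 4.4.3): 64 bytes of 0x20, a context label, a zero
    byte, then the transcript hash. With ML-DSA, this whole string would
    presumably be the message, with FIPS 204's ctx left empty (assumed)."""
    label = (b"TLS 1.3, server CertificateVerify" if server
             else b"TLS 1.3, client CertificateVerify")
    return b"\x20" * 64 + label + b"\x00" + transcript_hash

# Example with a dummy transcript hash:
th = hashlib.sha256(b"handshake messages so far").digest()
content = certificate_verify_content(th)
assert content[:64] == b"\x20" * 64  # padding prefix per RFC 8446
```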

Tim Hollebeek: Yep. Thank you.

Tim Hollebeek: Okay, I would second exactly what David said. FIPS 204 cannot be the definition, because it never talks about how to fit this into TLS: whether or not to do pure or hybrid, what context strings to use. It says nothing about that. And to the complaint of "does this take up a whole lot of working group time?"—basically, not really. Just a working group last call, kick it over to the IESG, and the working group's work is pretty much done. Mm-hmm.

Ecker: So it seems to me Victor's actually raised an interesting technical point, which is—if you think about how TLS 1.3 works, right? It's plainly the case that the client has to be able to advertise this code point even in a connection which is offering TLS 1.2, because it might also be offering TLS 1.3. However, the server, at the time it's sending signature schemes, knows what the client supports and does not support. So you could imagine saying, for instance, that the server is forbidden from sending this signature scheme for client authentication in TLS 1.2 under any circumstances, um, and that the client is forbidden from sending it unless it is also advertising TLS 1.3, which would be a way of really prohibiting TLS 1.2. Uh, so I'm not taking a position on it; I just thought of the problem this minute, but it seems to me that this probably does need to be fleshed out a little bit, because what the text here says is not actually as clear as one might think once you think about it. This text says you can't use it in TLS 1.2, um, but it's not clear what that means, because the signature schemes apply all the way through the cert chain. So there's the question of what appears in the cert chain, and there's a question of what appears in the advertisement messages. And so I think, um, that may need a little more fleshing out, and we may need the mailing list to sort that out. Um, I'm not sure any of the outcomes would be terrible, but if we're going to have normative words here, we have to make sure they're quite clear.

Tim Hollebeek: Yeah, no, Ecker, I agree with you. I think we said the right thing at the hand-wavy level, and now we have to figure out how to make that technically precise, and there's a bunch of ways of doing that. So absolutely.

Ecker: And I'm—and I'm happy to help with that—that effort. Um...

Tim Hollebeek: Yeah, I'd love to get your help. So let's have that discussion during working group last call.

Ecker: I think we probably could do it in working group last call. I would slightly prefer to do it beforehand, but I'm not going to lie down in the road over it.

Tim Hollebeek: Yeah, sure. I mean, if you send me some language real fast, maybe we'll take care of it beforehand.

Ecker: Uh, hopefully. Is there a GitHub tracker somewhere I could take a look at this? No. Okay. I'm going to forget about it by tomorrow, but okay.
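[Editor's note: one way to make the TLS 1.2 prohibition Ecker and Victor discussed precise is to bind it to the negotiated version rather than to what the client advertises. A minimal sketch under stated assumptions; the numeric code point values below are illustrative, and the authoritative numbers live in the IANA TLS SignatureScheme registry.]

```python
# Illustrative values (assumed); consult the IANA TLS SignatureScheme
# registry for the real ML-DSA code points.
MLDSA_SCHEMES = {0x0904, 0x0905, 0x0906}  # mldsa44, mldsa65, mldsa87
TLS_1_3 = 0x0304
TLS_1_2 = 0x0303

def server_may_sign_with_mldsa(negotiated_version: int,
                               client_sig_schemes: set[int]) -> bool:
    """One possible reading of the prohibition: the server uses an
    ML-DSA handshake signature only once TLS 1.3 has actually been
    negotiated AND the client offered an ML-DSA scheme. A client that
    offers both 1.2 and 1.3 may still list the code point; the
    restriction bites at signing time, when the version is known."""
    if negotiated_version != TLS_1_3:
        return False  # never sign with ML-DSA in TLS 1.2
    return bool(MLDSA_SCHEMES & client_sig_schemes)

assert server_may_sign_with_mldsa(TLS_1_3, {0x0904, 0x0804})
assert not server_may_sign_with_mldsa(TLS_1_2, {0x0904})
```

Note this only constrains the handshake signature; as Victor pointed out, it says nothing about ML-DSA signatures appearing inside the certificate chain, which is the part the draft still needs to spell out.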

Speaker 1: A plus one to the comments that have been made before me and in the chat. Let's make this an RFC: people who don't know about IETF processes look to RFCs, so we should make that available.

Tim Hollebeek: Yep.

Joe Salowey: Okay, I think the queue is cleared—unless that wasn't Michael. Uh, all right. So yeah, I think it would be good to have known issues resolved before working group last call, but let's see if we can work something out over the next week or two. If we can get that in, that would make things even smoother. Um, but we can always run the last call and fix that as part of it.

Tim Hollebeek: I hear you on that one. I'll work with a bunch of people and we'll get some text in.

Joe Salowey: All right then. Next up, I think, is TLS PAKE. Laura, do you want me to give you slide control, or do you want me to click it for you?

Laura: Um, that's okay. I think that there's only like a few... oh, I have it. Okay. Uh, can everyone hear me? It's my first time presenting remotely.

Joe Salowey: Great.

Laura: Okay, uh, so first up, this is what has changed since IETF 124. We got a lot done at that meeting and have closed out a bunch of issues. The new draft version 01 has changes for all of these, um, including a specification of how to use CPace and a few minor clarifications. Some of these were resolved without changes to the draft, but the issues were closed out.

The open issues we have remaining are pretty small and don't have a clear direction for me to act on, and I don't think they're essential to the content of the draft. So the three are: one, the question about the identities—whether we want to keep them factored out or have individual identities for each different PAKE algorithm. So far, nobody's commented on the discussion with strong opinions either way, so I'll just leave it open in case someone has thought of a use case that would necessitate changing this. Two, we have an open issue requesting more text on use cases and motivations, so adding some additional text there could be useful; I've asked the originator of that issue to suggest some text. And three, I filed an issue to track Deirdre's request to add OPAQUE. I don't think this is blocking or necessary for this draft, just because I'm not aware of any implementation interest, but I did include it for completeness.

The other thing: uh, someone brought up formal analysis on the list, the goal being basically to prove that the explicit key confirmation messages are sufficiently replaced by the TLS Finished messages. Uh, we've started some work on this by extending the TLS 1.3 model in ProVerif, the one that was used for analyzing ECH. Status is that it needs some polish before we can put it in the public GitHub; it's just been a matter of finding time to work on that.

So the next step is actually getting that formal analysis moving along, which is on the authors right now. But content-wise, I feel like this draft has mostly converged, and there doesn't seem to be a ton of discussion going on about the actual contents. Uh, yeah, I guess that's where I'm at. I'm curious about everyone else's opinions, and if we're at a point where we agree on that, I'd love to move towards a working group last call.

Scott: Hi, I'd like to bring up one issue: a missing security consideration. SPAKE2+ has the property that if you're able to solve one discrete log problem—say you happen to have a very slow quantum computer—then you can basically break the system. Uh, that may not be a blocker, but it really needs to be brought up in case someone cares, because that's what security considerations are for.

Laura: Okay, I believe we had some discussion on the list about this. Uh, I will reference that and see if I can put up a PR to add a security consideration for it.

Scott: Uh, if you need text, I can actually provide some.

Laura: Great.

Ousama: Yeah, uh, thanks for the slides and thanks for the update on ProVerif. I was just curious about the property—maybe it's a question for Chris to answer—can you say a little bit more about the property that you are describing, so we can better understand what exactly is being done? And in that respect, I would suggest finishing that ProVerif code before going for working group last call, because as per the FAT process it's a requirement, right? And as far as I know, there is no FAT person assigned for this specific draft. So we don't yet even know what the requirements are, and going for working group last call would be really jumping ahead of that.

Laura: Right. Chris can say a bit more about the property that was on the last slide, if you go one back, and explain a little more about the goal we're trying to prove. But in short: the SPAKE2+ RFC explicitly says you need to send explicit key confirmation messages. In the TLS PAKE draft, we've said you do not need to send these explicit messages as specified in the SPAKE2+ draft; instead, their purpose is now covered by the TLS Finished messages.

Ousama: So what I meant was that's not a property you can prove in ProVerif directly. So maybe I will follow up on the list with Chris for that. Thanks.

Ecker: Yeah, I'm less worried about the FAT process, um, than I am about the merits of the case. I think we do need some kind of substantial analysis for this. This is a pretty big change, and—I'm not saying it's not important, I'm just saying that if we're going to have it, we have to have some analysis. So I'm pleased to see that you're doing the ProVerif model. I'm reading the chat and it sounds like there are some doubts from Dennis about whether the ProVerif model is sufficient. I have no opinion on that topic, but I'm just saying that I think we can't advance this document without seeing some real analysis—not at the scale of what we had for 1.3, but pretty substantial. Because, you know, this affects the main properties of the system and could really break everything, as opposed to some of the things we look at where, okay, we can convince ourselves pretty easily we didn't destroy everything and maybe the thing just doesn't work. But this could really just not work. This could really destroy everything. And thank you, Scott, for bringing up that point earlier about the discrete log—that's actually really important to bring out, because as I understand it, it's not a property of TLS normally.

Tom: Hello. Um, on a little point of process: the FAT is something that the chairs invoke and ask to form an opinion on what the authors have presented, both in the draft and in accompanying material in terms of proofs and things that they did. And I think it's part of the task of the people writing the proof to determine what kinds of security properties they want to analyze and what kind of evidence they're presenting for that; it's more up to the FAT to help the working group decide if that evidence is convincing enough. The FAT is not a gatekeeper for the TLS working group; it does not have any role in the consensus process. So, um, I basically want to say that the authors did nothing wrong. Invoking or asking the FAT for an opinion is something that the TLS working group chairs are supposed to do when working group last call is started, I believe—that's what the process says. Um, and as a FAT member, I'm a little bit displeased at the way the FAT is currently being invoked, as a sort of "I'm annoyed with this draft" move, trying to use the FAT as a distraction method. Um, and I'm also worried that this will make people pull out of helping with these analyses and helping the TLS working group parse what all of these highly technical security analyses mean. That was my two cents on that.

Joe Salowey: Yeah, I'll just remind folks that we have a bit more time to discuss FAT and how we can improve that on—in the Friday agenda. But Ousama, you can say some things if you'd like.

Ousama: No, it's fine.

Joe Salowey: Okay. All right then. Yeah, I think, uh, the chairs will follow up with you and with the FAT on what the next steps are, uh, because I don't remember exactly where we are with that in terms of the process. And then, you know, I think we will want to see the results of a formal analysis during the last call, so we should figure out how we can help you get that done.

Laura: Okay. And I guess going off of what Ousama mentioned, if there's additional security properties that we think need to be shown in the formal analysis, filing issues or sending something to the list about those would be very helpful.

Joe Salowey: Yeah. All right then. Deirdre, you're up next. Asking for slides... I can share the slides if you want.

Deirdre Connolly: Sure. I don't know how I'm logged in—I'm logged in as a room.

Joe Salowey: Yeah, that is a room. You would need to log in on your laptop as a chair if you want to.

Deirdre Connolly: Never mind, I can run the slides. Okay, thank you.

So this is an update on MLKEM-only key agreement in TLS 1.3. Next, please. This is the same slide I've shown several meetings in a row. This is the only post-quantum-only ciphersuite, not hybrid; we have several hybrid PQ ciphersuites that also use MLKEM. This fills in the rest of the owl on the other side of the hybrid design stuff that we're about to ship, basically, that ships in production. We have elliptic curves only; the hybrid design, which is KEMs plus elliptic curves; and this is the first KEM-only one, specific to MLKEM as described in FIPS 203. And just like the discussion on MLDSA: we presented this with code points, and everyone was happy with code points only, and then the FIPS became real, and everyone asked to adopt the document and eventually turn it into an RFC, because there are lots of people outside the IETF who need an RFC to ship the thing and get their vendors to ship the thing, and who don't understand what a code point and a draft that's not an RFC are.

Uh, we made a lot of changes out of this working group last call. They're on the GitHub. I did not push another version after 07, because that was the one being discussed during the working group last call, but there are a lot of changes accumulated on the GitHub. Next slide.

So, added a lot of references for a bunch of different things. Many, many revisions of the motivation section out of discussion from the last call—we'll talk about that next. Uh, removed several structures that were duplicative of 8446 and 8446-bis. I liked them, but I got the feedback several times that they were duplicative and to just remove them, so I did. Um, we moved "implementations must not reuse randomness and must not reuse ciphertexts" out of the security considerations into the body, so it becomes normative. This is the only normative change that I can recall from the last couple of weeks of changes. Added some language in the security considerations section about why some people like hybrid. That's it—not "you should," and no recommendation one way or the other, just that some people like hybrid because they consider it to be a more conservative option. Uh, and added multiple references, at least five, to existing security analysis, both formal and pen-and-paper, of using KEMs with different security properties—from full IND-CCA to one-time IND-CCA or IND-CPA—for key agreement in TLS 1.3. Because there was a little bit of discussion of "there needs to be analysis of this thing," and it's like, "Here, let me give you a stack of papers and some symbolic models of all the analysis we have already had of doing this in TLS 1.3, not even counting the hybrid stuff that we are doing and deploying." Next slide.

One piece of feedback I got recently is, "Just get rid of the motivation section." It's been agonized over by many—could we just get rid of it? I'd be fine with that; I thought I needed to have one. Um, we have one person in the queue. Victor?

Victor Vasiliev: Okay, I thought I would go at the end. Um, so you mentioned the security considerations on why some people prefer hybrid. I think you'd get fewer objections, and do well, to instead outline the risks that those people, wisely or otherwise, are trying to address by having hybrids, and present those risks for people to consider. Uh, because saying "some prefer" is kind of a faint-praise situation. It would be far better to say that if you're concerned about failure of the post-quantum algorithm prior to quantum computers becoming available, you might want to consider hybrids. In other words, a much more neutral posture that really explains the risks, presents them to the user, and then leaves it to them to make sound decisions that meet their requirements.

Deirdre Connolly: I'll get there, and I'll try to address that point directly. Thanks. That's also why I'm one of the advocates of not having a motivation section. People who are motivated know why they're motivated; we don't need to tell them why. And the motivations that have been there have historically been a little weak in this draft, so I feel we can skip it. Cool. Uh, taking your own advice, I'll save questions to the end, because... next slide.

Ah, this is the second thing. This is the "must not reuse randomness in the generation of ciphertexts"—not keys, ciphertexts. The ciphertext in a KEM-based key exchange is akin to the server-side generated Diffie-Hellman share of the key exchange. This is saying you must not reuse the randomness in the generation of the ciphertext. And if you look at it, it implies that you can't reuse the ciphertext either, because if you reuse the ciphertext, you are implicitly reusing the randomness used to generate it. Um, this says "must not." In 8446-bis, it says you "should not" reuse key shares in key exchanges. Is there a problem here? Okay, I see people shaking their heads no. Think about that question. I just have an honest question whether this is fine, and if the answer is yes, we can just ship an RFC that says this—great, cool. I just wasn't sure. All right, let me go to the next slide and try to navigate slides I don't have control over. Okay.
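[Editor's note: the no-reuse rule can be seen in miniature. KEM encapsulation is a deterministic function of the public key and the encapsulator's random coins, so reusing a ciphertext necessarily means reusing the coins that produced it. A toy, hash-based stand-in follows; it is not ML-KEM and not real cryptography, just an illustration of the determinism.]

```python
import hashlib
import os

def toy_encap(public_key: bytes, randomness: bytes) -> tuple[bytes, bytes]:
    """Toy stand-in for KEM encapsulation: ciphertext and shared secret
    are deterministic functions of (public_key, randomness), mirroring
    the shape of FIPS 203's ML-KEM.Encaps(ek, m). NOT real crypto."""
    ct = hashlib.sha256(b"ct" + public_key + randomness).digest()
    ss = hashlib.sha256(b"ss" + public_key + randomness).digest()
    return ct, ss

pk = b"example public key"
ct1, _ = toy_encap(pk, os.urandom(32))  # fresh coins per handshake
ct2, _ = toy_encap(pk, os.urandom(32))
assert ct1 != ct2  # fresh coins give a fresh ciphertext

r = os.urandom(32)
# Reused coins reproduce the same ciphertext, so "must not reuse the
# ciphertext" and "must not reuse the randomness" are the same rule:
assert toy_encap(pk, r)[0] == toy_encap(pk, r)[0]
```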

So here I have three slides' worth of text. I apologize. This is what I currently have on GitHub, not on DataTracker, about security considerations. La-la-la-la-la. Proponents of hybrid key establishment generally consider it a conservative approach to deployment of newer post-quantum schemes alongside other older traditional schemes retaining at least the security currently offered by traditional algorithms. Think about—think about whether you like this, hate this, want more. Next slide.

Um, this is going into detail about what MLKEM is supposed to be offering, which is the IND-CCA security property: indistinguishability of the ciphertext under chosen-ciphertext attack. And several analyses of using KEMs in TLS 1.3, including some old favorites and some brand-new ones. Um, and pointing out the fact that, strictly speaking, IND-CCA is a little bit of overkill for ephemeral key agreement in TLS 1.3. We kind of like this because even 8446-bis only says you "should not" reuse key material—you're not forbidden to—and just in case that happens, either on purpose or by accident, IND-CCA covers your ass in that regard. So this is all very lovely and nice and well-analyzed. And next...

Um, and this is more stuff about what we get out of using MLKEM in TLS 1.3. I don't really remember if I massaged this very much, but this is the rest of the security considerations section on GitHub. Next.

I have one open issue that actually needs to get fixed, which is a link to CNSSP-15: the site serves a certificate you can't validate if you don't have the right root cert, so I need to fix the link. I tried to do it earlier today and was not able to find a better URL. There are other open issues on the GitHub, but they all pertain to the discussions we've been having and that I just touched on. Uh, next.

Yeah, resolve that. I don't know what we're doing next. I do not know. Um, okay. I'll go to questions. Jonathan?

Jonathan: Yes, hi. Um, so I do think that, with all these things that have been discussed, there's value in publishing this. However, um, there was a message posted to the list recommending removal of the citation to the Canadian guidance document ITSP.40.111 from the motivations. Uh, and I just wanted to clarify that that document is not guidance for how to use algorithms within TLS. There's a separate guidance document that describes that, but it does not include post-quantum algorithms yet. So if that could be actioned or put in GitHub, that'd be appreciated. Thank you.

Deirdre Connolly: If that isn't already covered by an open issue, could you open one? Otherwise I will lose track of it.

Jonathan: Okay. Thank you.

Ecker: Yeah, this is actually a question for the chairs: I would like to hear them declare what the output of the consensus call was. And assuming, as I think is the case, that the output was that there was not consensus to proceed, I'd like to hear them declare what they plan to do about that. Because, I mean, I'll just say: I don't think anybody who was against this is going to be for it regardless of the changes we're discussing here. So, um, I'd like to hear the chairs' plan for proceeding before we consider wordsmithing this document.

Joe Salowey: Okay, I'll give a quick summary of where we're at, and we'll post something more formal to the list. Basically, we have three categories of folks: there are folks who are for, there are folks who are against, and there are folks who have issues with the current document. What we're trying to do now is resolve the issues with the current document, to see whether there's consensus to move forward or not.

Ecker: Well, I'm not a chair, but I did a little bit of counting, and I think that with the people who are just flat-out against it, you would probably not get over the consensus bar. Do you actually disagree with that?

Joe Salowey: Um, I think I do. Um...

Ecker: Okay, then I think it would be very helpful for the chairs to say that in their analysis, because we're going to burn a lot more time on this, another working group last call, a lot of other messages, and there's no point in doing that unless there's a real chance of getting over the hump.

Joe Salowey: Okay. Point taken.

Victor Vasiliev: Yes, so I agree with Ecker, and everything I'm trying to suggest is with the goal of maybe convincing some of the borderline folks who are against it to change their mind. To that end, the security considerations you showed say, you know, "proponents of" and so on. I think that's the wrong tack, because it already sets up this battle between proponents and opponents. It should be much more neutral and say, "here are the risks; if these risks apply to you, please consider alternatives," or at least consider the risks. So don't frame it the way you're framing it; instead, describe the scenarios in which this may or may not be appropriate in a very neutral way. And by in fact putting in more of the objections that we heard on the list as issues to consider, perhaps some folks who are opposed might see this as a document that supports their case by laying out the security considerations, and maybe that's enough. This could be the first version of what some want to see become a security-area-wide policy. But let's at least try and reach consensus on it here, and then, if it succeeds here, maybe we can apply it broadly to other protocols, or not.

Deirdre Connolly: If you have suggested text, I welcome it, because I think part of the angst is that some people consider these obvious risks to not actually be obvious risks, and others feel the other way. So if you have better verbiage, I can work with that. Security considerations are quite difficult, but it's okay to say things in the security considerations that not everybody agrees on, because they are considerations, not statements of fact.

Victor Vasiliev: Right. So I'll suggest some text. Thank you.

Ousama: Yeah, right. It was quite a quick one; I didn't catch all of it, and the slides were updated very late, I think just in the last hour. Anyway, what I'm seeing here is that none of the concerns I have raised on the list multiple times have been addressed. First, one of the concerns was motivation, and now you are really talking about the reverse: "hey, just take out the motivation." If there is no motivation, why are we even taking up this work? The second point I want to raise is that this is not something the TLS working group should be doing, in my humble opinion. This is something that the CFRG or somewhere else should take up first, and then we in the TLS working group should take it up once it has been settled and checked to a reasonable level of guarantee. So it should first go to CFRG, get that attestation, and then bring that attestation here to TLS: "hey, this is now considered good," and then we are good to put it in. So I don't think this is the right forum to take it up first and then say, "okay, the TLS working group has attested it and now we use it as well." My third point is that in hybrids we have the compositional property, speaking from a formal perspective: you have the ECDHE component and the ML-KEM component, and when you are removing the ECDHE component there has to be a very strong motivation, because all the proofs are affected. Reducing from two hard problems to one is a security degradation, in my opinion. Moving from ECDHE to the hybrid is a security improvement, but moving from the hybrid over to ML-KEM-only is a security degradation, and that is what I don't like.

Deirdre Connolly: I strongly disagree with your analysis.

Ousama: So what do you disagree with? Can you say a bit more? Which point do you disagree with?

Deirdre Connolly: We have multiple ways of establishing a shared secret in TLS, and going from one to a different kind is not a degradation. You have RSA-based public-key encryption to establish a shared secret, and I don't think, until quantum computers became a threat, anyone would argue that using RSA-based key establishment was a degradation versus elliptic-curve Diffie-Hellman. They are different. But we can continue this off camera.

John Gray: Uh, yeah. I saw very little about security considerations for static keys. The draft could potentially refer to 8446-bis for that, but the current text seems to say that IND-CCA suffices, which seems almost worse than before: IND-CCA does nothing for the privacy aspects of static keys, where you're basically adding a fixed field.

Deirdre Connolly: All right, that's true. But we tried to remove as much reuse language as possible to address that, not to give anyone ideas like, "oh, I can reuse ML-KEM keys," and things like that. It's just to address the existing security analyses, most of which say you don't need IND-CCA if you're using ephemeral keys; you can get away with one-time IND-CCA or IND-CPA. IND-CCA is secure in terms of indistinguishability of the session key material: you still get a secure session.

John Gray: But forward secrecy and post-compromise security across multiple TLS sessions, that is not addressed at all.

Deirdre Connolly: You're correct. But that's kind of on purpose.

John Gray: I was talking about privacy aspects; that's also not addressed, and I'm not happy with the current text. I want negative language saying "there are concerns, see for ex..." and listing the concerns, or if that text goes into the TLS-bis, that is also fine. Then I'm also missing any explanation of how this conforms with the 800-227 requirements for static keys. To use static keys, there are a lot of additional requirements in 800-227, and FIPS 203 refers to 800-227. I've not seen any answer on the list, not seen any issues, and not seen any updated text; that needs to be explained.

Deirdre Connolly: The updated text is on the GitHub, and it says, "800-227 includes guidelines and requirements for implementations on using KEMs securely. Implementers are encouraged to use implementations resistant to such attacks, especially those that can be applied by remote attackers." I think we made a reference to static keys as they are termed by 227, but I have to go check, so double-check the version on the GitHub. We tried very much not to encourage, or mislead the reader into, the use of static keys or key reuse, while also going beyond 8446-bis in terms of the reuse of ciphertext randomness, which says "should not", not "must not".

John Gray: Also, a very different topic: you need to ensure who the owner of the static key is. That's a requirement from NIST, and I would like an explanation of how you conform to that. But I will read the GitHub and come back.

Victor Vasiliev: Yes, "must not reuse randomness" should stay; that's independent of ciphertext reuse. Randomness reuse, as Scott pointed out, is pretty bad because it cross-links the shared secret across two different sessions, so keep that. Yep.

Joe Salowey: All right. The chairs owe some explanation to the working group, and we'll work on that this week. We'll go to the next presentation: Yaron, I believe I can give you the slides. You have control.

Yaron Sheffer: Thanks, Joe. Can you actually hear me? I believe so. Okay. And I'll drop the camera; that's not doing very well. All right. So, for what I can squeeze into nine minutes: we had a very lively discussion on the list, thank you for that, and I guess we will continue the discussion on the list. This is a new draft by Tiro and myself about rollback resistance as people are migrating to PQC. Most clients will probably need to accept both traditional certificate chains and PQC or composite certificate chains for a long time, as the server situation changes over a period of years. In the meantime, there may be emerging or even unknown quantum computers that can perform attacks on traditional certificate chains, or specifically on the end-entity certificate, and these quantum computers can facilitate man-in-the-middle attacks that are not detectable by the client. What we propose is a TOFU kind of solution; it's a mitigation, not a complete fix, because you need to talk to the server correctly at least once. And I'll point out immediately that there's the proposed solution here at the TLS level, and then there are two alternatives: one at the application level, based on HSTS, and one at the PKI level, with notification within the certificate. In fact, Tiro and myself are co-authors on a certificate-based alternative to solve the same problem.

The semantics are quite simple. What the server is saying is, "I will provide PQC certificates for the coming period of time." The client caches this commitment, and so if I then see the server not providing a PQC certificate, I know that it's a man-in-the-middle attack and I fail the handshake. This is similar to the behavior of HSTS, where we're caching the server's commitment to actually do TLS, but here the whole thing happens at the TLS layer, with no changes to PKI. The current draft says this works for client certificates and server certificates; however, there was an issue raised by Ecker on the list about client certificates, so for right now we're claiming server certificates, and we'll have to look at the exact implications of whether and to what degree this can be used for client certificates as well.

Going into the details: the extension is a simple structure containing a signature algorithm and a validity period in seconds. The extension can go into ClientHello and CertificateRequest to indicate support, or, if you're actually declaring your commitment to support PQC, it goes as a Certificate message extension. The server can indicate that it no longer commits by sending a zero value, but that obviously only applies to that particular client and not to others, so do it at your own risk.
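The structure described, a signature-algorithm code point plus a validity period in seconds, could be serialized roughly like this. The field sizes and the example code point are assumptions for illustration, not the draft's actual wire format.

```python
import struct

def encode_commitment(signature_scheme: int, validity_seconds: int) -> bytes:
    """Pack a (uint16 SignatureScheme, uint32 validity) pair, network byte order."""
    return struct.pack("!HI", signature_scheme, validity_seconds)

def decode_commitment(data: bytes) -> tuple[int, int]:
    sig, validity = struct.unpack("!HI", data)
    return sig, validity

# Illustrative code point; a zero validity would mean "I no longer commit".
blob = encode_commitment(0x0905, 30 * 24 * 3600)
```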

Joe Salowey: Uh, I see "backup: comparison to alternatives" now, slide 7.

Yaron Sheffer: Yeah, you can see what I can't. Can you drive the slides, Joe, please? What slide are we on? CDNs and middleboxes? Middleboxes is good. Okay.

Yaron Sheffer: Yeah. So, CDNs and middleboxes add complications to the solution. With CDNs, you might have different points of presence, where some PoPs present traditional certificate chains and some present PQC certificate chains. Obviously, you should not be sending a commitment unless or until you know that all CDN endpoints can actually present the PQC chain, or behavior will be non-deterministic. Middleboxes, as a rule, should not send and should not forward commitments, and a middlebox that wants to provide maximum connectivity for its clients will simply follow that rule. A problem arises if there's a middlebox between a client and the server, the client has already cached the server's commitment, but the middlebox is not PQC-aware and so is basically breaking the commitment; that would cause breakage. We have not discussed it in the draft, but there have been specific solutions for similar problems in the past for enterprise middleboxes, so it's a solvable problem. And in general, compared to other TOFU solutions in the past, so-called server breaking here is not the kind where you're never able to talk to the server anymore: all the server needs to do is add a PQC certificate, which, in the timeframe we're talking about, will be available. I'm out of time, and my next two slides are comparisons with the HSTS solution and with the certificate-based solution, so feel free to read those comparisons and let's continue this discussion on the mailing list. Thank you.

Joe Salowey: All right. I think we're at the end of our session, and I don't see anybody in the queue. Ecker, if you want to say a few brief things, we can squeeze that in.

Ecker: Well, I just wanted to ask the question I've been asking all week, which is: are there people with substantial implementations interested in implementing this? Because I think we already know there are people interested in implementing the HTTPS version, so I think the threshold question is whether people are interested in implementing this one. Happy to take an answer on the list.

Joe Salowey: Yeah. Let's take it to the list.

Yaron Sheffer: Yeah, I don't have the answer. I think it's very early for either of these solutions.

Joe Salowey: Okay. Um, thank you. Uh, we'll meet again on Friday. So see you then, if not before. Cheers. Cheers.


Session Date/Time: 20 Mar 2026 06:00

Sean: All right, it is the top of the hour; time to get started. Welcome to the TLS working group. We are the last session of the day, and of the meeting itself. Joe, can you go to the next slide?

Sean: All right, here's the IETF Note Well. Hopefully you've seen this and know it well. It's about policies and procedures for acting and participating in the IETF. You can scan the QR code there to get more information if you'd like. If you have any questions, you can ask the chairs, or you can ask the ADs, which is Deb now, taking over from Paul. And if you feel like there have been any issues, you can, again, come to us, or you can go directly to the Ombudsteam. Thank you. Next.

Sean: All right, meeting tips. We are going to use the queue, so make sure that you queue up properly: you've got to be logged in, then use "Join the Queue" so that we can get you in. Please make sure to state your name and affiliation when you get to the microphone; I know I usually forget to do that as well. I'm Sean; Joe is there as well. Deirdre might be here, but I think she's at the airport. All right. Next.

Sean: Monday agenda, I think we can skip past this. Friday agenda: we allocated a lot of time to talk about Extended Key Update. We have a document that's been progressing along pretty well; we got some FAT review, and we're going to go through that. Then Yaroslav, who is local to the session in Shenzhen, is going to present a little bit. Then we have some non-working-group IDs that we're going to go through, and I know that we also have some measurement data on ECH, which will be interesting to review. Are there any agenda bashes?

Sean: All right, going once, going twice. That's good. We're also going to jump back into some chair slides, because there were some things that we had open. Sorry, Tom, did you want... we've got some chair status stuff to do a little bit first, some cleanup here. So if you can get to the next... keep going. We did that one. We did that one. We did that one. So, breaking news!

Sean: All right, third time's a charm here. We had a suggestion to prohibit key reuse within RFC 8446 and RFC 8446bis, because currently we do not forbid the reuse of ephemeral keys. In the current version of 8446bis we did add some "should not" text in Appendix C, but we've got a renewed push to make that prohibition more permanent and move it to the main body. Ecker has a PR that he's already submitted, and there's already been a bit of a thread on this with a lot of people participating. I just want to make sure we also highlight that there's been a concern that this is just kind of a feel-good change, and that implementations do and will continue to do this. But we seem to have a lot of people that are now interested in making this change, so we're going to go ahead and do that. Yeah, Nick, I noted that Dennis is going to give the presentation. So we're going to do another consensus call on this point. I've got that message ready to go; I will send it probably Monday to give everybody a chance to chill, and it'll be the typical two-week thing. But since a lot of people have already responded in the thread, you don't need to repeat yourself in the thread when we kick it off again. All right. Next.

Sean: All right, we also got two liaison statements. There was one from the ITU about using QKD with TLS; we'll let that one sit. But this one from the IEEE I want to talk about. It has not yet hit the IETF liaison statements page, so we'll provide a link that will update when the page finally gets updated; I believe it's been forwarded already, so it'll probably happen when the secretariat gets back to California. The basic idea is that the IEEE working group would like to express support for publication of the draft-ietf-tls-mlkem document as an RFC, with its definition of key-establishment options that use pure ML-KEM. They also would like the EAP group to do some stuff. But this is basically just a heads-up. All right. Next.

Sean: All right, this is an update on our draft-ietf-tls-mlkem working group summary. Joe and I have reviewed all the messages, all 198 of them, or 235, I can't remember which number it was at the time we did it. So, just a little scolding from the chairs: when we ask whether you support or do not support progressing or adopting a document, it would really help if you would actually put the phrase "I support" or "I do not support" in the message. We're not trying to read tea leaves; we need you to be pretty explicit about that. Basically, where we are is that we do not believe we have consensus without resolving the following points: dealing with key reuse, which, you know, see the first slide I talked about; text for preferring hybrids, and I believe Deirdre was in the process of trying to put some text up to see if we can get agreement on that; and whether or not to include some motivations, and if we were to do that, we could obviously refer to the liaison statement on the slide I just talked about. We don't think this is going to happen really quickly; we think it's going to take a couple of weeks, and then we're going to try to run a targeted last call to see if those changes can get us to the point where we do actually have consensus. Does anyone have any questions about this? I want to apologize for us taking so long to get to it. Muhammad.

Muhammad: Yeah, this is somewhat confusing. It still doesn't address the concern I have about going from hybrids over to pure ML-KEM and the security properties that you lose. I don't see that anywhere in any of the three points you have mentioned here. So can you elaborate on that, please?

Sean: Sure. My understanding from reading the list, because I'm not a cryptographer, is that KEMs in TLS are actually quite well understood at this point, and my understanding, or the tea leaves that I've read, is that we would be okay to proceed forward. You may not agree, but that's what I heard.

Muhammad: Yeah, I would completely disagree. The thing is that you have two different hard problems in hybrids, which is to say that if one breaks, the other one will still protect you. That's the property that I don't see in pure ML-KEM, and that's really something that I'm worried about. What I proposed in the last meeting was that this should go over to CFRG or the FAT or something like that; we need some proper guarantees before we put it in. Just thinking off the top of the head that "oh, this actually looks good" is not something that I can put trust in. So I would appreciate a proper analysis of this before putting it into TLS itself. Thanks.

Sean: Okay. I don't want this to turn into a relitigation of the working group last call, and I see a lot of people who actually are cryptographers in line, so I'm going to go ahead and close the queue. We're obviously going to take weeks to go over this; if we need to add another point, we will. I'll try to see if I can figure out where the lock-the-queue thing is... oh, thank you very much. Go ahead, John.

John: Yeah, I think it would be good to take the key reuse issue first, because if TLS 1.3 forbids key reuse, all my major issues with this draft disappear, and I think most of the formal verification issues go away as well. So fix that first and then discuss this.

Sean: Tanya.

Tanya: Yeah, so I think there are some very principled reasons missing here, and I don't think those principled reasons will be easily removable. I mean, if your statement with this slide is that you want to try these changes and then see whether the concerns go away, I agree that "people are concerned about the security" is not a concern that will go away. But there are several of us who expressed concerns similar to what Muhammad has just been expressing, and those are not being addressed by this. I would appreciate it if you could at least state that you have read that there are people who say, "no, we are not ready for exposing the new systems, which are less well understood and less well implemented, as the sole security guarantee."

Sean: Thanks. Tom.

Tom: Yeah, basically my question is... maybe let me first comment on some of this. Oh, my webcam is totally freaking out, so let me quickly turn that off. The FAT is not set up to evaluate the security properties of KEMs or elliptic-curve signature schemes or any cryptographic primitive; we're a bunch of protocol experts, not people who look at the security of cryptographic primitives. Aside from that, I believe the point does stand that KEMs are understood very well, and the way that KEMs interact with TLS is understood very well, but evaluating the core-SVP security or whatever of ML-KEM is definitely not in scope of what we're doing. Nor do I think that CFRG is really qualified to do any of that either; that's years and years of publications in conferences. I guess CFRG could maybe be asked to make a summary of that, but even that would probably be pretty tough. I think it's fine to say that NIST has done that work already. The question I had for the chairs is about the targeted consensus call: does that explicitly mean that, aside from the bashing that will surely happen on every comma in the text for preferring hybrids, we're going to put aside the concern that "a non-hybrid is a non-hybrid," which lots of people were voicing?

Sean: All right, I think you got me on that one. I mean, "targeted" means we're targeting the changes that we're making in response to these points.

Tom: These three points, okay. Because I do agree with the other side that the point of "we don't like non-hybrids because they're non-hybrids" was also raised a lot. I don't necessarily agree that that should block this document, but it is what we will see a lot of again in any next working group last call on this. And I think Ecker also sort of asked: are we going to be explicit about how we're going to go forward with this?

Sean: Yeah. We'll work out the details of the call, exactly how we'll state it. But we know that there are people who are for and people who are against, and they've voiced their opinions, and then there are people who may change their mind, for better or for worse, depending on how the text turns out. Based on that, we will then be able to close out the consensus call and see whether we have rough consensus to move forward or not.

Tom: Okay. Thanks.

Sean: Thanks, Tom. Victor.

Victor: Yeah. I thought the second bullet largely tries to tackle the issue the previous speakers were discussing, in that if the security considerations clearly outline the risks, why you might want to use a hybrid, and essentially put it as the risk-averse choice that you should make when you don't have better reasons to do something else, that should address the comments from Osama and from Tanya and so on, by clearly saying "use hybrids unless you know better." In terms of realities on the ground, OpenSSL supports pure ML-KEM and will continue to support pure ML-KEM; it's just not on by default, and a user can turn it on if they want it. That's not going to change no matter what this draft says. So I don't know what we're arguing about exactly, frankly.

Sean: Fair enough. All right, that's kind of the heads-up; again, wait for the message, and we'll try to get through it. Okay, I think we can release this queue, and we'll go over to Tom. I'm going to set a timer once I figure out how much time I agreed to give him and everybody, so I'll put all of this together in one time bucket. All right, 25 minutes. Tom would like control as well, please.

Tom: Great. And a different webcam, so it's not freaking out anymore. So yeah, I was asked to give a summary of the discussion of the formal analysis triage team, who were asked to have an opinion about draft-ietf-tls-extended-key-update for TLS 1.3. Getting into this, I'm basically going to explain a little bit of how you write a proof, which might seem like, where are you going with this? But I believe it will help make sense of why certain things are more difficult even though they seem very easy up front. If we look at TLS 1.3, we basically have two phases in the protocol. First there's the KEM key exchange, or classically the ephemeral Diffie-Hellman key exchange, and we can state properties about that, like "this is secure against passive adversaries": if nobody messed with the KEM key exchange, it will remain secure. Then, once you do all of the signatures in the handshake, you get out the session key, and we like to say that that has properties like "secure against active adversaries": even if someone messed with the ephemeral key exchange, they got kicked out, because they don't have the key they need to produce the signature. Then we can layer the fancier stuff on top of that, like forward secrecy, which you could phrase as: the session key is hidden from the adversary if they didn't know the ephemeral keys or the signature keys when the session key was established. That implies properties like: even if we do give the adversary the signing keys, if we do that after the session key is established, the key remains hidden. This is how we write these things in the models, and coming up with the right phrasing of all these properties is a lot of work, but that is how the proof proceeds.
Then we have the existing mechanism in TLS 1.3, the non-extended KeyUpdate. That mechanism essentially runs a KDF (key derivation function), in other words a fancy hash, over the existing session key to derive the next session key. I'm actually not sure whether this has been covered in any of the proofs, certainly not any of the pen-and-paper ones that I'm aware of. But if I wanted to write a proof for this, I could look at the existing models and add something like: "if we give session key two to the attacker, then even though session key two is leaked, all of the stuff in the past, so session key one, remains hidden from the adversary." This is a forward secrecy property again. So, like I said, I can build on the existing model, and because I already have a proof of security for session key one, and I'm not revealing session key one (that remaining secret was the whole point), I can make a lot of assumptions in my proof, and that makes life a lot easier. I don't need to worry about any sort of malicious key update, because the adversary can't interact with it, and I can carry over what I've proven in prior stages. All of that makes adding this fairly straightforward. Which leads into the fact that things are a little bit more difficult for the draft-ietf-tls-extended-key-update draft. For extended key update, we're actually doing an ephemeral key exchange, so this is about kicking an adversary out of a prior bad state and then recovering security, if we want to consider post-compromise security.
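For reference, the non-extended KeyUpdate derivation Tom describes is specified in RFC 8446, Section 7.2. A minimal sketch with SHA-256, where single-block HKDF-Expand suffices for 32-byte outputs, might look like:

```python
import hashlib
import hmac

def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    # HkdfLabel = uint16 length || opaque label<7..255> || opaque context<0..255>,
    # with the label prefixed by "tls13 " (RFC 8446, Section 7.1).
    full_label = b"tls13 " + label.encode("ascii")
    info = (length.to_bytes(2, "big")
            + len(full_label).to_bytes(1, "big") + full_label
            + len(context).to_bytes(1, "big") + context)
    # One HMAC block of HKDF-Expand covers output lengths up to the hash size.
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

def next_application_traffic_secret(current_secret: bytes) -> bytes:
    # application_traffic_secret_{N+1} =
    #   HKDF-Expand-Label(application_traffic_secret_N, "traffic upd", "", Hash.length)
    return hkdf_expand_label(current_secret, "traffic upd", b"", 32)
```

The derivation is one-way, which is exactly the forward-secrecy shape described above: leaking secret N+1 tells the adversary nothing about secret N.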
So, if we give session key one, the previously established key, to the adversary, and the extended key update mechanism is able to execute without interference, then session key two, which is newly generated from this ephemeral key exchange, should remain hidden from the adversary. That is the definition of post-compromise security, but you're already noticing that a lot is different here. Because it's about recovering security, we are suddenly breaking a lot of tradition with the model we had before. We need to leak the previously proven confidential and authentic session key one to the adversary, because it's about recovering security even if the adversary got those keys. That means I need to worry about what a malicious extended key update means and how I write that down in my model. I probably need to invent new adversaries, such as an adversary that needs to be passive-only after being active before, and that's a little bit funky. And I can't necessarily continue to reuse what I've proven in prior stages, because the adversary gets to play around with the session key for a while. All of that makes life quite a lot harder when you want to write a proof, or design a model that captures the security of extended key update. There is some related work, because I do agree that this mechanism is not necessarily new or particularly exciting: the proposed mechanism for extended key update resembles continuous key agreement as found in, for example, Signal. You might say we have lots of proofs for Signal, but unfortunately we don't actually have a proof that cleanly links the key exchange in Signal to the continuous key agreement, so there's a gap there too.
And in IKEv2 there is a mechanism for security association rekeying, the CREATE_CHILD_SA rekeying of the IKE SA, which for extra fun supports complete renegotiation of absolutely everything without any authentication. But there is no analysis of that that I am aware of, and I've been looking for a while now. There are other mechanisms, but unfortunately we can't cleanly lift any of their analysis to this case. So where are we? I have to say all of the interaction with the draft authors was very pleasant, and I do agree that the main thing they are proposing is very simple. There were some confusions on forward secrecy and post-compromise security in very early versions of the draft that have since been resolved, but there does remain some subtle trickery and lots of potential for subtle issues in, for example, the key computation. And I do again want to say that we all acknowledge the authors have put in a lot of honest effort and are very motivated to get to the bottom of this, which is very good to see. The text here is a little small, unfortunately, due to the technical detail, but what you're seeing on the left-hand side of the slide is extracted from the key computation in a prior version of the draft-ietf-tls-extended-key-update draft, because it is fixed now. It used to be that, for very practical and reasonable reasons, they had decided that when re-deriving a new secret, the extended key update draft rotates the main secret as it goes forward, instead of only the client and server traffic encryption keys. When rotating that secret and then deriving the new application traffic keys,
what TLS 1.3 does without extended key update is include the whole transcript of all the messages that have been sent in computing that key. But because keeping track of all the different things that could happen after the handshake is complicated, they decided to simplify this to include only the two messages exchanged during the actual extended key update. Unfortunately, because the main secret contains no information about the transcript in TLS 1.3, that meant you were actually unlinking the newly derived keys from the existing TLS 1.3 key schedule. I'm not saying this is necessarily insecure, but proving that you are not deriving new keys in an unsafe way might well have been a lot harder if they had not started tying the key derivation to the existing transcript again. There was some discussion on the list and on the pull request about the side effects this has, because you now need to keep track of where everything is going; if you're interested, I recommend looking at that. But even though this issue is now fixed, I think it is illustrative of the kinds of subtlety that can hide in a change that seems simple. Some other things the FAT brought up, partially on older versions, continue the theme that this change touches lots of individual parts of TLS 1.3 in subtle ways. Ecker, do you want to ask a question now or at the end?
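The transcript-binding fix discussed here can be pictured as a rolling hash: every new derivation consumes a transcript hash that already covers the original handshake and all prior update messages, so newly derived keys stay linked to the existing key schedule. A toy sketch; the class and message names are illustrative, not the draft's actual encoding:

```python
import hashlib

class RollingTranscript:
    """Running hash over the handshake and all later update messages."""

    def __init__(self, initial_handshake_messages):
        # transcript_0: the original handshake, ClientHello .. client Finished
        self._h = hashlib.sha256()
        for msg in initial_handshake_messages:
            self._h.update(msg)

    def extend(self, update_request: bytes, update_response: bytes):
        # Each extended key update folds its two messages into the SAME
        # running transcript instead of starting a fresh, unlinked one.
        self._h.update(update_request)
        self._h.update(update_response)

    def hash(self) -> bytes:
        return self._h.copy().digest()

t = RollingTranscript([b"ClientHello", b"...", b"ClientFinished"])
h0 = t.hash()                                # covers the handshake only
t.extend(b"eku_request_1", b"eku_response_1")
h1 = t.hash()                                # depends on h0's input too
```

The point is that `h1` cannot be computed without the original handshake messages, which is what re-links the post-update keys to the session that negotiated them.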

Ecker: Well, I wanted to ask a question about this first quote, actually. So, happy to do that whenever you want.

Tom: Okay. Let me read out a quote and what I have to say about it, and then I'll get to your questions. Sounds good. So, one of the things that has many of these subtleties is session tickets: how are you going to deal with those if you have extended key update? Session tickets get derived from the main secret, and extended key update is in some way about moving the main secret forward in time. So you probably want to throw away all of the old session tickets every time you do an extended key update. Of course, managing all of that is, again, very complicated, and those sorts of things require very careful consideration. On this issue, the authors have recently revved the text in the draft to close some gaps and to recommend that you do need to throw away all of the old session tickets every time you do an extended key update. With that, I'll give the floor to Ecker.
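The ticket concern described above falls directly out of the RFC 8446 key schedule: each ticket's PSK is derived from the resumption master secret, which is itself derived from the master secret that extended key update is ratcheting past, so tickets minted before an update embed pre-update secret state. A sketch of the two derivations from RFC 8446 Sections 7.1 and 4.6.1 (SHA-256 assumed; the placeholder values are illustrative, not real secrets):

```python
import hmac, hashlib

def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    # RFC 8446 Section 7.1 HKDF-Expand-Label over RFC 5869 HKDF-Expand.
    full = b"tls13 " + label.encode()
    info = (length.to_bytes(2, "big") + bytes([len(full)]) + full
            + bytes([len(context)]) + context)
    out, block, i = b"", b"", 1
    while len(out) < length:
        block = hmac.new(secret, block + info + bytes([i]), hashlib.sha256).digest()
        out += block
        i += 1
    return out[:length]

def derive_secret(secret: bytes, label: str, transcript: bytes) -> bytes:
    # Derive-Secret(Secret, Label, Messages), RFC 8446 Section 7.1.
    return hkdf_expand_label(secret, label, hashlib.sha256(transcript).digest(), 32)

master_secret = bytes(32)          # stand-in for the real master secret
handshake_transcript = b"CH..Fin"  # stand-in for ClientHello..client Finished
ticket_nonce = b"\x00"             # carried in the NewSessionTicket message

# resumption_master_secret hangs off the pre-update key schedule...
resumption_master = derive_secret(master_secret, "res master", handshake_transcript)
# ...and the per-ticket PSK hangs off resumption_master_secret.
psk = hkdf_expand_label(resumption_master, "resumption", ticket_nonce, 32)
```

Because nothing in this chain is touched by a later key update, an old ticket remains a usable credential rooted in pre-update secrets, which is why the draft now says to discard them.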

Ecker: So is your assumption here that the, um, long-term keys are uncompromised?

Tom: Yes. So in post-compromise security, you are not re-authenticating; at least in extended key update, or in what Signal does, you're not doing new exchanges with the identity keys, so the signature keys, or in Signal's case the Diffie-Hellman identity keys. The only thing you're assuming is that you want to recover from a situation where, for some reason, the keys got leaked. For this draft that might be because, say, you're doing kernel TLS and the traffic keys got leaked from the network card; then you do a software update, so you kick out that adversary, and then you perform, uninterrupted and unobserved, a new ephemeral key exchange.

Ecker: Right. I guess the point I was trying to make is that, from TLS's perspective, each connection is a new thing. So if the authentication keys are compromised, you could just initiate a new session with the authentication keys, and the situation is no different: the session ticket is taking the place of the authentication keys, right? And so, I agree with you that you need to remove the session tickets, but that only makes sense if only the traffic keys are compromised. Because if the authentication keys and the traffic keys are both compromised, then the situation is the same with or without the session tickets, because a session ticket is effectively an authentication key.

Tom: Yeah, I think you're capturing the subtlety here: from the ephemeral key exchange and this internal secret state of the handshake, you are minting new authentication tokens, in some sense, that you can carry over into a next connection. So if we don't throw away old session tickets, you can roll back and use those tickets as a way to bootstrap.

Ecker: Well, right, but say, for instance, that you stored the session keys in the same secure storage as the authentication keys. My point is, and I think we're in agreement, that the session keys are a replacement for the authentication keys, right? So whenever we write this up, it would probably be helpful to explain it that way, because otherwise people might get confused, and there's a bunch of back and forth on one of these issues about exactly this. Because it's really not the case, although the draft asserts it, that the authentication keys are typically stored in secure storage. They're typically stored on disk or in memory; in a software implementation, in exactly the same place as the traffic keys. So it's not really the case that that's incredibly common. There are settings, as you suggest, where it's possible to compromise one and not the other, but it's not generally true.

Tom: Yeah, I'm going to bunt that to the authors, because that part of the draft is not what we looked at. Fair enough. I do also want to note that the discussion on this session ticket issue covered a lot of this: what actually matters, where the session ticket keys live, and all of that. I think it might have been merged already, but that doesn't mean you can't open a new PR or continue discussion on a merged PR, and I think the authors are interested in continuing that discussion as well.

Ecker: Sounds good. Thank you.

Tom: Tiru, as one of the authors, did you want to weigh in a little bit?

Tiru: Yes. Thanks, Tom, for raising this important issue. In the updated draft, which I think Yaroslav is going to present in the next few minutes, we have anticipated two threat models. One is where your session keys and the key material derived for tickets are stored in different locations, and one of them is secure; that threat model is discussed. The other is where both are stored in the same location and both could be compromised. We now have mitigations and guidance for both cases. That's what Yaroslav is going to present in the next few minutes.

Tom: Okay. Thank you. Let's not continue the discussion on this particular point right now; it's probably best to get through to the end of this and then see the other things. Okay. I think the second quote on this slide, that this proposal changes the security of quote-unquote normal TLS in some way, and that the security considerations will therefore need to be updated after some analysis to reflect what you need to do if you implement this draft, essentially captures our conclusion, which I'm getting to now. And that is that we do think it is worth doing careful analysis of extending TLS 1.3 with continuous key agreement. The existing analyses don't extend easily to this new extension mechanism, and either symbolic or computational analysis could be helpful. I think we would like to see someone, and I believe there is work in progress, do an analysis of this draft and present their outcome and how they model things, and then we can make a determination whether that satisfies our appetite for analysis. I do want to state again that the FAT is basically just a bunch of volunteers from academia who give feedback to the working group on a best-effort basis, and who do not appreciate getting roped into flame wars. The FAT has also not done a proper security analysis or any sort of proof, and we can't give you a clear list of requirements saying "you need to do exactly this," because producing that list is like 60 or 70 percent of the work of actually writing a proof. We're not saying the authors' work is not good enough. I believe we've had very useful discussions with them, and they are trying very hard to get this buttoned down, because they also want it to be secure. And finally, the FAT is not a gatekeeper for the TLS working group.
We're not formally part of the consensus process; we're just here to help inform the working group as it goes into this discussion. And finally, we fully acknowledge that "just get a proof done" is a very difficult thing to ask of someone, because it often means a lot of work, and we really need to appreciate the effort people put in when they submit such work to the working group; they probably put in at least a few weeks, if not more, every time someone shows up with some sort of proof. Okay, so that is it for the FAT, and now, as the last line says, it's up to the TLS working group again. I think it makes sense to first maybe... well, actually the authors of the draft have basically already responded a little through Tiru's comments just now. So I can answer one or two questions on this and then give it to the authors.

Sean: Sounds great. Osama.

Osama: Yeah, first I want to really thank the FAT for doing all this work; I think it's really useful and valuable input, drawn from real experience. I basically want to draw attention to a specific point, namely the model Tiru was mentioning. When we use TEEs, one of the concerns, as the authors currently state in the draft, is that the TEE holds no application traffic secret, so the whole TLS key schedule is outside the TEE. My concern is: what purpose is the TEE actually serving by just keeping that private key? That's not the way we actually use TEEs. You are using the whole TEE just to protect a single key, namely the TLS private key, and no other key in the whole key schedule. Which means the application traffic key is available to the OS, which can be malicious, which can be buggy, and all that. That's not the way we actually use TEEs. So...

Tom: Muhammad, let me interrupt you for a moment. TEEs are completely out of scope of anything I talked about. They are a deployment model, and they are certainly valuable for interpreting the results of an analysis and how those results map to practical attacks, but for the purposes of modeling the security of TLS 1.3, or TLS 1.3 with such a continuous key agreement, I don't believe TEEs are relevant to the things I just discussed.

Muhammad: Okay, so the FAT has not considered TEEs. That's my summary, correct?

Tom: Absolutely not.

Muhammad: Okay. Absolutely not. Okay. Thank you very much. Thanks.

Tom: I'm not saying that they're not relevant, they're just not part of this kind of analysis usually.

Sean: All right. Victor.

Victor: Sure. You speak of discarding session tickets; I'd like a little more clarity about that. Do both sides have to somehow discard the session tickets? Is it only the client that discards them? Can you elaborate a little? Because servers being stateless is kind of the point of session tickets, so it's not exactly easy for a server to discard something it doesn't have. And if it gets rid of its decryption key, that affects lots of other clients. I'm trying to figure out what's going on here.

Tom: The session ticket has symmetric properties, not just in its symmetric-key nature but also in the way it can be used by either side. So yes, if you want to throw away prior session tickets, you need to throw away all prior session tickets on either side.

Victor: Not exactly doable if you're stateless, right?

Tom: I'm not saying this is easy. I mean, one solution could be to just disable session tickets entirely if you're using extended key update. But Tiru is here and can maybe weigh in, because he's thought about this a bit more.

Tiru: Yeah. That raises a very important point: who should discard them? What we did in the current draft is say that if you are storing your tickets and then you do an EKU, then by relying on those session tickets you end up relying on keying material that was compromised a priori. So whoever has the capability to delete them should delete them, or else, best of all, if you cannot store them securely, disable session tickets entirely. I think that was something we discussed quite a bit in the GitHub repo.

Tom: Maybe it's worth handing things over to the authors at this point if we're going to go into this level of detail on specific changes.

Ecker: Yeah, that's probably right. Okay. I can get out of line and ask this part later.

Tiru: Yeah, I think these points are already going to be covered in the next slide, so I think we can have a good discussion at that time. Thanks.

Sean: Awesome. Good. Thank you very much, Tom. I appreciate all the work that you and the other FAT people have done, as well as your warnings and reminders, because it is true that it's not great to get sucked into some of the flame war stuff. Joe. Great. Okay. Yaroslav, it's up to you. Go ahead. And I'm going to set a timer once I figure out how much time I agreed to give him. All right, 25 minutes.

Yaroslav: Hello, hello. Great. Right, so I have 30 seconds to present quite a few slides; let's see how that goes. I would like to present updates to draft-ietf-tls-extended-key-update. We've done quite a bit of work. First of all, I would like to acknowledge the collaboration with Tom: that was really great, very calm, mutually respectful, and very productive communication. Kudos to Tom and the FAT team. I will not cover the foundation of what we're doing and why. So what have we done since the last revision? We've updated the terminology; thanks, Osama, for highlighting some of the issues we had there, and things are now much clearer. Then, thanks to a suggestion from David Benjamin, we reduced the number of flights in the extended key update process. In TLS, the previous process was: request with key share, response with key share, then acknowledging the response and switching on the initiator side, and then switching on the receiver side. One message was not really necessary: the receiver side can start switching to the new secrets immediately after sending the response, which is what we now do in TLS and DTLS. So the whole process has fewer back-and-forths and takes less time. Based on feedback from Tom, we have added transcript tying: the new transcript hash we calculate now incorporates the initial transcript. Previously we calculated the transcript similarly to how exported authenticators do, over just the key update request and the key update response; now we include the previous transcript.
Transcript zero is the original TLS handshake transcript, starting from ClientHello and going all the way to the client Finished. That way we have a rolling transcript that begins at the very start, and apparently that is better for the security analysis of this proposal. Then we have updated the exporter considerations. Of course we cannot update the initial exporter secret, and the initial exporter secret could be subject to compromise even after an EKU; it is initial. So extended key update results in a new exporter secret, and you now have a choice: depending on your capabilities and requirements, you can keep using the initial exporter secret, or, if possible, you should use the updated exporter secret. The TLS/DTLS library might need to notify the application about exporter rotation so that the application can act accordingly. Along those lines, we added an update to the exported authenticator RFC to use the exporter secrets updated by extended key update. I'll talk a little about this later, but we now say that exported authenticators can be used to re-authenticate after extended key update, or that post-handshake authentication, if allowed and enabled, can be used to re-authenticate the client after extended key update. We've clarified the effects on resumption, and based on the earlier discussion there is more work to do here. Compromise of a pre-shared key obtained via NewSessionTicket prior to extended key update allows an attacker to resume the session after extended key update. So endpoints must either disable resumption, which is unfortunate but could be a valid compromise for certain deployments with long-lasting sessions; that's where extended key update is typically required.
Or they must protect pre-shared keys using secure storage or isolation, or have a mechanism to invalidate old tickets after an extended key update, so that an attacker cannot roll back to old traffic secrets via resumption. We added informal security goals: post-compromise security, key freshness, elimination of standard key update, and detection of divergent key state. And because we're now updating the transcript, we have a potential clash with post-handshake authentication. Yes, post-handshake authentication is a little exotic these days; it's explicitly prohibited with HTTP/2 and QUIC, but it is still part of the TLS standard. So if extended key update is initiated while post-handshake authentication or exported authentication is in progress, the extended key update must complete first, and then the post-handshake authentication or exported authentication should be performed using the new keys. Finally, we have two prototype implementations: one that I built on top of Rustls, using the new updated transcript, and one that Hannes built on mbed TLS; I'm not sure if that one is open source. Surprisingly enough, they happen to interoperate, and we would like to do more interop tests. So if you have a TLS implementation and would like to implement this and interoperate, please let us know. Hannes also added Promela SPIN models to the repo; please take a look if you're interested. We have two open issues right now; I'm not sure if we have the time and desire to discuss them now, but it would be great. The first is stronger post-compromise security, an issue opened by Yaron. The question is: should we add some kind of built-in mandatory re-authentication after extended key update?
The risk here is that if an attacker managed to replace the victim before the extended key update, the extended key update itself would not stop the attacker from continuing to impersonate the victim, since an attacker who has compromised the keys can complete the extended key update. Re-authentication could potentially prevent this. So should we add some sort of mandatory re-authentication here, or point, as we do today, to post-handshake authentication or exported authentication, or leave it as possible future work? I think right now the authors believe we should leave it aside and let it be explored in future work if there is appetite for that. Eric, would you like to make a comment or ask a question?

Ecker: I was going to weigh in on both these points. I'm happy to do that whenever you please.

Yaroslav: Sorry?

Ecker: I was going to weigh in on both of these points, but I'm happy to do that whenever you want.

Yaroslav: Okay, please do.

Ecker: So I don't think we should add mandatory re-authentication. As I understand it, you have optional re-authentication; these are orthogonal properties, and I think you should discuss the implications in the security considerations, say you might want to do it, and leave it at that. The "must" isn't doing anything: people can do things or not do things as they please, and the mechanism still adds value even without it, as we've been discussing. And this TEE thing is a complete red herring. It's one word in the text, and the text should just change to say that sometimes things are in secure storage and sometimes they're not. They could be in a bunch of different kinds of secure storage, and a TEE is a fine example of that, and we don't need to explain why the application keys are not in the TEE; that's just not relevant at this point. So that text can be changed easily: TEEs don't need to disappear, and there doesn't need to be a discussion of TEEs. I think all of these things can be solved with some editorial work.

Yaroslav: Excellent. Thank you very much for supporting the authors' position. John.

John: Yeah, on the first point, I don't think you need to add mandatory built-in re-authentication; I think it could be optional, but I do think you should add built-in re-authentication. Today there is no built-in re-authentication, and that means you need to change the application layer, and my understanding is that the benefit of this draft is that you don't have to change the application layer. Otherwise, if you follow NIST requirements for IPsec, it's ephemeral Diffie-Hellman after a few hours, six or four, I don't remember, and then re-authentication after 24 hours. So without built-in re-authentication you just end up capping the connection lifetime instead. Yeah.

Yaroslav: Thank you. Osama.

Osama: Yeah, Osama. Basically I want to elaborate on the point I've been trying to make about this TEE thing. I'm saying I'm perfectly fine if you completely remove the TEE, which is what I also commented: remove everything that implies a TEE is being used. Because in my head, I can't digest why you would use a TEE just for the purpose of storing that long-term key if all the other keys are available to the adversary at all times, like the application traffic key with which you are sending your real secrets; that doesn't make any sense to me. So if you remove TEEs completely from the draft, that's perfectly fine with me.

Yaroslav: Okay. Thank you.

Ecker: I guess I don't really care whether TEE is in here, but I think that analysis is just wrong. There's a perfectly reasonable reason to have the keys in secure storage: so that if the software is compromised they can't be exfiltrated, even if the compromised software can still use them as an oracle. That's the usual situation when you have the key in an HSM or whatever. Normally you put it in an HSM or TPM, and all it can really do is signing. A TEE can do much more, but you can make a TEE act like a weak-ass HSM by basically having it be a signing oracle, and that's all this text is implying. So I think that's perfectly fine.

Yaroslav: Thank you. Uh, yes, we have one more.

Felix: Yeah, hi. Felix Linker. One question on your models. You said you have SPIN Promela models. What did you try to evaluate with them?

Yaroslav: Uh, Hannes, I think that's a question for you if you're here.

Hannes: Yeah, so the reason I did those was specifically the earlier discussions with David Benjamin and his comments on DTLS 1.3. I was specifically focusing on detecting deadlocks when multiple different post-handshake messages run together with message loss and all of that; there's a lot flying around, so I thought it would be useful to do that type of analysis.

Felix: Cool. Thanks.

Sean: All right. Thanks. I think you can... thank you. Sorry for running over. No worries; it's a working group draft, it gets precedence. So you are going to go ahead and make some changes, and I guess we should make sure to update the issues to note that they were discussed and how they are going to get resolved or not. Joe, for the next one, can you go ahead and start it? I also want to start a show-of-hands poll for the drafts that are not yet working-group adopted, just curious who's read them. So you can start talking about the draft, and we'll fill these out and stop the poll at some point. So, sorry, Valery, I forgot you were presenting. Is this the right one? Yes. All right, Valery, sorry. Here, go ahead, you're up. I'll give you the clicker so you can... yeah, you got it. All right, great.

Valery: Hello. I'm Valery Smyslov. It seems that not many people have read the draft, so perhaps this presentation will be fun for you. This is the TLS 1.3 handshake, just copy-pasted from the RFC. You can see that the first message sent by the client is ClientHello, and the server responds with ServerHello. Both of these messages have hard-coded limits on the size of their internal structures. The handshake message itself can be pretty large, about 16 megabytes, but the size of each extension inside ClientHello and ServerHello, and the total size of all extensions in these messages, is limited to 64 kilobytes minus one byte. It's hard-coded, and you can see this in the structure copy-pasted from the RFC. So what if we need more space? This is actually the most important slide in the whole presentation. Please, when you comment, don't tell us that this is not needed, because we're not discussing whether it is needed. Don't comment along the lines of "I don't need it," "my company doesn't need it," or "nobody in TLS needs it, so it was never needed." Just imagine, perhaps as your next nightmare, that you have to implement it. So let's focus on how it can be implemented in TLS 1.3 in the least disturbing way; that's the point. The immediate and most obvious goal is large key shares, but we don't want to focus only on large key shares, even though most of the things people have in mind are Classic McEliece and the like. It's better to have a generic solution, because we don't know what happens in the future; perhaps we will need more space, so let's be prepared. Let's focus on how this can be accomplished, not on whether it is needed. We have three proposals in the draft.
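The limits Valery quotes come from the RFC 8446 presentation language: a handshake message carries a 24-bit length field, while the extensions vector in ClientHello/ServerHello (and each extension body) carries a 16-bit length field. A quick arithmetic check against a large post-quantum key; the Classic McEliece figure is an illustrative number I'm supplying, not one from the draft:

```python
HANDSHAKE_MSG_MAX = 2**24 - 1  # uint24 handshake length: ~16 MB per message
EXTENSIONS_MAX    = 2**16 - 1  # uint16 length: 64 KB - 1 for the extensions
                               # block, and for each extension body

# Classic McEliece 348864 public key size, as an illustrative large key share:
MCELIECE_PK = 261_120  # bytes

fits_in_extensions    = MCELIECE_PK <= EXTENSIONS_MAX     # False: too big
fits_in_handshake_msg = MCELIECE_PK <= HANDSHAKE_MSG_MAX  # True: the message
                                                          # itself could hold it
```

So the handshake message has room to spare; it is the 16-bit extension framing that pins the key share, which is what all three proposals work around.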
The first is a very simple proposal: just extend ClientHello and ServerHello and make some changes to the key share structure. This proposal is focused on key shares; it was the initial proposal in the draft. The key share becomes a more complex structure whose size depends on whether it carries an already-defined algorithm or a new KEM with a bigger size; for the bigger size it simply gets a 16-megabyte limit, two to the power 24 minus one, the same as for the extension list as a whole. So it's a very simple change. There is a proof-of-concept implementation made by Jonathan, a fork of OpenSSL I think, and it just works. It doesn't change the TLS state machine, which is good, and there are no additional round trips if the server supports this extension, which is also good. But the main drawback is that it is not backward compatible: the client needs to know beforehand that the server supports this extension, because otherwise it doesn't work at all. That may be okay for a closed environment, but not so okay for the open internet, or even for other, somewhat more open networks. And yes, it is only for key shares. It is not clear, at least to me, how it will interact with Encrypted ClientHello. It is also not clear to me how it will interact with middleboxes, because I have been told numerous times by TLS people that middleboxes are very important, that nobody knows in detail what they are doing, and that they might drop anything. They do deep packet inspection, so TLS must not change the messages dramatically if it is to stay friendly to middleboxes. Proposal number two was suggested by John Mattsson: do something similar to what IKEv2 does.
So, utilize extended key update. Currently, extended key update only allows you to perform additional key exchanges with the already-negotiated key exchange algorithm, but it could be extended to negotiate an additional key exchange. In that case we can do an extended key update immediately after the initial handshake with the new KEM and combine both shared secrets into the session secret. It looks a bit complicated: we have the initial handshake, which is unchanged, and immediately afterwards an extended key update with a large key share. The good thing is that there is no modification to the handshake messages at all, so no problem with middleboxes, because extended key update messages are encrypted and middleboxes don't know what happens inside. And it's backward compatible. The drawbacks are that the TLS state machine becomes extremely complex, and, as Tom mentioned for extended key update, there may be security considerations, since we change the way the initial session secrets are computed. So it's a very complex solution. It's also applicable only to key shares, which is not good. And again, it's not clear to me how it will interact with extended key update itself if we want both, in and out of TLS, to perform this kind of trick with an additional key exchange. The third proposal is to define a new handshake message, call it AuxHandshakeData, that comes right after ClientHello or ServerHello and contains chunks of data too big to fit into those messages. Its presence can be negotiated with a HelloRetryRequest. If this message is used for key shares, for example, the chunks will contain a large key share, and the key_share in ClientHello will just reference the appropriate chunk in this message.
So ClientHello will not contain the key share itself; it will contain a reference to the message that follows ClientHello and that carries the key share. That's the diagram of how it can go. It is a generic solution, which is a good thing: not only for key shares but for anything. There is no modification to the ClientHello and ServerHello messages, so hopefully it will simplify interaction with ECH, and middleboxes that look only into ClientHello and ServerHello will hopefully ignore the new message. I hope so, but with middleboxes it's never completely clear what they're doing. It is backward compatible, and the remaining problem is all this HelloRetryRequest business, but perhaps that is not a big problem. So that's all that's in the draft, and there is already one proposal from Eric on the list. We are not asking for adoption at this point, just collecting opinions on how this can be done in the least disruptive way for TLS 1.3. Any comments? Yaroslav.
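As a rough illustration of proposal 3 (the field names, chunk identifiers, and encoding here are hypothetical; the draft does not specify a wire format in this form), the idea of referencing an out-of-message chunk can be sketched as:

```python
# Toy model of proposal 3: a large key share travels in a separate
# AuxHandshakeData message, and ClientHello carries only a reference.
# All structure and field names below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class AuxHandshakeData:
    """New handshake message following ClientHello/ServerHello."""
    chunks: dict = field(default_factory=dict)  # chunk_id -> large payload


@dataclass
class KeyShareRef:
    """A key_share entry that references a chunk instead of
    inlining key_exchange (which is capped at 2^16 - 1 bytes)."""
    group: str
    chunk_id: int


def resolve_key_share(ref: KeyShareRef, aux: AuxHandshakeData) -> bytes:
    """Look up the large key share carried outside ClientHello."""
    return aux.chunks[ref.chunk_id]


# A ~1 MB Classic-McEliece-sized public key fits in AuxHandshakeData,
# while the ClientHello-side reference stays tiny.
big_key = bytes(1_044_992)
aux = AuxHandshakeData(chunks={0: big_key})
ref = KeyShareRef(group="mceliece6688128", chunk_id=0)
print(len(resolve_key_share(ref, aux)))  # 1044992
```

The point of the indirection is that ClientHello itself stays within today's limits, so middleboxes and ECH logic that parse only ClientHello/ServerHello see nothing new there.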

Yaroslav: Right. So, um, I think the best approach is to do this kind of thing post-handshake. You keep the original handshake as it is, you negotiate X25519MLKEM768 or whatever, you indicate with a TLS flag or some extension that you support this kind of thing, and then in a post-handshake message, a brand new shiny post-handshake message, or some kind of extension to the extended key update message, you do an additional exchange with whatever you need to happen. That way it would be most compatible with all sorts of weird things that might be in line.

Valery: Yes, we also think it's compatible. The problem is that it's a bit complex, and after listening to Tom's presentation it's also not clear to me how cryptographers will like this approach. But anyway, okay. Thank you. Eric.

Eric: Yeah, I mean, so per my review, and echoing David Benjamin in the chat, I think this is an interesting intellectual problem and one we should not try to solve. The entire motivation for this, the only real motivation, is a bunch of giant PQ algorithms with giant keys, and I've not heard any motivation that we really should do that. It is going to be quite disruptive; all of these things are disruptive. So I think we should just say this is an interesting design study and put it on the shelf for a while.

Valery: Uh, well, I asked you not to comment on whether we should or should not. That's how...

Eric: But I mean what is the point of this discussion then?

Valery: Let's imagine we are facing this problem.

Eric: No, but why? "Let's imagine." No, the question is how the working group should spend its time, and the working group should not spend its time solving problems that don't exist.

Valery: Okay. Perhaps imagine you have a dream, a nightmare, that you have to implement it. How would you do it?

Eric: But as I said, we have some design studies, and now you're asking us to spend working group time trying to solve a problem, and I'm telling you that I don't think this is worth working group time.

Valery: Well, okay. I, I, I see. Okay. I guess I...

Sean: Fair enough. So I guess the other backstory here is that, um, this draft did originally go to the ISE. The ISE came to the chairs and we said, "Well, can't do that. Um, you need to come and, uh, talk to the TLS working group." So, um, I just want to put that out there.

Eric: Okay, but not everything is suitable for adoption in this working group. Things come here and people are interested or they're not, and I'm saying we should not do it. If other people want to do it and there's support for it, then let's have that conversation. But I don't see anybody getting to the mic to say they want to do it, so I think that's your answer.

Valery: Okay. Thank you. I got you.

Sean: Okay, thanks. Uh, we'll take it to the list. Next is, I believe, Osama.

Osama: Right. Should I start?

Sean: Yeah, you should. You should have control of the slides and all.

Osama: Yeah. Okay. I got it. Perfect. Thanks to the chairs for this opportunity. I want to talk about some of the issues we have specifically on the formal analysis side, and some proposals that we think would help things move towards better cooperation. Starting from genomics and health data, which GA4GH is most interested in. Of course there are other applications; I'm not saying that's the only one, but formal analysis is really a natural fit there because the requirements are highly stringent, and formal analysis can really help. And it's easier to start at the beginning rather than at the end: if at working group last call you find that, "Hey, actually this was missed and the threat model doesn't even make sense," that's quite late in the process. That's why we propose that formal analysis be done as early as possible. The scope of this document is focused only on those documents which need formal analysis. Now, that sounds like a circular argument: which documents actually need formal analysis? That's an open question, and I'll come to it at the end. The last time I presented this, the argument was, "Hey, we don't actually need this for all the documents," so I want to make clear that this is not meant for all the documents of the working group; keep that in mind. The main idea is that we want open-source, reproducible, and extendable proofs: if a specific mechanism is extended in the future, the same proofs can be used as-is and then extended, rather than being built from scratch each time.
I have three specific points: a proposed best-practices template for authors, a proposed FAT tracking process for the chairs, and a few proposals, some of them modest and some of them not; I'll explain what that means. I will use the term "verifier," and I want to clarify it. It is not a gatekeeper, as Tom already mentioned: we are offering our good-will efforts towards the best security considerations, and the working group itself is free to take that, drop that, or even trash it. It's also not a role in the working group process; we are members of the working group, contributors like anyone else. For lack of better terminology, I'm just using "verifier" as shorthand for the team doing the formal analysis. As a supporting point: for 8773bis, even after two working group last calls and after going through the FAT process, it was not detected that the key schedule itself was not correct. And this is really the kind of support we need from authors: when we propose something, it gets discussed, acknowledged, and merged. We really want to acknowledge the cooperation from Russ here; he acted promptly and merged it within a week. That's ideally what we would like to see.

Osama: So, how authors can help us help them, that's the point here. We have four basic requests for authors. First, the motivation, which is very critical: if you are doing something without motivation, that's questionable in itself, independent of formal analysis. If we are spending working group energy on something that does not need to be done at all, that is itself questionable. We would like to see compelling arguments and authentic references that people have actually built on this, or that it's a direction one would like to see integrated into the TLS protocol. Second, a realistic threat model. There has been a lot of discussion on extended key update just now; you have already seen that from a TEE perspective it doesn't make sense to me at all. That is something that needs to be fixed early on. Third, informal desired security goals. Every author can at least give us some kind of starting point; we are not asking for anything formal, just the informal security goals they expect to get out of the draft they are writing. From those we can gradually transform them into statements of what the mechanism in the draft actually achieves, and how far that is achievable. And fourth, we propose two kinds of diagrams: a protocol diagram, showing "this is the client, this is the server, these messages are sent, and this is what happens," and a key schedule diagram if there are any changes compared to RFC 8446, for example.

Osama: Another thing is the template itself, which follows the pattern of what we would like to see. It's non-binding and non-normative, meaning you don't have to follow it, but we would like to see it, because it makes things very clear. You have introduction, terminology, and then motivation based on that terminology. You could put the motivation in the introduction itself, but sometimes there are very specific terms and then it's better like this; we have no strong opinion on that. What we are saying is that all the components mentioned here should be present, not necessarily in the same order or pattern. But it would be very nice to have such a pattern so that readers, reviewers, the formal analysis team, and everyone else can see some symmetry across documents. Protocol and key schedule diagrams go in the proposed solution; that could be one or more sections. In the security considerations, if you have a threat model, a clear section or subsection for it would be really helpful, along with your desired security goals, which, as I said, we will transform into the actually achieved security goals, and then other security implications or considerations. Of course these won't necessarily carry exactly these names; the point is that these are the necessary parts.

Osama: Coming to the ask to the chairs: I would like to see a little more transparency in the process. Namely, something like what I proposed in PR number 16: for each document, list the assigned FAT person, the decision email that was sent to the mailing list, and the initial report. Then, when updating the page for a new document, one immediately sees that, say, the decision email or the initial report is missing and needs to be done. It's a template with which the chairs can see when something in the process has gone missing. In other words, I am volunteering to help the chairs maintain this repo. Of course, I would not participate in the decisions themselves: once they decide, they can tell me and I can put it in there, sharing some of their workload, as I imagine they are very busy with other things. We also have a separate list for documents not reviewed. There are quite a few missing, but this is just a template, with current drafts, completed drafts, and drafts not reviewed by FAT, so that it's transparent what has gone through the FAT process and what has not; I'll come to a slide explaining why not.

Osama: Now some of the proposals, the modest ones, so to say. The process should be as transparent as possible to the working group. That means working group consultation, which is already in the process, not something I'm cooking up: it already says that decisions have to go through working group consultation and information. I would like to see that actually happening, rather than just being written down. FAT review may also help guide or resolve contention in controversial drafts. That is, I think, really the purpose of formal analysis, at least in my mind: to help resolve these contentions by giving you more robust guarantees and mathematically sound formal arguments, rather than arguing off the top of one's head. Active engagement from the authors is requested. That means that if authors are not responding to our questions within a reasonable time frame, let's say a few weeks if not months, we simply cannot pursue their draft. Just as draft authorship is a voluntary service, so is ours; we are not bound to lead any specific draft to completion, even if we have started the work on it. For us it is already a loss if we spend effort and cannot publish, but it would be unfortunate from the working group's side as well.

Osama: Some not-so-modest proposals, meaning we have quite a strong opinion on these. First, the TLS working group needs to encourage early consultation from other working groups, specifically those that do not have a majority of TLS working group participants, like the SEED working group, for example. If such a group is going through major changes beyond what was agreed at chartering, and wants to re-charter and make changes to the TLS protocol, or the key schedule, or whatever is explicitly allowed in their charter, there has to be an early consultation process for them to go through. Secondly, I really have strong resistance against the rule that says you cannot contact the FAT directly. If I were doing the same draft outside the TLS working group, I could just contact them: "Hey, I have run into this and I need your help." This really limits our ability to work positively. It should be encouraged instead; we are fine having the chairs in CC if there are any concerns. The formal methods community is really small, and we have limited folks with knowledge of TLS; who else do we contact? This is a real bottleneck for us.

Osama: And the feedback we get on the list is very limited. I think there are only two or three people, like Eric, John, and so on, who respond from the formal perspective, which is helpful for us, and not much otherwise. There are two points I would specifically like to discuss. One is probably already answered by Tom: my understanding was that computational security analysis is not thought to be in scope for FAT. The second is which drafts need or do not need formal analysis, and why; I think the rationale has to be clearer to everyone. Tom, please go ahead.

Tom: Just quickly step in here. Computational analysis is absolutely in scope.

Muhammad: Oh, that's what I heard, what I thought you were implying in your presentation. Okay, if it's in scope, that's fine.

Tom: No, the first slide in my slides said that extending the existing computational models to EKU would be a lot of work, it's not trivial, but it is worth doing. And separately, analysis in a tool like Tamarin or ProVerif could also be helpful to understand how that extension goes with EKU. So either...

Muhammad: Let me explain how I read it. I got the impression from your discussion of ML-KEM, not from the EKU part, that you were implying that computational security analysis is not part of the FAT process.

Tom: Analysis of cryptographic primitives, which is cryptanalysis, which is an entirely different ball game.

Muhammad: Okay, I see. That's out of scope.

Tom: Yes. Cryptographic primitives we're not going to do; we are not going to assess, say, the core-SVP security of ML-KEM. But a computational analysis, that is, a pen-and-paper proof, for the non-protocol people, is absolutely in scope.

Muhammad: Okay, okay. Thanks.

Sean: Okay, guys, we're a little over, so let's keep it brief. Thanks.

Felix: Yeah, hi, Felix Linker. On your second question: working group consensus seems to be a good measure for that, no? If the working group would like to call in the FAT, they can. That seems lightweight, and I guess you need to go through that discussion anyway.

Muhammad: I'll quickly expand on that. Those drafts have not gone through the working group process, and the process says that the chairs make this decision, not the working group. So there is something that needs to be clarified in the...

Felix: Yeah, chairs do the calling, right? But I, I mean, I guess they listen to the working group whether it's required or not, yeah.

Sean: Mm-hmm. I mean, it's going to get FAT review before it gets out the door, so...

Eric: Yeah, so, I think there's some material in here that is good and some that is not. The idea of having a dashboard and transparency about what is happening is good. The attempt to construct some sort of special role for this verifier, which you say you're not doing but which it totally does, is quite unfortunate, and we should not do it in any way. Maybe people should be able to email the FAT, but the idea that only verifiers should speak up on the list, that's bad news; it creates even more weird structures. Similarly, I think this first bullet is also not very good. We have enough trouble getting attention for things that are actually important to TLS, and the whole purpose of having these other working groups is that TLS doesn't have to engage with them. Trying to turn that into a new obligation for TLS is also bad news. So the short version: I think the transparency part is good, and the rest of it less good.

Muhammad: Okay, I will quickly respond to it if chairs allow me.

Sean: Sure, you got a minute.

Muhammad: Okay. On the first point, I would say it's not going to happen on a daily basis; it would be a very rare case. We just need a very small process: if you need to make prominent changes beyond what we have allowed you in the charter, this is the process, come to the chairs, talk to the chairs, and the chairs will do whatever is needed. That's pretty much it. It's not going to happen daily, so I don't think it will take any working group energy. On the second point, I strongly disagree with you. The kind of discussion I would like to have with the FAT might not even be interesting to the whole working group. And from the FAT person's own perspective, they would not like to get a hundred emails from the mailing list, with this person saying this and that person saying that; how would they keep track of it? I think that would greatly increase their burden. Maybe a limited number of people from the working group could be involved, a very small design team, but I wouldn't open it up to a mailing list which is open to everyone. We really are facing this problem of whom we actually talk to and how we get these things resolved, and we need some way out.

Sean: Okay, uh, thanks. We're going to move over and do some ECH talk now. Thank you very much, Joe. Great. Okay. Yaroslav, it's up to you. Go ahead.

Yaroslav: All right, thank you. So yeah, let's talk about ECH and the HTTPS resource record. I want to see these implemented in the various libraries, but library developers have this question: "Is it safe to enable those things by default?" You don't want to break connectivity or negatively affect performance. That has been a common question, and we need to answer it so that these features can actually be enabled by default. We don't want this kind of thing to be opt-in; at worst it should be opt-out. So, is it safe? We ran a series of tests to try to answer some of these questions. How does waiting for the HTTPS resource record affect performance and connectivity? Performance here is purely latency. And does enabling ECH by default break connectivity? Do services break when they don't support ECH and receive ECH GREASE? Do networks block ECH? That can be a problem too, and those are the things that have been on library developers' minds. Let's start with the DNS test we ran. The DNS test was essentially about the HTTPS resource record: we queried the top 10,000 domains from the Tranco list, which tries to rank the top domains. We queried the A, AAAA, and HTTPS records, measured the response times, did that three times to mitigate variation and possibly caching, and plotted the distribution. We have better graphs, but the idea is that the performance of those record types is not very different, which was a good sign. But I wanted to look at it a different way, so I introduced this concept of HTTPS delay, which is the timing of the HTTPS resource record minus a Happy Eyeballs baseline. The Happy Eyeballs baseline is when you would start the first TCP connection if you were using Happy Eyeballs v2.
This takes into account the timing of the A and AAAA responses and the resolution delay, which is 15 milliseconds if you get the A record before the AAAA, just to make it more reflective of the actual connection process. Then I plotted this HTTPS delay and its distribution. You can see that in about 60% of these runs the HTTPS record actually beat or tied the regular connection, which was a good sign. And you can see at the bottom that there were a bunch of cases where the HTTPS resource record arrived before you would even start establishing the connection, which means that if you have IP hints, you can actually reduce latency in those cases. On the other end, when it increases latency, the increase was small: only about 6% were more than 50 milliseconds, and only about 3% more than 100 milliseconds. And just to be clear, this is only for the domains where the HTTPS response arrived without an error. If you look only at the cases where the domain supports H3, that is, advertises ALPN h3 and so supports QUIC, that chart is even tighter; the difference was really minimal for those domains, and you are saving an entire round trip with QUIC, so there is a significant latency reduction there. One important point is that the HTTPS delay only really becomes a penalty if the HTTPS record takes longer to come back than the AAAA response plus the TCP connection, because if you get it by then, you can still apply the config to the connection being established, if the endpoint matches. And often they will match, and then you can say, "Oh, this TCP connection, I actually want to use H2."
Or, if the config says H3, you can wait for that connection, or you can just go and start the H3 connection, which saves you a round trip too. So essentially, the HTTPS resource record will only really give you a penalty if it's slower than the other records by more than a round trip, the time of the TCP handshake. So, should we wait for the HTTPS resource record? There is no, or only a small, latency penalty; you can actually reduce latency by avoiding the TCP round trip if you use QUIC, and also by using IP hints. However, there are some very significant domains that do not respond to HTTPS resource record queries at all: you can wait as long as you want, you're not getting an answer, they just time out. That's a problem, because you can't just wait in those cases; you'll run into the application timeout. So we need to fix those domains. I don't know if people have contacts; the .gov domain in the US is very problematic, for example: a bunch of domains under .gov just don't respond to HTTPS queries, and I've tried hitting the authoritatives directly and it still doesn't work. We need to get those domains fixed. So because of those domains, my recommendation is not to wait indefinitely: you have to cap the wait for those domains. Ideally you wouldn't have to, and the domain owners should really fix the authoritatives.
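The HTTPS-delay metric described above can be sketched in a few lines. The function and variable names are my own, and the 15 ms resolution delay follows the number given in the talk (Happy Eyeballs v2 itself recommends a configurable resolution delay):

```python
# Sketch of the "HTTPS delay" metric: how much later the HTTPS RR answer
# arrives relative to the moment Happy Eyeballs v2 would open the first
# TCP connection. Negative values mean the HTTPS RR beat the baseline.
def happy_eyeballs_baseline(t_a: float, t_aaaa: float,
                            resolution_delay: float = 15.0) -> float:
    """Time (ms) at which the first TCP connection would start:
    immediately on the AAAA answer, or after a short resolution delay
    if the A answer arrives first."""
    if t_aaaa <= t_a:
        return t_aaaa
    return min(t_aaaa, t_a + resolution_delay)


def https_delay(t_https: float, t_a: float, t_aaaa: float) -> float:
    """HTTPS RR arrival time minus the Happy Eyeballs baseline."""
    return t_https - happy_eyeballs_baseline(t_a, t_aaaa)


# Example: HTTPS answer at 20 ms, A at 18 ms, AAAA at 40 ms.
# Baseline is 18 + 15 = 33 ms, so the HTTPS RR wins by 13 ms.
print(https_delay(20.0, 18.0, 40.0))  # -13.0
```

As the talk notes, even a positive delay only becomes a real penalty once it exceeds the TCP handshake time, since until then the config can still be applied to the in-flight connection.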

Ecker: Um, can I ask a question actually?

Yaroslav: Sure.

Eric: So, um, I mean, these authoritatives are just flat-out defective, right? Because they should be NXDOMAINing the RR, right? Okay, that's what I thought. Can you send me that list privately? I may be able to reach out to some of these people.

Yaroslav: That would be awesome. Yeah. All right. Move on, ECH GREASE test. So...

Sean: One sec, Victor jumped in the queue too.

Victor: Just briefly: all the .gov contacts are published; you can download them from the CISA website. There's a huge list of all the technical contacts for .gov, if that's your concern. And while I'm here, about the client locale from which you were measuring: were you in a well-connected environment, or did you try this behind various oddball routers and hotels and all that?

Yaroslav: No, I did this measurement from a residential network and it worked fine. When I tried on a corporate network, I actually got timeouts and throttling, because I think they analyze every packet, but in this case it went very well. Okay, let's move on to the ECH GREASE test. We want to know whether GREASE will break domains. The test was again for the top 10,000 domains in the Tranco list. We fetched each domain over HTTPS at the root path twice, once with ECH disabled and once with ECH enabled with GREASE. We used an ECH-enabled curl, because there's not a lot of support for ECH GREASE in the libraries, which is kind of a challenge here. But the DEfO team that was adding ECH to OpenSSL had versions of OpenSSL and curl that I built, which allowed me to do this test. And there were pretty much no adverse effects. On my first run I saw 26 domains that showed differences between control and experiment, various different errors, but as I retried those domains they succeeded, so I think they were just transient issues. So for this top 10,000, GREASE didn't really break connectivity. Some showed different handshake times, but I don't think that was significant, just 2.35%, and maybe it was just variability; I didn't run it multiple times, because the result was already that they all connected.

Yaroslav: And the third test we ran was the network test, testing ECH on multiple networks. We used a proxy network called SOX and iterated over all the mobile networks they support in each country, trying to fetch www.google.com via the proxy, again twice: once with ECH disabled, once with ECH GREASE, using the same curl binary. We covered 878 networks in different countries, and we only saw 0.34% of cases, really just three networks, where ECH GREASE failed but no-ECH succeeded. I still need to do some more investigation there, but in general it was a good sign. There was a good chunk of networks we were not able to properly test, about 22.9%, where both the control and the experiment failed, which reduced our potential coverage. Maybe that was instability with the proxies or rate limits; they have those. We also saw blocking of google.com in China, so we were not able to test on the Chinese networks, but we know that in general ECH is not blocked in China. So those were the results. The conclusion is that you should be able to enable the HTTPS resource record and ECH by default in your networking libraries, provided that you cap the HTTPS resource record wait for now, until those top broken domains are fixed. We already have the tools in a repository on GitHub to collect the data, so you can reproduce these tests yourself; it would be great if other people did that. The reports are not there yet because we are cleaning up the Python notebooks, but I should have the analysis there too in the next couple of weeks. So that's it for me. Hopefully we'll see more implementations of ECH and the HTTPS resource record out there.
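The network-test percentages quoted above are easy to sanity-check (the variable names are mine; the counts are the ones stated in the talk):

```python
# Back-of-the-envelope check of the ECH network-test numbers.
networks_covered = 878
grease_only_failures = 3      # ECH GREASE failed where plain (no-ECH) succeeded

rate = grease_only_failures / networks_covered
print(f"{rate:.2%}")          # 0.34%, matching the figure in the talk
```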

Sean: Yeah, no, this is great. I don't know if anybody else wants to jump in. I think these are encouraging results, that the world is not falling apart. This would actually be pretty interesting, because I know there was an OPS document being written to say lots of interesting things about ECH. So I don't know if anyone else wants to get in and say anything, other than to say thanks for doing all the hard work on these measurements. Victor.

Victor: Sure. Just some resources for DNS measurements: the Atlas network is generally fairly useful. It's a vast network of probes through which you can proxy your DNS queries and measure how they behave across the world. So look into Atlas probes if you haven't done that. And of course, you'll want to make measurements from behind various kinds of home routers and the like, although some of those may in fact be covered by the Atlas network, so that's a good thing. I also pasted into the chat a link to the CISA repository where you can get all the .gov contacts if you want to pursue that; you can reach out to them. I've had some interactions with a few of the government departments on some DNS topics, though not HTTPS...

Yaroslav: But do they have the contact information there?

Victor: Yeah, yeah. Each line in the data contains contact information; it's essentially a CSV of all the federal and state domains, if you need them. But I'm also skeptical about HTTPS being broadly available across all networks, beyond authoritatives which don't cooperate, because I expect more problems on the access side, basically the last mile: when you're behind a very obsolete home router, or a hotel, or whatever. That's where I would expect to see problems, more than at the authoritatives.

Yaroslav: I see.

Victor: So even if your residential network is good, lots of others might not be. So the measurements really need to be done with care. I'm skeptical that these really cover enough ground.

Sean: Awesome. All right, thank you very much. I'll kill this timer and start the next one. I also started a poll, a show of hands to see who's done this. This is another ECH-related thing. Nick and Dennis and Alessandro have presented a couple of times, but over to you, Dennis.

Dennis: Thanks, Sean. Yep, so I'm presenting the signed ECH draft, which is an extension to ECH, and the core goal is basically to make ECH easier to deploy. At the moment, when you're thinking about deploying it on the server side, you need to choose an outer SNI, a cover name, which you'll need to go and register, get a valid TLS cert for, and then provision your server with. And that's the name that will appear on the wire to middleboxes and so on. What we'd like to do is basically cut away most of that: you can just pick an outer SNI and update your configs. You're not going to need to register a domain, and you're not going to need to get a TLS cert or configure anything.

Dennis: To talk through how that works: ECH has a happy path where the client and server are in sync with each other and the DNS information is up to date. In regular ECH, you go to DNS, you grab an outer SNI, you grab a public key, encrypt some stuff, and send it over to the server; the server decrypts it, and you authenticate all of that with a valid TLS certificate for the inner SNI. With signed ECH, we're not changing any part of that flow. There's a little bit of extra information we're putting in the config, but it works just the same: the server decrypts the extension and it's authenticated, again, by the TLS certificate for the inner SNI.

Dennis: But there's also the unhappy case, when the server and the DNS record are a little bit out of sync with each other. In that circumstance, there needs to be a recovery path. The way that works today is that the server can't decrypt the ECH extension because the config's out of date, so it provides a fresh config and authenticates that with a valid TLS certificate for the outer SNI. And that's the bit we'd like to change.

Dennis: So what we're proposing to do instead is to take a signing public key, take the hash of it, and put that in DNS alongside the existing ECH config. Then, when we do go down the retry route, we'll send back just any old TLS certificate: it could be for a default domain, it could be a completely invalid certificate, it doesn't matter. But in the EncryptedExtensions message, there'll be an ECH extension with the fresh config, the full public key that matches that hash from the DNS record, and a signature over the fresh config. This allows us to authenticate the fallback config just the way we authenticate the existing fallback config in ECH today. To remind you: we only authenticate the existing fallback config based on the outer SNI, and with this change we're authenticating it with the hash of a public key, but that's stored right next to the outer SNI, so they enjoy the same properties.
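A rough sketch of the retry-path check just described, with the actual signature scheme left abstract. The function names are made up, and the `toy_sign`/`toy_verify` pair is a runnable stand-in only; a real design would use an asymmetric signature such as Ed25519:

```python
import hashlib
import hmac

def pin_from_dns(signing_pubkey: bytes) -> bytes:
    """What the DNS record carries next to the ECH config: a hash of the
    server's signing public key, not the key itself."""
    return hashlib.sha256(signing_pubkey).digest()

def accept_retry_config(pinned_hash: bytes, presented_pubkey: bytes,
                        fresh_config: bytes, signature: bytes, verify) -> bool:
    """Client-side check on the retry path: the full key sent in
    EncryptedExtensions must match the DNS-pinned hash, and the signature
    over the fresh config must verify. `verify` stands in for a real
    asymmetric signature check."""
    presented_hash = hashlib.sha256(presented_pubkey).digest()
    if not hmac.compare_digest(presented_hash, pinned_hash):
        return False
    return verify(presented_pubkey, fresh_config, signature)

# Toy stand-ins so the sketch runs; NOT a real signature scheme.
toy_sign = lambda key, cfg: hashlib.sha256(key + cfg).digest()
toy_verify = lambda key, cfg, sig: hmac.compare_digest(sig, toy_sign(key, cfg))

pk, cfg = b"server-signing-key", b"fresh-ech-config"
assert accept_retry_config(pin_from_dns(pk), pk, cfg, toy_sign(pk, cfg), toy_verify)
```

The point of the shape is the one Dennis makes: the hash in DNS plays the role the outer SNI plays today, so the fallback config gets equivalent authentication without a valid certificate for the cover name.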

Dennis: In terms of why we think this is interesting: effectively this is a slightly different privacy model. With regular ECH, you're looking at a lot of different websites and you want them all to coordinate on the same fixed outer SNI. With signed ECH, you can go a different way and just tell everybody to use a random string, as long as they all generate it in a roughly reasonable way. You can even do this on a per-connection or per-DNS-lookup basis, because you can change that SNI as much as you want; you don't need to register a domain or do anything else. We think this makes it easier to deploy for server operators and more robust on the network.

Dennis: In terms of easier to deploy, you could imagine a TLS server like Caddy. Caddy ships with ECH support out of the box, but it's not turned on server-side because you need to provision it with a domain that you've got to register, and various other bits and pieces. With signed ECH, Caddy could just turn on ECH server-side and switch you over to using randomized SNIs, with no further configuration needed by the user.
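One way an operator might mint the per-connection randomized SNIs mentioned here. The suffix, label length, and function name are all illustrative assumptions, not anything from the draft:

```python
import secrets
import string

def random_outer_sni(suffix: str = "example.net", length: int = 12) -> str:
    """Mint a fresh, DNS-safe outer SNI for each connection or DNS lookup.
    `suffix` is assumed to be an operator-controlled zone that resolves to
    the right place, so middleboxes that re-resolve the name still work."""
    alphabet = string.ascii_lowercase + string.digits
    label = "".join(secrets.choice(alphabet) for _ in range(length))
    return f"{label}.{suffix}"

print(random_outer_sni())  # e.g. "k3f9a0q2mzp1.example.net"
```

Using `secrets` rather than `random` matters here: a predictable label generator would itself be a fingerprint.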

Dennis: Then on the CDN side, at the moment ECH has this fate-sharing property where you're asking customers who have their own domain name, "Do you want to share a domain name with everybody else, one that might be very easily blockable?" And most of them, I think, are pretty uncomfortable about being mixed in with other sites' traffic. But if you tell them you're going to use random, unique, non-shared SNIs, and it's going to improve privacy but not actually commingle them with other people, I think the answer might look a little different.

Dennis: In terms of fragility and robustness: today a lot of CDNs are using basically a fixed string for their cover-name SNI, like cdn-ech.com, and there are already middleboxes widely deployed all over the world that just have a huge blocklist of SNIs, so this is very, very easy to add to. With random SNIs there are still ways to block this, but they typically revolve around things like actively probing the site to see if it has a valid certificate for that SNI, or doing your own scanning to maintain a list of domain-to-IP mappings so you can work backwards to who they might be connecting to. It substantially raises the bar, but this is still a cat-and-mouse game; it's not going to completely make this issue go away, it's just changing the dynamics of who can deploy blocking and what the economics of that deployment look like.

Dennis: In terms of where it is now, Nick's developed some interop code in Rust, Go, and C that parses and builds these configs and so on. We recently sent it to the mailing list and got some great feedback, from Ben Schwartz in particular. I think we've got at least one more round of trimming this draft and making it shorter and simpler to go. With that in mind, if there are questions or feedback on this approach, it'd be great to hear them.

Yaroslav: Yeah, Yaroslav. So first of all, I think research in this area is extremely important; thank you very much for doing it. Yes, right now it's super easy to fingerprint based on the cover name. My big concern with this approach is specifically the randomized SNI: if you use a completely random, non-resolvable string as the cover SNI, you're going to have a bad day. There are cloud services that look at the outer SNI, do their own DNS lookup, and replace the destination IP address with the result of that lookup, for all sorts of performance-optimization services. And there's a non-trivial number of enterprise networks that do things like destination-NAT everything towards a proxy, where the proxy figures out where the traffic actually goes by looking at the Host header or the SNI. So I think there at least needs to be some guidance in the text saying that it cannot be just a completely random string; it has to resolve towards where it's supposed to go. Also, if it's random and non-resolvable, you could fingerprint it: try to resolve it, it's non-resolvable, okay, that's ECH, I'm going to drop it.

Dennis: Yeah, I elided that kind of detail from this, but it basically allows the operator to use whatever strategy they'd like, and yeah, giving some guidance there might be the best thing we can do.

Ecker: Hi. I think this is interesting work. I'm not sure how persuaded I am by your argument about robustness against attack, but I am persuaded by your argument about robustness against fragility, so I don't think we have to fight about the other one. I guess I'd like to hear about the trimming that's going to happen, so I can know what I should be looking for.

Dennis: Yeah. At the moment the draft has two different authentication mechanisms in there, one built around raw public keys and one around a special type of certificate. I think the general feedback has been to cut down to just the raw public keys.

Ecker: I concur with that. I think the certificate one is probably not good; it really seems like it's got a chicken-and-egg problem. So I think the raw public keys is the stronger design. Great, thanks. Rich.

Rich: Yeah, hi. Pardon me. The shared fate: when you split out and say, "Everyone, you don't have to share fate with anybody," doesn't that also reduce the anonymity set? Or are they able to make those orthogonal?

Dennis: Yeah. So at the moment you're combining everybody onto one SNI, and if that SNI is blocked, everybody suffers. With this, they're all going to be on different SNIs. Now, you could imagine a policy that tries to distinguish real from fake SNIs, and then maybe you have an issue. But yeah, you're not sharing the fate of a particular SNI, but you are in the same anonymity set and sharing the same thing: if they IP-block everybody, you are all still suffering.

Rich: So if the fake SNI is resolvable, then someone will be able to go and see, "Oh, these are all the same." In other words, not an attacker but a nation-state agency could go through and see that they all have the same ECH key. And you end up sharing the same fate anyway.

Dennis: Well, they wouldn't see the ECH key, but what they could do, for example, is scan a broad range of domains, identify those with ECH configs, and then ban all the IPs associated with those. And that's something that's true in either draft. Yeah.

Rich: Got it. Okay, thank you. But I think this is interesting; you should keep working on it. Thanks.

Victor: What happens when that longer-term key is stale?

Dennis: Uh, the same thing that happens when you've got a stale SNI today, which is to say you're going to get a connection error.

Sean: Okay, thanks. Again, I'm kicking off a poll. Yaroslav, thank you. Sorry your time got cut down to the last eight minutes here, but let's do it.

Yaroslav: Thank you. Okay, so the final presentation of IETF 125, clearly the best and most important one: workload identifier origin hint. It was presented a few IETFs ago, and there are a few updates we've made to this draft. Basically, mutually authenticated TLS is awesome: it has really nice security properties, it doesn't affect the application layer, and it's great for bots, workloads, certain proxy setups, OT and IoT devices. But it's really not great for generic web browsers. They give the user some weird UI such as "insert smart card," and users have no idea how they're supposed to react to that. So what ends up happening is that people who want to use mTLS have to create a separate endpoint, a separate SNI, specifically for mTLS, and a separate one for everything else, which is unnecessary complexity: you need more certificates, and it's ugly. This applies, of course, to things such as APIs; here's a relatively recent example from the OpenAI API. It also applies to things such as MASQUE proxies: if you have some kind of CONNECT-IP proxy that you use with workloads, you could use mTLS with that, but you cannot use the same endpoint for, say, web browsers, because they will freak out users, so you do need separate endpoints. So the proposal: in the WIMSE working group we have this thing called the WIMSE workload identifier, which is a URI; this is now an adopted draft progressing in that working group. The URI consists of three parts: a scheme, an authority (the trust domain), and a scheme-specific path. The first two sections are what I define as the workload identifier origin, and these origins would be included as a list in a TLS ClientHello extension. So this proposed ClientHello extension contains an optional list of workload identifier origins.
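The origin split just described can be sketched like this. The `spiffe://` identifier below is a hypothetical example and the function name is made up; the actual identifier syntax is defined in the WIMSE draft:

```python
from urllib.parse import urlsplit

def workload_identifier_origin(identifier: str) -> str:
    """Split a workload identifier URI (scheme://trust-domain/path) into the
    origin (scheme plus trust domain) that the proposed ClientHello extension
    would carry; the scheme-specific path stays out of the hint."""
    parts = urlsplit(identifier)
    if not parts.scheme or not parts.netloc:
        raise ValueError("not a workload identifier URI")
    return f"{parts.scheme}://{parts.netloc}"

# Hypothetical identifier; only the origin would appear in the extension.
print(workload_identifier_origin("spiffe://prod.example.com/ns/payments/sa/api"))
# prints "spiffe://prod.example.com"
```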
Presence of this extension indicates that the client promises not to freak out at a CertificateRequest, and that the client can provide a certificate for any of the listed workload identifier origins. Again, the idea is that a client might have multiple workload certificates to authenticate with, typically a very limited set, and it can give the server a hint: "this is what I support." The server may then use that hint in implementation-specific ways to produce a CertificateRequest with certain properties, or maybe not accept that ClientHello because it doesn't like what the client has to say, or just ignore it completely and not even honor it with a CertificateRequest because it doesn't know anything about those workload identifier origins. If your workload identifier origin needs privacy, if your trust domain is a super-secretive one and you don't want to expose it in a cleartext ClientHello, then maybe you should use ECH, or send an empty list, so the extension effectively becomes a flag. You could actually combine these approaches: in your outer ClientHello you send an empty list, and inside your encrypted ClientHello you have the list of workload identifier origins. So: is there any interest in this from this working group? Are there any questions, suggestions, comments?

Ecker: I guess I'm not quite tracking why you have to indicate the identity in the initial message at all, as opposed to just "I would do something or other if you ask me for a certificate."

Yaroslav: The reason for that is that I might have multiple certificates from different trust domains, which is not uncommon (a relatively small number, maybe two or three). When I speak to a destination, I don't know which certificate I'm supposed to present. If I just provide, say, a TLS flag, then in the CertificateRequest the server could of course include a list of CAs whose certificates it expects to trust. But if you're a public API that speaks to many, many clients, you have many, many potential CAs that you would trust, if the CA is defined by the client. So if I come and register my CA with an API provider, then that's my CA that I registered, and the server could have tens of thousands, hundreds of thousands of them, so it's not feasible to list all of them in the CertificateRequest. Also, they could obviously be very sensitive. Whereas if I'm a workload, as a client I typically participate in very few trust domains, so it is feasible to present that.

Ecker: Right, yeah, okay. So the ECH thing is a pretty serious regression in terms of privacy, right? I agree ECH encrypts the data, but ECH is not authenticated, so it just does not provide the privacy properties of the encryption that happens in the normal handshake. So this leaves me pretty sad, I have to say.

Yaroslav: Right, right. Well, whether robots deserve privacy is a big question; let's leave that aside. But again, this is not exposing the full URI, only the origin part, which is just the scheme and trust domain, which should be relatively...

Ecker: I wonder if there's some way to merge this with the previous presentation, and have the server supply an ECH config inline, because then you could trust it, right? That is, if the server supplied an ECH config on the first handshake and you had to make a reconnection, then you'd actually be in quite a strong position, and you wouldn't have to worry about impersonation of the ECH via DNS. So maybe that's the answer. But I guess I would not be super jazzed about putting this information in what's effectively the clear from the perspective of the network.

Dennis: Yeah, on a similar theme to Ecker: I'm thinking more about a TLS flag that indicates the client can tolerate a CertificateRequest in a reasonable way, and then just putting the trust-domain information into that CertificateRequest, because the server can signal what it's ready for at that point.

Yaroslav: But again, the practical problem with that is the same as before: a server, especially a public API that speaks to many, many clients, could trust tens of thousands, hundreds of thousands of client-registered CAs, so it's not feasible to list all of them in the CertificateRequest, and they could be very sensitive. Whereas a workload client typically participates in very few trust domains, so it is feasible for the client to present that.

Dennis: But if you're a client and you're only in a few trust domains, can't you just offer them?

Yaroslav: That's what this extension is about. But when I'm presenting a certificate, I have to pick one; I cannot present multiple certificates in the Certificate message for the server to choose from.

Dennis: Okay, but I'm saying maybe if you're only in a couple, it's okay to know which domains you should want to send those certificates to. Because you wouldn't want to leak that you have certificates to just the network or a general server, right? You're not going to want to provide a workload identifier on every connection.

Yaroslav: Right, it could certainly be pre-configured, but sometimes it needs to be discoverable; that's what this is about.

Sean: Thanks. Tiru. And we're over time, so please keep it short.

Tiru: Yeah, thanks, Yaroslav. This seems like an interesting draft. I have two suggestions, basically. First, this draft does not discuss using trust anchor IDs, which are quite useful for the client to convey the trust anchors it is relying on; I think you should see if you could complement the draft with that. The second comment: since workloads are managed entities, you could get the ECH configuration provided in different ways compared to how ECH is provided via DNS or other means on the open internet, right? There are better ways to convey ECH which would probably help address the problem of getting authenticated ECH, unlike what happens on the internet today, because these are all managed workloads being provisioned and managed by Kubernetes, for instance. So there could be certain deployments where ECH could be done in better ways, and that should address the privacy aspect to some extent. Thanks.

Yaroslav: Thank you for the comments. As for trust anchors, I'm really not sure they apply for workload client certificates; this is something I need to look into. When it comes to ECH information propagation in workload environments, yeah, that's certainly worth exploring, but perhaps outside of this particular draft.

Sean: All right. Thanks, everyone.

Valery: All right, thank you very much. Have a great day, have a great trip home, and enjoy the farewell reception.

Sean: Yep. We'll see you next time.