
Session Date/Time: 19 Mar 2026 03:30

Matt Joras: All right, hello everyone. This is the IETF 125 meeting for the QUIC working group. Can someone confirm they can hear me in the room since both of our chairs are...? Okay, thank you. We're going to get started here.

Matt Joras: So, as usual, we'd like to remind everyone of the Note Well. This is the thing that everyone agrees to by participating in the IETF. It's basically guidelines and rules for our conduct. If you have not reviewed it, even though you agreed to it as part of attending, please review it again. If you have any questions or concerns about the Note Well, you can reach out to the chairs or to the area directors.

Matt Joras: So, much is the same for this meeting. The session is recorded. If you have not done so yet, please use the onsite tool for those onsite. I'm not exactly sure what the people in the remote room do, but you know, use the tool; this is how we'll do the queue. This is all pretty normal stuff, but please do fill it out now so that we get the attendance recorded correctly. And these are the normal links for things like the agenda, if you hadn't seen those already. After that, we'll start talking about the status of the various documents. Lucas, you want to do that?

Lucas Pardue: Yeah, sure. I omitted the slide that we normally have about note-taking and looking for a volunteer. We're going to rely on the auto-transcriber here, but if anyone did want to jump into the HedgeDoc and just provide some manual notes, that would be appreciated too. Afterwards, Matt and I will scrub that and make sure the minutes are as we like them and presented and put in the various ways we do stuff. But we're not going to block on that for now.

Lucas Pardue: You'll see that this session today is a shorter one than we normally have. That's partly because we've cleared a lot of our work queue and partly because Matt and I are remote and it's a terrible time zone for us. So instead of kind of giving more of the documents time slots, we just figured we'd give a quick, brief overview of some of the status of some of the documents that we have before passing over the agenda time to other things, which I'll come on to after these first few slides.

Lucas Pardue: So, the first major thing to announce is that the Multipath document, or Managing Multiple Paths, whatever we ended up calling it, has made progress. It went to IESG ballot and recently cleared one of the discuss items. So a big thank you to the authors and all of the reviewers for taking us through that process. If you recall, last time we were just about to submit it, so it feels like we've made very good progress there. We'll see how we move on with those things. But effectively, that's done unless something comes back to the working group. We don't need to worry about it so much anymore, hence why we've got more free agenda time for other things.

Lucas Pardue: If you recall from the last meeting, both draft-ietf-quic-ack-frequency and the Reliable Reset document were in need of a shepherd's write-up. We've been a bit too busy to do those things, unfortunately. Those are on Matt and me to figure out. We're in a position now, though, where I think we're far more confident in what we put in those shepherd's write-ups to kick them on to the AD, especially in terms of Reliable Reset. For those who were at the WebTransport session, or weren't, there's been more interop in WebTransport land, which is our main driver for that extension. So we're a lot more confident that implementations are at the point they need to be. So I would expect that by the time we meet in Vienna we'll have made more progress on those.

Lucas Pardue: New to the working group is QMux. We had all of that discussion about rechartering and then an adoption call. So yeah, this is the new thing, and we've got dedicated agenda time to discuss the open issues and proposals on that document that Kazuho will take us through. We have the extended key update. There's not really much to report here. My understanding is we're still waiting on the sibling document in TLS land to make some more progress before the changes there are reflected back into the QUIC document. Yaroslav told me that the expectation is that for the next meeting we'll have some updates back and maybe agenda time needed for that.

Lucas Pardue: For qlog, there's been little change in the specs themselves. Authors have been busy. We're probably going to do something like we did last time and hold a virtual interim between now and IETF 126 to burn through the remaining issues. But in terms of implementation, the Cloudflare quiche-qlog library and quiche itself migrated to the latest schema. We landed that within the last week. My understanding is still that Qvis will be updated soon. I don't know if Robin's on the call and wants to put anything in the chat for that, but that's my understanding.

Lucas Pardue: We have the receive timestamp, which we'll just cover on one slide. That was the update that the authors gave us rather than having dedicated agenda time, just to come up and present one slide. And any other adopted documents we have that are not mentioned, there's not really any significant updates there, they're just same status quo as they were. Next slide, Matt. I don't have control.

Lucas Pardue: So yeah, this was the update from the draft-ietf-quic-receive-ts authors for us. They published draft 02. What this does compared to earlier drafts is pick some temporary work-in-progress code points for the frame types, so implementers can actually go and interop now, which is good. There's an open issue in the security considerations section which is important but not necessarily critical for people to start doing work. And you'll see we've got mvfst, Google QUIC, and quiche planning to update their implementations. So the signal we've had is that now is a great time to go implement and experiment. There haven't been significant changes in the draft itself, so it's not so much that the authors want reviews and feedback; rather, go and play, and use that implementation experience to provide any feedback as usual on GitHub as issues, or just on the mailing list for discussion. Cool. Next slide, Matt.

Lucas Pardue: And then onto our agenda and agenda bashing, if there's any. We have just one working group item this time, QMux, with 20 minutes there. And then kind of an as-time-permits bucket. There's a session, or section, whatever you want to call it, on MoQT over QMux. Anyone who was in the previous MoQ session would have seen Suhas present on this. We don't care about the media stuff in this working group session. The reason I wanted to give a little bit of time on this QUIC working group agenda is because of the relationship with advertising application protocols over QMux. We'll probably get into that in the QMux session itself, but I think it'll help just to see an example of some of the challenges application protocol designers or developers are having here and what we might be able to do to help them. Then we have a presentation about minimum RTT estimation. Sorry, that's duplicated, that's a typo from me; two items for the same thing. That's not correct, I apologize. And application of explicit measurement techniques for QUIC troubleshooting. Is there any agenda bashing that people would like to do? I see Kent in the queue.

Kent Watsen: Yep, this is Kent. If there's any time remaining at the end, I'd love to have a couple minutes to start a conversation about supporting call home over QUIC.

Lucas Pardue: I don't know what that is. We'll see how we do for balance of time. But typically, given that I don't know if anyone knows what this thing is, we wouldn't normally accept a last-minute agenda...

Kent Watsen: Yeah, hence the desire to start the conversation.

Lucas Pardue: Okay. Well, just as a note to people, if you do want agenda time, we are always over-subscribed. So, you know, do put that into a request to the chairs ahead of the meeting. Anyhow, that's it from us, from the chairs. Let's hand it over to Kazuho to talk about QMux.

Matt Joras: Go for it, Kazuho.

Kazuho Hoguku: Right. So hi, everyone. This is Kazuho Hoguku, and I'm going to talk about QMux. Next, please.

Kazuho Hoguku: Right. So finally the draft has been adopted, so let's close the issues. We have 13 open issues and six pull requests. Let's look at them one by one, starting with the ones that have a pull request. And the first one is this: two-layer encoding. Next, please.

Kazuho Hoguku: So at the last IETF, Alessandro proposed using a different encoding, which is like this: basically, QUIC version 1 uses the packet boundary to determine the end of stream frames and datagram frames that don't have the length field. But QMux 00 as it currently stands uses a different approach: if the length field is omitted, then a value exchanged via a transport parameter is used, with the default being 16 kilobytes. And that causes a slight divergence in how the frames are encoded and decoded. So the proposal was to instead have a record layer below the framing layer, so that we can use it to identify the frame boundaries much like we do in QUIC version 1. And that would make the frame encoding and decoding logic perfectly identical to what we have in version 1. Next, please.

Kazuho Hoguku: And then there was another question raised. I mean, it's not really about standardization, but the question was: what if we wanted to port QMux to WebSockets? Because WebSockets has a messaging layer, it's much easier to port QMux to WebSockets if QMux has the notion of records, because then we could use the WebSocket messaging layer, instead of the QMux record layer, as the unit for sending a set of frames. So next, please.

Kazuho Hoguku: The proposal, the pull request here, actually implements the two-layer approach. Basically, we'd have something called a QMux record, which is a size-prefixed field that contains one or more frames. And the benefit here is that, as said, the frame encoding and decoding become identical to that of version 1, and it becomes easier to port QMux onto something that has a message-oriented framing format. The downside is that when you are sending large frames, such as 16 kilobytes, there's a slight increase in overhead, but it's only two bytes, so maybe we don't need to care. So personally, I think this is a good compromise, but I'd like to hear what people think. Any comments?
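
To make the record idea concrete, here is a minimal sketch (my own illustration, not the draft's wire format), assuming a fixed two-byte length prefix, which matches the "only two bytes" of overhead mentioned above; the actual QMux encoding may choose a different length encoding:

```python
import struct

# Sketch of a size-prefixed "QMux record": a 2-byte big-endian length
# followed by one or more serialized frames. The 2-byte prefix is an
# assumption for illustration; the draft may pick another encoding.

MAX_RECORD = 0xFFFF  # largest payload a 2-byte length can describe

def encode_record(frames: bytes) -> bytes:
    """Wrap opaque frame bytes in one record."""
    if len(frames) > MAX_RECORD:
        raise ValueError("payload too large for one record")
    return struct.pack("!H", len(frames)) + frames

def decode_record(buf: bytes, off: int = 0) -> tuple[bytes, int]:
    """Recover the next record's frame bytes from a byte stream.
    Returns (frames, new_offset); the boundary comes from the length
    prefix, not from a negotiated default frame size."""
    (n,) = struct.unpack_from("!H", buf, off)
    return buf[off + 2 : off + 2 + n], off + 2 + n
```

A receiver reading from TLS or TCP can split the byte stream into records first and then run an unmodified QUIC v1 frame parser over each record's payload; over WebSocket, the message boundary could play the role of the length prefix.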

Yaroslav Rosomakho: I can hop over here. I have a question about how you plan to map this over a transport like WebSocket, which already has a framing layer. Are you saying it would not—that mapping would omit the QMux record? Is that what you're saying?

Kazuho Hoguku: Right. Just omit the size field, because... the frames would be there, of course. But you would...

Yaroslav Rosomakho: so this is somehow an optional layer that would be determined based on what your underlying transport is? You have to negotiate it?

Kazuho Hoguku: I mean, in the case of TLS or TCP or whatever byte-oriented stream, we'd have to use this record format. And we'll only specify this, because that's what the charter says. But if somebody wants to port it to WebSocket, then they can just swap the record layer for what WebSocket already provides.

Speaker 1: I think I'm next. Yeah, for context, I filed this issue because I'm doing it over WebSocket, and for a QUIC stream frame, if you don't specify a length, it says it goes until the end of the packet, which I'd like to be the end of the WebSocket frame. But right now QMux specifies it's based on the max frame size, which is like 16 kilobytes or something, so you just can't use the WebSocket framing. So I think this is great: basically, with WebSockets you could use its framing, and if there's no framing in the protocol, use a QMux record, like over TLS.

Alessandro Ghedini: Um, hi. I don't entirely remember the discussion from last time, maybe due to the time here. But I think the compromise here is that rather than use a full QUIC packet header, we use this QMux record instead to frame the actual QUIC frames, which I think is a good compromise. I don't think the overhead is really an issue, so I think this is a good solution.

Lucas Pardue: Lucas speaking as an individual here. Yeah, back when Alessandro proposed this, I didn't understand what he meant, so I'm going to ignore that. What I saw in this proposal makes more sense to me. It seems to simplify the handling around the awkward text we had ("it's all the same, but if the length field is omitted, it means a different thing here") and all of that. I can see from an implementation perspective how this record thing makes things a bit easier. So I kind of like the idea. There are a few other alternatives that went through my head, but on the balance of what the overhead would be compared to what it enables, and the trade-offs there, I think this is probably the way to go.

Yaroslav Rosomakho: Yaroslav. I am generally not a big fan of additional encapsulation layers, as they create bugs and issues and performance inefficiencies. I wonder if we could accomplish the same outcome by allowing endless streams, that is, streams that run until the end of the underlying framing, and allowing that only when QMux is sent over things such as WebSockets that are message-oriented. Wouldn't we achieve the same outcome without introducing additional encapsulation on byte streams?

Kazuho Hoguku: Right. So the primary target to run QMux on top of is TLS, right? And TLS doesn't expose the record boundary. It's not just that the TLS stacks are designed not to; the RFC itself says that it is totally up to the TLS stack to determine how it chunks the records. So we cannot rely on that property when we are using TLS. And I think that's the blocker we have.

Yaroslav Rosomakho: Right. So for those transports that do not allow—do not expose size of the message, such as TCP or TLS, you always mandate a size of stream or whatever. And then for things such as WebSockets that have message boundary, you allow endless stream or stream until the end of the message. Would that solve this?

Kazuho Hoguku: Yes. I mean, so that's what this proposal says.

Yaroslav Rosomakho: Okay.

Antonio Brunotto: Antonio, mostly as an individual. I think I had a similar point to the one that was just discussed, related to TLS. We do not want to use TLS record boundaries as a delimiter in any way, especially because we don't want to introduce extra TLS records just because we want to end a particular frame somewhere. Anyway, that's it.

Lucas Pardue: So do we have some sort of consensus here? Like emerging? It sounds to me like there's kind of support for this proposal, at least compared to what we have already. So with the chair hat on, I would say let's try it. None of these decisions are ever final-final, but that, yeah, let's look at, you know, taking this to the list maybe to confirm. But that we would land this and all of the stuff we're going to talk about now would end up in a new draft revision and allow people to get some implementation experience. It's a bit harder right now, I think, talking in some of the abstract. Let's go and play and maybe come back and we can always re-evaluate some of these choices.

Kazuho Hoguku: Makes sense. Thank you. So next, please.

Kazuho Hoguku: So next is about the risk of deadlock due to flow control. Next, please.

Kazuho Hoguku: So Issue 9 pointed out that we are using underlying transports like TCP, and they have their own flow control. So unless the QMux stacks keep reading regardless of what the application says, there could be a deadlock. And for what it's worth, HTTP/2 already had this problem and addressed it by saying that the stacks should never stop reading. So the idea would be to follow what HTTP/2 says. Next, please.

Kazuho Hoguku: And the pull request actually does that. It says that the QMux stack must continue reading from the underlying transport even when delivery of stream data to the application is temporarily blocked, and that it must not couple reads from the underlying transport to application reads on any single QUIC stream. And the QMux stack may drop received datagrams when they cannot be promptly delivered to the application. We also note that continuing to read does not imply unbounded buffering of stream data, as the amount of stream data that the peer can send is always limited by the QUIC flow control. So, I mean, this is kind of editorial in my opinion, but do people have any opinions? Thank you. So maybe we can just ask the mailing list and merge it. Next, please.
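
A minimal sketch of what that decoupling can look like inside a stack (all class and method names here are illustrative, not from the draft): the transport-facing side always accepts decoded frames, stream buffering stays bounded by the QUIC flow-control limit, and datagrams may be dropped when the application is slow.

```python
from collections import deque

class QmuxReceiver:
    """Illustrative receiver: transport reads never block on the app."""

    def __init__(self, max_data: int, max_dgram_queue: int = 4):
        self.max_data = max_data            # QUIC flow-control limit
        self.stream_buf = bytearray()       # bounded by max_data
        self.dgrams = deque()
        self.max_dgram_queue = max_dgram_queue
        self.dropped_dgrams = 0

    def on_transport_frame(self, frame_type: str, payload: bytes):
        """Called for every decoded frame, regardless of app progress."""
        if frame_type == "stream":
            # The peer cannot exceed max_data, so buffering is bounded.
            assert len(self.stream_buf) + len(payload) <= self.max_data
            self.stream_buf += payload
        elif frame_type == "datagram":
            if len(self.dgrams) < self.max_dgram_queue:
                self.dgrams.append(payload)
            else:
                self.dropped_dgrams += 1    # MAY drop when app is slow

    def app_read(self, n: int) -> bytes:
        """The application reads at its own pace, freeing credit."""
        data = bytes(self.stream_buf[:n])
        del self.stream_buf[:n]
        return data
```

The point is that `on_transport_frame` never waits on `app_read`; the only back-pressure the peer sees is QUIC flow control, not the underlying TCP window.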

Kazuho Hoguku: ALPN profile. Next, please.

Kazuho Hoguku: So there were actually two issues raised by Issue 12. The first is that even though the text already says what to do when 0-RTT is being used, it doesn't say how ALPN should be used. And the second was the discussion about the TLS record boundary and whether it can be leveraged when determining the QUIC frame boundary. As we said, that's impossible due to RFC 8446. So next, please.

Kazuho Hoguku: And then there was the other question about how we would use ALPN, for example when we are doing interoperability tests using a beta version of QMux. There were obviously two options, and the question was: which one is better? So next, please.

Kazuho Hoguku: So for the ALPN side, we have a pull request, and it says that, following what RFC 9001 said for QUIC version 1, endpoints must use ALPN unless another mechanism is available, and that application protocols that use QMux over TLS must designate their ALPN identifier and specify that they use QMux. So QMux is a dependency of the application, not itself an ALPN identifier, because QMux is not an application protocol; it's merely a substrate for the application protocol. And we also add a note that when doing interop, using something like moqt-14 and implying that it depends on a specific version of QMux is just enough, because specifying both the application protocol version and the QMux version in the ALPN just makes the ALPN longer; it's just a cost of finding the right interop matrix. So, any comments?

Speaker 2: So, Cullen Jennings. I mean, I don't have the knowledge or strong opinions on this, okay. But it seems to me architecturally that there might be a YMux at some point, and that we often end up with these shim layers that are flexible, with different things below them, different layers of them. And I like the sort of style of design we have with WebTransport, where each layer contains within it what the next layer down is, versus trying to encode every single layer in the ALPN in one thing. So, I mean, do you have thoughts on those two different designs?

Kazuho Hoguku: Right. So one of the reasons we chose to use ALPN to express the application protocol rather than the substrate when we did QUIC was that RFC 7301 suggests that approach. And the other, practical reason is that when we have a load balancer in front, you have to look at the ALPN to decide which backend the connection should be routed to. So it made sense to have the application protocol exposed in the handshake, rather than doing the handshake and then determining where to forward the request. So that's, I think...

Speaker 2: Right. Oh, okay. So that makes sense, it depends whether you think your load balancer does the QMux or not. Right. Okay.

Speaker 3: Hi. So I think the main issue with just using the ALPN moqt-14 is that we kind of have to specify which draft version of QMux is being used within moqt-14; it has to know that that is one of the layers that could be underneath. And it also means we can't really ship MoQT unless QMux also ships first as an RFC, because we'd be pinned to whatever QMux draft was latest when we rolled that ALPN. So I think it would be nice if we had some flexibility, like being able to individually advertise the application and the underlying QMux compatibility layer.

Kazuho Hoguku: Right. So I kind of think it depends on the goal, I mean what you're trying to achieve. Because if you are going to achieve interoperability between different stacks, you need to agree on the particular version of QMux being used and the MoQT version being used. Unless you agree on both, you cannot have interop, right? So the question boils down to whether you need two identifiers to specify one combination, or whether you can rely on a single version number to determine those two things. And I think it's ultimately up to the application protocol developers to decide which is better. But honestly, I don't know. And in terms of the QMux spec, I think we have enough to say, and the added note only applies until the spec is published. So in terms of the specification, I don't care, is my honest answer.

Ben Schwartz: Hi, Ben Schwartz. I looked at the PR; I think the PR is correct. I want to quibble with the slide only. The slide says moqt-14. I think that needs to read moqt-14-qmux. We don't need to specify the QMux version; that can be implicit for moqt-14. But moqt-14 is already the ALPN for MoQT 14 over QUIC, and this is not that. We need them to be distinct for SVCB record processing rules, basically.

Speaker 4: So someone mentioned YMux or something like that as a potential thing that could be another layer. On Cullen's architecture question, I take the opposite view. These layers have leaky abstractions, so you really do want the MoQT version that's specific to QMux bound to this one. That also goes to Ben's point from just now: you need that in things like SVCB records to be able to say that. So I think this is the right design. It's not neatly layered in the way that we might like, but the identifier provides enough context for you to be able to make the right decision. I would probably spell it a little shorter than moqt-14-qmux or whatever, figure out a way to spell it so that it's succinct. But yeah, I think that's the only thing that needs to be thought about here.

Lucas Pardue: Lucas speaking as an individual here. The PR does a really good job of making the requirements of ALPN clear. We can bikeshed names and stuff, we don't have the time for that right now, but I think substantive change in this PR is the right direction and makes—answers a lot of questions for people who've asked the similar thing time and again since Kazuho and I came up with this years ago and then had to explain it multiple times. So yeah, I think let's go ahead with that as an individual.

Kazuho Hoguku: Thank you. So let's move on to the next one. Oh, I've done this. So next, please. Oh, the one before, sorry. Yes, this one. So there was a request to put the transport parameters in the TLS handshake, because doing so might speed up when we can start using the connection for exchanging data. And the counter-argument we had there was that it might be difficult in practice, because some TLS stacks might not provide hooks to send and receive arbitrary TLS extensions. And the other argument was that the only case that gets blocked by not having the capability to send the transport parameters in the handshake is the server being blocked until the client sends its transport parameters when doing a 0-RTT handshake. In all other cases, the endpoints can send data as soon as the handshake concludes, regardless of the transport parameters being sent in the handshake, because each endpoint can send the last message that concludes the handshake immediately followed by the transport parameters on the wire. So going to the next slide, please.

Kazuho Hoguku: The proposal is to just point that out. So we'd continue to state that endpoints must not send frames whose use depends on peer transport parameters until the peer's transport parameters frame arrives, and this includes the use of stream frames. And then we'd point out that when 0-RTT is not used, the server sends its transport parameters at 0.5-RTT and the client receives them immediately after it obtains the traffic secrets, so there's no delay. And when 0-RTT is used, the remembered transport parameters are used, so there's no delay either. So are there any comments regarding this approach? Does it sound fine enough?
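
A sketch of that ordering rule (the frame names and dict-based parameters are illustrative, not the draft's encoding): each endpoint emits its transport parameters as the very first post-handshake frame, and defers any frame whose use depends on peer parameters until the peer's parameters frame has arrived.

```python
# Illustrative model of carrying transport parameters as the first
# post-handshake frames rather than in a TLS extension.

class Endpoint:
    def __init__(self, local_params: dict):
        self.local_params = local_params
        self.peer_params = None
        self.outbox = []       # frames serialized to the byte stream
        self.pending = []      # frames that need peer params first

    def on_handshake_complete(self):
        # First frame on the wire: our transport parameters.
        self.outbox.append(("TRANSPORT_PARAMETERS", self.local_params))

    def send_stream_frame(self, data: bytes):
        # STREAM use depends on peer limits, so hold it until known.
        if self.peer_params is None:
            self.pending.append(("STREAM", data))
        else:
            self.outbox.append(("STREAM", data))

    def on_frame(self, frame):
        kind, body = frame
        if kind == "TRANSPORT_PARAMETERS":
            self.peer_params = body
            self.outbox.extend(self.pending)  # release deferred frames
            self.pending.clear()
```

Because both sides send their parameters frame back-to-back with the final handshake message, the deferral above adds no extra round trip outside the 0-RTT case discussed in the slides.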

David Schinazi: David Schinazi. I'm—so maybe I'm missing something, but why—what's the motivation for not just reusing the TLS extension for transport parameters and doing the exact same thing that QUIC does here?

Kazuho Hoguku: Right. So when we developed QUIC, the problem was that people struggled with how to extend the TLS stack, because stacks didn't expose an API for exchanging arbitrary extensions. And when QMux is considered as a fallback, we need to support even more TLS stacks, or maybe the substrate might not be TLS at all. That means we would have all those complexities again, and it would probably become harder to fix.

David Schinazi: Sure. Thanks. That makes sense to me.

Lucas Pardue: Okay, cool. So we're at time. But we've only got a couple more issues with open PRs, and I think it'd be valuable to go through those quickly just to see if there's any commentary in the room. Then we'll move on to the next agenda item. So if you could be quick, Kazuho, that'd be great. And if anyone has comments and is willing to just stick them in the chat, I think that would be a good use of our time here.

Kazuho Hoguku: Thank you. So this issue is about how the omission of ACKs impacts the QUIC state machinery, specifically the stream state machinery. Next, please.

Kazuho Hoguku: So the pull request merely clarifies that having no ACKs is fine, because the stacks can simply assume that the moment they write the frames to TLS or TCP or whatever the underlying substrate is, then, in terms of the state machinery, those frames have been acknowledged. And behaving as such doesn't break the assumptions that QUIC already provides. Any comments?
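
A sketch of that implicit-ACK behavior for a send stream (the state names follow the RFC 9000 send-stream state machine; everything else is illustrative): the moment bytes are handed to the reliable substrate, the stream can account for them as acknowledged, and writing the final byte can move the stream straight to its terminal state.

```python
# Illustrative send-stream state machine with implicit ACKs: writing
# to a reliable, ordered substrate (TCP/TLS) stands in for receiving
# an ACK frame, since delivery is guaranteed by the substrate.

class SendStream:
    def __init__(self):
        self.state = "Ready"   # RFC 9000 send states: Ready, Send,
        self.acked = 0         # Data Sent, Data Recvd

    def write(self, substrate: list, data: bytes, fin: bool = False):
        substrate.append(data)   # handed to TCP/TLS: delivery assured
        self.acked += len(data)  # implicit acknowledgment, no ACK frame
        # With implicit ACKs, "Data Sent" collapses into "Data Recvd".
        self.state = "Data Recvd" if fin else "Send"
```

The point being made above is that skipping the intermediate waiting-for-ACK state does not violate any guarantee the QUIC API gives the application.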

Speaker 5: Yeah, I have been thinking about that quite a bit. I mean, I discussed that on the PR, on your issue. I have a problem there with equivalence between the upper API of QUIC and the upper API of QMux. In the case of MoQ, when we ran MoQ over QUIC, one of the things we have is that on the API we pass information obtained from the congestion controller, as in, you can probably send at two megabits. In order to do that with TCP it gets complicated. I mean, you have to do a whole new thing, and not having some kind of end-to-end way to ask what your RTT is, what this is, what that is, does not help.

Kazuho Hoguku: Thank you. I think that's a—I mean I agree with the problem statement, although I think it's kind of orthogonal to having implicit ACKs or I mean how the ACKs drive the stream state machinery.

Speaker 5: That is correct. It is not about "I'm sure that the user has received stream number 5." It's about monitoring how the connection is progressing. Maybe we will discuss that later.

Kazuho Hoguku: Yeah, let's. Next, please.

Kazuho Hoguku: Oh, this is the simple one. Next, please. So basically the question raised was that maybe we should refer to Multipath QUIC as well (oh, sorry, Multipath TCP as well). And the pull request, next, please, basically says that in addition to TCP, Multipath TCP is a viable underlying substrate. And that's it. Does anybody have any opinions?

Speaker 6: I think this can be generalized. I mean, do we really care if it's TCP or some other transport that is doing multipath as long as it provides properties that we desire in terms of reliable in-order delivery, then multipath is—I don't think we need any text specific to Multipath TCP, it can be any kind of multipath.

Kazuho Hoguku: Yeah, I mean, we could just drop TCP as well if we base our argument on that direction, but I think it just makes sense to name a couple of reasonable choices.

Lucas Pardue: We are at time, so please keep comments very brief.

Ben Schwartz: I don't think this is true anymore. We just said that we actually also want to run over WebSocket, which is not a byte-oriented stream. So I think maybe we just want to rephrase this whole thing.

Kazuho Hoguku: Right. So to be clear, the WebSocket one is not something that the working group is going to specify at this moment. The task of QMux is to specify how the compatibility layer should run on byte streams. The WebSocket one is just an illustrative idea of how one might do it in the future.

Speaker 7: So I think it's fine to mention MPTCP as an example, but given sort of limited deployment and unclear future, I wouldn't do anything more than that and I would also be fine with not mentioning it.

Speaker 8: I would rather we not mention that at all, because of the migration problem. The first migration problem I see is a mobile user that has a QUIC connection and moves inside an enterprise network that doesn't support UDP, so you're going to want to somehow migrate your QUIC connection to a QMux connection. That is probably the most important migration problem, and it's definitely not the Multipath TCP scenario. We agreed at the beginning not to make things complicated and to leave those migration scenarios for later. So let's keep to that: leave it for later, don't mention it now.

Kazuho Hoguku: Thank you. I think that's a logical argument. So maybe we can continue the discussion offline or on the mailing list. Thank you.

Lucas Pardue: Okay, cool. So we are at time. There are more open issues. Thank you very much, Kazuho. One of the things we've considered, speaking as chair now, is running a virtual interim dedicated to QMux to give us enough time and breathing space to talk about some of these issues. We'll probably poll the list and see if there's interest. I will note that previous interims were very unfriendly to European time zones, as was this one, so if we're going to do that, it's probably going to be something EU-friendly. But stay tuned on the list for that topic. And meanwhile, please do implement and play. We have an open issue about code points for QMux as well that I'd like to get resolved for the next draft. Anyway, I think we'll need to close it there so we've got some time for the final item. Next up we have Suhas.

Suhas Nandakumar: Hi. Okay, we're just trying to load the deck.

Lucas Pardue: While we're loading, Suhas, I know you talked at the MoQ session and we're a bit short on time. I don't know if there's much more you can add to the discussion we've had on ALPNs and stuff, but...

Suhas Nandakumar: Right, I'll try to keep it really brief. So the idea here is that we want to send MoQT over QMux. The challenge is that both protocols can evolve independently, and we had three possible options. One is that you do just MoQT through QMux and use setup parameters to understand what the QMux version is. This has the benefit that, no matter what the MoQT version is, QMux can change, and if both client and server support it, you can make it happen. The second option is like what Kazuho was presenting, which is an ALPN that's a combination of the QMux version and the MoQT version. To Lucas's point, yes, it does have the permutation explosion where, until both drafts settle down, you will have to specify each version of both. And the third option was what Cullen was talking about, which is slightly more work, but my personal feeling is that it's cleaner: much like in WebTransport, where H3 is the ALPN that gets put in, not MoQT, for QMux applications the TLS ALPN would carry the QMux version, and then a next-available-protocols field would say either H3 or the MoQT version if you're using QUIC natively. If you're using MoQT over WebTransport, the next available protocols in the QMux parameters would say H3, and H3 would get upgraded to what we call MoQT. I do think each of these options has pros and cons, and whatever recommendation the QUIC working group makes for QMux would be helpful for the MoQT working group as well.

Lucas Pardue: Yeah, when I suggested you come and talk, Suhas, I think we've made significant progress this week on kind of unpicking the different overlapping issues and I see Cullen in the chat suggesting that you and he can go off and come up with something and go back to the MoQT working group. So I think this is a good outcome even though we only showed your introduction slide.

Suhas Nandakumar: That's totally fine. I think important thing is that we're discussing this so it's good. Thanks for inviting me.

Lucas Pardue: No, no, good, and thanks for giving some time back on the agenda, it's much appreciated. Okay, next up we have Tong Lee. Are you in the room or remote? In the room, yes.

Tong Lee: Hello. Hi Lucas. Okay. Hi everyone. My name is Tong Lee. This draft has been discussed on the mailing list, and Christian also suggested I go to the IRTF, but we don't have a session this time in Shenzhen, so I really appreciate Lucas giving me the chance to share my idea on calibrating minimum RTT under low ACK frequency.

Tong Lee: So, reducing ACK frequency is being standardized in the QUIC working group. I don't have slide control, right? Can you help me change to the next slide? Yeah. However, we found some issues with low ACK frequency. Usually, when an ACK is sent for every packet, the sender can get an accurate minimum RTT estimate by tracking the per-packet RTT samples. However, when fewer ACKs are sent, we only get one RTT sample for many packets. For example, if we send an ACK for every four packets and packet two would have produced the minimum RTT sample, no ACK is sent for that packet; an ACK is only sent after packet four, so we might get a larger RTT sample. In this case, the minimum RTT estimate might be biased. Our experiments on Wi-Fi links showed an 8% to 18% larger minimum RTT estimate. This might be even worse on satellite links. Next slide.

Tong Lee: So how do we calibrate the minimum RTT estimate? We recommend using one-way delay, by sending packets with timestamps. Next slide. First, we calculate the per-packet one-way delay at the receiver, and then we identify the packet that achieved the minimum one-way delay. In this example, packet two achieved the minimum one-way delay. Next slide. Also in this example, packet five is the latest packet. So when we need to send an ACK, the ACK delay and departure timestamp of packet two, rather than packet five, should be reported. At the sender, we can then calculate the minimum RTT sample using T1, T2, T3, and T4, where T2 and T3 give the ACK delay. Next slide.
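Tong Lee's four-timestamp calculation can be sketched as follows; the function and field names are illustrative, not taken from any draft. The receiver picks the packet with the minimum one-way delay and reports its departure timestamp and ACK delay, and the sender computes the RTT sample as (T4 - T1) - (T3 - T2).

```python
# Sketch of the min-RTT calibration described above. All names are
# illustrative; the draft's actual encoding may differ.

def receiver_pick_min_owd(packets):
    """packets: list of (departure_ts, arrival_ts) pairs, i.e. (T1, T2)
    for each packet, where T1 is carried as a sender timestamp.
    Returns the (T1, T2) of the packet with the minimum one-way delay."""
    # One-way delay is only meaningful up to the clock offset between
    # the hosts, but the *minimum* is still identified correctly,
    # because the offset is roughly constant over a short window.
    return min(packets, key=lambda p: p[1] - p[0])

def sender_rtt_sample(t1, t4, ack_delay):
    """t1: departure time of the min-OWD packet; t4: ACK arrival time
    at the sender; ack_delay: T3 - T2 reported by the receiver."""
    return (t4 - t1) - ack_delay

# Example: one ACK for every four packets; packet 2 has the minimum OWD.
packets = [(0.0, 10.0), (1.0, 10.5), (2.0, 12.3), (3.0, 13.8)]
t1, t2 = receiver_pick_min_owd(packets)       # picks (1.0, 10.5)
ack_delay = 14.0 - t2                         # ACK sent at T3 = 14.0
rtt = sender_rtt_sample(t1, 15.2, ack_delay)  # ACK arrives at T4 = 15.2
# Calibrated sample: about 10.7. A naive sample based on packet 4 would
# be (15.2 - 3.0) - (14.0 - 13.8) = 12.0, i.e. biased upward.
```

The example reproduces the bias from the talk: reporting the timestamps of the min-OWD packet yields a smaller, more accurate minimum RTT sample than timing the last acknowledged packet.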

Tong Lee: So what's new for the QUIC protocol? We have to add a new transport parameter called timestamp_support, a new field for negotiation between both parties. We also need a timestamp frame; this can be reused from the existing QUIC draft by Christian Huitema. And we recommend a min_owd_ack frame, a new frame to report the ACK delay and departure timestamp of the packet that achieves the minimum one-way delay. Next slide.
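For concreteness, here is a hypothetical wire sketch of such a min_owd_ack frame using RFC 9000 variable-length integers. The frame type code point (0x4d below) and the field order are made up for illustration; nothing here is taken from the draft.

```python
# Hypothetical MIN_OWD_ACK frame encoding. The varint encoder follows
# RFC 9000 Section 16; the frame layout itself is illustrative only.

def encode_varint(v: int) -> bytes:
    """RFC 9000 variable-length integer: 2-bit length prefix."""
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 2**30:
        return (v | 0x80000000).to_bytes(4, "big")
    return (v | (0xC0 << 56)).to_bytes(8, "big")

def encode_min_owd_ack(frame_type: int, departure_ts_us: int,
                       ack_delay_us: int) -> bytes:
    """departure_ts_us: sender timestamp (T1) of the min-OWD packet;
    ack_delay_us: receiver hold time (T3 - T2) for that packet."""
    return (encode_varint(frame_type)
            + encode_varint(departure_ts_us)
            + encode_varint(ack_delay_us))

frame = encode_min_owd_ack(0x4d, 1000, 250)  # 0x4d is a placeholder
```

The varint test vectors below are the ones from RFC 9000 itself; only the frame structure is a guess.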

Tong Lee: So in conclusion, low ACK frequency biases the min RTT estimate, and we recommend a new method to calibrate it. The receiver calculates the per-packet one-way delay and reports the timestamps selectively, and the sender then calculates the RTT samples from those timestamps. We also need some modifications to QUIC. The first is the transport parameter; the second is to revive the draft from Huitema and introduce the timestamp frame; and we also recommend adding a new min_owd_ack frame to report the ACK delay and departure timestamp of the packet that achieves the minimum one-way delay. That's all for my report. Thank you very much. Any comments are welcome.

Lucas Pardue: I'm just going to say: please get in the queue and come and talk. I'm going to lock the queue soon just so we have enough time for the final agenda item. Yoshifumi, are you here?

Yoshifumi Nishida: Yeah. Can I speak?

Lucas Pardue: Yep, go for it.

Yoshifumi Nishida: Okay. My name is Yoshi, from Apple. Thanks for the interesting presentation. I was just wondering whether this is still useful with normal QUIC. Let's say a QUIC implementation sends an ACK for every two data segments and uses BBR as its congestion control; is this still useful? Do you see some performance gain?

Tong Lee: Sorry, what is your question about?

Yoshifumi Nishida: Sorry, I was wondering whether this method is still useful with normal QUIC.

Tong Lee: Actually, for normal QUIC, if we just send one ACK for every two packets, the minimum RTT estimate is not so biased, so I think we don't need this modification. But if you have this modification, it still works. Yeah, the answer is it still works.

Yoshifumi Nishida: Yeah I just wonder if you have some data points. But that's okay, thank you.

Christian Huitema: Christian Huitema. First, thank you for mentioning my draft. I'll be very happy to work with you to publish a version 09 of that draft and incorporate your ideas, so just send me an email and I will invite you to the repo and all that. One remark on the scope: the place where I have seen this being most needed is multipath, because with multipath we have a lot of ambiguity in how to compute the delays of the different paths in a multipath session. So if I were to do a new version of the timestamp draft, I would make sure it can encompass support for multipath. But as I said, I'm perfectly happy to work with you and update that draft.

Tong Lee: Yeah, thank you.

David Schinazi: Hey David Schinazi. Clarifying question. I think I might just be confused. We have a draft adopted in the working group called draft-ietf-quic-receive-ts. Why doesn't that one work for you?

Tong Lee: Uh, yeah, this is a very good question. That is actually also a very good way to calibrate the minimum RTT estimate, because it reports more timestamps in a best-effort manner. But I think in some scenarios, for example when the throughput is very high, we have to drop some timestamps from an ACK because of the packet size limitation. So we might report fewer timestamp samples than we need, and the sender might still miss the minimum RTT sample. That's maybe a corner case. Also, reporting as many timestamps as possible might be a common approach, and it's useful in many cases, but for min RTT estimation it seems more costly. Thank you.

David Schinazi: I see. Thanks for explaining. My suggestion would probably be to merge these two efforts because the use cases feel quite similar instead of having two separate extensions.

Tong Lee: Yes. Yes.

Kazuho Oku: Thank you for presenting this. I have a question: have you measured changes to the PTO threshold, or the time-based loss threshold? Because even though min RTT might increase, and SRTT might too, RTT-var would decrease in that case, and then loss recovery might become aggressive. So I was just wondering if you have measured those, and the actual impacts on the loss recovery side, rather than just looking at min RTT.
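Kazuho's point can be seen in the RFC 9002 estimator: smoothed RTT and RTT variance are exponentially weighted, and a shrinking rttvar directly tightens the PTO. This is a simplified sketch of the RFC 9002 formulas, not any particular implementation, and it omits details like handshake confirmation and persistent congestion.

```python
# Simplified RFC 9002 RTT estimator. If calibration makes samples
# cluster more tightly around smoothed_rtt, rttvar shrinks, and with
# it the PTO and the time-based loss threshold.

K_GRANULARITY = 0.001  # 1 ms timer granularity, per RFC 9002

class RttEstimator:
    def __init__(self, first_sample: float):
        # Initial state on the first RTT sample (RFC 9002 Section 5.3).
        self.min_rtt = first_sample
        self.smoothed_rtt = first_sample
        self.rttvar = first_sample / 2

    def update(self, sample: float, ack_delay: float = 0.0) -> None:
        self.min_rtt = min(self.min_rtt, sample)
        # Subtract ack delay only if that doesn't go below min_rtt.
        adjusted = sample
        if sample - ack_delay >= self.min_rtt:
            adjusted = sample - ack_delay
        self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.smoothed_rtt - adjusted)
        self.smoothed_rtt = 0.875 * self.smoothed_rtt + 0.125 * adjusted

    def pto(self, max_ack_delay: float = 0.025) -> float:
        # PTO = smoothed_rtt + max(4 * rttvar, granularity) + max_ack_delay
        return self.smoothed_rtt + max(4 * self.rttvar, K_GRANULARITY) + max_ack_delay

est = RttEstimator(0.1)
est.update(0.1)  # identical samples drive rttvar toward zero
```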

Lucas Pardue: I'm going to jump in. Sorry, Tong Lee, we're quite short on time. It's a good question; I'd encourage you to take it to the list or follow up. There seems to be quite a lot of interest in this, especially in relation to the receive timestamps draft. There's a lot in the chat as well that I'd encourage you to catch up on. But I think we'll need to close it there so we've got some time for the final item. Thank you very much for presenting here.

Tong Lee: Thank you.

Lucas Pardue: Marcus, yes, in the room as well.

Marcus Ihlar: Do we have clicker support, or do you drive the slides? Ask Matt. Yeah. Okay. I'm Marcus. I'll be talking about bits again. So, there are a lot of authors on this draft; the presentation was made by me and Giuseppe. Next slide, please.

Marcus Ihlar: So, a little bit of background. The spin bit was defined in RFC 9000; it can be used to measure round-trip delays, but there are also desires to measure other things, like packet loss. The spin bit also has some measurement accuracy problems: it degrades with network impairments. As most of you recall, there has been a lot of work going on here, a lot of discussion on how to enable different kinds of bits for different kinds of measurements. That was handed over to the IPPM working group, which produced a protocol-agnostic definition of measurement techniques using explicit bits in packet headers, and that resulted in RFC 9506. Since that was published, there has been an attempt to map some of these bits back onto QUIC, and that is this draft, draft-ihlar-quic-explicit-measurements-00. Some of the motivation is that there is a need, especially from operators, to do things like diagnostic measurements. There is a lot of work going on in the IPPM working group and elsewhere on defining protocols, tools, and frameworks for benchmarking. Usually they are driven by endpoints, but there is often a desire to have measurements taken at several vantage points along a network path, so you can compose your benchmarking or diagnostic measurements over several segments of your network. We also believe this work complements a lot of lower-layer telemetry work by tying measurements to application sessions. In IPPM we have defined a number of protocols that let you measure very specific domains, using IPv6, MPLS, and so on, to measure certain segments of your networks, but being able to tie measurements to a specific application session is also very useful. Next slide, please.

Marcus Ihlar: We'll skip this one. RFC 9506 presents a set of bits that allow you to measure packet loss, delay, and so on. We can go through this in more detail; we will present it with a little more time tomorrow at the SCONE session as well. But some of the issues: there have been attempts to map these bits onto QUIC, and the problem is really, how do we fit them in? The spin bit was made part of the short header of QUIC 1-RTT packets, but if we want more bits in there, we would essentially use up all the available spare bits in the packet header, and we might not even have enough for the measurements we want to enable. So that's one of the problems. Next slide, please.
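Marcus's point about spare bits follows directly from the RFC 9000 1-RTT first-byte layout: after the header form, fixed, and spin bits, plus key phase and packet number length, only the two reserved bits remain, and they are covered by header protection. A small parser sketch of that layout:

```python
# RFC 9000 1-RTT (short header) first byte, as seen after header
# protection is removed. Only the two Reserved bits are spare, which
# is why adding several measurement bits doesn't fit.

def parse_short_header_first_byte(b: int) -> dict:
    return {
        "header_form": (b >> 7) & 0x1,  # 0 for short header
        "fixed_bit":   (b >> 6) & 0x1,  # must be 1 in RFC 9000
        "spin_bit":    (b >> 5) & 0x1,  # latency spin bit
        "reserved":    (b >> 3) & 0x3,  # the only two spare bits
        "key_phase":   (b >> 2) & 0x1,
        "pn_length":   (b & 0x3) + 1,   # packet number length in bytes
    }
```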

Marcus Ihlar: Another problem is that these bits in the QUIC short header are greased, so they look random at most points. There is also the possibility that these bits get used for other purposes by various extensions. So detecting when a session is a measurement session, or when these bits are being used for measurements, is potentially possible, but it's not really straightforward. These are some of the issues, and this is feedback this proposal has gotten from the QUIC working group in various sessions. So what should we do? Next slide. Yeah, more AI slop. Next slide.

Marcus Ihlar: Yeah, so the idea is to do something modeled on the communication model we have in SCONE. We want to make measurement sessions very explicit. The idea is that we define a new packet type with a new QUIC version, like we do with SCONE, and these packets can then be prepended to regular QUIC packets in the same UDP datagram as those payload packets. This would allow us to use six bits of payload in the first octet of that packet. The QUIC endpoints would announce support for this with transport parameters, and, like I said, the packets would be inserted in front of regular packets in the same UDP datagram. Of course, just as in SCONE, these packets would need to carry the same connection IDs as the packets that follow, to ensure correct routing. Why would we want to do it this way? Well, we now get an explicit measurement layer that removes the need to reserve space in QUIC short headers, so those bits can be used for other things. It also makes it very easy for measurement nodes on path to identify measurement sessions, because now we have a very explicit way of detecting them. And we think this is well aligned with the lower-layer mechanisms we have defined for measurement. We have this work called IOAM, which is basically doing encapsulations in various layers, introducing options for telemetry; there are IPv6 options, MPLS, and so on. This in some sense fits that spirit. Next slide, please. Yeah, that's how a packet could look. Next slide, please.
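The coalescing idea can be sketched roughly as follows. The version number, header layout, and helper names here are all illustrative placeholders; the actual encoding is up to the draft, and only the general shape (a long-header-style packet with its own version, carrying the same destination connection ID, prepended in the same UDP datagram) comes from the talk.

```python
# Rough sketch of prepending a hypothetical measurement packet to a
# regular 1-RTT packet in one UDP datagram. Nothing here is a real
# wire format; it only illustrates the coalescing and CID constraint.

HYPOTHETICAL_MEASUREMENT_VERSION = 0x4D454153  # placeholder, unassigned

def build_measurement_packet(signal_bits: int, dest_cid: bytes) -> bytes:
    """signal_bits: up to six bits of measurement signal carried in the
    first octet. dest_cid must equal the connection ID of the packet
    that follows, so load balancers route the whole datagram the same."""
    # Header form bit + fixed bit, then six bits of measurement signal.
    first = 0xC0 | (signal_bits & 0x3F)
    return (bytes([first])
            + HYPOTHETICAL_MEASUREMENT_VERSION.to_bytes(4, "big")
            + bytes([len(dest_cid)]) + dest_cid)

def coalesce(measurement: bytes, one_rtt_packet: bytes) -> bytes:
    # Both packets travel in the same UDP datagram.
    return measurement + one_rtt_packet

pkt = build_measurement_packet(0b101010, b"\x01\x02")
dgram = coalesce(pkt, b"\x40" + b"\x00" * 20)  # dummy short-header packet
```

Note how even with a two-byte connection ID the prefix costs eight octets here, matching the "at least eight octets of additional payload" overhead Marcus mentions next.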

Marcus Ihlar: There are of course a number of trade-offs and issues with doing it this way, so we're giving this presentation as a heads-up; we will also have more time to discuss it in the SCONE session tomorrow. One significant trade-off is that at least some of these bits require you to send this with every packet, and doing so bloats the packet size, or the UDP datagram size, a little: you would require at least eight octets of additional payload to make this work, probably more depending on the connection ID lengths used end-to-end. Another interesting question we need to discuss is whether these packets should be integrity protected. In SCONE, we explicitly do not want the signals to be integrity protected, because network nodes are supposed to modify them. In this case, networks should not modify the bits, so we might want to protect them somehow. That would add a little extra overhead and complexity, but it's something that could be done. Next slide.

Marcus Ihlar: There are of course issues with faked or skewed measurements. This is an issue with the spin bit already, but endpoints may want to produce false measurements by setting the bits in ways that do not follow the algorithms described in RFC 9506. And of course it's very important that networks observing these measurements do not use them to trigger policy actions or the like. There are also a number of privacy and security issues that need discussion. For instance, the property that's nice from a measurement node's perspective, that you can very clearly identify and detect these sessions when you make them explicitly identifiable like this, also makes it easy to track that a certain flow is doing measurements, and that's something we need to consider. On-path adversaries might use this to track user behavior, or might specifically target measurement traffic to disrupt operators by messing with that traffic, and so on. Next slide.

Marcus Ihlar: There are more open issues as well. For instance, if we do this, we might not need a spin bit in the short header at all. That's a trade-off, of course: if you only care about the spin bit, this would add potentially unnecessary overhead, but it could free up some space in the short header. Also, if we look at the methods described in RFC 9506, some of them require you to send the bits in every packet, whereas others require less frequent marking. So there could be an opportunity to negotiate specific methods depending on your need, which might reduce the overall overhead. I see a few people in the queue, and if we run out of time we can discuss this tomorrow as well, so...

Lucas Pardue: We did already run out of time; we are over. Based on some of the comments, I'd like people to make their statements clearly, and note that there is time on the agenda in SCONE if you want to discuss this further, and there are the mailing lists. So please go ahead, Ted, but please keep it as brief as possible to respect time.

Ted Hardie: Ted Hardie. Because of the issues previously raised around PLUS, SPUD, and the spin bit, this needs a full BoF before it gets consideration for adoption anywhere.

Marcus Ihlar: Yeah, no, this is a heads-up, so yeah, for sure. We're not we're not asking for adoption at this point.

Speaker 9: What Ted said. Plus one.

Martin Duke: Martin Duke. Yes, BoF. But to the extent that you want endpoints to do this, before we do this work I think we need some confirmation that endpoint implementers are interested in doing it. I'm a little skeptical, frankly. That said, and I hate to say this, there's no reason an operator that's trying to do, you know, IOAM-type stuff can't just inject and remove these things at network ingress and egress, in which case a lot of those issues go away.

Marcus Ihlar: That's one way of doing it. Bit dangerous maybe, but yep.

Lucas Pardue: Okay. I think we'll wrap it up there, Marcus. I know you maybe wanted a bit more time, but this is how the cookie, or the scone, crumbles sometimes. Excellent. Thank you. For a brief wrap-up, I don't think there's much to say. We have some clear emerging outcomes for the QMux draft; I'll summarize those and send an email to the list, and then we'll look at getting the authors to merge some of those PRs and cut a new draft that can be an interop target. And for the other documents and sessions that were presented, please keep the discussion going. Thank you very much all, and I look forward to seeing you at the next meeting. Bye for now.