
Session Date/Time: 19 Mar 2026 08:30

Chair: So, let's get started, shall we? Looks like there are a lot fewer people than usual. I see 36 so far, and not many in the room either. So if you're in the room, please log into MeetEcho so that we can see you. So, this is MASQUE. We are doing a hybrid meeting. At this point in the week, you should probably be familiar with how this works. This is the 'Note Well'; you should also be familiar with that. You agreed to it when you signed up for the meeting, so I will not spend any more time on it here. And this is the agenda for today. First, we'll give you a quick update on current items in the working group, and then we have a presentation on IP compression by Jaroslav, then David Schinazi will talk about the architecture draft, and then we have some time for the ECN draft. Any agenda bash? That does not seem to be the case, so we'll move on. First, an update on the connect-ethernet and connect-udp-listen documents. David and Alejandro have done the shepherd write-ups for each other's drafts, and they look good to the chairs, so we will move forward with those documents and forward them to the AD, who will then forward them to the IESG. I see a thumbs up from the room, but I couldn't see who that was from. Oh, that's David. Hello. Then we have a question for the authors of the connect-ip-dns draft. We haven't heard a lot recently, but it seems like we might be ready for working group last call.

Jaroslav: I believe we still want to do some interop testing. It's a fairly straightforward spec, but it's still, I think, a good thing to do interop testing. So if anybody has a prototype, would like to test DNS and/or NAT64 - NAT64, sorry, I should say PREF64 - please reach out to me and David.

Chair: What's the implementation status?

Jaroslav: I have one that interops with itself, but it would be great to have more.

David Schinazi: Yeah, sorry, I owe Jaroslav an implementation on this one and haven't had time. So that's kind of what we've been waiting for, or if anyone has that opportunity as well, that would be awesome. Yeah, but hopefully soon.

Chair: Sounds good. And then we'll talk about quic-proxy next. So we did a working group last call on the quic-proxy document, and we got some feedback from Martin Thomson and Kazuho. So thank you for that. We will request SECDIR review, and the authors requested to change the status from experimental to standards track. Once all of this is done, we would like to do a second working group last call to once again confirm the consensus in the working group. Yes, Tommy?

Tommy Pauly: Yeah, thank you. So just to comment on engaging with the feedback: as we had mentioned to the chairs, some of the comments from Kazuho we have already incorporated into the editor's copy, and he's reviewed that. A number of the other comments were greatly appreciated, and I think, as Martin had noted, they were kind of delayed reviews of things that had been discussed in the working group. At this point, and I just want to make sure the working group hears this, so if you disagree, speak up: for a number of the things, like essentially the solutions for how we fix the loop problems and who picks the CIDs, that's something we talked about a lot, and we went over it again with the authors, and I think we still have the correct choice in the document. So we'll try to articulate that over the emails. But we don't expect to be making any large design changes to how we solve the loop problem.

Chair: Sounds good. Can we expect a new document before Vienna then?

Tommy Pauly: Yeah, certainly. That's not a problem.

Chair: Okay. Then a quick update on the proposed re-chartering of the working group. We discussed this at the last IETF meeting, and we will run adoption calls for the two drafts listed here that we will hear presentations about today. Once these adoption calls have gone through, we will start the process of re-chartering. The chairs will send a suggested new charter to the mailing list, and we can discuss it there. Once the working group has reached consensus on what the charter should be, we are ready to progress this new charter with the AD and, I believe, the IESG. This will then allow us to run an adoption call on the MASQUE proxy document that David Schinazi has been working on. That's all for updates from the chairs, and we can jump to the presentations, starting with compression. There we go. Jaroslav, will you be the one presenting?

Jaroslav: Yes, I will be presenting. Can you hand me control of the slides? Great. Hello, everybody. My name is Jaroslav, and today I would like to present what was called Connect IP Optimizations last time. Now it's generalized and is called HTTP Datagram Compression. Tommy Pauly joined this effort as co-author, so now we have double the enthusiasm and energy behind this proposal. First of all, I would like to give an executive summary for those who might remember the previous version of this proposal, so what has changed in 01. Quite a few things changed based on the feedback that we collected in Montreal. First, we expanded the scope beyond connect-ip. The original proposal was very focused on connect-ip, but a number of the proposed optimizations or compression techniques apply to things such as connect-ethernet and connect-udp, and potentially to whatever other things might come encapsulated in HTTP datagrams. We added the notion of derived fields; more on that later. We realigned capsule naming and structure so they are similar to the naming and structure of the capsules that define contexts in other MASQUE drafts. And we added a couple more failsafes. First, an MTU limit for reconstructed packets, so that after reinflation or decompression the resulting packet still fits into the MTU on the egress. And we added an option to specify maximum segments per template, so that a single template is not broken into ridiculously many small static segments.

Now, let's dive into the details. First, what this proposal is designed to solve. The primary, by far the most important, purpose of this proposal is to reduce MTU pressure when using QUIC datagrams. If you're using HTTP/3 datagrams that are encapsulated in QUIC datagrams, then you're not supposed to be doing fragmentation, and your MTU is smaller than 1500 bytes; the smaller it is, the less efficient the transfer is. You obviously also need whatever application layer protocol is there to do proper MTU discovery or be adjusted to the smaller MTU that you have, and that is often a challenge. As somebody who's been building VPN protocols and implementations in the past, in my experience 90-plus percent of all the troubleshooting issues associated with VPNs are MTU-related. Second, a slightly orthogonal but still important optimization offered here is checksum offloading. If you're performing TCP/UDP checksum calculation in the CPU, even on modern CPUs that's a non-trivial amount of time that you would spend computing those checksums. And as I will demonstrate, in certain scenarios it is just pure waste to compute TCP/UDP checksums on one end and then ignore them on the other end.

So this proposal has three mechanisms. One is templates to remove static segments from the payload; the second is derived fields that can be computed by the receiver, so you don't need to transmit them; and the third is TCP/UDP checksum offload. Those mechanisms are stackable, meaning that you can create a template, you can define a set of derived fields, and you can connect them, so you can say that my derived-fields context definition is using a template context as a parent. Reusable templates: many packets have parts that are exactly identical. Typically these mechanisms are used per flow. If you have a flow of packets, that is, a single TCP/UDP/whatever transfer between source and destination, you will obviously have the same IP version, the same source/destination IP, source/destination port, and lots of other boring fields that are exactly the same in every packet. So why don't we define a template and remove those fields from the actual transferred data? Now, the proposed mechanism is quite flexible, meaning that this draft doesn't define specific templates; it defines a mechanism for the sender to define templates. The sender chooses which parts of the packet are repeatable, so the same bytes would apply over and over again, and removes those static segments from actual packets by defining a context ID.

We would expect most implementations would do that per flow, so defining source/destination IP and other things per flow. One piece of feedback that Ben Schwartz provided on the mailing list is that flexibility might mean implementers will shoot themselves in the foot and define overly specific templates, so we will provide additional guidance on how templates should be defined. But in some cases, based on my experience, you actually want a template that applies to multiple flows. If you have some kind of sensors that are sending UDP packets from random source ports, and that does happen sometimes, you might want to define a template that does not use the source port, but only IP addresses and the destination port. In some cases, you might want to go into the application layer. If you're transferring QUIC inside connect-ip and you know your destination connection identifier length, you might compress it away in a similar fashion. This mechanism could also be applicable to connect-ethernet. If your source/destination MAC addresses and your EtherType are exactly the same for packets within a given flow, or between two peers that communicate a lot, again this is something you can compress this way.

So it's up to the sender to define how to split the packet, and it's up to the sender to do that in a sensible fashion. The receiver simply follows the template and reinflates the packet; it doesn't need to be aware of why this template exists or what exactly it is reconstructing. Each peer separately announces support for templating and a limit on the concurrent templates it can maintain, and we also now introduced a limit on how many segments per template the receiver wants to have. A reasonable limit would be something like 10, for example, so that the sender does not break packets into too many small chunks of static template. Again, there is no strict correlation here between flows and templates. One could define a template that covers multiple flows, or define multiple templates for a given flow. Packets can always be sent with context ID zero, meaning they come without any compression. Packets can obviously also be dropped, so if some abnormal packet comes along that doesn't fit into a template, the sender has a choice: it can define a new template, it can try to transfer the packet with context ID zero if it fits into the MTU, or it can potentially drop the packet.
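As a concrete illustration of the split-and-reinflate idea described above, here is a minimal sketch, assuming a template is simply a list of (offset, length, payload) static segments that must not overlap or touch. All names here are illustrative, not taken from the draft.

```python
# Illustrative template compression: the sender strips the static segments,
# the receiver follows the same template to reinflate the packet.

def strip_static_segments(packet: bytes, template) -> bytes:
    """Sender side: remove the static segments, keeping only variable bytes."""
    out = bytearray()
    pos = 0
    for offset, length, payload in sorted(template):
        assert packet[offset:offset + length] == payload, "packet does not match template"
        out += packet[pos:offset]      # variable bytes before this static segment
        pos = offset + length          # skip over the static segment
    out += packet[pos:]                # trailing variable bytes
    return bytes(out)

def reinflate(compressed: bytes, template) -> bytes:
    """Receiver side: put the static segments back at their offsets."""
    out = bytearray()
    pos = 0
    rebuilt = 0                        # offset within the reconstructed packet
    for offset, length, payload in sorted(template):
        take = offset - rebuilt        # variable bytes up to this segment
        out += compressed[pos:pos + take]
        pos += take
        out += payload
        rebuilt = offset + length
    out += compressed[pos:]
    return bytes(out)
```

Per flow, a template covering the repeated 5-tuple fields would shrink every datagram by the total static-segment length, which is exactly the MTU headroom being recovered.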

So templating can apply to any connect protocol carrying payload in datagrams. Today that would be connect-udp, connect-ethernet, and connect-ip, but again this mechanism is quite extensible; perhaps future connect protocols that use datagrams will be able to use templates. The second mechanism is derived fields. Certain fields in the packet can be calculated by the receiver; the most obvious ones are the IPv4 total length, IPv6 payload length, UDP length, IPv4 header length, and the TCP/UDP checksums. They of course require understanding of the packet structure. So if a receiver claims that it can derive certain fields, it needs to be able to process all the necessary headers, calculate the field, and place it where it belongs. The draft now proposes an IANA registry so that more derived fields can be added in the future. The derived fields we have today are very IP-specific; they would apply to connect-ip and connect-ethernet, but in the future other protocols using HTTP datagrams might have something completely different for derived fields, so perhaps it's good to keep it open-ended.
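As a rough sketch of what deriving length fields could look like for a reconstructed IPv4/UDP packet: the receiver fills in the IPv4 total length and UDP length from the reconstructed packet size instead of receiving them on the wire. The function name is hypothetical; the offsets follow the standard IPv4 and UDP header layouts.

```python
import struct

# Hypothetical "derived fields" recomputation on a reconstructed IPv4/UDP
# packet: the length fields are a function of the packet size, so they never
# need to be transmitted.

def fill_derived_lengths(packet: bytearray) -> None:
    total_len = len(packet)
    struct.pack_into("!H", packet, 2, total_len)          # IPv4 total length (offset 2)
    ihl = (packet[0] & 0x0F) * 4                          # IPv4 header length in bytes
    if packet[9] == 17:                                   # IPv4 protocol 17 = UDP
        udp_len = total_len - ihl
        struct.pack_into("!H", packet, ihl + 4, udp_len)  # UDP length field
```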

And the third part of the proposal is TCP/UDP checksum offloading. As I said, this is a little bit orthogonal to the previous ones: it's not about increasing the effective MTU size, it's about not carrying an unnecessary Internet checksum. A valid TCP packet always carries a valid checksum. A UDP packet in most cases carries a checksum; there are some escape clauses where the UDP checksum can be all zeros, but in this day and age that's quite rare in practice. Surprisingly enough, checksum calculation on CPUs is not that cheap: in my tests on modern ARM64 and x64 CPUs, when you have a CPU-limited transfer of 1500-byte packets and have to compute checksums, you lose about 5% of performance. And there are other architectures, like RISC-V, that don't have an add-with-carry instruction, so checksum calculation becomes even more expensive there. Now, in normal circumstances the checksum is always set by the sender, and in the vast majority of practical implementations of anything that carries packets it's set by the NIC. When you have a regular computer sending packets over a wired or wireless interface, it usually doesn't bother calculating the checksum in the CPU. It just passes the packet to the network interface card, tells the card where the offset for the checksum is, and the card calculates it. But if we have a local process originating those packets, and those packets are somehow intercepted by a connect-ip client or process or whatever it happens to be, then there is no network interface card in the flow: the packet is encapsulated into all the QUIC and HTTP/3 machinery, encrypted, and only then passed to the NIC, which can take care of the checksum of the outer encapsulated packet, but it cannot really look inside and help you with the checksum there.
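The checksum being discussed is the standard 16-bit ones' complement Internet checksum (RFC 1071) that TCP and UDP checksums are built on. A straightforward reference implementation looks like this; real implementations use wide adds with carry folding, which is exactly the per-packet cost being avoided here.

```python
# Reference 16-bit ones' complement Internet checksum (RFC 1071).

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                       # fold carry bits back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

A packet that already carries a correct checksum sums, with the same fold, to zero, which is how the receiver verifies it.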

The second use case that I've run into is tun/tap interfaces with generic segment offload. At least on Linux, you can configure a tun/tap interface to do generic segment offload so that packets of TCP and UDP flows are aggregated into packet trains. So instead of having to do a syscall per packet, the kernel can aggregate multiple packets into a single mega-packet of up to, I think, 64 kilobytes, and you can pick it up with a single syscall. You obviously need to break it into chunks when you process it, but that significantly improves performance. The downside of this process is that the mega-packet doesn't come with a checksum. So after you break it back into those chunks, you need to calculate and place a checksum on each and every chunk. Now on the receiver side, similarly, you don't always need those checksums. If you hand the packet, after decapsulation from connect-ip, to the process terminating the flow, maybe the operating system will allow you to do that without a valid checksum. Similarly, if you're passing it into a tun/tap interface and you're using generic segment offload, then you're actually not expected to place any checksum there; you're expected to build those packet trains as a single large packet. Or if you act like a router, and after decapsulation of connect-ip you send the packets through the network somewhere else, then in most cases you have the luxury of the network interface card calculating those checksums for you.

So the proposal is pretty simple: signal that "I'm okay if you don't include a proper TCP/UDP checksum." Instead, the common approach is to include the checksum of the TCP/UDP pseudo-header and provide the offset to the checksum field and to the beginning of the TCP/UDP header, which is used for recalculation of the checksum. Depending on the operating system, kernel, and hardware capabilities, typically those two parameters are passed via XDP or to the network interface card, or in the case of a tun/tap interface via a special virtio header where the exact same information is passed, and then the checksum is recalculated. Unfortunately, I couldn't find any IETF references for this process, so the only reference I have here is the Linux kernel documentation on how it's done. If anybody has a better reference, please let me know. So again, as I mentioned, each party signals that it supports this, and in practice it can apply to connect-ip but could very much apply to connect-ethernet as well.
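Here is a sketch of that offload convention for an IPv4/UDP packet, assuming the Linux-style partial-checksum scheme: the sender stores only the uncomplemented pseudo-header sum in the checksum field and communicates a csum_start and csum_offset pair, and whoever completes the offload sums from csum_start and writes the complemented result at csum_offset. All names are illustrative.

```python
import struct

def ones_complement_sum(data) -> int:
    """16-bit ones' complement sum (no final inversion)."""
    data = bytes(data)
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def prepare_udp_offload(packet: bytearray):
    """Sender: store only the pseudo-header sum, return (csum_start, csum_offset)."""
    ihl = (packet[0] & 0x0F) * 4
    udp_len = len(packet) - ihl
    # IPv4 pseudo-header: src addr, dst addr, zero, protocol (17 = UDP), UDP length
    pseudo = bytes(packet[12:20]) + struct.pack("!BBH", 0, 17, udp_len)
    struct.pack_into("!H", packet, ihl + 6, ones_complement_sum(pseudo))
    return ihl, ihl + 6

def complete_offload(packet: bytearray, csum_start: int, csum_offset: int) -> None:
    """NIC / virtio side: summing from csum_start automatically includes the
    stored pseudo-header sum; write the complemented result at csum_offset.
    (The UDP all-zeros special case is ignored here for brevity.)"""
    full = ones_complement_sum(packet[csum_start:])
    struct.pack_into("!H", packet, csum_offset, ~full & 0xFFFF)
```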

So in the latest version of the draft, these are the capsules that we propose. There are three capsules for templates. ASSIGN_TEMPLATE creates a new context with a definition of static segments. Each segment consists of a segment offset, segment length, and segment payload. There is a rule that segments must not overlap and must not touch, so between segments there must be at least a little bit of variable payload. Based on the feedback that we got, there is now also an ACK capsule defined, so whoever receives ASSIGN_TEMPLATE is expected to acknowledge it with TEMPLATE_ACK. And once a template is no longer in use, it can be closed by sending a TEMPLATE_CLOSE capsule. There is a next_context_id field in the ASSIGN capsule, and that allows stacking these three optimization mechanisms. The second is DERIVED_FIELDS, where you similarly define a context ID, you can optionally specify next_context_id if you layer it on top of another optimization, and then you provide a list of derived field types. Those are the field types that you want the receiver to recalculate on the packets. And finally for checksum offload, there is a similar structure of capsules, except this time you pass checksum_field_offset and checksum_start_offset so that they can be passed to the network interface card or whoever would recalculate the checksum on your behalf.
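Capsule fields like these are encoded as QUIC variable-length integers (RFC 9000). The varint encoder below is the standard one; the ASSIGN_TEMPLATE body assembly is purely illustrative, since the actual field ordering and names are for the draft to define.

```python
# Standard QUIC variable-length integer encoding (RFC 9000, section 16),
# plus a purely illustrative ASSIGN_TEMPLATE-like capsule body.

def quic_varint(v: int) -> bytes:
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 0x40000000:
        return (v | 0x80000000).to_bytes(4, "big")
    return (v | 0xC000000000000000).to_bytes(8, "big")

def assign_template_body(context_id: int, next_context_id: int, segments) -> bytes:
    """Hypothetical capsule body: context IDs followed by a list of
    (offset, payload) static segments, each length-prefixed."""
    body = quic_varint(context_id) + quic_varint(next_context_id)
    for offset, payload in segments:
        body += quic_varint(offset) + quic_varint(len(payload)) + payload
    return body
```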

So based on the, unfortunately limited, feedback that we've gotten so far on the list, the plan is to define recommended safe templates for typical use cases, to reduce implementation mistakes resulting in MTU oscillation or unnecessarily dropped packets. For example, one might decide that compressing away the TOS is a good idea, but in practice you might get some packets that come with a different TOS and your template would not be beneficial, so defining certain conservative templates is probably a good idea. We also need to note that checksum offloading and TCP/UDP checksum derivation cannot occur at the same time. That's a conflict, an invalid configuration, so you need to choose: either you derive TCP/UDP checksums and insert them, or you do offloading, which means you insert the TCP/UDP checksum of the pseudo-header and provide offsets. So, thoughts, suggestions, comments? I understand this is quite a complicated proposal with lots of moving parts, but any questions, including clarifying questions, are very welcome. Gorry?

Gorry Fairhurst: Gorry as an individual. You talk about the IPv4 header length, but you don't talk about the IPv6 extension header chain. Are you intending to handle IPv6 extension headers as well?

Jaroslav: So in the template definition, it's abstract. It doesn't talk about IPv4 or IPv6 or anything else. It's up to you to define it depending on how you parse IPv4, IPv6, or other headers. Or was your question about derived fields?

Gorry Fairhurst: My question was what were you going to do with IPv6 extension headers. Were you going to support them?

Jaroslav: Right. So again, when it comes to template compression, yes, they are supported like any other static repeatable header. Is there anything you foresee that would need to be derived for IPv6 extension headers?

Gorry Fairhurst: I'd need to think; I will read your draft and give you comments. I'd like to check the checksum stuff as well, but it sounds like you might be doing a good thing here. That might be a good approach, at least for UDP. TCP with no checksum I'll leave to other people to comment on.

Jaroslav: Thank you.

Chair: Hello. Based on some experience with software development, I guess sometimes optimizations make sense on paper, but when you try to apply them in practice, they turn out to not provide a lot of benefit. So I wonder if you've had any chance to actually implement this and collect numbers on the improvements that you've seen in your implementations. You've mentioned the checksum overheads on ARM processors, so I wonder if you have any before-and-after results that you could share.

Jaroslav: For checksums, as I said, it's roughly 5%: performance in my tests on modern ARM64 and x64 CPUs jumps from 11 to 12 gigabits per second per CPU core. That's quite a bit in my book. When it comes to templates, I've done very limited testing, but in my experience this is more about compatibility with things that do not really like a limited MTU, not as much about raw throughput.

Chair: Yeah, I guess for compression I would like to see what the actual practical saving of bytes on the wire is versus not doing any templates.

Jaroslav: Right. Yes, and especially if your link is limited in terms of throughput and maybe lossy. So yes, that would be an interesting test. Thank you. David?

David Schinazi: David Schinazi. Just wanted to say that I like this a lot, and I think the new version is even better with this more generalized system. That's what happens, you've been warned: anytime Tommy shows up on a draft, it becomes a grand unified theory of something. But I think in this case it's quite good. So yeah, I'm supportive of this work. I think it'll be useful, and I think we should adopt it in the working group.

Jaroslav: Thank you very much.

Chair: Just checking, this reminds me very much of IP header compression. Is it capable of doing the same compression that you could do there?

Jaroslav: So the key difference, and I looked into quite a few IP compression schemes, which sometimes requires a little bit of archaeology: the key difference here, why this is unique and why I feel this needs to be reinvented yet again, is that we have the very unique luxury of a reliable channel, where we define those templates and send the capsules defining them, and an unreliable channel, where we transfer packets. All the mechanisms that I've seen previously, and it's perfectly possible that I've missed something, are kind of in-band, where you don't have that separation into reliable and unreliable channels. Also, here we have a quite unique thing called a context ID that doesn't really exist in other places, as far as I can tell.

Chair: No, I like that one. I was just wondering if the expressive power of the template is enough to express the kinds of compression that you could do with older mechanisms. For instance, IP header compression had this thing where you could transform an increasing number from a full value into a delta.

Jaroslav: Yes. However, here on the datagrams we don't really have a similar sequence counter that we could derive other sequences from. And within a single QUIC packet, of course, you could have multiple datagrams at the same time, you could have streams, so the QUIC packet number is not going to help. So yeah, I don't think that mechanism is applicable. Now, in addition to static templates, this new version introduced derived fields, something the receiver can just compute. It's perfectly possible that there will be other mechanisms in the future, and that's one of the reasons why next_context_id is here: if somebody in the future comes up with some other compression that can be applied on top of what is proposed here, it should be perfectly compatible with that.

Chair: Thank you.

Mirja Kühlewind: Mirja Kühlewind. So I definitely support adoption; I think this looks good to me. The new proposal is of course on the one hand more generic, but on the other hand it can also be a little bit simpler if you have these static fields. So I think that's actually good. But there's a general point about how generic you want to be, because that creates more complexity, and extensions are actually quite cheap, so we could assign as many as we want.

Jaroslav: Right. So again, one of the reasons why this proposal is generalized in this version: I really didn't think about connect-ethernet when I presented it, and Alec came up after the meeting and said, hey, what about connect-ethernet? And I realized, yeah, it probably would be good to have something generic enough. Maybe, you know, some people in the future will do connect-SCTP or something else. It would be good to have the exact same machinery.

Mirja Kühlewind: I mean, I haven't made up my mind, but you can always create a separate extension for connect-ethernet or whatever you have in the future as well, and just keep it simple at the moment. But that's something we can discuss further, and I don't know what the right answer is. I have two more quick, detailed questions. One: I see now that you actually put the next_context_id field in here. We discussed this last time, and I think we need to agree on a general solution that we use for all extensions, the same solution. So that's probably a separate topic.

Jaroslav: Right. And again, this is something that we should discuss, but there is also a pattern here, right? We have the assign capsule: type, length, context ID, next context ID; we have the ack capsule; we have the close capsule, which are absolutely identical, the only difference being the type ID. So maybe we should define some kind of common context ID pattern that everyone will follow, maybe even the connect-udp-bind people that currently don't. But yeah, that's something we can take to the list.

Mirja Kühlewind: That was kind of my next question, because the assign and ack capsules seem to be a pattern that shows up very often, and I was also wondering if we should use that in a more generic way. And about the close one: I think this is probably because you have a maximum number of templates, so you also need to remove them, or you could also have a moving window or whatever. Again, there are many options. I don't want to make it complicated, but not defining a different mechanism for every extension for all these kinds of things would be useful, I think.

Jaroslav: Yeah, and again, sometimes you have something long-standing that sends packets very rarely but is super important, like signaling. So having an explicit close, I think, is better.

Mirja Kühlewind: I mean, that's not the cheapest solution, but it's also not very expensive. You can always start a new connect request in parallel; we have multiple streams or whatever. That would keep the extension simple and would optimize for the normal case where you don't need many templates. So that's another option.

Jaroslav: Sure. Yep.

Marcus Ihlar: Marcus Ihlar, Ericsson. I think this is great work. I think it's a big improvement and I really support this way of doing it. I like the idea of having some safe profiles. I think that's going to be really important, at least initially when we start deploying this. My question is: are we going to maintain some form of registry, or how do we document this? Is it the IETF who are going to own these profiles? What happens when we need to define new profiles? Are there any thoughts around that?

Jaroslav: So right now, the way I imagine this is that template profiles would just be example templates in the draft. They won't have any kind of identifier; they won't have any kind of official standing in any registry. This is something that the authors of this proposal recommend to implementers as a reasonable starting point. The only new registry that this proposal has right now is derived fields. Again, right now those are all very IP-centric; maybe in the future there will be other derived fields for other protocols.

Marcus Ihlar: Okay. Yeah, I think that makes sense. It'll be interesting to see later on, if we start compressing different protocols, how we handle that, but maybe we can take that as it comes up.

Jaroslav: Right. And it's perfectly possible that it won't be within MASQUE working group, right?

Marcus Ihlar: Yeah, yeah, sure. Okay.

Jaroslav: Anyone else? I have no idea how we're doing on time. There is no timer.

Chair: I think we're looking about right. Okay. Well, thanks very much, Jaroslav.

Jaroslav: Thank you.

Chair: Thank you for this presentation. And then we move on to the next item: the MASQUE architecture document. David.

David Schinazi: All right. How are we looking? Yeah, that works. Can you present my slides? Thanks. Hi, everyone. I'm David Schinazi, still a MASQUE enthusiast, and here to talk about the MASQUE architecture document. It's currently still called draft-schinazi-masque-proxy, but depending on how this discussion goes, we might rename it to draft-schinazi-masque-architecture. So, quick history, and some of you might recognize some of this; we discussed it a year ago in Bangkok. But more importantly, we started this whole nonsense seven years ago, in two days, back in Prague, and I realized that I really haven't changed my draft template since then. The original proposal for MASQUE had its own protocol that did a bunch of things; it pretended to be HTTP and then wasn't. But once we created a working group, it became a list of small extensions to HTTP. And through a series of very enthusiastic contributions and discussions, we eventually published a bunch of RFCs. But none of them have MASQUE in the name, and probably not even in the text; I think it's only in the acknowledgments section.

What happens regularly to me, and I've heard from folks at other companies as well, is you have someone on a product team working on their application. Their privacy reviewer tells them they need to protect the user's privacy, so when they query that data, the server shouldn't know which user is querying it. And in some cases, people say you should be using MASQUE, and the application developer searches for MASQUE and gets really confused, because they find this document from 2019 that is completely disconnected from the reality of what we shipped. Hilarity ensues, but it's not ideal. So I initially wrote this draft a few years ago; it kind of tries to answer the question of what MASQUE is. And the second feature is that you can send people a link to this document as a single reference for what MASQUE is.

The MLS architecture RFC got published last year, and it actually references this document. It mentions that when you're fetching this thing, you should use something to protect your privacy, such as MASQUE, Tor, or a VPN. That to me kind of shows that there's value in publishing this as an RFC, so it'll be nicer to refer to an RFC instead of to a draft. So, what actually changed in the draft since the last IETF, or since I presented this a year ago? The main thing is that AI has gotten a lot better at making dumb slide illustrations. Though if you look closely, it mentions AI and blockchain as part of this thing on the right, which MASQUE does not do. Anyway, still hilarious. Thank you, Gemini. But the big change is that instead of focusing on what a MASQUE proxy is, we focused on what the architecture of MASQUE is, which was a suggestion from Lucas and from Dennis, if I remember correctly.

So, what's in it? It gives a little bit of history. It now talks about architectural principles. One of the core principles of MASQUE that wasn't discussed before is the fact that we're running over HTTP. Part of the benefit of that is some privacy, because it makes the traffic look like web traffic. But you also get a bunch of other benefits, like getting through load balancers, and the document goes into that detail because, in my mind, what made MASQUE really successful was that it was easy for CDNs to deploy, because it was over HTTP. So the document talks about that a little bit, and also about what privacy properties we have. And it talks about related technologies, like Oblivious HTTP and DoH. Sebastian just filed an issue that we should talk about CONNECT as well. CONNECT isn't MASQUE per se, but it is definitely a related technology, so I agree, and I'll add that to the document soon.

What's next? We have this great, beautiful future for MASQUE. As we mentioned in the chair slides, it's not technically in our charter, but the good news is we're planning a re-charter anyway, in part for this, but also to start winding down the working group and mention these last final extensions and other MASQUE documents that we're finishing. The new draft actually fits better into the re-charter than the previous one, because I think the re-charter mentions an architecture document. So my main questions for the group are: have you read the latest version? Does this direction of talking about the architecture instead of the proxy sound better to you? And if yes, we'll rename the draft before asking for adoption. All right, Jaroslav.

Jaroslav: Yes, I read this draft, and I think it's a very important document, an architecture document, to tell people what MASQUE is. One thing that I think is missing, and I don't know if it belongs here or should be a separate document or something else, is that there are so many flavors of MASQUE out there. You have HTTP CONNECT, which is kind of MASQUE-ish. You have templated connect-tcp, connect-udp, connect-ip, connect-ethernet, and extensions of connect-udp, and you can do all that over HTTP/1, 2, and 3. So when somebody says 'I support MASQUE', what exactly does that mean? I think this document could be extended to have an overview of all things MASQUE, with some recommendations on what you should and shouldn't be using, and what kinds of things you should avoid if possible.

David Schinazi: I think that's a great idea. The document goes into that a little bit in the capabilities section, but I like your idea of having recommendations like: 'If what you're trying to do is proxy all your web browser's traffic, then connect-udp and CONNECT are a great choice, but if you're trying to replace your IPsec VPN, then connect-ip is a better choice'. Things like that would totally fit in this document, in my mind. Thank you. Can I ask you to file an issue on the GitHub, Jaroslav?

Jaroslav: Sure, yes, and I might even contribute some text, if you don't mind.

David Schinazi: Oh, even better! Thank you very much. Lucas.

Lucas Pardue: Hello. Yes. I've not had a chance to read the document, I'm sorry. But before Jaroslav talked, I was wondering: is it singular or plural? Is it the MASQUE architecture, as in there are different ways you could build things using MASQUE? It seems kind of like it. I don't want to bikeshed a name, but are we talking architecture, taxonomies of how to put these pieces together, or terminology? I hate those terminology-only documents, they're pointless, but coming up with a way to enable people to talk about the different ways to instantiate MASQUE is, I think, the most powerful thing. We talk about CONNECT a lot, but I just posted a link in the chat: there's this thing called the CONNECT protocol, and it's got nothing to do with any of the CONNECT that we use. It's for doing some RPC over HTTP. It's really confusing to anyone outside this little sphere of our world when we're talking about these things. And actually, the fundamental protocol aspects are really simple; we're making it too hard. So I fully endorse this working group taking on the document. I've got confidence we can figure out the right way to frame it. And I don't think it should take forever. I think we could iterate on this, get something that's really done this year, and just publish it.

David Schinazi: Great. Thanks. That makes a lot of sense. I'll have to noodle on the best way to phrase that. Starting with recommendations, and then, as part of those recommendations, covering how all the pieces fit together and interact, is a good idea. So yes, I think that sounds good. Thanks, Lucas.

Lucas Pardue: And I'll read the document and come up with some more concrete... it might be that you answer all of that already. I honestly can't say, so let me get back to you with some more concrete feedback.

David Schinazi: Great. Thanks. All right, that's all I got. Thank you, everyone.

Chair: Thank you for this presentation. And then we move on to the next item: the ECN draft. Mirja.

Mirja Kühlewind: Do you want to get me the clicker? Ah, there you go. Thanks. Hi, I'm Mirja Kühlewind. Together with all the other M names here on this slide, Magnus, Marcus, and Martin, I'm working on this extension for ECN. We revised the draft yesterday, I think, so I hope you have all read it. The good news is, it's much shorter now. Last time we had a long discussion about extensibility and about combining different extensions, and we didn't really come to a conclusion. What I took from that discussion was a request to keep this as simple as possible. So we revised the draft completely and took a completely new approach: we define only one extension that covers both ECN and DSCP, and it has zero per-packet overhead. We also removed everything about combining different extensions from the draft, so we can keep that discussion separate. We'll probably still need it at some point, but not in this case, because the new extension always supports both ECN and DSCP at the same time; you cannot separate them anymore.

So, this was the old design; forget about it, we don't have it anymore. This is the new approach. We use only context IDs to signal both the ECN and DSCP values together as a set. Each context ID that you announce defines what the whole byte has to be. We still use HTTP headers for the negotiation, or rather for the announcement of support for the extension, and you can also put an initial set of context IDs into the header. So if you just need one static set of context IDs, you're already done at that point. But we also define assign and ack capsules in case you want to provide more DSCP code points to use later during the connection.

This is the format we use in all cases. It's a five-tuple where the first value is the DSCP code point, and then you always have to define all four context IDs to cover the four different ECN values. Why do you always need to cover all four? Because if you use ECN, the network could remark packets, so you never know which ECN value you might actually see on a packet; you should always support all four of them. But it also means that if you want to use different DSCP values, you automatically have to support ECN with this extension: you always have to provide four context IDs. That's something we could optimize for if people think it's needed, for example by providing another capsule that gives you one DSCP value with no ECN support.
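[Editor's note: the five-tuple described above can be sketched roughly as follows. The class and field names, the value encoding, and the example context IDs are illustrative assumptions, not the draft's actual wire format; only the structure, one DSCP code point plus one context ID per ECN codepoint, comes from the presentation.]

```python
# Hypothetical model of the five-tuple: a DSCP code point plus four
# context IDs, one per ECN codepoint. Names and encoding are
# illustrative, not taken from the draft.
from dataclasses import dataclass

# The four ECN codepoints (RFC 3168), indexed by their 2-bit field value:
# 0b00 Not-ECT, 0b01 ECT(1), 0b10 ECT(0), 0b11 CE.
ECN_CODEPOINTS = ("Not-ECT", "ECT(1)", "ECT(0)", "CE")

@dataclass(frozen=True)
class EcnDscpMapping:
    dscp: int                    # DSCP code point (0..63)
    context_ids: tuple           # exactly four context IDs, one per ECN value

    def __post_init__(self):
        if not 0 <= self.dscp <= 63:
            raise ValueError("DSCP code point must fit in 6 bits")
        if len(self.context_ids) != 4:
            raise ValueError("all four ECN values need a context ID")

    def tos_byte(self, context_id: int) -> int:
        """Reconstruct the whole ToS/Traffic Class byte signaled by a
        received context ID: DSCP in the upper 6 bits, ECN in the lower 2."""
        ecn = self.context_ids.index(context_id)
        return (self.dscp << 2) | ecn

# Announce DSCP 0 with (hypothetical) context IDs 2, 4, 6, 8:
mapping = EcnDscpMapping(dscp=0, context_ids=(2, 4, 6, 8))
print(mapping.tos_byte(8))  # context ID 8 is the 4th entry -> CE -> byte 0x03
```

This illustrates why all four context IDs are mandatory: any of the four ECN values can show up on a received packet, so every one of them needs a context ID that maps back to the full byte.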

This is how the header looks. As I said, in the header you can already announce an initial set of context IDs you want to support, as many as you want, but it's always this five-tuple: first the DSCP code point that belongs to these four context IDs, and then the four context IDs themselves.

And the same structure is used in the capsules. As I said, there are two capsules: one to assign the context IDs, where again you can put as many of these tuples into the capsule as you want, and then an ack capsule that acknowledges the five-tuples that are actually in use. What's missing in the draft is more discussion about what happens if you haven't received the ack yet, or never will. You could optimistically use these context IDs and the ack might never arrive, or you might have to wait for the ack, or whatever; I think we need to address that in the draft. It's not there yet. But other than that, that's the proposal. Are people more happy now? Or is this too simple? Gorry.
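[Editor's note: the assign/ack exchange and the open question about optimistic use could be tracked roughly as below. The capsule names, the tuple layout, and the optimistic-use policy flag are illustrative assumptions; the draft has not yet specified this behavior, which is exactly the gap noted above.]

```python
# Hypothetical bookkeeping for the assign/ack capsule exchange.
# A tuple here is (dscp, cid_not_ect, cid_ect1, cid_ect0, cid_ce);
# all names are illustrative, not the draft's.

class ContextIdTable:
    def __init__(self):
        self.pending = set()   # assigned, not yet acknowledged
        self.acked = set()     # acknowledged by the peer, in use

    def on_assign_sent(self, five_tuple):
        """We sent an assign capsule announcing this five-tuple."""
        self.pending.add(five_tuple)

    def on_ack_received(self, five_tuple):
        """The peer's ack capsule confirmed this five-tuple."""
        self.pending.discard(five_tuple)
        self.acked.add(five_tuple)

    def may_use(self, five_tuple, optimistic=False):
        """Whether we may send packets with these context IDs. Whether
        optimistic use before the ack is allowed is the open question
        from the discussion; model it as a policy flag."""
        return five_tuple in self.acked or (
            optimistic and five_tuple in self.pending)

table = ContextIdTable()
tup = (46, 10, 12, 14, 16)       # DSCP 46 plus four hypothetical context IDs
table.on_assign_sent(tup)
print(table.may_use(tup))                    # False: no ack yet
print(table.may_use(tup, optimistic=True))   # True under an optimistic policy
table.on_ack_received(tup)
print(table.may_use(tup))                    # True
```

The two prints before the ack make the design question concrete: a conservative sender waits, an optimistic one may use context IDs the peer might never acknowledge.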

Gorry Fairhurst: Yeah, I'm more happy. Are you going to statically define DSCP zero, as in always allow it?

Mirja Kühlewind: No. That's the example we have in here; this is how you would announce it, and you just have to announce it manually.

Gorry Fairhurst: Okay, well, I'll read the draft. I skimmed it and I like it so far.

Mirja Kühlewind: Good.

Mirja Kühlewind: Yep, then I think we are ready to ask for adoption. I mean, from the authors' side.

Chair: Thank you for the presentation, Mirja. We will not run any adoption calls with a show of hands in the room today, because participation is a lot lower than we usually see at IETF meetings: about half, from what we can see in MeetEcho. We will run the adoption call on the mailing list, as we always do, so expect that shortly after the IETF meeting. Thank you. And with that, we are ready to wrap up this session a little early. Thank you, everyone, and see you again in person, hopefully, in Vienna.