**Session Date/Time:** 20 Mar 2026 01:00

This is the complete verbatim transcript of the TSVWG working group session at IETF 125.

**Zaheduzzaman Sarker:** Okay. It is 9:00 in the morning in Shenzhen. Welcome to the TSVWG working group session at IETF 125. Here I am with you, Zahed, and my co-chair, Martin.

**Martin Duke:** Greetings, transport enthusiasts.

**Zaheduzzaman Sarker:** Yes. And he also likes, or loves, to talk. So, Martin, could you take us through the slides?

**Martin Duke:** Sure. This is the IETF Note Well. It's Friday, but intellectual property, code of conduct: if you haven't read it, read it. You all know how to run a meeting by now; you know about this already. All right. This is the agenda for today. It's an eclectic mix of things. We've experimented with not separating out time for discussion, so hopefully people have fit their slideshows within the allotted time, and there should be a little time for discussion after each presentation. Would anyone like to bash this agenda? Okay. You want to talk about this?

**Zaheduzzaman Sarker:** No.

**Martin Duke:** Okay. Well, you can read as well as I can, I guess, but we did publish one RFC, so that's exciting. We've got two things over at the RFC Editor, two documents that are somewhere between the working group and the RFC Editor, so hopefully those will change state in the next few weeks. This is our regular call for document review. I'll give the same spiel I always give: TSVWG is a mixture of different interest groups that are all here together to talk about transport. If you stay in your silo and only read the things in your little narrow world, then nothing gets reviewed. So I think the proper communitarian thing to do is to go outside your sector a little bit and review some documents, so we can produce good quality and allow things to progress. So please do that.
Reviews are always welcome, and as you can see there, we have a document in Last Call right now, draft-ietf-tsvwg-l4sops. Even if you're not an L4S enthusiast, a read for clarity and basic correctness is always valuable, so please take a look. Here are the milestones, as I suggested a moment ago. We have two documents that are going to go soon. Next month is probably about right, and then SCTP after that.

**Zaheduzzaman Sarker:** Yeah. So we used our chair's discretion to change the milestone dates. That's a prediction, just to give the outside world an idea of where we are coming from, because there are some other SDOs and other organizations waiting for something to happen here. So the dates are important, but let's try to keep those milestones achievable.

**Martin Duke:** Yep. No liaison statements this time, so that's one less thing to do. One thing we don't have a slide for but always pitch is the TSV Review Team. There's a group of about, I want to say, 20 people that do Transport Area reviews for documents that are in IETF Last Call. The purpose of that is to help inform the IESG of possible transport issues for their review and balloting process. It's an important function and one that backstops the AD, because ADs sometimes have bad days. It is not a lot of work; I'm on the review team, and you read maybe one paper a quarter or so. Magnus, who I believe you all know, is the triage chief for that. If you would like to participate in the review team, please let him know. And again, I strongly encourage you to do it. It's a good way to see some of the cross-section of what the IETF works on, and it's a way to help the transport community. So, thanks to those of you who are members of the review team, and those of you who aren't, I hope you consider joining it. All right. With that, let's go back to the agenda. Let me find it... there it is.
And first up is Magnus, or is it Michael today, who's going to talk about the DTLS chunk?

**Michael Tüxen:** I will do it.

**Martin Duke:** Okay, Magnus. Great. (Presentation: [DTLS Chunk for SCTP](https://datatracker.ietf.org/meeting/125/materials/slides-125-tsvwg-dtls-chunk-for-sctp-00))

**Magnus Westerlund:** Yeah. If you pull up the slides and give me slide control... I will give you slide control once I find the presentation. There you are. Go ahead. Thank you. So, I will talk about the DTLS chunk for SCTP. I'm the presenter here, but I have three co-authors. So, let's dive in. A little bit of a refresher on where we are with the DTLS chunk. This picture represents how the DTLS chunk, which is an SCTP chunk, does the encryption, integrity protection, and encapsulation of other SCTP chunks once it has been established. It uses the DTLS 1.3 record as is. And then you have this API where it says "keys," where you set the keys and cipher algorithms to be used. You will need a second piece, which is the key management part. This is an upper layer which is, for example, a DTLS 1.3 key exporter and some wrapping code around it to do the setting of the API, etc. We use the PPIDs to separate the user-level protocol from the key management traffic. The initiation of this is negotiated using an SCTP parameter in clear text. And to prevent downgrade, there's two things here. One is that, if you require security always, you should have policies to ensure that you don't accept non-protected associations. The other aspect is that this parameter carries key management methods (there can be more than one offered), and to avoid downgrade attacks in this process, the recommendation is to always include the contents of this parameter in the key derivation process when you derive the keys to set via the API. That way, the only way for both sides to get the same keys is to use the same input.
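The downgrade protection described here can be sketched as follows. This is illustrative only, not the draft's exact construction: the function name, the label string, and the use of HMAC-SHA-256 are assumptions for the sketch. The point is that the verbatim offered and selected parameter contents feed the derivation, so an attacker who tampers with the offer leaves the two endpoints with mismatched keys.

```python
import hashlib
import hmac

def derive_traffic_secret(master_secret: bytes, offered_param: bytes,
                          selected_param: bytes) -> bytes:
    # Hypothetical sketch: mix the verbatim key-management parameter
    # contents (offer and answer) into the key derivation. An on-path
    # attacker who strips or rewrites the offer causes the two sides to
    # derive different keys, so the association cannot proceed.
    transcript = offered_param + selected_param
    return hmac.new(master_secret,
                    b"dtls-chunk downgrade guard" + transcript,
                    hashlib.sha256).digest()
```

Both sides only converge on the same traffic secret when they saw the same offer and answer, which is exactly the property Magnus describes.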
So you need to have seen both offer and answer. Ah, protection. Yes, Jonathan says it should say "protection." That's true. It protects against downgrade attacks this way. So this is how the DTLS chunk looks in more bit detail. It has the standard chunk header. We are using some of the chunk flags: we have a two-bit P-flag, which is the amount of pre-padding, and we have the R bit, which is the restart context used to protect this chunk. The payload part here, between the pre-padding and post-padding, is one DTLS record layer per RFC 9147. And here I'll talk slightly about how we're using DTLS and which of its features. In the DTLS record layer you have the generalized header on the left, and after it follows the actual encrypted content, the rest of the encrypted record, which is all up to DTLS. The first byte contains a number of flags, and the epoch. We do use the epoch; the two bits carried here are the low-order bits of what is actually a 64-bit value. The Connection ID: I'll talk more about it on the next slide, but we have actually now changed our view and said we should not use it. Then you can have an 8- or 16-bit sequence number, and given how this interacts with the pre-padding, etc., we always recommend 16-bit sequence numbers, to minimize the risk of having any kind of wrapping issues. And as you see with this format, there's no gain in using the 8-bit sequence number. As for the length field: the chunk header already gives us a length field, so the record doesn't need its own. Therefore, this middle structure with three bytes of DTLS record header is what we expect.
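The layout just walked through can be sketched as a byte-packing routine. This is a non-normative sketch: the exact bit positions of the P and R fields within the chunk flags are assumptions, and post-padding is omitted. The pre-allocated chunk type 0x41 and the three-byte DTLS record header (unified header byte plus 16-bit sequence number, with the record's length field omitted because the chunk header already carries a length) follow the description above.

```python
import struct

DTLS_CHUNK_TYPE = 0x41  # value pre-allocated by IANA per the draft

def pack_dtls_chunk(unified_hdr_byte: int, seq16: int, ciphertext: bytes,
                    pre_padding: int = 0, restart_ctx: int = 0) -> bytes:
    """Sketch of the DTLS chunk layout described above (not normative).

    With one byte of pre-padding, the 4-byte chunk header plus 1 byte of
    padding plus the 3-byte record header put the encrypted content on a
    32-bit boundary (the second 32-bit word of the payload), the alignment
    mentioned in the talk.
    """
    assert 0 <= pre_padding <= 3          # the P field is two bits
    # Flag bit positions below are an assumption for this sketch.
    flags = (pre_padding << 1) | (restart_ctx & 0x1)
    record = struct.pack("!BH", unified_hdr_byte, seq16) + ciphertext
    payload = b"\x00" * pre_padding + record
    length = 4 + len(payload)             # SCTP chunk header is 4 bytes
    return struct.pack("!BBH", DTLS_CHUNK_TYPE, flags, length) + payload
```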
And then we expect the encrypted record to start, not always, but normally, on the second 32-bit word of the payload, which makes a nice alignment possible for those who want to use this and do in-record encryption and decryption. So, the removal of the DTLS Connection ID. The Connection ID has been included for a while, for completeness, but the design team has had discussions around it and we concluded that we really don't have any use for it. It is unnecessary complexity from an API and specification point of view to ensure that it works safely. We also found an issue: because we're just using the record, compared to normal DTLS usage, we would need additional rules for it. If you were to change the Connection ID for an association between different key settings, you would end up with a problem, because we would not always be able to correctly separate and demux this. So you would be forced to have consistency on the same port, across all the associations that run on it. But yeah, basically we think we can just remove this. So this is your chance to think about it, and protest if you think the design team has gone in the wrong direction here. Yes, and on Martin Duke's question here, about whether SCTP already provides the four-tuple robustness that connection IDs do: yes, SCTP already has the V-tags and associations on the port, etc., combined with addressing, so we do have the possibility to run multiple SCTP associations even on the same address pair. So, we had a number of issues at the last meeting, and I just want to report back on what we've done in the draft with the feedback we got then. We had made it mandatory to support pre-padding in the DTLS chunk, just as I talked about before.
We also decided, yeah, let's require support of the full DTLS record size. We have no requirements on supporting any kind of DTLS record size negotiation. And we made no changes to the API around the number of key invocation APIs. After going through it a bit more, what we have in the draft now seems to be sufficient for what's intended. Of course, if you want something which is event-based, triggered, etc., an implementation can always do an implementation-specific one for that. And we have now included wording that this draft does update RFC 5061. 5061 is the ASCONF chunk, which enables you to dynamically add new addresses for new paths in SCTP. And we're basically saying: don't use SCTP-AUTH; we will rely solely on the DTLS chunk for authentication of these messages. This has the implication that you can no longer do this on completely new five-tuples. If it comes in on a new five-tuple, the information necessary to find the association is encrypted. So you will have to do this on a path that already exists, but that way you can announce new addresses before using them, so they can get added to the five-tuple record. Martin Duke?

**Martin Duke:** Yeah, Magnus. When you say "implemented," do you mean there's code that's tracking this, or do you mean you just solved the issue in the draft?

**Magnus Westerlund:** So, for the moment, this is just text specification. Michael maybe can answer whether he has gotten to update his, because I think FreeBSD might be the only one; I don't know. Our Ericsson implementation doesn't support 5061, so we haven't looked into it. Michael?

**Michael Tüxen:** I don't have code for that right now, but will. Linux has an implementation, and I don't know if they also support ASCONF and SCTP authentication, so they might or might not yet.

**Martin Duke:** Okay.
I mean, not a big deal, but the slide title made me think this was code.

**Magnus Westerlund:** Yeah, no, no, no. This is how we implemented it in the draft. So this is focused on what the draft does to address the issue. Okay. Yes. So, this somewhat busy slide tries to go through the whole outcome tree. The initiator of a new association has sent an INIT. Looking at the first line: you receive an INIT with, on the left side, the DTLS key management parameter, the parameter in the INIT that allows us to negotiate the DTLS chunk. What different outcomes can you have? On the left side: okay, the responding side supports the DTLS chunk. Then it looks into what's actually included in this parameter. The server selects a key management method out of the workable ones, and then sends an INIT-ACK back with the parameter and the selected option. The result, assuming here that INIT-COOKIE-ACK works, is an SCTP association established with the DTLS chunk enabled. The error case here is if there's actually no common key management method; we have an error code for that, so you would send an ABORT with the error code "no common DTLS key management method." If the responder doesn't support it, the server sends an INIT-ACK without the parameter. And this is actually why this slide is included, because in the design team we had some discussion around "okay, if this happens, should we have additional errors, etc.?" But we basically just said, okay, we don't need to add anything now; it's up to the client to detect the unprotected session here. If it was expecting a protected session, because it has a policy saying "with this site we really do want to communicate securely" based on the application, then there's something fishy here.
Then the client will send an ABORT chunk here and say "no, we're not going to continue to establish this association." You could go all the way to establishing the association and only then tear it down, but basically it's up to the client to say "no, this doesn't look right." If you allow unprotected, you can go on and send COOKIE-ECHO, etc., and this will result in an unprotected, plain-text association with no DTLS chunk enabled. That's what the yellow box represents. On the right-most side, we have the case when the server receives an INIT without the DTLS key management parameter. Then the server looks: "Do I have a policy on this port, for this application, that actually requires protected associations?" If it allows unprotected, then you go back to what I just talked about; but if it disallows unprotected, then we have an error code saying "DTLS chunk support", and you can send an ABORT chunk directly and terminate the establishment. So this is basically the complete outcome space, based on what's incoming when you receive an INIT on the responding side. So, the current version has had a lot of text changes. We are restructuring and cutting down duplication in the document; we're not fully done. We have also requested IANA to pre-allocate values where we can. So the DTLS chunk is now chunk type 0x41, and we also have a chunk parameter type, which is 0x8006. We have worked on the DTLS key management method considerations to clarify things, and on the API side we have added key management on the abstract side. This is to clarify that you need to set which methods you support before or during establishment. You also get to know, when the association is established, what was actually offered, since the key management method will need to know that.
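The responder-side outcome tree on this slide can be condensed into a small decision function. This is purely illustrative: the function name, argument names, and returned strings are invented for the sketch and do not appear in the draft; it only mirrors the branches Magnus walks through.

```python
def respond_to_init(offered_methods, supported_methods,
                    dtls_chunk_supported, policy_requires_protection):
    """Illustrative sketch of the INIT outcome tree (names hypothetical)."""
    if offered_methods is None:
        # INIT carried no DTLS key management parameter.
        if policy_requires_protection:
            return ("ABORT", "error: DTLS chunk support required")
        return ("INIT-ACK", "unprotected association")
    if not dtls_chunk_supported:
        # Responder ignores the parameter; the client must decide, per its
        # own policy, whether to ABORT or continue unprotected.
        return ("INIT-ACK", "no parameter; client decides")
    common = [m for m in offered_methods if m in supported_methods]
    if not common:
        return ("ABORT", "error: no common DTLS key management method")
    return ("INIT-ACK", f"selected method {common[0]}")
```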
So this is the input data to the key derivation, from the key management method parameter, that you basically need. On the socket API side, there has been some redefining of the data structures. This is to address what's actually acceptable for both FreeBSD and Linux when it comes to the socket API: what's allowed in that socket API in terms of how data is represented, pointer usage, and things like that. On the liaison side: we sent the liaison after the last meeting, in about the same week, and informed RAN3 and SA3 about our work. I just want to remind people that we reported that we were making progress on solution components, and we said that we would send liaisons when a publication request has been made. We haven't received any liaison reply, and that's not expected since we sent it for information. But our next step on the liaison side will be when we send any of the components for publication request; then we need to remember to send a liaison to inform them that we've gotten to that stage. A short report on the key management. We have the working group draft on the key management method, draft-ietf-tsvwg-dtls-chunk-key-management. It has been updated. It still has a number of TBDs; the focus has been on getting the chunk draft in order. It depends on the extended key update in the TLS working group, and a lot is still happening with that document. There will be discussion later today, at 2:00 to 4:00 Shenzhen time, in the TLS working group slot. Ericsson is also looking to update their methods to align with all the changes to the DTLS chunk, and to describe the interaction with the DTLS chunk API. But we see it just as a double-checking of the API, etc. The implementation work is still catching up, but it has made some progress. As far as we know, we have no real technical issues; we are working on editorial improvements.
The goal here is really targeting a working group Last Call prior to Vienna. So this is mostly about having very solid text. We have discussed this a lot, etc. If we have an implementation and a solid draft, we can hopefully have a working group Last Call for this part, for the DTLS chunk, before Vienna. That's what we're targeting. So, any questions?

**Martin Duke:** So Magnus, if I read the slide correctly, the design is basically done and you're just cleaning up the document?

**Magnus Westerlund:** Can you speak into the mic? It's very hard to...

**Martin Duke:** Yes. Um, oh, that's much better. So if I read the slide correctly, the design work is basically done and you're just cleaning up the document at this point?

**Magnus Westerlund:** Yes. That's at least what we expect. It should be verified through the implementation work and interoperability testing, but...

**Martin Duke:** Sure. So this would be a great time to review the draft if anyone is interested in doing that.

**Zaheduzzaman Sarker:** So Magnus, I am expecting a working group Last Call before Vienna, basically, as you wrote in there. Is it achievable?

**Magnus Westerlund:** I think so. I think we need another good big edit; I mean, we need to continue with the editorial pass through the draft, but that's basically it, and I hope that the implementation work will also be ready. If we focus on getting the editorial work done, from a draft perspective I think it will be ready. There are definitely things in it which aren't as good as we want them now, but another editorial pass and it should be ready.

**Zaheduzzaman Sarker:** Okay. Sounds good.

**Martin Duke:** Any other comments or questions from the floor? All right.
The team really spent a lot of emotional effort to reach convergence on this, so I'm proud of them for driving this to a complete solution. We had one fall apart before once we got into the details, and it seems like we've gotten through the details this time without falling apart. So nice work, everyone. Okay. If there are no other comments or questions, we will move on to Mohit, who I presume is remote unless he's hiding very effectively, and he's going to talk about FQ-PIE. I'm going to call up your slideshow, then pass you slide control.

**Mohit P. Tahiliani:** Sure. I hope I'm audible?

**Martin Duke:** Go right ahead.

**Mohit P. Tahiliani:** Perfect. I hope I'm audible?

**Zaheduzzaman Sarker:** Yes.

**Mohit P. Tahiliani:** Can you hear me?

**Zaheduzzaman Sarker:** Yes, Mohit. Go ahead.

**Mohit P. Tahiliani:** Perfect. Thank you. Hi everyone. I'll be giving an update on the internet draft on FlowQueue-PIE, which we call FQ-PIE, draft-ietf-tsvwg-fq-pie, a hybrid packet scheduler and active queue management algorithm. A quick overview of the draft in case you haven't taken a look at it: it basically combines two things, flow queuing and the PIE algorithm, similar to how FQ-CoDel combines FQ and CoDel. FQ-CoDel is already standardized; there's already an RFC, RFC 8290. The functioning of the FQ part in FQ-PIE is pretty much the same as in RFC 8290, and the functioning of the PIE part is pretty much the same as in RFC 8033. However, we have made one modification. RFC 8033 provides two ways by which the PIE algorithm can calculate the queue delay: one uses Little's Law, which is an estimator of the queue delay, and the other is a timestamps approach, pretty much similar to how CoDel does it. We recommend using timestamps to calculate the queue delay. As of now, there are three implementations of FQ-PIE.
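The two queue-delay estimators contrasted here (RFC 8033 allows either) can be sketched side by side. This is a minimal sketch with invented names, not the kernel implementation: Little's Law divides the current backlog by the measured departure rate, while the timestamp approach stamps packets on enqueue and measures sojourn time on dequeue, as CoDel does and as the FQ-PIE draft recommends.

```python
import collections
import time

def littles_law_delay(queue_bytes: int, depart_rate_bps: float) -> float:
    # Little's-Law-style estimate: backlog divided by departure rate
    # (bytes per second), yielding an expected queueing delay in seconds.
    return queue_bytes / depart_rate_bps if depart_rate_bps > 0 else 0.0

class TimestampedQueue:
    """Timestamp approach: stamp on enqueue, measure sojourn on dequeue."""
    def __init__(self):
        self._q = collections.deque()

    def enqueue(self, pkt, now=None):
        self._q.append((now if now is not None else time.monotonic(), pkt))

    def dequeue(self, now=None):
        ts, pkt = self._q.popleft()
        sojourn = (now if now is not None else time.monotonic()) - ts
        # In PIE, this sojourn time would feed the drop-probability update.
        return pkt, sojourn
```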
One of the implementations is in the Linux kernel, and it has been actively supported in multiple Linux distributions, including OpenWrt. We have an implementation in FreeBSD, thanks to Grenville and his team, and there's also an implementation in the ns-3 network simulator. A quick update on the status. We can see that all the other algorithms in this category (CoDel, FQ-CoDel, PIE) already have Linux and FreeBSD implementations, plus a specification. The FQ-PIE internet draft has been adopted; the quick history is that it was adopted at IETF 124, the last IETF, and since then I have updated and published the 00 version of the IETF draft. I got some feedback on the mailing list from those who read the draft, thanks to them, and I have also published the 01 version. Some discussions have already happened on the draft; this is a quick summary. Some of these discussions happened in person during my visits to the IETF; some happened on the mailing list. The first important discussion was from Greg, who asked us whether there was any kind of study to show what impact the queue delay calculations have. For example, if you use Little's Law versus timestamps, does it make any difference? We worked on that and presented the status of that work at the last IETF. Thanks to Greg for that suggestion. After the draft was adopted at the last IETF, Chris spent some time reviewing the draft and also gave some suggestions, so thanks to Chris for that. He suggested some minor changes, which have already been made; the latest version, 01, already includes those changes. There was another interesting discussion about adding L4S support to FQ-PIE, and that discussion is still going on.
But for now, the summary of the discussion is that we believe more insights are needed before we think about enabling L4S in any FQ-based mechanism, not just FQ-PIE. FQ-CoDel already supports L4S; we know that the Linux kernel implementation of FQ-CoDel supports L4S by having a CE threshold parameter. However, an in-depth study would probably help us understand the performance impact, and my team has already started performing this study with FQ-CoDel. Overall, personally, I feel that a separate draft that describes how L4S support can be enabled for FQ-based mechanisms would be useful, because we already have three FQ-based mechanisms right now: FQ-CoDel, which is in RFC 8290; FQ-PIE, which we are discussing right now; and FQ-Cobalt, which is part of the Cake queue discipline. So I'm also working on a separate draft to see if we can have a clear guideline on how L4S can be supported in FQ-based AQMs. And of course this work on FQ-PIE and FQ-CoDel will help us understand how to build that draft. The third recommendation from Chris was about assessing the performance differences between FQ-PIE and FQ-CoDel, and this is something we have been doing. Unfortunately, this IETF we could not participate in the Hackathon, but in the past three to four IETFs we have always participated in the Hackathon and presented results on FQ-PIE and FQ-CoDel. Some of those results have already been published; I will link that publication in the draft once it is available. Meanwhile, in the subsequent slides of today's presentation, I have some results from real experiments that we have done on a Wi-Fi network, comparing how FQ-CoDel and FQ-PIE are working. Moving forward, we are continuing to enable support for FQ-PIE in other third-party tools. We have been working with two tools; one is called go-tc.
It's a traffic control implementation in pure Go, and we have already enabled support for FQ-PIE in it. That's a link to the pull request, and the code has already been merged into the main line. Apart from that, there is another tool called QOSMate, which is a quality-of-service tool for OpenWrt, and just a couple of days back we opened a pull request to enable support for FQ-PIE there, primarily for applications such as gaming as well as non-gaming applications. So that's what we are working on on the implementation front. The following tests have been performed by a company called Quantum Networks. The name is Quantum Networks, but it's not actually anything related to quantum per se; it's just a brand. Thanks to them, they have been performing a lot of experiments with FQ-PIE. On a particular model of Wi-Fi access point called the QQI-W235, they have been running FQ-PIE versus FQ-CoDel tests. It's a small-room AP, not an enterprise-grade AP, and they have been trying to use it when there are 20 to 25 users in a room. Currently, the testing has been done only on the 5 GHz channel. They are still going to perform the tests on the 2.4 GHz channel, because a lot of devices that operate in home or office environments may still be operating on 2.4 GHz, so those tests are still going on. These tests are not performed by us; we have just been helping them enable FQ-PIE, and they have been running the tests. We have been using the LibreQoS bufferbloat tests for looking at it. We can see pretty similar behavior: on the left is what we get with FQ-CoDel, and on the right is what we get with FQ-PIE. More or less we get Grade B or Grade C, which is kind of the acceptable latency as of now in these environments, according to their feedback.
So we continue to work on these tests, and maybe at the next IETF we'll have some more updates from Quantum Networks, which I will present, maybe with 2.4 GHz results. That's it for today's presentation from my side, and I'm happy to take discussion and questions.

**Martin Duke:** Anyone? Queuing enthusiasts? All right. Thank you, Mohit.

**Mohit P. Tahiliani:** Thanks, Martin. Yeah.

**Zaheduzzaman Sarker:** Thanks, Mohit. If you have not logged in to DataTracker for this meeting session, please do that, especially here on-site and in Tokyo, because this gives us an idea of how big our participation is and helps us fix our logistics for the next meeting. We don't have a blue sheet, so this also works as a kind of blue sheet for us. Please sign in to the on-site tool, preferably if you are on-site here in Shenzhen or in Tokyo. Next up is Shueyan from ZTE, who's going to talk about ECN over IPFIX. (Presentation: [Export of ECN information in IPFIX](https://datatracker.ietf.org/meeting/125/materials/slides-125-export-of-ecn-information-in-ipfix-00))

**Shueyan:** Okay. Thanks, chairs. This is Shueyan from ZTE. My presentation is on the draft "Export of ECN information in IPFIX", on behalf of the co-authors. First, a brief introduction to the IPFIX protocol. IPFIX is defined in RFC 7011. The key concepts for IPFIX are, first, the observation point and how the observed flow is defined, which may involve communication between several processes: the metering process, the exporting process, and the collecting process. From the observed flow, you can extract the necessary information to export to the collector, and through the collector's analysis, export the outcome to operators or network administrators. So why IPFIX for L4S? The main consideration is that IPFIX can provide very flexible information elements to support L4S monitoring.
As for the necessity of this draft: the methods introduced in it can provide network visibility and diagnostics, and can help access information in the underlying network. The draft provides uniform information elements, which helps multi-vendor interoperability, and it also fills the gap for L4S ECN monitoring using IPFIX. In this figure, you can see L4S service monitoring using IPFIX. The L4S service is end-to-end, and the traffic is communicated between the L4S client and server. L4S traffic is identified as ECT(1), and classical traffic is identified with ECT(0). The observation point for IPFIX can be anywhere in the network; it can be an interface, a set of interfaces, or a virtual tunnel. The routers in the network use the IPFIX metering process to observe the traffic. For this draft, a flow is defined as the packets with common properties, such as the same IP address and TCP port, and with the ECN field set to ECT(1). The observed traffic information is then sent to the exporting process; this process encapsulates the IPFIX message, meters the traffic records, and exports the information to the collector. This draft defines some IPFIX information elements, shown on the right side. They include the IPv4 header ECN, the IPv6 header ECN, and information elements for statistical data on ECN packets in the network, covering non-ECT, ECT(1), ECT(0), and CE packets; they also include the MPLS header ECN. In the left figure, we can see the ECN field in the IP header being observed. This ECN field uses two bits to indicate whether any congestion has been experienced in the network. The figures below are for the MPLS header. Actually, RFC 5129 defined the EXP field as also usable for ECN.
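The two-bit ECN field being exported here takes four codepoints per RFC 3168: 00 Not-ECT, 01 ECT(1) (which marks L4S traffic), 10 ECT(0), and 11 CE. A sketch of classifying that field and tallying the per-codepoint counters that the draft's statistical information elements would carry (names are illustrative, not the draft's IE names):

```python
ECN_NAMES = {0b00: "Not-ECT", 0b01: "ECT(1)", 0b10: "ECT(0)", 0b11: "CE"}

def ecn_of(tos_byte: int) -> str:
    # ECN is the two low-order bits of the IPv4 TOS / IPv6 Traffic Class
    # octet (RFC 3168); the upper six bits are the DSCP.
    return ECN_NAMES[tos_byte & 0b11]

def count_codepoints(tos_bytes):
    # Per-codepoint packet counters, mirroring the statistical IEs
    # (non-ECT / ECT(1) / ECT(0) / CE) the draft proposes to export.
    counts = {name: 0 for name in ECN_NAMES.values()}
    for b in tos_bytes:
        counts[ecn_of(b)] += 1
    return counts
```

This also illustrates the two-bit versus eight-bit question raised later in the discussion: a DSCP information element exports the whole octet, while the draft's elements carry only the masked two-bit ECN value.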
Actually, it can also be used for the traffic class. But for ECN, it only uses CM or not-CM to indicate whether congestion was experienced in the MPLS tunnel. We also noticed that there's another draft, still an individual draft, which uses MPLS MNA to support ECN code points. We will also consider using that draft, but that needs to wait for the MPLS WG discussion. Besides these information elements, we also find that some control information may need to be exported, such as for IPsec tunnels and L2TP tunnels, as the figure on the left shows. As for TCP, as we all know, it uses the ECN Echo and CWR flags for ECN indication, but that information is already supported in RFC 9565. We have received some very positive feedback from TSVWG and OPSAWG. First from Greg: his question related to IPFIX notification, which may involve communication between the collector, the exporter, and the observation location. This question has been addressed in the current version. We also received comments from Gorry related to a reference correction, and we made the corresponding update. And Johappen raised two questions: how to distinguish L4S traffic from a mixture of packets, and how to avoid overhead when exporting IPFIX information. The second question has been addressed, but the first may need a filtering method, so we will make further changes in a future version. We also received expert comments from Sebastian and Ingemar; their questions related to the CE marking probability and ratio IEs. We made the corresponding updates and incorporated them into the current version. Next step for this draft: it is at a very early stage, and we would like to receive comments and feedback from TSVWG.
And we would like to know whether the information for ECN that we have found is enough, or whether anything is missed. Any feedback or comments are welcome. **Martin Duke:** Thanks. Are you planning to do any performance experiments with this, or are you just focusing on specifying the protocol? **Shueyan:** Yeah, we use IPFIX to export this information. **Martin Duke:** Well yes, but... all right. Got it. Thanks. **Zaheduzzaman Sarker:** So I think you already got some good comments on the list, so please keep us in the loop on whatever you are changing. And let's see. Gorry, go ahead. **Gorry Fairhurst:** Hello. Can you hear me? **Zaheduzzaman Sarker:** Yes. **Gorry Fairhurst:** Good. My ethernet adapter just failed, so I'll not turn on video; I'll just ask. So, thank you for the talk. You are considering exporting just the two ECN bits, not the whole traffic class with the DSCP. Is there a reason for doing just the two bits rather than the whole eight-bit field? **Shueyan:** Yeah, you're right, we only export the two bits of the ECN field. And you are also right that there are information elements defined for the whole byte, the eight bits including the DSCP. Actually, when I presented at OPSWG this Wednesday, the chair also sent us his suggestion: because there is already an information element defined in IANA covering the whole eight-bit field, defining one just to export the two-bit ECN field may be redundant or add cost. From his perspective, it's not the suggested way. But from the co-authors' perspective, because L4S has been deployed very broadly, and we also need information elements to export statistical information about ECN packets, we would like to specify special information elements to export only the ECN field. We would also like to get your comments and suggestions. **Gorry Fairhurst:** Okay.
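For context on this exchange, the split being discussed is six DSCP bits plus two ECN bits sharing one header byte. The sketch below is purely illustrative and not code from the draft; the helper names are invented for the example, with the ECN code points taken from RFC 3168 and ECT(1) as the L4S identifier per RFC 9331.

```python
# Illustrative sketch (not from the draft): splitting the 8-bit IPv4 TOS /
# IPv6 Traffic Class byte into the 6-bit DSCP and the 2-bit ECN field, and
# classifying the four RFC 3168 ECN code points. Names are invented here.

# ECN code points (low two bits of the TOS / Traffic Class byte)
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

ECN_NAMES = {NOT_ECT: "Not-ECT", ECT1: "ECT(1)", ECT0: "ECT(0)", CE: "CE"}

def split_tos(tos: int) -> tuple[int, int]:
    """Return (dscp, ecn) from the 8-bit TOS / Traffic Class byte."""
    return tos >> 2, tos & 0b11

def is_l4s_candidate(tos: int) -> bool:
    """ECT(1) identifies a packet as L4S-capable (RFC 9331)."""
    return split_tos(tos)[1] == ECT1

# Example: DSCP EF (46) combined with ECT(1)
tos = (46 << 2) | ECT1
dscp, ecn = split_tos(tos)
print(dscp, ECN_NAMES[ecn], is_l4s_candidate(tos))  # 46 ECT(1) True
```

This is the trade-off the OPSWG chair raised: an exporter reporting the whole byte carries both fields at once, while a dedicated ECN IE reports only the low two bits.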
I'm looking forward to seeing the next version of the draft and how you use this information. So thanks ever so much for answering my question. **Shueyan:** Okay, thank you. **Jason Livingood:** Hey, Jason Livingood. Thank you very much for the draft. I encourage you to continue to work on it. We at Comcast are very interested to see how it develops. I'll be asking some of our network engineers who focus on IPFIX to take a look and provide feedback. And I can say that there is an operational gap in L4S deployments today: a standardized way to derive ECN data from network flows. So I really encourage it. Otherwise, we're off doing it in sort of bespoke ways with specialist tools and so on. It would be really nice to have a standard way to do it. So thank you. **Shueyan:** Okay, thank you for your comments. **Zaheduzzaman Sarker:** Yeah, from the operational perspective, I think this could be something really useful for—any more questions? I think we're done. Thank you. **Martin Duke:** Okay. As far as we can tell, Jiayu is not present at this time, so Daniel Huang is going to go ahead and present HP-WAN hackathon results. (Presentation: [HP-WAN Hackathon report](https://datatracker.ietf.org/meeting/125/materials/slides-125-tsvwg-hp-wan-hackathon-report-00)) **Daniel Huang:** I'm Daniel Huang from ZTE, and I will take a couple of minutes to share some HP-WAN hackathon results as well as the updates. HP-WAN has been circulating in the IETF community for one and a half years. Thank you very much to Zahed as well as Gorry for their help. And, yeah, here's what HP-WAN is about. HP-WAN means high-performance wide area network. Our chief use case is large-volume data transmission over a network. For these use cases we already have dedicated networks, but what HP-WAN is trying to address is high throughput, low latency, and high availability within the job's completion time, over shared, public networks.
The major problems of the existing mechanisms and solutions are poor convergence speed, unscheduled traffic, long feedback loops, and concurrent multi-flow transmissions. Yeah. Here are the footsteps of HP-WAN in the IETF. We did the first side meeting at IETF 120 and followed up with many discussions, and we had a non-working-group-forming BoF at 121 and a couple of follow-up side meetings. And at the Shenzhen meeting, we had an early, productive hackathon, which provided transport-oriented prototyping to share early implementation results, including the RSVP-based as well as the QUIC-based solutions. And here's the summary of the HP-WAN hackathon and the prototyping results. We tried to set up an end-to-end solution and ran simulations on topologies over public shared networks. We have service scenarios based on the HP-WAN framework and implemented functions such as rate negotiation, admission control, traffic scheduling, and resource reservation, with distributed signaling such as RSVP. What you can see in the second and third figures is before and after the HP-WAN mechanisms for the services: the traditional seesaw effect has largely been flattened out under our hackathon prototypes. And we put minimum and maximum rates in place to avoid congestion as well as to guarantee a specific, designated completion time for the services and jobs. Here are the issues raised for the TSV working group. The first one is that we are trying to employ a host-to-network coordination mechanism. That is just a signaling protocol, so our question is whether or not the signaling protocol could be homed at the TSV working group. Our prototype is based on RSVP, but we do not have any solution preferences right now; we have a couple of working solutions in parallel.
And the second point is the potential impacts of the signaling on existing congestion control algorithms. And the third point is that we're now working on service profiles and YANG models for the transport protocols, which could be related to transport protocols here. Next steps: as I mentioned, we're now working on the service YANG model for fine-grained host-to-network collaboration, and ZTE, China Mobile, and partners are also trying to do some live deployments of HP-WAN services and solutions for inter-regional and even intercontinental HP-WAN data transmission, such as between China and Japan, Japan and South America, and South America and Europe. We're trying to refine the state of the art and the framework draft of HP-WAN. Yeah, thanks. That's all. **Martin Duke:** Thanks, Daniel. I'd like to open the queue for questions. **Daniel King:** Great. Thank you. Daniel King. Thank you very much, Daniel (another Daniel) for presenting. So I just also wanted to highlight that the hackathon was a really useful opportunity for three distinct domains of high-performance computing services to show how you would request, set up, and reserve network characteristics to meet the workload requirements. But what we found was that there was no consistent way to signal the service. So we started fleshing out a new service model, but actually maybe that's not the right word, a workload model. And talking to some folks who work with Kay and Slurm, we were able to generalize requirements for high-performance computing services. So that's the first piece of work we've got, this intent model. Was that a question, Zahed? **Zaheduzzaman Sarker:** No, go ahead. **Daniel King:** And then we can set up a connection across a domain: it may be in a data center, it may be across a sort of disaggregated LLM architecture with multiple data centers.
But what was also missing was how you retrieve information from the network about congestion and congestion notification and sort of service state, because between the orchestrator that takes the workload request and the thing that sets up the network path in the domain, there is no feedback loop. So if something needs to change because of deteriorating network conditions, there is no way to report that. So I think that's actually where the working group can help: to look at some of the congestion notification messaging and how often and how frequently we need to poll for that information. Thanks. **Zaheduzzaman Sarker:** So, yeah, just to understand: what we are looking at here is that the host receives some congestion information, so it has some idea of congestion, and then you'd like to get this congestion information to the network, especially the ingress of a domain, where it can then take the congestion information from the host to reroute or reselect or whatever in the routing plane. **Daniel King:** Right, right. That's basically the part that's missing here. Exactly. Thanks. **Zaheduzzaman Sarker:** Okay. Cool. I mean, I think, yeah, it looks interesting. We have some RSVP stuff going on in TSV as well, but I'd like to hear from Gorry. Gorry, what do you say? **Gorry Fairhurst:** Maybe I get some video. Okay. Um, yeah, thank you for the talk, and thank you for an update after the HP-WAN BoF. That's very, very interesting, and I think it is relevant for TSVWG. However, TSVWG will need to take protocols and specifications as input to see whether it can standardize them. So we need to discuss this more on the list, I guess, and see what is actually the best way to go. It's a fun presentation, it's an interesting topic, and I think if there's implementation and use cases, that would be really interesting to hear more. Thanks ever so much from my side. I like to hear the update.
Are there plans to actually develop some mechanism and try it out? **Daniel Huang:** Sorry, Gorry. Actually, what's new for HP-WAN in TSVWG is that we employed standalone signaling between host and network. Currently our prototype is RSVP-based. **Martin Duke:** Daniel, at the hackathon, how many implementations did you have? Was it just you guys, or did other people bring code in this space? **Daniel Huang:** Actually, we have ZTE as well as China Mobile and another testing institute from China. And we also invited Juniper to join us; as mentioned, we're now working on the service YANG model together with Juniper, so it's the community. **Martin Duke:** Okay. Super. Right. Glad you got some interop. Thank you. Any other questions for Daniel? Okay. Then, yeah, please keep us updated, and we'll see. Thank you. All right. We are way ahead of schedule; we'll get you out of here well early. Thanks to the speakers for staying within their allotted time. And to close it out today, we're going to have Jason Livingood talking about L4S, as he so often does. Let's see if I can get these slides going. This is going to be a quick one. (Presentation: [L4S Update - Livingood](https://datatracker.ietf.org/meeting/125/materials/slides-125-tsvwg-l4s-update-livingood-00)) **Jason Livingood:** All right. So I'm just going to give my regular dual-queue low-latency networking update, which is basically our Comcast deployment update in the US. And it's a single slide, which is this. We now have over 10 million homes, and growing, that are enabled with L4S and NQB. It's like 350-360 million devices when you look at the number of devices in homes; a lot of those are obviously IoT. The largest application usage by volume of traffic that we see right now is cloud gaming. So this is primarily all the Valve Steam games and NVIDIA GeForce NOW. There are some others working on that that we think will enter soon.
And then after that, real-time communications, which is primarily FaceTime and then one other provider that's coming soon. In terms of what's next for us, we're working on deploying it to our commercial customers, which are small to medium-sized businesses; right now we're only deployed to residential users. And going back to the prior presentation about IPFIX: very relevant. We're really focused on improving network operations reporting and internal measurements. Both of those are very routine things as you scale deployment, but we're kind of shifting from the early deployment phase, with lots of bespoke manual reporting, to how to put it into the, quote, operations machine, if you will. And then finally, we're working on an implementation for full duplex DOCSIS. For those of you that don't know, in DOCSIS there's a new standard, full duplex, where you can have symmetric, you know, 2 gig, 3 gig, whatever service. But the way that actually works in the spectrum is that you have time where the device is only talking upstream, pause, only talking downstream, pause, etc. And that creates a little bit of a latency problem in those pauses when it's switching between up and down. So we're working to make sure there's a continuous flow going bidirectionally, so that you don't have any kind of latency difference with those kinds of symmetric services. It's a little bit of spectrum magic that we're working on there. We're excited to bring it to those customers as well, since that's a growing part of our deployment, where you can do symmetric multi-gig services over a hybrid fiber-coaxial plant. So that's it. Happy to take any questions if there are any. Probably not. **Mike Heard:** This is Mike Heard from remote. And I was curious: does this environment where you've been deploying have any classic queues that you have to worry about, or is that simply a non-issue in this case?
**Jason Livingood:** Well, when you say "classic queues," what do you mean? **Mike Heard:** The—either 3168 or non-ECN— **Jason Livingood:** Uh, no, we haven't observed any issues with them there. **Mike Heard:** Thank you. **Mohit P. Tahiliani:** Um, yeah. Thanks, Jason. Thanks for the update. I just had a couple of questions. Number one: so I guess we are doing L4S marking in the devices? **Jason Livingood:** No. Well, in the applications. The devices at bottlenecks, like the cable modem router and the CMTS, and everything else through the rest of our network, are basically honoring the markings by not bleaching or modifying them. And then there are two queues in the cable modem router and in the CMTS. So the network itself is not, um, you know, marking; we're obviously relying on the application to do that. **Mohit P. Tahiliani:** Okay. And what kind of congestion control response do we provide to the L4S marks? Is it Cubic that is modified, or do we have any insights on how the endpoints respond to these L4S marks? **Jason Livingood:** Well, the endpoint being the application, right, or the user device? **Mohit P. Tahiliani:** Uh, yes. **Jason Livingood:** Yeah. So, you know, they're following all the responsive L4S mechanisms, and in some cases we're seeing them switch from saying they're capable to seeing congestion, with CE marks being thrown. So the typical expected adaptation seems to be working correctly. **Mohit P. Tahiliani:** So is it like a Prague congestion control kind of adaptation of the cwnd? **Jason Livingood:** Ah, I see. It depends. Stuart would have to mention what iOS and macOS do. It kind of depends from an app provider standpoint. **Mohit P. Tahiliani:** Right. And non-Apple devices will probably have a different controller, for example. Right. Okay. Thanks, Jason. That was helpful.
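For context on the scalable response Mohit is asking about: a Prague-style sender reacts in proportion to the fraction of CE-marked packets rather than halving on any mark. Below is a minimal sketch in the style of DCTCP (RFC 8257), which the Prague requirements build on; it is not any vendor's actual implementation, the class name is invented, and `g = 1/16` is just the RFC 8257 default gain.

```python
# Illustrative sketch of a scalable (DCTCP/Prague-style) response to CE
# marks, per RFC 8257. Not any vendor's actual implementation; names are
# invented for this example and g = 1/16 is the RFC 8257 default gain.

class ScalableSender:
    def __init__(self, cwnd: float = 100.0, g: float = 1 / 16):
        self.cwnd = cwnd   # congestion window, in packets
        self.alpha = 1.0   # moving estimate of the CE-marked fraction
        self.g = g         # EWMA gain

    def on_round(self, acked: int, ce_marked: int) -> None:
        """Update once per RTT with counts of acked and CE-marked packets."""
        frac = ce_marked / acked if acked else 0.0
        self.alpha = (1 - self.g) * self.alpha + self.g * frac
        if ce_marked:
            # Reduce in proportion to the marked fraction, instead of the
            # halving a classic Reno/Cubic sender applies on loss.
            self.cwnd = max(1.0, self.cwnd * (1 - self.alpha / 2))

s = ScalableSender()
for _ in range(50):                  # sustained ~10% CE marking
    s.on_round(acked=100, ce_marked=10)
print(round(s.alpha, 2))             # → 0.14, decaying toward the 0.1 mark rate
```

The key design point is that light, frequent marking produces small, frequent window reductions, which is what keeps queues shallow in the L4S low-latency queue.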
**Martin Duke:** So I entered the queue with one question; now I have two. So—I'm sorry. Your equipment is marking CE, correct? **Jason Livingood:** It can, yep. **Martin Duke:** But it's not setting ECT1 itself, for example, right? **Jason Livingood:** Yes. **Martin Duke:** Okay. Got it. All right. I just wanted to clear that up. The other question was—and it's okay if you can't answer this—but you said 10 million plus homes. Can you give us an indication of what percentage of your total customer base in the US that is? **Jason Livingood:** Oh, that's, you know, probably more than a third. Um, so, you know, call it a little bit below 30 million homes in total for broadband. And of that, about 70% are capable if they have the right device. So of that, you know, call it 20 million that would be potential, and that's growing. That'll eventually get to the full amount, because it depends upon the virtual CMTS footprint. **Martin Duke:** That's great. So as you know, I have a long-running experiment with you guys. Well, we're not really coordinating, but I'm running a Chrome experiment and I'm waiting for the signal to go above the noise, and, you know, the bigger that percentage gets, the more likely I'm going to get that. **Jason Livingood:** For sure. And I should also note the dependency on devices: to some extent it depends on a DOCSIS 3.1 device that supports L4S. We still have a fair number of old DOCSIS 3.0 modems; those are 10 to 15-year-old devices, many of which are customer-owned. But there's a big push this calendar year to prompt customers to replace those. So I think what we'll start to see, as that natural replacement cycle occurs, is that they'll be replacing them with L4S-capable devices. **Martin Duke:** Awesome. I'm sure I'm going to ask this question again in three months, so we'll cover it then. Go ahead, Stuart. **Stuart Cheshire:** Thank you. Um, there were several questions asked, so I will try to explain as best I can.
So I think everybody knows L4S is a partnership between three parties: it's the sender, the receiver, and the network in the middle. And really to get the benefit, all of those three have to be participating, which is why it takes a while to roll out. It's not as bad as IPv6 where you need it end-to-end on the whole path. As long as other devices pass through the bits without mangling them, which they should—if they don't, that's just a broken network. Maybe I'll come back to that in a moment. But assuming that, the only place that has to know about L4S in the network is the bottleneck. I'm actually a very happy customer of the Comcast service at home, so for me that's the cable modem in the upstream direction. In terms of the current state of Apple products, Apple fully supports the Prague congestion control algorithm in TCP and QUIC. Right now we are doing random AB testing, so everybody in this room running iOS 26, or I think even previous versions, some percentage of the time it will try ECT1, L4S with Prague, and we gather that telemetry. If you want to try it full time, you can go into developer settings on your iPhone and there is a tri-state setting for L4S: enabled, disabled, or system default. System default means toss a coin and do it randomly. But if you really want it full time, you turn on enable and you will get L4S for all TCP applications, all QUIC applications, and FaceTime also respects that setting. So for people who are interested in experimenting and getting packet traces and looking at behavior, you can turn on L4S and guarantee that you're getting it. You can also turn it off and guarantee that you're not getting it, which is useful if you want to do comparisons. I was going to say one other thing... oh, so Martin, in terms of you looking for information, the reason that's not easy is because the benefit of L4S comes in a very specific circumstance. And you could argue that's a rare circumstance. 
I think it's still important even though it's a minority of the time—over all the minutes in the day, the times it matters are kind of small, but when it matters, it matters. And when it matters is when you have multiple flows competing. So if you're playing an online video game on your symmetric gigabit fiber or whatever and it's completely idle, well, idle networks always have low round-trip times—you're just not stressing them. And if you're downloading a movie to put on your iPad to watch on the flight, then it'll saturate the network and the round-trip time will go to hell on a classic network, but who cares, you're only doing one thing. So any time you're only doing one thing at a time, it can do either A or B acceptably. Where it matters is when you're doing both. And for a lot of us, that isn't all the time. But when it does happen, it's super critical. We get complaints on a fairly regular basis at Apple from VPs complaining to the networking team about how our software's terrible and why can't we make it better. And the symptom is they're on a video conference call. And a lot of us at Apple run pre-release versions of iOS—it's part of our testing to get the bugs found before we ship to customers. So the life of any Apple engineer is constantly filing bug reports, and we actually have a button shortcut on the phone to say "file a bug report about this," and it will take a screenshot and run a script called "sysdiagnose" that gathers a whole bunch of log files and compresses them into a tar file that's about half a gigabyte. Now, when I'm uploading half a gigabyte over my Comcast 35-megabit upstream—I know I could get faster, but I've intentionally not upgraded because I'm testing L4S and I want to see how well it works—it takes about two minutes to upload that, which is fine. I'm not staring at the phone screen; I just file the bug report, put the phone down, and it runs in the background. I have no problems.
Some of our VPs, when they do that while they're on a Webex call, the call just goes to hell. Sometimes they lose video, they lose audio; sometimes they just get an error saying "call failed." So they've had to learn to never file a bug report while they're on a video conference call, because it breaks the call. So the fact that it works 99% of the time doesn't really excuse that 1% of the time it totally fails. And it's that 1% that I'm really focused on: making the customer experience good all the time, not just most of the time. And it makes the telemetry difficult, because we have the same situation at Apple: when we're gathering average metrics across all connections, well, 99% of them would have been great anyway. So the significant data kind of gets lost in the ocean of all the stuff that would have been fine. So I hope some of that information was useful for Mohit, who wanted to know the behavior of iOS. **Martin Duke:** Yeah, I actually have two specific questions—sorry, Magnus, I have two specific questions for Stuart, so I'm going to jump the queue. **Stuart Cheshire:** Well, we have time and I'm here. **Martin Duke:** Yeah. So, first of all, thanks for that perspective. I'm running into the same thing: I'm looking at the 99th percentile and hoping to see something, and not seeing it. And I'm screening for the Comcast ASN, so as the Comcast rollout triples, I'm hoping that hits the right side of my curve. If it doesn't, you know, I'll have to decide what to do. I'm not focusing on this now, but I hope to return to it and maybe look at the downstream, which maybe has some advantages that I'm not getting today. My other question is, you say you're doing Prague, and this is a pet peeve that I've articulated before, but when you say Prague—Prague is a diff applied to an underlying congestion control, right? That collapses to something else when there are no ECT marks.
So is that Cubic plus Prague, or is it something—okay, great. Thank you. **Martin Duke:** So go ahead... oh, actually, let's drain the queue, and if we have time at the end, I do have another question. But let's hear from Jason. **Jason Livingood:** And before we get to Magnus, one really quick thing that this made me think of. We're working with one of our partners (it's not FaceTime, but it's a real-time communications client), and we're observing a very interesting convergence of what happens when you change the delay assumptions with L4S but you have a forward error correction algorithm that has a lot of assumptions about how delay works. And we think that what we're hitting up against is sort of a threshold where we can't do any better, because of what FEC is trying to do, or assuming, about what's happening with delay. So we're trying to dive into that a little bit more to see if we can unwind what's happening inside of that apparently very complex and old FEC approach. **Stuart Cheshire:** Actually, Jason, I'll say plus one to that. I've been working with the FaceTime team for over 15 years. The FaceTime algorithms have been tuned and tweaked; they gather telemetry and then they tune them some more, and it's this iterative process over a long period of time. And they've been tuned for the typical conditions that most of our users have seen for the last 15 years. You throw FaceTime onto an ultra-low-latency, low-loss, reliable network, and it doesn't really know what to do with it, because it's never seen one of those before. So you don't magically get the benefit of L4S if the application has not actually been designed to take advantage of it. Now, if you're using TCP or QUIC, you do get it for free. But if you're building your own protocol, then unfortunately it's on you to revise your protocol to take advantage of it.
**Zaheduzzaman Sarker:** Yeah, I think this is a really good discussion, and it's music to my ears. That's good. And I do believe CCWG has now started to work on media rate adaptation. Maybe this is something you guys can bring there and show them, because the FEC definitely has an assumption that perhaps needs to be fixed if you have really low latency. **Martin Duke:** Poor Magnus has been waiting in the queue forever. So let's give him a chance to talk, and then we can resume discussion. Go ahead, Magnus. **Magnus Westerlund:** Yeah, it's fine, it's fine. Jason, in regards to your traffic here: are you seeing growth in actual traffic usage on a per-user basis, so to say, rather than just in total volume because you increase the number of homes? Are you actually seeing more L4S traffic? **Jason Livingood:** Yes. Yep. And we're also starting to see, because I watch some of the volumes by peer and interconnected network with my little bespoke non-IPFIX reports, occasional little tiny spikes of volume where I think people are doing experiments from an application developer standpoint too. So yeah. **Magnus Westerlund:** Okay. That's good. Thank you. **Stuart Cheshire:** Yes, actually, I remembered one thing I forgot to say. It was sort of implied by what I was saying about the video conferencing problem: because I have a Comcast L4S cable modem, I take great joy in uploading my sysdiagnose bug report while I'm on a video conference, and it makes no difference at all. The other thing I wanted to ask is a request for help. It's a totally different topic, but related to L4S and ECN and ECN bleaching. I sent an email a couple of days ago on the attendees list. A couple of people have talked to me sympathetically, but I actually haven't got any direct contacts. So I'm just going to ask it here.
We had some Apple testers who found throughput degradation because the network was setting CE on every packet. This one happened to be China Unicom, but that may just be where they observed it happening. And walking around at the IETF, I saw this meeting is sponsored by China Unicom. So I thought hopefully I can have a conversation, because in these situations, so often, some testers at Apple find a problem, and they talk to the carrier relations team, and they talk to their partners at China Unicom, who are sort of in the business part of the company, not the engineering part of the company. And they said, "Well, we can't fix it because the Chinese government runs the backbone." So, end of discussion. And my view is: well, there are human beings running this network. So we just need to find the right human beings to talk to and say, "Do you realize your network was doing this? I'm sure it's not intentional." You don't want to be standing in the way of progress. We see this as the next chapter in internet growth: we went from dial-up modem to broadband and the throughput went through the roof, and we're kind of running out of steam in terms of just more and more bandwidth. The next frontier is consistent low delay and low loss. And I'm very confident that if we talk to the right engineers, they will say, "Yes, we're on board. We want to do this." So my challenge right now is to figure out the right engineer. If anybody in the room happens to know the right person, that would be great. So that's my request for help. **Zaheduzzaman Sarker:** What happened, Martin? Do you know someone? **Martin Duke:** No, no, no. I just thought we were going to have a conversation, but I guess that's over. All right. Well, thank you, Jason. That turned out to be quite a fruitful discussion. And we are well ahead of schedule. Thank you all for attending, especially those of you who made the trek out to China from wherever you came from.
And we will—there are two documents that are in or close to WGLC. We've got the—which is the one that's in there now? I'm drawing a blank. draft-ietf-tsvwg-l4sops. draft-ietf-tsvwg-l4sops is in—yes, Jonathan, go ahead. **Jonathan Lennox:** Yeah, I just wanted to call attention to two drafts we discussed in AVTCORE that I think are interesting to this group. I think we're still discussing with Gorry which of us will handle them, but it's probably going to be us. They're both WebRTC optimizations: one is to embed DTLS inside STUN packets, and the other is to do the first phase of SCTP inside the SDP, if you're doing an SDP offer/answer to set up an SCTP connection, so as to save a round trip on the SCTP setup. So obviously the SCTP knowledge we're going to need to get from here; the SDP knowledge, we're probably the best repository for that. But I just wanted to call attention to those, especially for people who are knowledgeable about those areas. **Martin Duke:** All right. Thank you, Jonathan. In addition to what I was saying, we have two documents in or close to Last Call. One was the draft-ietf-tsvwg-l4sops document, and we're going to have SCTP DTLS—draft-ietf-tsvwg-sctp-dtls-chunk—coming quite soon. Gorry, do you have AD wisdom for us? **Gorry Fairhurst:** I was just going to say that we should probably discuss those two AVTCORE documents on the mailing list, and we might expect a short presentation at TSVWG next time to make sure we have the right people in the loop. And we will find out which working group should take them after that discussion. **Zaheduzzaman Sarker:** Okay.
I think I agree with that: TSVWG should be in the loop for the AVTCORE docs, but looking at them, and with my background in AVTCORE, AVTCORE is the right place to do it. But anyway, you should come here and do a presentation so that we can have a more official kind of statement on that. **Martin Duke:** And we will see you at that presentation in Vienna. Have a good weekend and a good flight to wherever you came from. See you in Vienna. Bye.