Session Date/Time: 19 Mar 2026 06:00
János Farkas: Okay, it's the top of the hour. I think we should get started. Welcome, everyone, to this session of the DetNet Working Group at IETF 125 in Shenzhen and remote. I chair this group together with Lou Berger; I'm János Farkas, and we have our secretary, Eve Schooler, at this meeting as well. We would like to thank Carlos Bernardos and G. Dong for helping run this session on-site. You can find the meeting material at the usual place.
And first, I would like to remind you of the IETF Note Well. This is a reminder of some of the IETF policies; for the details, please check the links and RFCs here. Most of all, by participating in the IETF, you agree to follow the IETF policies and procedures, and participants are expected to behave in a professional manner and to be courteous to colleagues. I would like to remind you that all contributions, verbal and written, become part of our permanent records, and this session is being recorded as well. If any of these things are not okay with you, then please cease participation. Again, please follow the conduct guidelines: respect colleagues and behave in a courteous manner.
And for meeting participation, the logistics: on-site, please join the on-site tool; this is important for the virtual blue sheets. Remote participants, as usual, please stay muted when you are not speaking. And please join the note-taking, which is collaborative; you can find the link on the slide, in the chat, and in the meeting materials. Other logistics are at the usual place.
I would like to highlight that we have a very tight agenda, so the times listed on the agenda include discussion time. We have a few recently adopted documents, and discussion of those is limited to the issues we should focus on. We will also see a few other documents during this session. For some of these contributions, we plan to run polls with two questions. First: do you think it is interesting for the working group to start working on the topic? Second: is the current document a suitable foundation to kick off the working group's work? A reminder: for working group documents, authors and presenters, please focus on the changes and issues since the last meeting. For non-working-group documents, please focus on what you propose your contribution should address.
Working group status: the good news is that we have one new RFC, the DetNet controller plane. Many thanks to all the contributors. Two more are in the queue: the RAW technologies and RAW architecture documents are coming up pretty soon, so stay tuned. We don't have anything in working group last call, but we have a bunch of recently adopted queuing solutions. As we asked in email as well, please capture the issues raised during the adoption calls; there are some outstanding issues and work to be done on these recently adopted documents.
We have a couple of documents that are not on the agenda. I would like to mention the one at the bottom of the list, which has had no update for a year now; we are planning to move it to a dead state if it is not picked up. It is always possible to pick it up again.
A few words on liaisons: we have received one for-information liaison from ITU-T Study Group 13 on the work conducted there on deterministic-networking-related items, including recent publications and results. As one of our queuing solutions is based on recommendations published by ITU-T Study Group 13, you will see more details during the presentation. We expect to send a liaison letter from our side to inform them of the status of our work in progress.
And I would like to remind you of the IPR policies as well; please follow them. We ask about IPR during adoption calls and when moving to working group last call. The mailing list is the main forum to progress our work, so please use it: consensus is determined on the mailing list for all subjects. We have good technical discussions on the list, so please keep that up, especially for the newly adopted documents, for which this is just the start of the work. We can have virtual interim meetings if and as needed, or we can schedule informal meetings; please let us know if you see the need and we will set them up. And as a reminder, we keep asking about document status, so please provide it; we have not received status updates for some of the documents despite the request.
Anything to add from your side, Lou?
Lou Berger: Just we have Jinu in queue.
János Farkas: Please, Jinu.
Jinu Jung: Hello, chairs. In your presentation, the stateless fair queuing document is listed as an adopted draft, but I'm not sure about that, so please clarify. Thank you.
János Farkas: You mean the first one here on the list, right?
Jinu Jung: Maybe the last one, draft-joung-detnet-stateless-fair-queuing.
János Farkas: Yeah. The situation is that it is adopted, but we are trying to clarify the relationship with ITU-T, to be completely clear on what this draft adds on top of the ITU-T recommendation. Once that is clarified, we will ask you to rename the document to draft-ietf-detnet as usual, taking the next step to make it a working group document.
Jinu Jung: Okay. Thank you very much.
János Farkas: Okay. No further questions, comments. Let's move on to the first presentation, which is Jinu with taxonomy.
(Brief silence as presentation starts)
Jinu Jung: Hello, everyone. I'm Jinu Jung, presenting the Data Plane Enhancement Taxonomy draft. Because we don't have much time, I will just read through the presentation material, so if you have a question, please raise it after the brief summary of the slides.
This is the overview of the draft. Its purpose is to facilitate understanding of the data plane enhancement solutions. It defines a few criteria: one based on performance and four based on functional aspects. It also specifies two reference topologies, and seven categories into which the adopted solutions fall.
The change in version 5 is the addition of Section 8, titled "Considerations for interoperability between solutions". So, what is interoperability? It is defined between technology domains (TDs): how well solutions can operate together without special treatment. A TD is defined as a contiguous segment of a network where all constituent nodes share a common data plane solution.
So how can we measure how well these solutions interoperate with each other? The authors have come up with two metrics. The first is called Gateway Complexity (GC): how difficult and complex it is to combine two different TDs and make them work as intended. The second is the Performance Preservation Level (PPL): how easily the requested end-to-end service level can be met throughout the end-to-end path across multiple TDs.
These two metrics are introduced in the draft, and there is a trade-off between them: if the gateway complexity is high, the PPL is very good, and vice versa. Gateway complexity involves both the control plane and the data plane. In the control plane, for example, end-to-end admission control is a serious issue, as is cross-domain network configuration, for example slot length negotiation. The slot length must be long enough to accommodate the maximum possible traffic generated during that slot. This may be obvious to some of you, but maybe not to others; deciding the slot length is not a trivial issue, because longer slots lead to more traffic. If you are interested, you can refer to reference 1, specified at the end of the slide.
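Why "longer slots lead to more traffic" makes slot sizing non-trivial can be illustrated with a minimal leaky-bucket sketch (the arrival model and all names here are illustrative assumptions, not taken from the draft or the talk): if arrivals in any interval T are bounded by burst + rate × T and the link drains at the line rate, the slot length is the fixed point of that inequality.

```python
def min_slot_length(burst_bits: float, rate_bps: float, capacity_bps: float) -> float:
    """Smallest slot length T (seconds) such that all traffic that can
    arrive during one slot -- at most burst + rate * T bits under a
    leaky-bucket arrival envelope -- drains within the slot:

        (burst + rate * T) / capacity <= T   =>   T >= burst / (capacity - rate)
    """
    if rate_bps >= capacity_bps:
        raise ValueError("aggregate rate must stay below link capacity")
    return burst_bits / (capacity_bps - rate_bps)

# Example: 1 Mbit aggregate burst, 400 Mb/s aggregate rate, 1 Gb/s link
# gives a minimum slot of roughly 1.67 ms; a longer slot also admits
# more rate traffic, which is why the constraint is a fixed point.
min_slot = min_slot_length(1e6, 400e6, 1e9)
```

This is only a toy model; the reference cited on the slide treats the negotiation in full.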
The other part is the data plane. The metadata involved in a solution has to be translated, inserted, and maybe deleted if possible. Another issue is flow reshaping, and a third is flow aggregation and de-aggregation. Let's consider flow reshaping a little further. Flow reshaping can be critical for interoperability: if a flow is reshaped or damped perfectly, then a TD can operate as if it were an isolated system. However, flow reshaping may require per-flow state maintenance, which can be very complex in a large-scale network. And packet damping, as well as reshaping, may be impossible when multiple packets or multiple flows contend for the same egress interface. So the basic idea is that reshaping and damping are not themselves the scheduling solution; they are subsidiary or complementary to the scheduling itself.
Another example is the interleaved regulator (IR) defined in the TSN standards. It reduces complexity by having only one queue rather than per-flow queues, but it is not free in terms of the latency bound: the delay bound of a FIFO system cascaded with an IR remains the FIFO system's maximum bound, not each individual flow's bound. You have to keep that in mind.
So, as you can see, interoperating two TDs can be complicated in many ways. The other metric is the PPL, the Performance Preservation Level. Solutions of certain categories can be beneficial here; solutions of other categories are not. It can be analyzed in a very technical way, but how to measure PPL across domains is still under study. As I said at the beginning of the talk, balancing GC against PPL can be a design choice; this, too, is still under study.
This is the example interoperability-level table the authors have made so far. On the X-axis and Y-axis are the taxonomy categories, of which there are seven for now. So we can have 7x7 combinations, or maybe 6x6 if same-category pairs are neglected; that is about 36 possible combinations to evaluate. This is for further study, and if you have any ideas or suggestions, please let us know.
The future plan is to elaborate Section 8, which I think is the last part of the draft that we still have to elaborate. After Section 8 stabilizes, I think the document is almost done. Thank you. If you have any questions, please ask. Okay, then let's move to the next slides.
János Farkas: Yeah.
Lou Berger: Since we lost one presentation, you actually have nine.
Jinu Jung: Okay. I'll move to the second deck, Latency Guarantee with Stateless Fair Queuing, so-called CS-Core. It is now at version 8. Overview: what happened after IETF 124? There were three revisions: 6, 7, and 8. In revision 6, some sections and subsections were added; the most prominent ones are 6.3.2, 6.3.3, and 7, which cover the header format, admission control, and the approximation of CS-Core. I will elaborate on each of them one by one. One more thing I have to say about the revisions: in revisions 7 and 8, I tried to minimize the parts that duplicate the ITU-T standards. We added relevant references and a section that describes the relationship to those ITU-T standards. One important thing about revision 8 is that it is a complete, self-contained document: it is readable by itself, so if you read revision 8, you understand the whole idea. Okay, that's one thing I have to mention at this point.
From now on, I'll speed up a little bit. This is the header format defined in subsection 6.3.2; the first, IPv6-based header format is given here. I will not go into the details, but if you take a look you will understand. It is recommended to use the Hop-by-Hop Options header, which is designed to be examined and processed by all transit nodes. In Figure 1 there are two exemplary metadata fields: the first is $L/r$ and the second is the finish time ($FT$). This one is fairly self-explanatory.
The second format is maybe less so; it uses MPLS post-stack MNA (MPLS Network Action). There are two possibilities here: in-stack MNA, which sits within the label stack, and post-stack MNA, which sits outside it. In this draft we recommend a post-stack MNA sub-stack, but in-stack MNA is not excluded, so both can be used; what is shown here is currently suggested as an example. I will not go into the details here either. The header fields look complicated, but once you study them they are not that difficult to understand. Okay.
That was subsection 6.3.2. Subsection 6.3.3 covers the admission control process; in short, it suggests two different admission control processes. The first is the default process, which is quite straightforward and based on the well-known RSBF: the entrance node or the source requests the service rate explicitly, and if that is acceptable, the flow is admitted. The second is the alternative process, which is a little more involved. The source or entrance node specifies two different rates: the desired rate, which is the higher one, and the minimum acceptable rate, the minimum service rate the source can tolerate. During the admission process, the core nodes determine the path available rate via a minimum operation, and finally the destination or egress node decides the optimal service rate, which lies between the desired and acceptable rates. Using the alternative process can be beneficial if the flow can thereby be allocated a better, network-acceptable rate, so lower end-to-end latency can be expected.
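The alternative process can be sketched in a few lines (a minimal illustration; the function and variable names are assumptions, and the draft's exact signaling is in subsection 6.3.3):

```python
def alternative_admission(desired_rate, min_acceptable_rate, node_available_rates):
    """Two-rate admission sketch: each core node reports the rate it can
    still offer, and the path available rate is the minimum over the
    path.  The egress then picks the service rate between the two
    requested rates, or rejects the flow if even the minimum acceptable
    rate cannot be met."""
    path_rate = min(node_available_rates)   # minimum operation over core nodes
    if path_rate < min_acceptable_rate:
        return None                          # flow rejected
    return min(desired_rate, path_rate)      # admitted service rate

# A flow asking for 100 Mb/s but tolerating 40 Mb/s, over a path whose
# tightest node offers 80 Mb/s, is admitted at 80 Mb/s.
rate = alternative_admission(100e6, 40e6, [120e6, 80e6, 90e6])
```

The benefit over the default process is visible here: a single-rate request for 100 Mb/s would simply be rejected, while the two-rate request still gets 80 Mb/s.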
This is Section 7, approximate CS-Core. It is somewhat technical, but I will try to explain it as briefly as possible. In the figure on the right, there are multiple queues under a strict priority (SP) scheduler, which is used worldwide: every router uses it, so it is very simple and deployable. The one difference between a traditional strict priority scheduler and approximate CS-Core is that the priorities rotate periodically, at fixed times. So if Q1 currently has the highest priority, Qn has the lowest; after a rotation, Q2 has the highest priority and Q1 the lowest. That rule doesn't change until the end. An arriving packet carries a finish time, and according to that finish time the packet is assigned to a certain queue; the finish time decides the allocated queue. So it is a rotating-priority scheduler with finish-time-based allocation. By doing this, we can guarantee the end-to-end latency bound specified as Theorem 3, with the inequality given there.
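The rotating-priority idea can be sketched roughly as follows (the interval-to-queue mapping, the class shape, and all names are illustrative assumptions, not the draft's exact mechanism):

```python
class RotatingPriorityScheduler:
    """Toy model of an approximate CS-Core-style scheduler: each of n
    queues covers a time interval of width delta, a packet is enqueued
    according to its finish time, and the queue covering the current
    interval has the highest priority.  Priorities rotate every delta."""

    def __init__(self, n_queues: int, delta: float):
        self.n = n_queues
        self.delta = delta
        self.queues = [[] for _ in range(n_queues)]

    def enqueue(self, packet, finish_time: float):
        # The finish time decides the allocated queue.
        self.queues[int(finish_time // self.delta) % self.n].append(packet)

    def dequeue(self, now: float):
        # The queue covering `now` has top priority; scan the rest in
        # rotated (strict-priority) order.
        head = int(now // self.delta) % self.n
        for i in range(self.n):
            q = self.queues[(head + i) % self.n]
            if q:
                return q.pop(0)
        return None
```

The point of the sketch is that each queue is plain FIFO and the per-packet work is a strict-priority scan, which is why the talk stresses deployability; the latency bound itself comes from Theorem 3 in the draft, not from this toy.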
János Farkas: Sorry, do you want to take... because there are people in the queue.
Lou Berger: No, I'm going to interrupt, actually. I'm sorry to interrupt, and I'm sorry to be rude, but we only have a couple of minutes left, and we need to go back to the first slide and make sure we cover the topic we asked to be covered in this slot. On the first slide, or second slide, you said you've made changes to address the ITU relationship. We mentioned earlier in the session that we need this clarified in order to complete the adoption process and get the document published as a working group document. There was some discussion on the list; at least my reading of it is that another version is needed before we can publish. Can you tell us your plan for that?
Jinu Jung: Okay, yeah. The plan is on the very last page; I'll skip ahead to this slide. The future plan is that any duplicative parts will be removed, and only a couple of sentences or a paragraph on the philosophy or principle of stateless fair queuing will remain, referring to the existing ITU-T standards, which describe the framework and requirements. There are still duplicative parts in our draft, and some people want them removed completely, so I will try to follow that suggestion. That's the revision plan; one more revision is necessary, I think.
Lou Berger: Thank you. We'll work with you online to make sure that's acceptable, and then we'll get the draft published as draft-ietf. That really needs to be the focus at this point: getting that draft into full working group status. Thank you very much for the presentation. G, if you don't mind switching to the next one, because we're out of time. And while we're doing that, Mike, you can ask your question; Jinu, you'll have to respond on the list, please, because we're really out of time. Okay. Thank you.
Mike McBride: Thanks, Lou. And thanks, Jinu. Thank you for proposing this update and removing the duplicative parts of the draft; that's good. I do still have a concern that the header metadata structure proposed here conflicts with the existing ITU standards. We can follow up offline, but if the ITU standards are inefficient or incorrect, they should be updated; we should not create an incompatible standard.
Jinu Jung: Yeah. Thank you for the suggestion. What I'm planning to do on the ITU-T side is to update the Recommendation and try to harmonize it with the IETF draft, if that is acceptable to you and Scott.
Lou Berger: Let's take that discussion to the list; we'll need to have confirmation there. Quan, I'm sorry, you'll need to move to the next presentation unless Jinu wants to take more of his time, because he's already eating into his time for this presentation. And Jinu, I really ask you to stick to the issues raised and the plan to resolve them. Thank you.
Jinu Jung: Okay. Thank you very much for the suggestion.
(Short silence)
Lou Berger: Jinu, I think you're... oh, we were... I apologize to Yeon-cheol. We took some of his time here. Who's presenting? I know you're an author on this one.
Jinu Jung: Yeah. Yeon-cheol is presenting for the N-Score.
Lou Berger: Yeah. So sorry about that. We actually ate into his time. We'll give you a little time back, but sorry about that.
Yeon-cheol Ryu: Hi, everyone. My name is Yeon-cheol Ryu from ETRI. This time I am presenting On-time Forwarding with Non-Work Conserving Stateless Core Fair Queuing, called N-Score. This draft was adopted as a working group draft before this meeting, and it has been updated to version 1 to address the comments made during the adoption call. The goal of this draft is to provide bounded end-to-end latency and jitter; it ensures packets stay within the application-specified latency range through priority rate allocation. In the taxonomy, this solution falls in the flow-level, rate-based, left-bounded category.
The version 1 update mainly addressed errata and reference updates based on comments during the adoption call; thank you for your comments, Andrea Francini and Zhongfeng Du. First, editorial errors were fixed. Second, a description was clarified: the previous version simply said that packets are stored in ascending order of the eligible time and finish time, which led to misunderstanding. In this version I clarified the description: packets are first pushed into a temporary queue in ascending order of the eligible time (ET), and upon reaching their eligible time, packets are transferred to the service queue and pushed in ascending order of the finish time (FT). Overall, packets are serviced after their eligible times, in order of their finish times; this makes it a non-work-conserving scheduler. Third, references to CS-Core and the taxonomy were added.
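The two-stage ordering described above can be sketched as follows (a simplified model with unit service time per packet; the names and the tick-based clock are illustrative assumptions, not the draft's pseudocode):

```python
import heapq

def nscore_service_order(packets):
    """Serve (eligible_time, finish_time, name) packets N-Score style:
    packets wait in a temporary queue ordered by eligible time (ET);
    once their ET has passed they move into the service queue ordered
    by finish time (FT) and are served smallest-FT-first.  Holding
    packets until their ET is what makes this non-work-conserving."""
    temp = sorted(packets)                  # temporary queue, ascending ET
    service, order, clock, i = [], [], 0.0, 0
    while i < len(temp) or service:
        # Transfer every packet whose eligible time has arrived.
        while i < len(temp) and temp[i][0] <= clock:
            et, ft, name = temp[i]
            heapq.heappush(service, (ft, name))
            i += 1
        if service:
            order.append(heapq.heappop(service)[1])  # smallest FT first
            clock += 1.0                             # one packet per tick
        else:
            clock = temp[i][0]                       # idle until next ET
    return order
```

For example, a packet with the smallest finish time is still held until its eligible time passes, so the link may go idle even while it is waiting; that idle gap is the non-work-conserving behavior.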
Currently there are no issues remaining on this draft. The N-Score and CS-Core solutions are planned to be implemented on FPGA; we have already completed the software design and are currently coding the implementation of both the N-Score and CS-Core functions on the FPGA. Thank you.
Shaofu Peng: Hi, thank you. I'm Shaofu Peng from ZTE. My question is about the N-Score implementation: when a packet arrives at a downstream node, you have to damp the packet, holding it for some time to remove the jitter. So I want to know the implementation or design details of this damping buffer. Is it organized by class, for example, or is it stateless, or something else? Maybe you can give some clarification on the design details in the next version. Thank you.
Yeon-cheol Ryu: Okay.
Jinu Jung: Thank you, Shaofu. As a co-author, if I can answer briefly: N-Score is similar to CS-Core in that it has no per-flow state maintenance mechanism, so it is scalable. Also like CS-Core, it calculates the finish time once, at the entrance node at the very beginning of the flow, and the subsequent nodes, the so-called core nodes, do not need to maintain any per-flow state. The only difference between CS-Core and N-Score is that N-Score is non-work-conserving, or left-time bounded. Okay. Thank you.
(Short silence)
Yeon-cheol Ryu: Next is On-time Forwarding with a Push-In First-Out (PIFO) queue. This was also adopted by the working group before this meeting, and it was also updated to version 1 to address the comments during the adoption call. The goal of this draft is to guarantee application-specified minimum and maximum end-to-end latency; it schedules packets to meet the specified latency targets. In the taxonomy draft, this solution falls in the flow-level, non-periodic bounded category.
The version 1 update mainly addressed errata and reference updates based on comments; thank you for your comments, Zhongfeng Du, again. First, editorial errors were fixed. Second, a figure description was clarified: the previous version described it only briefly, so this version adds more detail. Third, a reference to the taxonomy was added.
This draft has an open issue from Shaofu Peng; thank you for your comment. The issue is: how do you guarantee packet scheduling before the maximum departure time, and can you please add a mathematical formula and further illustration based on the reference topology in a future version? Regarding this issue, as mentioned in the draft, the maximum departure time is determined by the node delay upper bound from the arrival time, and admission control verifies that the difference between the node delay upper and lower bounds provides sufficient time to process all packets. So in a future version, I plan to add a mathematical formula to explain the admission control part in more detail. Thank you.
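One way to read that admission check is as a simple budget comparison (this model and all names are assumptions for illustration only; the actual formula is what the presenter plans to add in the next version):

```python
def window_admission_ok(queued_packet_bits, capacity_bps,
                        delay_lower_s, delay_upper_s):
    """Admit only if the gap between the node's delay upper and lower
    bounds leaves enough time to transmit every packet that can be
    queued at the node within that window."""
    transmit_time_s = sum(queued_packet_bits) / capacity_bps
    return transmit_time_s <= (delay_upper_s - delay_lower_s)

# 100 packets of 12 kbit on a 1 Gb/s link need 1.2 ms of transmission
# time; a 1.5 ms gap between the bounds is sufficient, a 1.0 ms gap is not.
ok = window_admission_ok([12_000] * 100, 1e9, 0.0005, 0.002)
```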
Lou Berger: Any questions? Okay, we'll move on to the next. I think it's Peng or Shaofu, sorry.
Shaofu Peng: I'm Shaofu Peng from ZTE. This is about the Deadline Based Deterministic Forwarding solutions. Thank you. This is just a really brief overview of the status updates: the draft was just adopted by the working group as the 00 version.
Here are the open issues from the adoption call. The first is to add definitions of abbreviations such as EDF (earliest deadline first); the authors plan to resolve this as suggested. Another is to clarify the solution scope: does it cover options 1 to 4, or just options 3 and 4? Options 1 and 2 are stateful and options 3 and 4 are stateless, so the authors have decided to delete options 1 and 2 to maintain the scalability of the solution. The next step is to address the issues in the next version and focus on the optimal solutions. Any questions or comments on this?
Lou Berger: Thank you for the update and sticking to the request. Appreciate it.
Shaofu Peng: Okay. Thank you. If there are no questions, we'll move on. Okay, sure. This is about the Timeslot Queueing and Forwarding (TQF) Mechanism. This one was also just adopted by the working group as the 00 version, and here are some open issues from the adoption call. First, add an "Operational considerations" section describing how to apply the TQF mechanism in the overall network. Second, move the "Evaluation" section into the appendix. Third, polish the text and fix minor errors. For these three points, the authors have decided to revise the document according to the suggestions. There was also a suggestion from the chairs: there are many similarities between TQF and TCQF, and it would be better to merge them. This has not been addressed, because the authors of the two drafts have not found the right way to merge them; it may need more discussion. Another open question concerns the end-to-end delay bound of this solution: is it the sum of each offset plus the slot length, or the SPL of it? The authors confirmed that the formula is the former: for each flow there is a pre-calculation, and at each node there is a fixed-delay offset, so the delay bound is the former. The authors have replied on the mailing list. The next step for this draft is to try to merge the two drafts, but that needs more discussion. I see János in the queue.
János Farkas: Yeah. Thank you for taking this up, bringing the commonalities up on the list, and taking the initiative. I hope the discussion continues, because I think it would be very beneficial for the working group if the two solutions could be merged. My take from the discussion on the list was that one of them is a kind of subcase of the other. But I see a queue building up; I think Jinu is next and then Toerless. Please use the queuing tool as well. Jinu, please.
Jinu Jung: Yeah. Could you go back to page 3? Yeah. I'm the commenter on the last bullet, about the end-to-end delay bound. Although we have exchanged some emails, we are still struggling to determine which formula is the right one; that's one thing I have to mention. Another thing: if the first expression is correct, then I think TQF and TCQF are still completely different from each other. One is flow-based, the other is class-based, and class-based is scalable. But if you insist that TQF is flow-based, I'm not so sure it is scalable, because for scalability we do not usually schedule individual flows into each slot at each hop; I think that is too complex. That's my suggestion. Thank you.
Shaofu Peng: Thank you, Jinu, for the question. Yes, I think it is reasonable to see TQF as looking like a class-based mechanism. However, for each flow we can allocate a specific timeslot stack to schedule the timeslots for that flow, so each flow is well protected by its assigned slot queue; in that sense it is a flow-based mechanism. At the same time, for multiple flows we can assign a dedicated slot on each node, meaning multiple flows can share the same slot stack; in this sense it may also look like a class-based mechanism. But I think the main feature is flow-based. Thank you.
János Farkas: Thank you. I think this is an important point in which we should continue on the list. And I would like to recognize Toerless, please. Toerless, go ahead.
Toerless Eckert: Yeah. I've already tried to explain this a few times, and that's why I've written the Flow Interleaved draft, which is also referenced from TCQF and from gLBF. In TCQF, the number of cycles and the duration of those cycles are limited to only what is necessary to overcome a link's latency, so that we can deal with the variation in link latency independent of how many flows we want to multiplex. That keeps it very simple: we have a high-speed hardware implementation that is independent of the total number of flows. It does not solve the problem that if you have a million flows, each sending just one packet per second, or say per hundred milliseconds, we want to interleave them so that they are nicely stretched out over the hundred milliseconds. So what we do is have this long set of cycles, so to speak, for the flow interleaving at the ingress of the network; that's what flow interleaving does. We don't do it hop-by-hop; if you need flow interleaving, you put it at the edge of the network as an ingress function, like others such as aggregation. In TQF, you effectively have this function on every hop. That's what makes it per-flow: for every flow you are doing that work per hop. That's why I think they're fundamentally different, and why, to achieve the same thing TQF does flow-based, we need two layers: a very simple TCQF layer throughout the network, with the flow interleaving happening at the edge.
Shaofu Peng: Yes, we have had some discussion on the list to distinguish these two mechanisms. I think TCQF has a fixed timeslot offset, so I think it has scalability issues: it can support only a limited number of flows, because all these flows want to be sent within, for example, one slot offset, but when there are many flows and the link speed is limited, some flows cannot be sent as soon as possible. I think that is the difference, and I do think TCQF is a special case of TQF. So maybe we can add some clarification in the next version to illustrate how this special case works in TQF. Thank you.
Shaofu Peng: Okay. Well, we may need more... okay, please.
Yi Zhao: Yi Zhao from Huawei. Personally I would prefer to keep them as separate documents. There is no free lunch: the advantage of TCQF is simplicity, including the configuration. If we pursue a one-size-fits-all solution and then want it to degrade to some kind of simple solution, what happens to all the configuration and provisioning? If a solution is natively simple, then the configuration and provisioning considerations are all very straightforward. So I don't think downgrading TQF, with some twist to make it look like TCQF, is a good way to merge them.
János Farkas: Toerless, please. Go ahead. Please, please.
Toerless Eckert: Yeah. Just to re-emphasize the other part: the flow interleaving we can do at the edge. I didn't put that in the TCQF draft, which we could do. I first need to catch up; I didn't manage to do a new version for this IETF. I apologize, there was too much after January, so I'll be back to it shortly. But once I'm caught up, I'll bring up the question of what we do with flow interleaving, because this ingress edge function is not only applicable to TCQF; it would equally be applicable to gLBF, right? It applies to anything forwarding-wise that is quasi-synchronous, as both of these are, one with cycles, the other without. That, I think, is one of the reasons why it should stay a separate edge function and not be integrated into TCQF.
Shaofu Peng: Yeah. Thank you, Toerless. Yes, you're right; this is the starting point for flow aggregation. It is an informational draft. Our contribution mainly summarizes the data plane and control plane requirements and provides some considerations for the enhancements, including, as you mentioned, the different data planes and different queuing solutions. TQF may have different flow aggregation coordination or interleaving than those solutions. I think we may clarify this in further versions.
Balazs Varga: Thank you for the presentation. I have two comments or questions. The first one is a general clarification question. I think that in all the data plane documents we have already identified and described how to aggregate flows; there are specific formats for flow aggregation already defined. And you said that this is an informational document which just describes requirements. So my question would be: is there something wrong with the aggregation methods defined for the DetNet data planes? Do you have some additional requirements and would like to modify them, or what is the relationship to the already defined aggregation formats?
Shaofu Peng: Thank you. In the data plane, as we just discussed, we think there may be some coordination and interleaving solutions, and different queuing mechanisms may have different data plane extensions for flow aggregation. So we provide the requirements for flow aggregation, especially in the data plane; the detailed extensions or enhancements would be within each queuing solution. In this document we just provide the requirements, scenarios, and enhancement considerations.
Balazs Varga: Okay. My statement would be that these requirements are already covered by the existing documents, but let's discuss the details. I also have a specific question on slide 4, on flow identification. This is also something the working group already spent some time on, defining how flow identification is done in DetNet. Are you proposing new methods for flow identification? Because this additional metadata is somewhat disturbing for me; it looks like there is something wrong with the existing flow identification methods, and it is not clear what you are arguing for, what is wrong with them.
Shaofu Peng: Yeah. Thank you. We will add some clarification. Thanks.
János Farkas: Thank you for the presentation and the discussion. You asked about adoption. We as chairs had discussed this, and we were actually talking about Balazs's point completely independently. So a couple of comments. Number one, I think Toerless's observation that this is a bit of a framework is accurate, particularly because it doesn't go into all the details on the queuing. So it would be great to add that to the title, and also to the introduction of the document, that this is a framework for aggregation as opposed to a specification. That's one adjustment we would need before considering adoption. The other one, and this goes a bit in line with Balazs's point: we really need to discuss the relationship with the existing data plane formats. This isn't queuing; this is the packet formats and how the different DetNet data plane formats are supported in this aggregation framework. As Balazs commented, there's a lot to build on, but we really need to cover the data plane formats we have before saying that this really covers the working group's scope. Thank you. Toerless.
Toerless Eckert: So yeah, it wasn't clear to me whether it even makes sense to constrain this purely to the encapsulation part, as opposed to the shaping and timing part, which very often is going to be part of what happens on ingress. So it would be good to be very clear about what we actually want to do here.
János Farkas: I think the discussion reflects for me that it's too early for an adoption poll. So I would suggest considering the feedback you received. Thank you again.
(Silence)
Shaofu Peng: Hello, everyone. The next presentation topic is the Data Fields for DetNet Enhanced Data Plane. Since we have adopted seven queuing solutions, the authors think it's a good time to consider the queuing-based metadata. This document defines the metadata and the common data fields for the DetNet enhanced data plane queuing solutions, to support, for example, deterministic latency and aggregated flow identification. This version has been updated: we aligned with the suitable categories as defined in the data plane taxonomy, and we revised the deterministic latency metadata to align with the seven adopted queuing solutions. We also discussed encoding options for the DetNet enhanced data plane metadata, for example the reuse of the existing DSCP, or encapsulation in IPv6, SRv6, and MPLS networks. We also added Jinu as a co-author; thanks for the suggestions.
So for the deterministic latency option, it may carry the queuing-based metadata. We defined seven types to align with the suitable categories. The deterministic latency information should be carried to the forwarding nodes along the path, which can apply the corresponding queuing mechanism and the related information in the packet to achieve the end-to-end bounded latency. So we defined seven types and the corresponding information. The first one is the right bounded category; an example is the EDF queuing mechanism, and for this type the packet must carry the maximum time bound. The second one is the flow-level periodical bounded category; the example is the TQF queuing mechanism, which must carry a set of time slots, so we carry the time slot ID in this type. The third one is the class-level periodical bounded category; the example is TCQF, and we need to carry the cycle ID. The fourth one is the flow-level non-periodical bounded category; the example is the PIFO queuing mechanism, and we may carry the maximum and minimum time bounds. The fifth one is the class-level non-periodical bounded category; the example is gLBF, and we may also carry the minimum and maximum bounds. The sixth one is the flow-level rate-based unbounded category; the example is CS-Core, and we may carry the maximum packet rate, the allocated service rate, and the completion time. And the last one is the flow-level rate-based left bounded category; the example is N-Score, and it may carry the maximum packet rate, the allocated service rate, the finish time, and the eligible time.
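For readability, the seven categories and the per-packet metadata named in the presentation can be summarized in a small sketch. This is purely illustrative; the field names below are hypothetical and do not reflect any wire encoding in the draft.

```python
# Hypothetical summary of the seven taxonomy categories described above.
# Field names are illustrative only, not a proposed encoding.
QUEUING_METADATA = {
    "right bounded":                      ("EDF",     ["max_time_bound"]),
    "flow-level periodical bounded":      ("TQF",     ["time_slot_id"]),
    "class-level periodical bounded":     ("TCQF",    ["cycle_id"]),
    "flow-level non-periodical bounded":  ("PIFO",    ["max_time_bound", "min_time_bound"]),
    "class-level non-periodical bounded": ("gLBF",    ["max_time_bound", "min_time_bound"]),
    "flow-level rate-based unbounded":    ("CS-Core", ["max_packet_rate", "allocated_service_rate",
                                                       "completion_time"]),
    "flow-level rate-based left bounded": ("N-Score", ["max_packet_rate", "allocated_service_rate",
                                                       "finish_time", "eligible_time"]),
}

def metadata_fields(category: str) -> list:
    """Return the per-packet queuing metadata a given category must carry."""
    _example_mechanism, fields = QUEUING_METADATA[category]
    return fields
```

A table like this also makes the open question concrete: a common format would need a type field selecting one of these seven layouts, whereas per-solution metadata would leave each layout to its own draft.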
The above is all the information we discussed for each queuing mechanism, carrying the basic queuing metadata in a common format. We will update the format of the metadata and follow the progress of the queuing solutions. The co-authors would like to know whether we need to define a common format covering all the queuing solutions, or whether the working group prefers that each queuing solution define its own metadata. Comments and suggestions are welcome. Thanks.
János Farkas: I just want to agree with your point: it would be good to see the queuing solutions get a bit more mature before deciding whether we can have a common specification for the metadata. So I just agree with what you said. Any questions, comments? Yeah, please. Go ahead.
Shaofu Peng: Yeah. We also discussed this in the PCEP session: we need to provide a common format or common data field encoding in the control plane. The PCEP chairs also want to confirm whether they should adopt the PCEP extension after DetNet has reached rough consensus, or adopt the PCEP extension first and then follow DetNet's progress before publication.
János Farkas: My personal view is that the DetNet Working Group should reach rough consensus first, because the PCEP Working Group does the work for the DetNet Working Group in this respect. But this is coordination we can discuss further offline. I still think the queuing solutions should become a bit more mature; we just started the work in the working group via the adoption, and then we can think about how to progress this work. And if we have rough consensus, we can reach out to the PCEP Working Group. That's my view.
Shaofu Peng: Okay. Thank you.
János Farkas: Thank you. Any further comments on this one? Or we can move to the next one, actually.
(Brief silence)
Shaofu Peng: Hello. This is Shaofu Peng from ZTE. This presentation is about some considerations on DetNet Enhanced Data Plane interoperation. Firstly, let's see what happens in TSN solution interoperation. In this case, the interoperation is based on two parts: one is the common fields, such as priority, and the second is the local state maintained at each node. In the following figure, there are two bridges with different TSN mechanisms interconnected. Bridge 1 configures a local per-stream filtering and policing policy that maps any stream with priority 3 to a stream gate instance; the instance sets IPV 7 or IPV 6, and the packet is stored in the IPV-related queue, which has the CQF mechanism enabled for scheduling. Similarly, bridge 2 also configures a local per-stream filtering and policing policy that maps any stream with a specific destination MAC, source MAC, VLAN ID, and priority 3 to a stream gate instance and sets IPV 7; the packet is stored in the IPV-related queue, which has ATS enabled.
So, following a similar principle, EDP solution interoperation can also be based on two parts: the first is common metadata, such as the latency deviation (E), and the second is differentiated metadata related to the specific EDP solution; it may be, for example, a time slot, a cycle, a finish time, and so on. There is a minor question: can DSCP act as common metadata? I don't think it is necessary for all EDP solutions; for example, some specific metadata of an individual solution doesn't use DSCP for packet mapping or scheduling.
So, in the following figure, there are three EDP domains with different mechanisms interconnected. The differentiated metadata is used to guarantee the delay bound of the path within each domain. However, there may be some remaining jitter caused by the mechanism, so this latency deviation can be carried in the packet; it is termed common metadata and is used to remove this jitter at the ingress node of the next domain.
So the end-to-end path across multiple EDP domains can use multiple Binding-SIDs. A Binding-SID is used to represent the sub-path within one domain, so the end-to-end path is represented by multiple Binding-SIDs, such as Binding-SID 1, 2, 3, and so on. The head-end of the end-to-end path only encodes the metadata of the first domain. When the packet arrives at the ingress node of the next domain, all encodings added by the first domain are removed; the node then performs damping based on the received common metadata and encodes the new metadata according to the next domain's mechanism, for example EDP 2. Note that the common metadata should be recognized by all EDP solutions. If we instead used an explicit list to represent the end-to-end path and encoded the metadata of all EDP domains in the packet simultaneously, that would be too long, complex, and unnecessary. So we have a basic conclusion: the common metadata in EDP, like the common fields in TSN, plays the key role in interoperation.
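The per-boundary processing just described can be sketched as follows. This is an illustrative model, not taken from the draft: the packet is modeled as a dict, and the field names and the `encode_next` callback are hypothetical.

```python
def stitch_at_boundary(packet: dict, encode_next) -> dict:
    """Process a packet at the ingress node of the next EDP domain.

    Only the common metadata (latency deviation) crosses the boundary;
    the previous domain's differentiated metadata is stripped and the
    next domain's metadata is encoded fresh.
    """
    deviation = packet.pop("latency_deviation", 0)   # common metadata
    packet.pop("edp_metadata", None)                 # drop previous domain's encoding
    packet["damping_hold_us"] = deviation            # damping absorbs residual jitter
    packet["edp_metadata"] = encode_next(packet)     # e.g. a TCQF cycle ID
    packet["latency_deviation"] = 0                  # reset for the new domain
    return packet

# Packet leaving a TQF domain (time slot ID) entering a TCQF domain (cycle ID):
pkt = {"payload": b"...", "edp_metadata": {"time_slot_id": 7}, "latency_deviation": 12}
pkt = stitch_at_boundary(pkt, lambda p: {"cycle_id": 3})
```

The sketch shows why only the common metadata needs to be recognized by every EDP solution: the differentiated part never survives a boundary crossing.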
This is the IPv6 EDP interoperation encoding. The packets may have the IPv6 basic header, a Hop-by-Hop Options header, a Routing Header, and a Destination Options header (DOH). The differentiated metadata is generally encoded in the Hop-by-Hop Options header or the Routing Header, while the common metadata is generally encoded in the DOH. I will not go into the details of the forwarding process.
And this is the MPLS EDP interoperation encoding. It is based on MPLS MNA; the Binding-SIDs are encoded at the bottom of the stack, and only the BSID of the first domain is expanded into the detailed label stack. I will not go into the details here either. That is the content; any questions?
Jinu Jung: Thank you, Shaofu, for the nice presentation. I really like your idea about differentiating solutions based on the metadata; I actually thought the same thing. But my problem with that idea is, just as Chuan suggested, the metadata of different solutions are just so different. I do not think there is any common metadata across all the solutions. Maybe some solutions in similar categories can have similar metadata, such as CS-Core, N-Score, and your EDF deadline-based forwarding, but beyond such a closed set of solutions, common metadata is very hard to find. So that's my suggestion. Thank you.
Shaofu Peng: I have a simple answer: I think the common metadata is necessary for all solutions, whether flow-based or class-based, rate-based or time-based. Because whatever the solution is, for interoperation the downstream domain only cares about the delay result, the guaranteed delay bound of the upstream domain. The remaining jitter, which I term the latency deviation, must therefore be contained in the packet.
Jinu Jung: I'm sorry to interrupt you, but some of the solutions do not use the latency deviation or end-to-end latencies as metadata at all.
Shaofu Peng: I know that, but I think it is necessary for all solutions. Maybe we can discuss this in the list to reach a consensus on it.
Jinu Jung: Okay. My final suggestion is that if you would like to proceed with this draft, I recommend narrowing down the scope a little bit, because not every solution can be categorized or summarized by its metadata.
Lou Berger: Yeah. We are out of time on this slot, but thank you for the discussion. Since the two of you are there in person, maybe you can take advantage of being together and talk through where there might be similarities and you can look to combine the work if possible. If they really have overlap, then it might be good to bring them together. There are some terminology things I'm also not sure of in this draft, but that's okay. We'll see how the work progresses. So thank you so much and let's move to the next presentation. Thank you.
(Short silence)
Shaofu Peng: Hello. This presentation is about a mechanism to control the jitter caused by policing in DetNet. The main updates from version 1 to 2 are mainly related to cross-domain cases: how the policing jitter is removed across domains.
So again, this is the overview of the solution. Let's see the picture. The end-to-end delay required by the service flow equals the path delay plus the edge-to-edge policing delay budget. The budget further equals the head-end policing delay, which is a run-time, variable value, plus the endpoint damping delay. The endpoint damping delay can be carried in the packet and used as the holding time imposed at the endpoint of the DetNet path; the packet is then sent to the application destination.
So the direct question is how to set the edge-to-edge policing delay budget. There may be two options. The first is that it may be on the order of magnitude of the service burst interval. For example, a DetNet flow may have a service burst interval (SBI) of 100 microseconds and contain three packets P1, P2, P3 per SBI. An extreme case of a non-conforming arrival pattern is that P1, P2, P3 arrive back-to-back; P3 then has the largest policing delay, maybe 2/3 of the SBI, and that can be used as the budget. Alternatively, the budget may be a very small value, even zero, based on sampling or configuration. For example, if the application source controls the flow rate to keep it conforming, fully complying with the T-Spec, then the budget may be zero. In any case, a smaller budget is better.
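The worst-case arithmetic in the first option can be checked with a small sketch. It assumes the head-end shaper simply re-spaces a back-to-back burst evenly over one SBI, which is one plausible policing model rather than the draft's normative definition.

```python
def worst_case_policing_delay(sbi_us: float, pkts_per_sbi: int) -> float:
    """Largest shaping delay when all packets of one SBI arrive back-to-back.

    The shaper releases the k-th packet (0-indexed) at k * sbi_us / pkts_per_sbi,
    so the last packet of the burst waits the longest.
    """
    return (pkts_per_sbi - 1) * sbi_us / pkts_per_sbi

# Example from the presentation: SBI = 100 us, three packets P1..P3 per SBI.
# P3, the last packet of the back-to-back burst, is held for 2/3 of the SBI.
budget_us = worst_case_policing_delay(100.0, 3)
```

This is where the "2/3 of the SBI" figure in the presentation comes from: with three packets, the last one is delayed by two of the three release intervals.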
In this figure we can see how the policing jitter control works. When the application flow arrives at the head-end, it is first taken by the shaping function, also called the policing function. Back-to-back packets may be staggered, distributed across the service burst interval, so some packets get a large shaping delay and other packets get a small shaping delay; the arrival pattern is thus changed to the shaped pattern. The shaped pattern then gets a guaranteed bounded delay and jitter along the DetNet path. When the packets arrive at the endpoint, the shaped pattern is recovered back to the arrival pattern, which may be a non-conforming pattern. In this way, the jitter caused by the policing is removed.
These are some considerations for the multi-domain case. There are two options for applying the policing jitter control. Option 1 is separate policing per domain: the policing jitter is controlled at the ingress node and exit node of each domain independently. It is applicable to the case where transit domains maintain flow state. Each domain contributes a separate budget to the end-to-end delay, so the delay may be large. Option 2 uses only a single policing for all domains: the policing jitter is controlled only at the ingress node of the ingress domain and the exit node of the egress domain, so only a single budget contributes to the end-to-end delay. We can see this in the two figures. In the figure above, the arrival pattern is changed to the shaped pattern in domain 1; when the packet leaves the exit node of domain 1, it is recovered to the arrival pattern; the arrival pattern is then changed to the shaped pattern again in domain 2; and finally the flow is recovered to the arrival pattern at the exit node of domain 2. In the figure below, the arrival pattern is changed to the shaped pattern once, the shaped pattern gets a common bounded delay and jitter along the end-to-end path across multiple domains, and at the exit node of the egress domain the shaped pattern is recovered to the arrival pattern.
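The difference between the two options' contribution to the end-to-end delay can be illustrated with a toy calculation. The numbers are hypothetical, and treating option 2 as contributing just the first domain's budget is a simplifying modeling choice, not a statement from the draft.

```python
def e2e_policing_budget(per_domain_budgets_us, single_policing: bool) -> float:
    """End-to-end policing delay budget across multiple EDP domains.

    Option 1 (single_policing=False): each domain polices and damps
    independently, so every domain contributes its own budget.
    Option 2 (single_policing=True): policing happens only at the ingress
    of the first domain and damping only at the exit of the last domain,
    so a single budget is contributed.
    """
    if single_policing:
        return per_domain_budgets_us[0]
    return sum(per_domain_budgets_us)

# Two domains, each with a hypothetical 60 us policing budget:
option1 = e2e_policing_budget([60, 60], single_policing=False)
option2 = e2e_policing_budget([60, 60], single_policing=True)
```

The sum in option 1 is why the presentation notes the per-domain approach "may be large": the budgets accumulate with the number of domains crossed.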
That is the encoding of the metadata. There are some considerations for how to carry the endpoint damping delay in the packet: it may be carried in the IPv6 DOH, or in the MPLS MNA header.
This solution was presented at the IETF 121 meeting, and there was a concern about the difference between this proposal and ADN. We have sent a clarification mail to the mailing list. In brief, the difference is: this proposal is used to avoid jitter caused by the policing delay, while ADN is used to avoid jitter of the path delay. We can refer to page 3 for the relationship between policing delay and path delay. Questions and comments?
Jinu Jung: Thank you, Shaofu, and sorry for another comment. Could you go back to maybe page 8, or, I cannot remember, maybe page 6. Yeah. So the difference between the path delay and the shaping delay, that's the main difference you mentioned between this draft and the ADN draft, right? But I cannot see any difference between the path delay and the shaping delay in this diagram. So H means head-end and E means egress, I think, and domain 1 is a network of nodes, right? So there is a delay between H1 and E1, and there are delays between H2 and E2. These are just ordinary network delays, right? Whether the delay comes from the path, queuing, or shaping, the packets are delayed all the same. So I still can't find any difference between the two drafts. And more than that... yeah, I'll come back to that later. Please answer.
Shaofu Peng: Okay, thank you for the questions. I think the difference still exists, because if the arrival pattern does not pass through the policing function, it cannot get a delay bound under any solution. I think that is a requirement of the DetNet architecture document; it is explicit that the policing function is necessary.
Jinu Jung: Yeah, but my point is that whether the delay is from queuing or policing or whatever, the ADN draft covers all kinds of delay. And it can guarantee even zero jitter, if you look into the ADN draft. So I am still concerned about it. Okay, we can take it offline. Thank you.
Shaofu Peng: Okay. Thank you.
(Silence)
Lou Berger: I think this one is remote.
(Silence)
Balazs Varga: Yeah, thank you very much. This is a presentation about the draft dealing with using SRv6 as a data plane for DetNet networks. Because it deals with SRv6, there is also a related Spring Working Group draft describing the redundancy protection, and the Deterministic Networking SRv6 Data Plane draft describes how to use those SIDs and the characteristics defined in the redundancy protection draft for the DetNet data plane.
This slide is just showing the update on the draft. The scope is to leverage the existing IPv6 encapsulation using DetNet-specific SIDs, in this case the redundancy SID, as it is called in the Spring Working Group draft; the draft also optionally allows use of the traffic engineering mechanisms provided by SRv6. The technical content of the draft is pretty stable. The only changes we have made for version 2, which is the current version, are some editorial changes and a terminology update to be fully in line with the Spring Working Group draft. So we are using the term redundancy SID and also providing the reference to the Spring Working Group draft, which, by the way, will be discussed tomorrow in the Spring Working Group, where we will ask for working group last call on that document.
So that was the update on the draft. The content is pretty stable, it is fully in line with the Spring Working Group work, and we think it is the right time to ask for working group adoption so that the working group deals with it as a further step. That's it in a nutshell.
Lou Berger: Okay, thank you so much. We do think it's important to coordinate anything we do here with the Spring Working Group, and we have not discussed this with them. But that said, we would like to run a poll here. We're going to ask two questions. They were on the slide, you saw them before. The first one is just really about the topic: is there interest in the working group to work on the topic? So please take a look at the poll and respond whether you're in the room or remote.
(Brief silence as poll runs)
Lou Berger: Jinu, are you in queue for this one or for a previous one?
Jinu Jung: Yeah, this presentation... for this presentation, if you allow.
Lou Berger: Oh, okay. So let me move on to the next question while you ask yours, because we can run the polls in parallel. Just closing out the previous question: we are getting very reasonable participation from those in the room, and the responses are almost completely positive; they are interested in working on the topic.
I'm going to start the next question, which is: is this document a good foundation for the work? And if we agree, well, that'll lead to an adoption poll assuming the Spring chairs don't have objection. So Jinu, while we're running this poll?
Jinu Jung: Thank you. Yeah, I'm okay with the draft except for one thing: the title. I think the term "data plane" is a little bit too wide, because this draft is about replication and elimination, but in this working group we also use "data plane" for other meanings: queuing, scheduling, even lower layers of the data plane. This draft seems to cover both. I'm okay with that, but some readers may be confused. That's my only concern.
Lou Berger: Yeah. We've used the term "enhanced data plane" as a sort of euphemism for the queuing part, and previously in the working group "data plane" was used to describe the formats used to identify DetNet flows. So it is definitely overloaded usage, but I wouldn't blame that on Balazs; he's sort of inheriting it from the working group's past. Now that said, Balazs, if you want to add to that.
Balazs Varga: Yeah, well, I think this is definitely something we can discuss during the working group work. Maybe we can add that, because this is not changing what we have already defined for the IPv6 data plane. It may be worth adding some references to the former work to make clear that we are not changing it; what we are adding here is the support for the redundancy functionality, using the redundancy SID. I think this is something we can add as clarification during further work on the draft, yeah.
Lou Berger: Okay. That's okay. Thank you. Okay, I'm going to close the poll, and then Shaofu will have a moment. It looks like we have some pretty good support. There are a couple of no's; if you said no and want to briefly state your objection, I'd definitely be interested in hearing it. Shaofu?
Shaofu Peng: Just a question: in the native IPv6 case, how do we encode the flow identification and sequence number information? This document focuses on SRv6, so what about the native IPv6 case, not just SRv6?
Balazs Varga: For me, it was hard to follow; the volume was very low. But if I understand correctly, you referred to flow identification and said some information on that should be added to the draft. Maybe let's move it to the list.
Lou Berger: The comment I heard was "what about non-SR IPv6?", and I think the scope of this draft is just SR, so let's just make sure the scope is clear. I think that was the comment; of course, Shaofu is in the room, so he can clarify. But we should wrap up. We have a moment if Shaofu wants to say anything.
Shaofu Peng: Yes, my question is, for the native IPv6 case, how to encode the flow identification and sequence number, not just for SRv6.
Balazs Varga: Okay. Thank you.
Lou Berger: Toerless, I think you'll get the last say on this topic.
Toerless Eckert: Yeah, I mean, I'm not sure the draft covers the full scope of what we could do with SRv6, right? That's a little bit the problem. I would certainly think that for all the advanced forwarding mechanisms, SRv6 is almost a requirement if we go for IPv6 as opposed to MPLS, so that we have it hop-by-hop stateless. So I think the main challenge is making sure we capture the topic comprehensively, and not only for a subset of the functionality.
Balazs Varga: Okay. So far, all the other SRv6 functionalities are assumed to be optional. You have your choice whether to use them or to use others, like traffic engineering, path steering, and so on. But good comment. Thank you.
Lou Berger: Okay. Thank you very much for the presentation. Thank you workgroup for the polls. The chairs will go and talk to the Spring chairs by email and expect to see an adoption poll on this. If we do run into an issue coordinating with the Spring chairs, we will certainly let the workgroup know. And now for our last presenter, Carlos.
(Brief silence)
Carlos Bernardos: Yeah. This is Carlos Bernardos presenting on behalf of my co-authors. I'm going to try to be brief to leave time for discussion at the end. So just a quick recap, because a previous reincarnation of this has been presented in past meetings. The motivation is that we have real use cases that involve or require multi-domain, and we had some drafts in the past documenting some potential solutions as examples; we were then encouraged by the chairs and the working group to work on a framework for multi-domain. We presented one at the last IETF in Montreal, but that was a PCEP-based framework, and we were asked to generalize it. That is what we have done. This draft, A Control Plane Framework for Multi-Domain Deterministic Networking (DetNet), aims to be a technology-agnostic control plane framework for multi-domain; we use PCEP just as an illustrative example, but any control plane technology satisfying the requirements could be used.
So, a quick summary of the changes. In the previous version, presented at IETF 124 in Montreal, we used PCEP-specific language, we assumed that a domain had a mapping to a PCEP control domain, and we named procedures after PCEP: the hierarchical PCEP and recursive PCEP mechanisms. In addition to making everything agnostic, another change is that we have added an explicit functional requirements section in the new version of the draft.
One key question we should address, if the working group decides to work on multi-domain, is defining what we understand as a domain. A domain may be defined in different ways. In this draft we assume that a domain represents a collection of network resources managed as a single entity for path computation, resource allocation, and so on, and we take a controller-based view: domains are controlled by a single domain controller instance. This is the primary definition, although we acknowledge in the document that there may be other definitions, like a pure administrative domain (all the nodes under the same administrative entity, which is a common definition of domain), or a technological domain, where a domain is characterized by sharing the same data plane technology. But that doesn't inherently define a control boundary, and that's why we propose this controller-based definition of domain.
This is an example of a potential scenario where we have three domains. I will not go into the details. One is wireless, a couple of them are wired. We have nodes there and also end systems at the edges, which, as we will see later, may be DetNet-aware or DetNet-unaware. In this scenario, you may have hierarchical approaches or stitching approaches, as we will see.
Very quickly, we consider two coordination models for the multi-domain control: a hierarchical model and a peer-to-peer stitching model. I think we don't need to go into the details; they have already been presented in the past with a PCEP-based approach, and the assumptions are the same. We also added the new section on functional requirements for the control plane framework. We assume intra-domain QoS budget allocation, so each domain controller must compute a path segment meeting its specified share of the end-to-end latency, jitter, and loss budget. We also assume a capability advertisement function, so domain controllers advertise abstract reachability and availability budgets. I will not go into all the details for the sake of time, but there are functional requirements listed in the document for the control framework. We also list some multi-domain flow considerations on end-to-end path computation, end system awareness, resource management, and potential flow aggregation. And I will go directly to the next steps and questions so we have time for discussion.
So we tried to do this exercise of generalizing based on the feedback we got, and of adding the functional requirements section. We feel that the domain definition is a key point to address in any case, if the working group wants to do multi-domain work of any kind. And the questions are whether you believe, as a working group, that there is interest in working on this and in potentially adopting this draft as a starting point. Back to the chairs. Thanks.
Lou Berger: Toerless, I guess you have a question or comment?
Toerless Eckert: Yeah, thanks a lot. I haven't had the time to catch up with all this work, but I think we had a great conversation in Montreal, and of course also in the joint meeting afterwards. I'd love to see a minimum possible RFC to promote the idea of inter-domain DetNet: something you can do on edge routers between domains, for example between a TSN industrial environment and a service provider, even pointing to what Telefonica was thinking of: okay, we may not have full DetNet in the service provider network, but we have so much over-provisioning, right? So the really practical entry point into DetNet is on these edge routers, and then the question is what are the minimum things we need to deliver on them. It is also a way to get DetNet promoted where there is no competition right now, because in the other places, TSN is already there, and the whole large-scale things we're just working out right now. So for the place where you are already experimenting and can put it in, I would love to see the smallest document we can produce to get that promoted, even if it's obviously just informational. I think that would help a lot. And then all the technical details that tag along from it... I haven't found the time.
Carlos Bernardos: Yeah. Thanks for the feedback, and definitely we can do that. We are actually involved with Telefonica in a project that is going to start in May doing this, doing SRv6, among other industrial partners, Siemens and others. And we are going to actually develop technologies and showcase them in a high-TRL environment. So we can leverage that and bring it also as a contribution, as Toerless mentioned. So thanks.
Lou Berger: Carlos, sorry to interrupt. I just want to let the room know we've started the poll. So please respond and provide your opinion; we would appreciate it. Carlos, back to you.
Carlos Bernardos: Yeah. I was saying that we are about to start a new project that involves Telefonica, among other industrial partners, Siemens and others. And we are going to actually develop technologies and showcase them in a high-TRL environment. So we can leverage that and bring it also as a contribution, as Toerless mentioned. So thanks.
Lou Berger: Okay, I'm going to close the poll. The first poll was "are you interested in this topic?". We've had a similar level of participation and no objections. So that seems like good support for working on the topic. I'm now going to move on to the second question, about the document. And Jinu, please proceed. Thank you.
Jinu Jung: Thank you, Carlos. Just a quick comment. As I just presented in my contribution on the taxonomy draft, the taxonomy draft also considers interoperability between domains, even though it is about the technology domain, not the control domain. Anyhow, I see two important aspects within the control plane. One is end-to-end admission control; I think it is not trivial. The second is slot configuration, or how the cycle operates periodically: every domain has its own slot length, and I think it is very hard to coordinate slots across all those domains. That's my comment. Thank you.
Carlos Bernardos: Good points. We can work on those details. Thanks.
Lou Berger: It looks like we have no more comments. Participation in the second poll seems to be at similar numbers, but we are getting a few objections. I'll remind the working group that when we adopt a document, it's the start, not the end, of the process. I think we have sufficient support to move towards an adoption poll in the working group. I'll confirm this with János, and I look forward to hearing from both those who support the document and those who have technical objections, and what those objections may be.
And with that, we are actually out of time and right on time. Thank you all for participating in the good discussions. G and Carlos, thank you for representing us in the room and helping us run the session. Eve, thank you for the great note-taking and your support. And János, anything else you want to say?
János Farkas: No, I also want to thank everyone as you explained in detail. Thanks, everyone, and see you at the next IETF.
Lou Berger: See you soon. Thank you.
János Farkas: Thank you.