Markdown Version

Session Date/Time: 19 Mar 2026 08:30

Pavan Beeram: OK, it’s time. Can someone close the door at the back if you can? Thanks, Stefan. Welcome to the TEAS working group session at IETF 119. My name is Pavan Beeram, I’m one of the two TEAS working group co-chairs. Oscar Gonzalez is the other co-chair. He will be co-chairing this session virtually all the way from Spain. We have Italo Busi, our secretary. [Background noise] Getting some feedback from somewhere. So Italo would be helping us with the session logistics.

This is the IETF Note Well. It's Thursday of the week, so if you aren't familiar with it by now, please familiarize yourself with it. It's a reminder of the processes and policies that you agree to abide by when you participate in the IETF: what is expected of you in terms of conduct, privacy, and intellectual property rights. I would encourage you to look at the references specified on this slide, and if you have any questions, please do reach out to the chairs and the Area Directors.

For those of you who are in the room, please do sign into the session. Please do use Meetecho when you want to come to the mic and participate in the discussion. For those of you who are participating remotely, please do keep your audio and video off unless you are participating in the discussion. And if you do join Meetecho to say something, please do state your name each time you're there.

We would be using the HedgeDoc collaborative Markdown editor for note-taking. A link for that has been pasted in the chat. Please help us capture notes, and if you do say something at the mic, please do make sure that it's captured appropriately in the notes.

This is the agenda for today. It's packed. We have four working group documents on the agenda today. In terms of individual drafts, we'll have Tony talk about power-aware traffic engineering path placement. I'll give an update on the MP-TE drafts, and Luis will give an update on this draft that provides an example of 5QI to DiffServ DSCP mapping.
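The 5QI-to-DSCP mapping that Luis's draft exemplifies can be pictured as a simple lookup table. The sketch below is illustrative only; the specific codepoint values are assumptions (following the common convention of mapping conversational voice, 5QI 1, to Expedited Forwarding), not the mapping the draft actually specifies.

```python
# Illustrative sketch of a 5QI -> DiffServ DSCP lookup. The values here
# are assumptions for illustration, not those defined in the draft.
FIVEQI_TO_DSCP = {
    1: 46,   # conversational voice -> EF (Expedited Forwarding)
    2: 34,   # conversational video -> AF41
    9: 0,    # default bearer       -> Best Effort
}

def dscp_for_5qi(five_qi: int, default: int = 0) -> int:
    """Return the DSCP codepoint for a 5QI, falling back to Best Effort."""
    return FIVEQI_TO_DSCP.get(five_qi, default)
```

An unknown 5QI falls through to the default codepoint, which is one plausible policy choice among several.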

We have had one RFC since the last time we met. This is RFC 9889. This is the document that talks about realization of network slices for 5G networks using existing technologies. Congratulations to everybody who participated in this effort. We have two documents that are in the RFC Editor queue, a couple of documents that are under IESG evaluation. They have a couple of discusses that need to get cleared. We've made use of the face-to-face time to get at least one of them almost ready for clearing. The other one should be resolved in a week or two, and I'll touch upon that more in the next slide deck. We have three documents that are in what we call a post-Working Group Last Call state, and we've also had two new working group adoptions; one of them got adopted this morning.

In terms of liaisons, we sent a liaison to four 3GPP groups back in September last year seeking input on this draft that we have which talks about IETF network slice application in 3GPP 5G end-to-end network slicing. We did get a response from the RAN3 working group just before the last IETF, and we received another response from the SA group in December. The summary of the feedback that was given by the SA group is captured on this slide. We do have a slot today where Luis will talk about our response to the feedback that the RAN3 working group provided. The authors have stated that they will need more time to put together an appropriate response to the feedback we got from the SA group. We also received one additional liaison from the ITU-T Study Group 15 Q14. This came in in November, just after the last IETF TEAS working group session. It was an invitation for modeling experts to participate in virtual coordination meetings earlier this month. This was marked as action taken.

This is a reminder of our working group's IPR process. We do expect all the authors and contributors of the drafts to respond to IPR polls before working group adoption and also just before Working Group Last Call. If an author or a contributor is added to the document midway through the working group process, the expectation is that an IPR statement would be sent to the list by the newly added member.

We do have a working group GitHub. If you have a working group document and want to make use of it, please do reach out to Italo or the chairs and we'll help facilitate that. Please do note that all consensus discussions still happen on the list. If there are detailed discussions happening on a GitHub issue thread, the onus is on the authors to bring that discussion to the list.

Lastly, we do expect status reports to be sent to the list just before each IETF. For any interim status checking, you can always look at the datatracker, also check out our working group Wiki, which we try and keep as up-to-date as possible.

Any questions on this deck before we switch to the next one? Sergio, go ahead.

Sergio Belotti: Just a question about the path computation draft [draft-ietf-teas-yang-path-computation]. I was expecting that it would be in the IESG process together with the tunnel model [draft-ietf-teas-yang-te], but now I notice that it is still in the post-Working Group Last Call state. So...

Pavan Beeram: Yeah, we have three documents in there. I'll touch upon that in a few minutes. I'll get to that.

Next on the agenda is working group document status. Thanks, Italo, for helping compile this deck.

We have two documents, as noted before, in the RFC Editor queue [draft-ietf-teas-yang-rsvp, draft-ietf-teas-yang-rsvp-te]. We have two documents under IESG evaluation [draft-ietf-teas-yang-te, draft-ietf-teas-te-types]. They both have a couple of discusses that need to get cleared. I think the high-level comment was regarding the scoping of what is being modeled in these documents. The ask was whether we need to constrain this to LSP-based technologies. At least for one of the documents, the TE YANG document [draft-ietf-teas-yang-te], I believe we've made a strong case for that not to be the case, and I think we have one revision to be uploaded and then the discuss will get cleared. For the TE Types document [draft-ietf-teas-te-types], I think we've made the case; the discussion will continue next week, and hopefully it will get resolved.

We have four documents on the agenda today, so that leaves us with 24 documents, and the status of each of those 24 documents is covered in this slide deck. There are three documents in what we call the post-Working Group Last Call stage. That's the YANG path computation document [draft-ietf-teas-yang-path-computation] that Sergio was asking about, the L3 TE topology document [draft-ietf-teas-yang-te-mpls-topology], and also the ACTN POI applicability document [draft-ietf-teas-actn-poi-applicability]. For the two YANG documents, there are some minor editorial touches that need to be taken care of before they can progress to the next stage. The onus is completely on the chairs to get that going, so expect to see those two documents progress to the next stage in the next couple of weeks. The ACTN POI applicability document just went through a Last Call, and we've also just put in a request for an early Routing Directorate review of it. In the meantime, the shepherd will reach out to the authors and work on getting it ready for publication request.

We do have eight documents in the expired state. Some of those are YANG documents that are close to completion, and we would urge the authors to revive them as soon as possible and help get them to the finish line. You can always leverage the weekly Friday calls that we have for modeling discussions and use that platform to push those through.

We have a couple of documents for which we just put in an early YANG Doctor review request. Those are the NRP YANG document [draft-ietf-teas-nrp-yang] and the YANG topology filter document [draft-ietf-teas-yang-topology-filter]. We have a few others that are deemed to be Working Group Last Call ready, and the chairs will discuss how we can line them up for Last Call after this meeting.

I will not be walking through the status of every draft that's captured in this slide deck. I'll dwell on a select few which we believe need some attention. For the rest, please do go through the slides, go through the status reports that were sent to the list, and if you have any questions, please do either post them here or send the questions to the list.

That said, let me jump to slide nine. This is the NRP scalability document [draft-ietf-teas-nrp-scalability]. There was a revision published for this last month. The authors believe that they have addressed all the open issues associated with it, and they are saying that this is ready for Last Call. So please do review it in anticipation of a Last Call before the Vienna IETF. We do want to progress this document along with the one that's on slide 11. This is the document that talks about realizing network slices in IP MPLS networks using this notion of network resource partitions [draft-ietf-teas-ns-ip-mpls]. There are some open issues, but we are hoping that those would get resolved before the Vienna IETF.

So the next document that I would like to draw your attention to is the NS models applicability document [draft-ietf-teas-ns-controller-models]. There haven't been any changes to this document in a while now, but the reason I'm putting it up here is the ASNM discussion. For those of you who were at the ASNM BOF, it should be clear that it's just a matter of time before a working group gets formed, and sooner rather than later we will have a discussion on which documents get progressed there. This may or may not be a candidate in that discussion, but please do keep an eye out for the coordination discussion regarding that. Luis?

Luis Miguel Contreras Murillo: I'll scroll through this document. Maybe our next step could be to revive it, to keep it alive, right? And then wait for the decision on what will trigger the update.

Pavan Beeram: Dhruv, do you want to talk a little bit about... OK, thank you. So yeah, please keep an eye out for that coordination discussion. It will happen sooner rather than later.

The last couple of documents that I wanted to draw your attention to in this deck are the RSVP cryptographic documents [draft-ietf-teas-rsvp-auth-v2, draft-ietf-teas-rsvp-hmac-sha2]. We adopted those relatively recently, but as per the authors, there are only a few editorial cleanups that need to be taken care of, addressing some comments that were raised by the security reviewers. So please do review these in anticipation of expedited progress to the next stage.

Those were the documents that we wanted to draw your attention to in this slide deck. As you can see, the two major YANG documents are almost at the last stage; they should get pushed to the RFC Editor queue, hopefully next week. And then the other YANG documents that have been deemed almost ready for Last Call should get lined up pretty soon. And we also have the two NRP documents which we are hoping to get to the finish line before the next IETF.

Any questions? OK. Let's go to the first presentation. I believe it's Luis.

[Presentation: 5G Network Slice Application in 3GPP 5G End-to-End Network Slice]

Luis Miguel Contreras Murillo: Hello, this is Luis from Telefonica. I will present the update on behalf of my co-authors. The first thing is to apologize, because we didn't realize that there was this additional liaison statement from the 3GPP SA group, which is essential to cover. So we couldn't complete our task, and one of the conclusions is that we will work on that after this meeting.

So the history of the document: this was the result of merging a number of individual drafts that were addressing the same topic of how to take the information from the 3GPP network resource model, map it onto the NBI YANG model for network slicing [draft-ietf-teas-ietf-network-slice-nbi-yang], and process all that data.

The latest changes that we have considered are those corresponding to the RAN3 working group in 3GPP, which were collected in liaison statement 2071, but we were also looking at 1957, which was targeted in principle at the realization draft, RFC 9889 [draft-ietf-teas-5g-network-slice-application], with the idea of ensuring that we were covering all the angles. Obviously we failed, because we missed the SA liaison. So I will go quickly through the different comments and how we addressed them.

So in 2071, we received a couple of comments about the terminology: essentially, we were making reference in the document to concepts like fronthaul and midhaul, and to the separation between DU and CU and the interface between them. The answer from RAN3 was that those concepts are not defined in 3GPP, which is actually the case, or are defined in a different manner. The source of the conflict here was that some of the authors come from the O-RAN effort, so we were a little biased toward the O-RAN vocabulary in terminology aspects. That was the origin of this deviation, let's say.

In addition, liaison statement 1957 commented on the convenience of referencing a number of specifications from 3GPP, TS 38.300 and TS 38.401. So, collecting all these comments, what we did was to add a paragraph saying that the definitions of entities like DU, CU, F1, and so on, which are applicable to 3GPP slicing scenarios, are provided by 3GPP specifications, and we added the specific reference to TS 38.401 because we understand it to be the most complete. We also say that, similarly, definitions of entities or interfaces like midhaul are applicable to O-RAN slicing scenarios and are provided by O-RAN specifications, and we point to the O-RAN architecture specification so that the reader has the overall context. In any case, we state that both 3GPP and O-RAN specifications take precedence over any definitions we use in this document for 3GPP or O-RAN concepts, referring the reader to the original documents to avoid any kind of confusion.

The second set of comments are basically editorial. Liaison 2071 pointed out a typo in a reference to one figure; this has already been fixed. They also commented on the fact that, in a figure, we were labeling the relationship between DU and CU as a concept defined by O-RAN, writing "O-RAN" explicitly. To avoid this issue, we have removed the reference to O-RAN, and now all the figures show the relationship between functional entities without associating them with O-RAN, 3GPP, or any other body.

The last set of comments, in liaison 1957, were recommendations or statements from RAN3 that they do not enter the space of defining the interaction with IP MPLS, plus a second comment about a specific annex of RFC 9889. In summary, both of these comments were originally targeted at the realization document, so they do not apply to our document, the application one; we take no action on these two.

So what is missing? Apart from addressing the SA liaison statement, the one received in December, we want to perform new editorial passes for text clarification. The document is very dense and probably difficult to read in some places, so we need to go through it and try to simplify a little; since we need to address the comments from that liaison anyway, we will take advantage of the opportunity. We also need to update the references: some of them are old now and need to point to the proper documents, in some cases RFCs. Once this is done, the idea would be to keep the document progressing, maybe request directorate reviews if needed, because we include some examples; I don't know whether that would be needed or not. For sure, more reviews are more than welcome, and everyone is invited to go through the document. And once we finish all of this and produce a new, clean version, then we would consider Working Group Last Call. So this is basically the update.

Pavan Beeram: Any questions for Luis? Do you have any timeline for when we can expect a...

Luis Miguel Murillo: Well, I would like to have this done within the trimester at most, so that we at least have coverage of the liaison and can go to the next IETF meeting with all the homework done and, basically, without further need of editorial work.

Pavan Beeram: So, I mean once you resolve the comments that we received from the SA group, I believe the expectation is that we would reach out to them again and say this is what was incorporated, and then we take it from there.

Luis Miguel Murillo: Sure, the idea would be to comment on the list on the way in which we have addressed the comments and, for sure, receive feedback again. Thank you.

Pavan Beeram: Thank you.

[Presentation: YANG Data Models for Network Resource Partitions (NRPs)]

Bo Wu: Hello. Good afternoon, everyone. I'm Bo Wu, and on behalf of my co-authors I will be presenting the updates to this document. This is about the YANG models for network resource partitions [draft-ietf-teas-nrp-yang].

A quick recap of this draft: it defines two major YANG data models for configuring and managing NRPs. The first one is ietf-nrp, a network-level model used for network-wide policy configuration by the network slice controller. The other is the NRP device model, which is configured by the network controller on individual network elements. On the right, there is a picture showing how the two data models interact with the network slice controller, the network controller, and the network devices over their respective interfaces. The foundation of these two NRP YANG models is our published network slice framework RFC [RFC 9543]. The outcome of NRP configuration is the NRP itself; as the figure shows, each NRP is a collection of network resources allocated from the underlay network to support one or more network slices. So this is a quick background.
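As a rough mental model of the concept Bo describes, an NRP at the network level can be thought of as a named bundle of underlay resources onto which one or more slices are mapped. The field names below are invented for illustration; they are not the leaf names of the ietf-nrp YANG module.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the NRP concept: a partition of underlay
# resources that supports one or more network slices. Attribute names
# are illustrative, not taken from the actual YANG model.
@dataclass
class Nrp:
    name: str
    bandwidth_mbps: int                               # resources carved from the underlay
    member_links: list = field(default_factory=list)  # underlay links in this partition
    slices: list = field(default_factory=list)        # slice services mapped onto the NRP

    def attach_slice(self, slice_id: str) -> None:
        """Map a network slice service onto this partition."""
        if slice_id not in self.slices:
            self.slices.append(slice_id)

nrp = Nrp(name="low-latency", bandwidth_mbps=500, member_links=["A-B", "B-C"])
nrp.attach_slice("slice-1")
nrp.attach_slice("slice-2")
```

The point of the sketch is only the one-to-many relationship: one resource partition, several slices sharing it.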

Since the last update, version 05 has resolved seven major open issues. We co-authors are very thankful to Tom Petch, Adrian Farrel, Med Boucadair, Luis Contreras Murillo, Xuesong Geng, Italo Busi, and Joel Halpern for the valuable comments. Based on these comments, we resolved all the open issues previously tracked in Appendix A, which we have removed in this version; there are no remaining open issues in version 05. Of these seven issues, the first two are essential. The first is data model completion: there was previously a placeholder for the MPLS definition, and in this version we align with the MPLS working group MNA NRP selector document [draft-ietf-mpls-mna-hdr]. The second concern is NRP scalability; in this version we added complete operational guidance on how to get better scalability. The other issues are all clarifications. One is the reference to the slice framework and the topology filter dependency, because topology filter was once an individual draft and is now already a working group draft [draft-ietf-teas-yang-topology-filter]. Another, from Luis, concerns how the network slice controller and devices use these YANG models; we updated sections 3.1, 3.2, and 3.4 to clarify how to use the models. On the NRP policy definition, a comment from Xuesong was that this document relied too tightly on another of the NRP drafts, the network slice IP/MPLS draft [draft-ietf-teas-ns-ip-mpls], needing to refer to it repeatedly; we have added more of the NRP policy definition in this version. The remaining small items are covering segment routing and adding more examples to illustrate how to use these models.

And here is an overview of the NRP YANG models. You can see we define a dual-model hierarchy: the network configuration model defines the topology and policy profiles as network-wide definitions. For consistency, the device model directly reuses the grouping definitions from the network model, so it does not use a different namespace. In this new version we added the selector ID, which I will present in the next slides. We try to be flexible: the network model supports three partition modes (NRPs can be control-plane-only, data-plane-only, or hybrid), and the data plane selector can be agnostic, covering IPv6, MPLS, ACL, all these flexible choices.

These are the essential updates of this version. On the left is the old YANG model and on the right is the new one. In the MPLS part, we added both in-stack and post-stack data, and the encoding can be 13, 20, or 32 bits, all these options. The MNA definition comes directly from the MPLS working group; we don't change it, we just keep consistent with the MPLS working group definition.
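One consequence of the 13-, 20-, or 32-bit encoding options Bo mentions is that the chosen width bounds the usable NRP selector ID space. The sketch below only illustrates that bound; it does not reproduce the actual MNA header layout, which is defined in the MPLS working group document.

```python
# Illustrative only: the selector width chosen from the draft's options
# determines the maximum assignable NRP selector ID. This is not the
# actual MNA encoding, just the arithmetic consequence of the width.
VALID_WIDTHS = (13, 20, 32)

def max_selector_id(width_bits: int) -> int:
    """Largest unsigned value representable in the given selector width."""
    if width_bits not in VALID_WIDTHS:
        raise ValueError(f"unsupported selector width: {width_bits}")
    return (1 << width_bits) - 1

def fits(selector_id: int, width_bits: int) -> bool:
    """Check whether a selector ID is representable at the given width."""
    return 0 <= selector_id <= max_selector_id(width_bits)
```

For example, a 13-bit selector caps the ID space at 8191, so a deployment planning more NRPs than that would need one of the wider encodings.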

Another concern is scalability. We added a three-phase operational description, covering planning, provisioning, and monitoring, to support the scalability of NRPs. For example, in the planning phase the service provider can define reusable QoS profiles and share topology, via filtering or selection, so the underlay is shared among NRPs. For provisioning, as I mentioned earlier, the NRP policy can be defined at the network level and mapped directly to the device model level. In the monitoring phase, there can be per-NRP topology visibility as well as a network-wide NRP list with native topologies. So this is the scalability consideration.

In this version, we think we have resolved all the open issues, so we authors would like a wider review from the working group, and we are planning to request Working Group Last Call. That's the end of my presentation.

Pavan Beeram: Thanks, Bo. Any questions for Bo? It's a significant update, so please do review the document. Like I said earlier, we still want to progress the other two documents first before we get to this [draft-ietf-teas-nrp-scalability, draft-ietf-teas-ns-ip-mpls]. Use this time to get as many reviews as you can.

Bo Wu: OK, thank you.

[Presentation: Profiles for Traffic Engineering (TE) Topology Data Model and Applicability to non-TE-centric Use Cases]

Italo Busi: Hello, I'm presenting an update of the draft on TE topology profiles on behalf of the co-authors and contributors. The motivation is that we have had multiple discussions, in the IETF working groups but also outside them, about what we can do when we have a network which is TE-aware: it needs some information which is defined for TE, but it is not a TE-centric network. Somebody calls these non-TE networks, because we are not using this information to steer the traffic; we are using it to know the resources or the shared risk groups and so on. In this case, we need just a subset of the attributes in the topology. At first glance, when people see the topology model, they say, "Oh, it's too complex!" and then we have to show every time that you can implement a subset of the whole topology model. So the motivation of this draft is to clarify to everybody that it is possible to profile the TE topology model [RFC 8795] to address specific use cases, which may be TE or non-TE. These are the examples; they have not changed since the last version, and they are there just to show that they are not TE-centric.

What we updated: we renamed "UNI topology discovery" to "multi-domain link discovery". We got an offline comment during the last IETF that we are basically discovering the links at the edge of the domain, no matter whether they are toward the customer (the user) or toward another domain. We are re-emphasizing, by quoting some text from RFC 9522, that the boundary between what is TE and what is non-TE is really blurred, so this concept of non-TE is difficult to capture. And we added references to existing public implementations: some profiles of the TE topology have already been implemented, and this idea is already included in RFC 9656, the microwave topology model. Worth noting, microwave is not even a TE technology, not even a switching technology, but they needed this profile because they wanted to describe the capacity of the link, which is part of the TE topology, or the operational and administrative state, and they need to navigate from client to server: they have an Ethernet topology on top which is supported by a microwave topology. So they have a multi-layer network and need to navigate from the client to the server topology layer, and all those tools were available in RFC 8795. Plus a few minor updates and figure cleanups.

So what is the related work? We have also submitted another draft to the NMOP working group. The reason for two drafts is that we are targeting two different objectives. With this draft, we want to make it clear to everybody that RFC 8795 can be profiled for both TE-centric and non-TE-centric applications; this, I think, is a statement from this working group. The second one, addressed to NMOP, basically says how the TE topology can be profiled to support a specific application. We think this is more in the scope of the NMOP working group because it is a very specific set of requirements, while ours is more generic. A few highlights: the first event where we did a profile was in September 2017, a multi-vendor multi-domain interoperability test in the optical layer, with two vendors in the optical domain and two vendors in the super-domain controller, everybody talking to everybody. We didn't test every bit of RFC 8795, just a profile. The second was the microwave plugtest: we tested a multi-vendor Ethernet-over-microwave topology, and both were based on different profiles of the Ethernet TE topology.

And then the open issue: we still have an open issue about how to report what we implement. Some people asked, "OK, how can we manage the profile in a programmatic way?" I noticed, when I worked especially with the microwave people, that what helped them understand what to do was simply to provide them a version of the tree which was manually cut. We can do a programmatic cut of the profile based on deviations, but I don't know what else. It would be good to have feedback from those who are implementing about what they really need, besides the YANG model, to implement and manage a profile in a programmatic way. Anyhow, we think this is a tooling issue; it is not a YANG model issue or an RFC 8795 issue. So our proposal is to keep this outside the scope of the draft and work out some tooling, maybe in a Hackathon, to help use these profiles, and I would like to get feedback on that so we can close the issue. As for next steps, we think the document is pretty stable; besides this one, there are no other open issues. We can do some editorial cleanup and go to Working Group Last Call, and in parallel trigger some work on the tooling. In my opinion, it is not an RFC issue; it is a tooling issue.
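The "programmatic cut" Italo mentions can be approximated by filtering a model's node paths against an allow-list of profile subtrees, which is essentially what a manually cut tree expresses. The paths below are invented for illustration; they are not the actual RFC 8795 tree.

```python
# Sketch of pruning a full model tree down to a profile, given an
# allow-list of subtree prefixes. The paths are hypothetical, not the
# real RFC 8795 node names.
FULL_TREE = [
    "te-topology/link/te/bandwidth",
    "te-topology/link/te/srlg",
    "te-topology/link/te/admin-status",
    "te-topology/link/te/metric",
]

def cut_profile(paths, allowed_prefixes):
    """Keep only the nodes that fall under one of the profile's subtrees."""
    return [p for p in paths
            if any(p.startswith(pre) for pre in allowed_prefixes)]

profile = cut_profile(FULL_TREE, ["te-topology/link/te/bandwidth",
                                  "te-topology/link/te/admin-status"])
```

A deviation-based approach would instead declare the excluded nodes as "not-supported", which is the inverse of this allow-list; which direction is more convenient for implementers is exactly the tooling question Italo raises.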

Pavan Beeram: I see Oscar. Oscar, go ahead.

Oscar Gonzalez de Dios: Yeah, so my question is regarding this tooling issue. As you know, there is this new ASNM group that is working on some tooling regarding YANG. So one of the possibilities would be to consider this trimmed YANG, where there is a profile and you are just using a subset. Maybe that's one of the things that could also be covered or taken there.

Italo Busi: Yeah, maybe it's a good idea. I think we can write a draft in ASNM about profiling YANG models. Because it's a generic problem with YANG.

Oscar Gonzalez de Dios: Exactly. Yes, exactly. So it seems to be more related, as you say, to the tooling and how to express to others how the API would look in a trimmed way. OK? Thank you, Italo.

Italo Busi: Yes, also a good idea to me. Yep.

Pavan Beeram: So I understand you've taken this problem to NETMOD earlier, but you...

Italo Busi: I talked to some people in NETMOD, but I have not yet written to NETMOD because it's difficult for me to write down exactly what the problem statement is: the people who are objecting to this idea because they have no tool, what do they actually need from the tool? The first reaction I'm getting from NETMOD people is, "Why don't you use deviations?", and that's why I started to look at deviations. So it would be good to also have input from the implementers about what they need.

Pavan Beeram: Just one clarification as a pen holder for the charter. When we said tools in the ASNM charter, our focus was, first, that ASNM will only catalog tools, so the work still has to be done by people, and second, that it is not all tooling related to YANG. For instance, pyang and all the other things that we do are not going to be in scope. The scope is things related to service modeling and things external to this. So, just so that we set this straight: maybe TE profiling comes into that, I am unsure right now, but let's have that discussion. Yeah.

Italo Busi: OK, OK. From what Oscar said, I'm thinking we can write a problem statement about profiling YANG models, which is not only about service models; it can be network models and device models as well. So I think the issue is quite generic, and we can see what the feedback from the experts is.

Pavan Beeram: And that was my point; I think generic things let's keep that in NETMOD.

Italo Busi: OK, OK. So maybe we can go to NETMOD. OK.

Pavan Beeram: So with respect to this particular document, I guess the proposal about keeping that out of scope is still valid. I think that shouldn't block this document. But yeah, we need to have some discussion on how to get the tooling done.

Italo Busi: Yep.

Pavan Beeram: Any other questions for Italo? Thank you.

Is Dan King in the room? Oh, you made it back. OK, great.

[Presentation: Applicability of Abstraction and Control of Traffic Engineered Networks (ACTN) for Packet Optical Integration (POI) service assurance]

Dan King: Thank you. Hi guys. I will be talking momentarily about, I believe, the ACTN POI assurance document [draft-ietf-teas-actn-poi-assurance]. I hope. Those are the slides I wrote. Yes, here it is. Great. So, several authors; I was pushed forward to present. The architecture essentially builds on the reference network that we have in the ACTN POI document [draft-ietf-teas-actn-poi-applicability] that's progressing through the TEAS group at the moment. What this document does is build on the ACTN reference architecture and identify how, once you've deployed the services, you actually monitor them: how you ensure that you meet the required SLA from a high-level perspective and implement the various features across the different layers to build a picture of the health of the service. This is really important when you're building these multi-domain networks, both horizontally and from a hierarchical perspective, because there are lots of inter-layer dependencies when you have packet running over optical. Considering the different functional components (the CNC, the PNCs at both the packet and optical layers, and the MDSC), we need to know what information to extract from the network so we can make informed decisions and recommendations in the event of failure. Now, obviously you will pre-compute several failure scenarios and have backup paths at the optical and maybe the packet layer, but it's really about identifying those interactions and what tooling we have at the IETF. Because ideally we want to reuse the YANG models, the protocols, and the procedures that have been defined not only within the TEAS working group but in the ancillary work we have in CCAMP, IPPM, and others: BFD, etc.

So, where are we at the moment? In terms of documentation, we are on the fifth version of the document, so 04. We have, I think, stabilized at least the key use cases that we cover. Obviously the architecture was pre-defined for us. The major updates from the previous version are ensuring that the workflows, the life cycle of the service, are very well defined, and we've put a lot of new text into a few of the latest sections of the document that talk about the interaction between the layers and the optical domain itself. We have one big section remaining now. We've seeded a few discussion points; this is the packet discussion, the VPN technology itself. What we need to start thinking about, from a packet layer perspective, is what information needs to be extracted at the key demarcation points, both at an interface level and maybe at a system level for the domain, and then how we correlate that and use it at the MDSC to make a series of decisions. At the optical layer, if there is a failure, it's kind of binary: you will switch, and that optical service switch should be transparent to anything running on top because it happens within a very short period of time. But in the packet layer, when you start seeing degradation of service or some kind of catastrophic failure, the MDSC may want to make a recommendation to a lower-layer PNC to say, "Hey, look, there is potentially an issue; I've got a TCA enabled and I don't like the fact that I've just triggered an alarm; you need to do something about it."

Now, there are a couple of caveats with that. Is this a correct assumption to make? Should the MDSC be getting involved with this lower-layer decision making, both from a packet and potentially an optical path perspective? And what kind of scenarios are we really talking about here? We've got a couple of operators who are really helping with this discussion, but we do have to make some decisions. It's maybe not going to solve all use cases, but we've got one or two use cases that we're using as our North Star here. So we've got a diagram here. It's quite complicated; it's very difficult to translate directly into ASCII art, so we haven't attempted that yet, but we'll probably simplify it. What we want to try to do during the weekly calls we have for this particular document (they're not interim sessions) is identify which domain elements, so PE2 and CE2 potentially, we are going to extract information from and provide to the PNC on the right-hand side there, the red PNC, and then maybe aggregate that information, abstract it, and create a composite view that we then pass to the MDSC. On the left side of the network, the optical layer is maybe single-administrator but multi-domain. Maybe the packet layer is slightly different, maybe different packet vendors. Do we then collect information all the way on the left-hand side (it's not shown there, but the P1 domain, the packet layer domain; it's blue when you look closely)? We probably don't collect any information beyond that border router, actually, but you can see two different OAM mechanisms; we've got BFD, and STAMP and TWAMP as well.
So there are multiple ways to collect information; it's really deciding what to enable and when for which scenario, how to collect it, how the P-PNC, the red one, will store it, and then what to report to the MDSC.

So we will be spending a considerable amount of time, I think, talking through this particular section. It's probably going to take a few iterations. We will report back to the working group wherever possible; there's already been, I wouldn't say contentious discussion, but some disagreement about different approaches, what to monitor, and what mechanism to use for OAM. Once we reach some consensus, we'll probably report back to the mailing list even before we submit the new version of the document. We track everything, all of the discussion, in the issue tracker in GitHub. I think there are 11, maybe 12 open issues at the moment, but this is the most important one, I think.

So yeah, I guess between now and Vienna, it's really about identifying that key use case: what needs to be push versus pull OAM, if we do use telemetry where it is enabled, what mechanisms we are going to use, and so on and so forth. Then we will obviously look to get a new version submitted before Vienna, but anything we need the working group to help us with, I think we'll post to the list and point back to the GitHub. I think that's probably it, actually.

Pavan Beeram: Any questions? No disagreement with strategy. Everybody is happy. Or no one cares. Also possible.

Dan King: Thank you.

Pavan Beeram: Thanks, Dan. Tony?

[Presentation: A Power Conserving Path Placement Strategy (PCPPS)]

Tony Li: Hello, I'm Tony Li. I'm talking on behalf of all my co-authors. We'll be talking about power conservation and how to do so with traffic engineering.

So we've got networks out there that have provisioning that is set up for their peak demand, and especially for eyeball networks, there are cases where demand falls off, as low as 15% of peak. And that means that 85% of their power is being wasted. This makes people unhappy because they are paying for power, more so since the price of gas went up in Europe and once again since the price of oil just went up. So how can we turn off parts of the network?

Well, first thing we'd like to do is to consolidate traffic. If we can push all the traffic onto a smaller number of links, then we can turn off the links we're not using, and even more importantly, we can turn off ASICs that we're not using. ASICs are the biggest power consumers in routers.

So what we're suggesting is an enhancement to CSPF. I think everybody in this working group should know about CSPF, so I'm going to skip over that part and talk just about what we're doing to change it. What we're suggesting is that we also have a power metric for a link. This can be extracted from the traffic engineering database. We have proposed ISIS extensions that give you the power consumption of an interface [draft-many-lsr-power-group], and by looking at both ends of the link, you can come up with a total power for that link. This can then be converted by the path computation engine into a metric, and from there we can compute a path that avoids power-hungry links.
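The per-link computation described above could be sketched roughly as follows. This is purely illustrative: the draft deliberately leaves the metric policy local to the path computation engine, so the additive TE-plus-power cost, the function names, and the data layout here are all invented for the example.

```python
import heapq

def shortest_path(links, src, dst, power_weight=1.0):
    """Plain Dijkstra over a combined metric: the TE metric plus a
    locally derived power metric (an assumed, non-standardized policy).
    links: {node: [(neighbor, te_metric, pwr_a_watts, pwr_b_watts), ...]}
    """
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, te, pwr_a, pwr_b in links.get(u, []):
            # Total link power is the sum of the two interface
            # advertisements, one from each end of the link.
            link_power = pwr_a + pwr_b
            cost = d + te + power_weight * link_power
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]
```

With equal TE metrics, raising `power_weight` steers traffic onto the low-power links, which is the consolidation effect the presentation is after.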

Since this is being done by each ingress node in the network, and we have no real need to distribute this metric around, this is stuff that does not need to be put on the wire and is not standardized. So that's out of scope for the moment. As I mentioned, all the power stuff is already in the IGP. We've got everything reasonably well documented, and haven't had too many complaints. We do have lots and lots of data about how things are architected internally to the router. We abstract things through what we call a power group, so you don't have to tell us how your router is architected, but you do have to tell us how the power is arranged in your router.

We also need to know what can be turned off. For some things that's implied by the advertisements, but it's very helpful to know what can and can't be turned off. And power groups I just talked about. In strange cases like LAGs, an interface can belong to more than one power group. And power groups are not necessarily hierarchical, so the overall thing turns into a lattice. It is pretty important that if you are walking this lattice trying to figure out the power consumption of the link, you avoid anything that looks like a cycle. You shouldn't obviously be advertising a cycle, but just in case somebody is advertising garbage, be careful.
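The lattice walk with the cycle guard might look like this; the group structure and field names are hypothetical, not taken from the draft, but the visited-set discipline shown is exactly the "be careful about cycles" point above.

```python
def link_power(groups, start_ids):
    """Sum power over the power groups an interface belongs to.
    Groups may share ancestors (a lattice rather than a tree), and
    advertisements could be garbage, so we track visited ids both to
    count each shared group once and to survive accidental cycles.
    groups: {group_id: {"watts": float, "parents": [group_id, ...]}}
    """
    total, visited, stack = 0.0, set(), list(start_ids)
    while stack:
        gid = stack.pop()
        if gid in visited:      # already counted, or a cycle: skip
            continue
        visited.add(gid)
        group = groups[gid]
        total += group["watts"]
        stack.extend(group.get("parents", []))
    return total
```

Note that two interfaces sharing a line card contribute that card's power only once, and a bogus advertisement containing a cycle terminates instead of looping forever.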

We also have to deal with what we call unidirectional sleeping bandwidth. Again, in the case of a LAG, you may have members that are turned off. It is very helpful to know about the bandwidth that has been turned off because it can be turned back on. We do want the ability to turn things back up in the morning, because we do need to re-energize the network, so we also need to know what's been turned off. We also need to know which links have been put to sleep, and we have to carry that around in ISIS in a very specific way so that it doesn't get confused with an actual working link. So there's an entire separate top-level TLV for that.

So we're ready for this to be adopted as a working group document [draft-li-teas-pcpps], and this is helping to drive the IGP work, draft-many-lsr-power-group. Thank you very much. Are there any questions?

Pavan Beeram: We are well ahead of time in the schedule. We are doing great on the schedule, so in case anybody wants to suffer, go ahead.

Zafar Ali: Hi. So I'll leave it to operators to comment on whether they would like to mess around with routing to save power, because they put links and entities in the network based on careful planning; they are there for protection, in case of failures and other reasons. But that's something I cannot comment on. But I would like you, Tony, to go to slide number five, please. There in the middle you say "algorithm used to compute path", and you say the definition of this metric and other things is out of scope. That's the very thing that should be defined, for consistency across multiple vendors. Otherwise you have a very large skew on your metric that can impact the routing. And on the next slide, which is six, you define a power group hierarchy, which is fine, but the issue is that the green working group is chartered, and I read their charter: define terms and definitions related to energy efficiency metrics, develop or select a framework for energy-efficient monitoring, energy-efficient capability discovery. It has been mentioned to you in the LSR working group that these kinds of definitions, the hierarchy, the correct hierarchy and all, are something the green working group should be consulted on. And your answer was: I don't like the way things are going in the green working group. But that's not a good answer, honestly. So I can see why you came to TEAS to bypass that dependency, but I don't...

Tony Li: No, it's not to bypass it. They are doing monitoring. They are not doing traffic engineering. We are doing traffic engineering. We do not have to deal with them.

Zafar Ali: Yeah, but if you define a power group hierarchy and a set of metrics to go with it, I do not believe you can skip that. But that's a different discussion.

Tony Li: But there are multiple ways to define a YANG model, and we're not going to.

Pavan Beeram: Zafar, I guess your question is more towards the LSR document. This document doesn't really define the power group; the definition of the object is in the LSR group. And my understanding is that there is some ongoing discussion between the chairs and the ADs about coordination. I think that will get sorted out at some point. But this particular draft being presented here does not define what's being modeled in green.

Zafar Ali: There is a whole section on this power group. Anyway, it's fine. If the discussion is happening between LSR and green, we'll let it go.

Tony Li: LSR power groups describes what we are doing in ISIS, and it's completely independent of anything happening in green. Green is talking about what they're doing; we're doing something completely separate. The only thing we have in common is the word "watt".

Zafar Ali: I'll let the discussion that's happening between LSR and green go on. But the definition also needs to be very clear, very crisp, because this is part of the TE metric that influences routing. You have to define it; you cannot say it's out of scope. Thank you.

Tony Li: I'm sorry, but that's absolutely not true. The metric is completely local to the path computation engine. You do whatever you want with the information. This is just like all the other constraint and policy information that the headend also has. You don't need to circulate it.

Dan King: Cool, thanks Tony. So, full disclosure, I haven't read the document, but I was paying attention to the presentation, and I really like the sound of this, especially given recent geopolitical events. I have two questions. I'll ask them both, and then you can choose to answer one or ignore both. Have you done any quantitative analysis on a reference topology, just to see the kind of savings you would get and what the benefit would be to an operator who clearly has a power bill problem that will only increase? And the second question is really linked to my presentation: how does this factor into my overall network resilience? Because if I'm going to put parts of my network into a standby state, do I need to start factoring this into my resilience scheme? Not all the FPGAs or ASICs are equal, so how quickly can I spin a device back up?

Tony Li: I'm going to answer your second question first. Yes, this has to go into your resilience design. The way we recommend doing that right now is to figure out what you feel is a minimal backbone and not enable power savings on that backbone. We expect that backbone is at least bi-connected. If you want to have resilience in addition to that portion of the network, we're happy to talk about that. We think that's a very reasonable request, and again we want bi-connectivity with a certain amount of capacity left over for failover. Again, this is something the headend can compute when it's doing path placement: what do we turn off, what capacity do we need for resilience? So lots of open work there, and it's a little bit tricky because everybody's idea of what's sufficient resilience is very personal.

To the first question: I've spent about a year modeling this stuff and running it through five different tier-one backbones. Depending on the backbone and the traffic load, I have been able to extract up to 74% power savings, obviously at minimum traffic time. Now, that was an eyeball network, because it has very, very large traffic swings. There are other networks, more traditional backbone inter-ISP networks, where traffic is much more constant and only exhibits small dips, and obviously your power dip can't exceed your traffic dip, right? So we've only been able to extract a few percent in those cases, but it's still extremely useful in trying to get out whatever power you can. And the best part is that we are going to be able to automate this completely. So once configured appropriately, the operator should be able to set it and forget it.

Dan King: Cool. Thank you.

Andrew Stone: Hi. So I guess, yeah, a similar question as Dan. There's a draft in Spring, which you're obviously aware of too, related to doing this with segment routing [draft-ietf-spring-sr-pcpps]. That has not been presented yet to the TEAS working group. Do the chairs think it's worth bringing at least one presentation? I can maybe do that at the next IETF; figured I'd just check.

Pavan Beeram: I guess we can I'll talk to Oscar as well, we can maybe look at having a bigger slot. But yeah, I mean there's nothing stopping you from sending an email out to the list and asking for review in the meantime.

Oscar Gonzalez de Dios: Also because I think it would be good to have coordination; if we are going to work on the same topic, even if it's with a different technology, I think it's better to be coordinated. OK? So it's better to make both working groups aware of the work.

Pavan Beeram: Andrew, when you came online I thought you were going to say I have an implementation ready, so...

Andrew Stone: Yeah, not yet, not yet. Yeah. So soon, soon we'll see. Looking forward. Yeah. That's too bad.

[Presentation: Multipath Traffic Engineering]

Pavan Beeram: I'll be giving a quick update on the three MP-TE drafts that are targeted for TEAS working group. For those of you who just walked in, my name is Pavan Beeram and I'll be presenting this on behalf of the authors and contributors on these three documents. We've been busy implementing these drafts, so the updates themselves are fairly brief, except for some changes related to optimizing subgraph updates. The rest of the changes are mostly editorial.

This is the first of the three drafts [draft-ietf-teas-multipath-te]. This is the base draft that covers the overall architecture for MP-TE. Kireeti has the pen on this; Lou, Mazen, and Andy are the other three co-authors. This draft introduces a new TE paradigm called DAG-based multipath traffic engineering, which provides tools that you could use to enhance multipathing in traditional bandwidth-engineered networks and also bring traffic engineering to networks that are purpose-built for massive amounts of ECMPness. The primitive of interest in this paradigm is what we call a traffic-engineered DAG, as opposed to the traditional traffic-engineered path that you would see in existing TE architectures. We do have bandwidth engineering tools like auto-bandwidth TE tunnels and container tunnels widely deployed today; you could view a DAG-based MP-TE tunnel as another tool in the bandwidth engineering toolkit.

Some of the key attributes associated with the construct this paradigm offers are listed here. With the MP-TE tunnel, which is the construct used in this paradigm, you can leverage unequal-cost load balancing not just at the ingress but also at every junction that makes up the DAG. The construct supports multiple ingresses and multiple egresses. The multipath spread is maximized at the time of instantiation of the DAG itself. The amount of state that needs to be set up is significantly less compared to, say, a container tunnel, where you would have to set up and maintain multiple paths. The amount of churn that comes into play when there is a resource-down or resource-degraded event is also significantly less, because the shape of the DAG is mostly static post-setup; unless there is a permanent change to the topology or you're changing the static set of constraints, the shape of the DAG doesn't change, and the only things you'd need to adjust over time are the bandwidth on the junction and the relative next-hop load share.

Two key constructs: one is the MP-TE tunnel and the other is the MP-TE junction. The MP-TE tunnel is what you configure at the tunnel originating node. It specifies a set of ingresses, a set of egresses, a set of constraints, and an optimization objective. The computation engine comes back with results in the form of a list of junction states, and we then use a signaling protocol to go provision these junction states on the junctions that make up the DAG. You could use RSVP, you could use PCEP, you could use BGP, you could use any other API using a standard data model. The junction state that's provisioned on each junction includes things like the amount of bandwidth that's coming in and going out of the junction; it includes the set of previous hops, the set of next hops, and for each next hop, it also includes the amount of load share that needs to be provisioned on that particular next hop.
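The junction state described above could be pictured with a small sketch. The class and field names below are guesses for illustration, not the draft's actual data model; the point is the shape of the state (bandwidth in/out, previous hops, and per-next-hop load shares) and the UCMP split it enables.

```python
from dataclasses import dataclass, field

@dataclass
class NextHop:
    node: str
    load_share: int          # relative weight among this junction's next hops

@dataclass
class JunctionState:
    """Illustrative shape of the per-junction state a controller would
    provision via RSVP, PCEP, BGP, or another API; names are assumptions."""
    junction: str
    bandwidth_in: float      # bandwidth entering the junction on the DAG
    bandwidth_out: float     # bandwidth leaving the junction
    prev_hops: list = field(default_factory=list)   # upstream node names
    next_hops: list = field(default_factory=list)   # list of NextHop

    def split(self):
        """Unequal-cost split of outgoing bandwidth across next hops,
        proportional to each hop's relative load share."""
        total = sum(nh.load_share for nh in self.next_hops)
        return {nh.node: self.bandwidth_out * nh.load_share / total
                for nh in self.next_hops}
```

For example, a junction carrying 300 units with next-hop shares 2:1 would forward 200 on one hop and 100 on the other, which is the "load share provisioned on that particular next hop" mentioned above.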

In terms of actual changes in this revision, they're mostly editorial. We did publish a BGP signaling spec [draft-ietf-idr-multipath-te-bgp] which was presented in IDR earlier this week, so this architecture document just has a reference to that new work as well.

The next update is for the RSVP spec [draft-ietf-teas-rsvp-multipath-te]. The current focus is on realizing an RSVP MP-TE tunnel on a traditional MPLS forwarding plane, but the same procedures, with some minor tweaks, can be used for realizing it on, say, a shared MPLS forwarding plane, a native v4 or v6 forwarding plane, or even an SRv6 forwarding plane; those details will be added in subsequent versions, either in the same document or in a new document published to cover that. We did talk about optimizing the signaling procedures in previous meetings, so I'll not dwell too much on it. The one design guideline I would like to draw your attention to for this particular revision is the one specified at the bottom: the goal is to minimize trigger message processing. What that means is we want to avoid unnecessary junction state updates; if only a small subgraph is being updated, you only go and touch the topological elements that need to be updated, and the rest of the DAG is left untouched. We haven't made any changes to the signaling messages in this revision, so I'll not dwell much on that.

In terms of the changes, like I said, we introduced this notion of a hop version. The idea is to use in-place update procedures when there is a need to update a subgraph: all the edges that have been affected as part of the subgraph update are associated with a new hop version. This has helped us minimize the number of trigger messages that come into play when doing in-place updates.
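A rough sketch of the hop-version idea, under the assumption (mine, not the draft's) that junction state is a simple dictionary of hop entries: only hops on affected edges get stamped with the new version, and only their junctions generate signaling.

```python
def apply_subgraph_update(junctions, updated_edges, new_version):
    """In-place subgraph update: junctions whose hop lists touch an
    affected edge get the new hop version; the rest of the DAG is left
    alone and generates no trigger messages. Purely illustrative.
    junctions: {id: {"id", "prev_hops": [...], "next_hops": [...]}}
    updated_edges: set of (upstream, downstream) node pairs
    """
    touched = []
    for j in junctions.values():
        for hop in j["next_hops"] + j["prev_hops"]:
            # An edge may appear as (junction, hop) or (hop, junction)
            # depending on whether it is a next hop or a previous hop.
            if (j["id"], hop["node"]) in updated_edges or \
               (hop["node"], j["id"]) in updated_edges:
                hop["version"] = new_version
                if j["id"] not in touched:
                    touched.append(j["id"])
    return touched   # only these junctions need a signaling message
```

In a three-node chain A-B-C where only edge A-B changes, A and B are touched and re-versioned while C's state is untouched, which is the trigger-message minimization described above.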

A similar change was made to the YANG data model as well [draft-ietf-teas-multipath-te-yang]. As discussed in previous meetings, there are two modules in this draft: one for managing MP-TE tunnels, the other for managing MP-TE junctions. The high-level structure for MP-TE tunnels is illustrated here. As you can see, the tunnel entry has a list of tunnel instances, each tunnel instance is associated with a set of junctions, and the previous hops and next hops associated with each junction entry also have a version to go with them. This is the high-level structure for the MP-TE junction module. Again, this sits under the top-level TE container, and each junction entry has an associated set of previous hops and next hops. Just as in the RSVP spec, the changes we've made in this revision have to do with the introduction of the hop version.

This is my last slide. We do have a to-do list: for the architecture document, we expect to add more details on how these tunnels are realized over a native v4 or v6 forwarding plane, hopefully in the next revision. For the signaling spec, we intend to add several other signaling sequences, especially the in-place update procedures; at some point, we would also get to adding the graceful restart procedures. And for the data model spec, we intend to add a few more operational state fields, which should make it into the next version. For us, the next significant milestone is to come and share an implementation report. As I said, certain aspects of the implementation are ready to be demoed. If anybody here is going to be in Paris for the Upper Side World Congress, please do get hold of Kireeti, and I think he would be happy to show the aspects of the implementation that are currently working. Once the implementation report is shared, hopefully in Vienna, we believe we would be at a stage where we could request these documents be considered for working group adoption.

That's all I had. Any questions?

Oscar Gonzalez de Dios: Hi, Pavan. One of the questions I have: you said you were working on the implementation, but have you done some multi-vendor tests, or is it just a single-vendor (Juniper or HP only) implementation?

Pavan Beeram: For RSVP, no. For BGP, there is an open-source implementation being put together. But for RSVP, if you are aware of a good open-source RSVP implementation, let us know and we can make that happen.

Oscar Gonzalez de Dios: OK, OK, good. And the second is: do you have any particular order in which you would like to take the documents? You have mentioned the three of them, but I guess you would like to start with the architecture and then the others?

Pavan Beeram: Yes. I mean, I guess the proof of the pudding is in the performance results we should be able to show in the implementation report. Once we get to that stage, I would like to think we will have gained enough confidence to get this progressed to the next stage. Thank you. Kireeti, you want to say something?

Kireeti Kompella: Yeah, Kireeti Kompella. One thing we haven't done is to bring the BGP signaling draft here. Is that interesting to do? In the end we will be signaling the MP-TE tunnel using BGP, so the primary place should be IDR, but does it make sense to bring it here as well?

Pavan Beeram: I would let Oscar answer that. I'd like to think that since we are already talking about the architecture and the various signaling aspects, it may be worth one presentation, but the bulk of the work needs to happen elsewhere.

Kireeti Kompella: Yes, yeah. OK. All right. Thank you.

Oscar Gonzalez de Dios: As mentioned, I would like to first start with the architecture and then, if required, we can bring the BGP and so on as well. I don't yet have a clear position on that, but for me, first the architecture; that's what is mandatory, and then all the signaling and YANG models around it can come after. OK. All right. Thank you.

Zafar Ali: Yeah, Pavan. Can you hear me?

Pavan Beeram: Yes.

Zafar Ali: OK. So I have an implementation ready and shipping for many years, because SR-TE does this multipath, or UCMP/ECMP, from day one, like for 15 years. So I do not understand the motivation for reinventing the wheel, and this is something that all vendors support. But my bigger concern is about the slide where you displayed optimization signaling procedures and guidelines. I just want to make sure those optimizations remain within the scope of MP-TE and do not creep into existing RSVP-TE implementations that have been shipping for many, many years. Of course you have a scale issue with this; you're introducing completely new signaling procedures that are not hop-by-hop but are like APIs to mid-nodes, so you can see why you need to optimize signaling procedures. But I just want to make sure that remains within the scope of MP-TE.

Pavan Beeram: Yeah, thanks for the comment, Zafar. In the interest of time, I'd like to keep it very short; we still have one more presentation. Bandwidth engineering with segment routing has challenges. Like I said, this is a bandwidth engineering construct. If it was really that simple, RSVP-TE would have been dead by now; the fact that it's still there and thriving, I think there's a lesson for everybody to take. But yeah, let's take that to the list. With regard to optimizing the signaling procedures, that's up to the working group to decide: whether there is anything in there that is of generic relevance and can be brought back into traditional RSVP-TE as well.

Zafar Ali: So Pavan, I think you're mistaken that there is a challenge, and I can point you to some presentations, even implementations, that are out there for bandwidth accounting for SR, OK? But we can take it offline.

Pavan Beeram: Thanks Zafar. Thank you.

[Presentation: 5QI to DiffServ DSCP Mapping Example for Enforcement of 5G End-to-End Network Slice QoS]

Luis Miguel Murillo: This is Luis again, with a very brief update on this draft about the 5QI to DiffServ DSCP mapping, with the idea of enforcing 5G end-to-end network slices [draft-contreras-teas-5qi-to-dscp-mapping]. A brief reminder of the scope: basically, we try here to provide a methodology for doing that mapping. In 3GPP, in the radio access network, we have a huge number of 5QI values, but at the end, in the network, we need to carry all the traffic using just a few DSCP markings. So what we intend here is to show a potential methodology for facilitating that enforcement of slicing behavior in the network. Important statements here: the draft only provides these examples for illustration purposes, so it should not be considered as any kind of deployment guidance, and we only focus on the methodology to follow, not on the specific values to be considered. And there will be many cases, slicing, O-RAN, and a lot of traffic together, so we need somehow to find a way of putting all the traffic together without impacting the different flows.
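The many-to-few grouping Luis describes could be sketched like this. Echoing the draft's own caveat, the 5QI groupings and DSCP codepoints below are illustration only, chosen by me to show the shape of the methodology, and must not be read as deployment guidance.

```python
# Hypothetical grouping of many 3GPP 5QI values into a few DiffServ
# classes. The 5QI memberships and DSCP values are example choices.
QOS_GROUPS = {
    "conversational": {"5qi": {1, 2, 65, 66}, "dscp": 46},  # e.g. EF
    "streaming":      {"5qi": {6, 8, 9},      "dscp": 26},  # e.g. AF31
    "best_effort":    {"5qi": {7},            "dscp": 0},   # default PHB
}

def dscp_for_5qi(five_qi, default=0):
    """Map a flow's 5QI to the DSCP of the group it belongs to;
    unmapped 5QIs fall back to the default codepoint."""
    for group in QOS_GROUPS.values():
        if five_qi in group["5qi"]:
            return group["dscp"]
    return default
```

The methodology is the table, not the numbers: an operator planning the flows decides the groupings and which codepoint each group carries across the transport network.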

Changes from the 04 version to 06: content-wise, we think the draft is very stable, so basically we concentrated on the missing sections, security considerations and operational considerations. For security, this basically means highlighting the fact that we need to guard against attacks where malicious sources could inject traffic mapped into a specific grouping and thereby impact the other flows in that grouping. The operational considerations essentially note that network operators need to take care when planning the different flows and how to produce the mapping. Apart from that, we added an implementation status section. We are planning to perform an implementation of these groupings, and probably by the next IETF, if not in Vienna then in San Francisco, we will have some implementation ready that we can describe; everything we intend to do will be released as open source. Finally, we updated Christoph's affiliation.

Next steps: we have presented this several times here, and also in TSVWG. There was discussion about what the proper home could be, so for now we keep presenting the draft here. A next step could be a liaison to 3GPP to check consistency of the approach. This approach has already been integrated in O-RAN; there is a specific, concrete specification, the Xhaul packet-switched architecture, that describes this same methodology. And once all of this is resolved, we would essentially like to request working group adoption, if this is the final home for the document.

Pavan Beeram: Thanks. You've presented this before. The last time there wasn't much support, but given how the discussion has panned out over the last few IETFs, and there was a discussion at TSVWG as well, I think we'll have a discussion with our technical advisors and see if we should try a poll to gauge support. We can discuss that offline.

Luis Miguel Murillo: OK, wonderful. Thank you.

Pavan Beeram: OK, we are two minutes over. Apologies for running over. Great discussion. Thanks, everybody, for attending. See you all in Vienna.