
Session Date/Time: 20 Mar 2026 06:00

Bruno Rijsman: So welcome to the Spring working group. We have a full agenda, so we'll start on time. Thank you for being with us for this last slot. This is the IETF Note Well. It's a reminder of the rules, including those about conduct, IPR, and process. You should read the documents, because you have already acknowledged them and confirmed that you will comply with them. If you have any questions, you can ask the chairs or the AD about those documents. These are the IETF meeting tips. In short, if you are in the room, you should connect to the Meetecho tool to be able to join the queue and be recorded on the blue sheet. And if you are remote, please mute your audio when you don't want to speak. The minutes are collaborative, meaning you can help write the minutes and correct your statements and name. This is the link.

So, in terms of document status, we approved two documents which are currently in the RFC Editor queue. One is draft-ietf-spring-sr-policy-yang; I'd like to thank Dhruv and the authors for the final polishing on the PCEP text. And the second document is "Distribute SRv6 Locator by DHCP" [Note: This refers to draft-ietf-dhc-srv6-locator-deployment], so congratulations.

We submitted another document to the IESG, currently in AD evaluation: draft-ietf-spring-resource-aware-segments. And finally, we are doing a last call on draft-ietf-spring-srv6-security. We did receive reviews from the security and routing directorates. We did the first last call for two weeks and then extended it; we'd like to have more feedback on the document. So the plan is to have the authors update the document with the comments received so far, from the working group and from the directorates, and then we'll issue another working group last call. It's an important document. It's a document that was asked for by the IESG so that SRv6 security is correctly handled. And it's also an important document for network operators, to ensure that their deployments are secure.

This is an overview of our Spring working group documents, as reported by the authors, so thank you to the authors. I think we have 17. I guess the main point comes from service programming: the authors have asked to split the document into two, one for MPLS and one for SRv6, and we'll send an email to the list about that. And finally, this is our agenda.

Joel Halpern: Yeah, sure. So, as Bruno said, we're very happy that you're here today. It's the last session of the last day, and it's not always that everyone stays. I'm very happy to see a lot of faces that I know and a lot of new faces, and I'm very happy that we have progressed many drafts. However, I am not happy at all with the participation that we've been getting from the working group lately. We have been working, as all of you know, on that security draft for the last year and a half, almost two years, and we received a total of two—two; I don't know how to say two in Chinese, but two—responses out of the working group. So I am, you know, disappointed that there's not enough engagement in the working group. That is an important document. A lot of people participated in the discussions. We had at least three different interim meetings, and in the end we had two responses. If we go to the next slide: these are the documents that have been adopted. As you can see there, many of the authors provided an update on the document, but about half didn't.

So, what I'm trying to get across here is that we need working group participation. This is not up to us—Joel, Bruno, and me—to figure out what to do with the working group. This is about the working group reaching consensus, agreeing on things, and having discussion. And to do that, we need traffic on the list. We need people to say they like things, to say why they don't like things, to make comments, and to not rely just on the presentations made at the meeting. Some of these documents are—well, I'm going to say all the documents are important, but some of them have external dependencies on other working groups, for example the YANG models. And not only did we not get an update, but we haven't seen movement on some of these drafts, sometimes for years. So we need to see engagement.

We need to see movement. We as the chairs can change the people working on the documents if we need the documents. If you don't want to work on the documents anymore, we can just, you know, forget them. Or, in the case of some of the important documents that have dependencies, we need to find other people who can do the work. And of course, finding other people is not enough: we still need everyone else to participate, to be engaged, and to show as much interest as you're showing today by actually coming to the last meeting of the last day of a very long IETF week. So thank you for all the work that you've been doing, and I'm going to say thank you for all the work that you're going to do in the next few weeks, months, and hopefully years of making comments and participating on the list. I know that all of you are not just looking at your email, but are internally nodding and saying yes. So thank you.

So this is our agenda today. It's a full agenda. Any comments on the agenda before we move on to the first slot? If not, the first draft is draft-ietf-spring-srv6-path-segment. [Link to slides: Path Segment Identifier (PSID) in SRv6 (Segment Routing in IPv6)]

Guangming Zheng: Okay. Hello, everyone. I'm Guangming Zheng from Huawei Technologies, and today my presentation topic is the Path Segment Identifier in SRv6: an analysis of the options for the flag problem.

Firstly, let's look at the background and objectives. The problem is that SRv6 lacks a unified path segment identifier comparable to the SR-MPLS PSID defined in RFC 9545, and using a variable-length segment list as a key in SRv6 is inefficient. So we need to define an efficient and unique 128-bit P-SID (Path Segment Identifier) to support different use cases, like bidirectional path binding, end-to-end path protection, and so on. Currently, the protocol design in our draft is as follows. First, the location: the P-SID must reside at Segment List[n], where n is the last entry of the segment list, as depicted in this picture. Second, the structure: the P-SID follows the standard SRv6 SID format of locator, function, and argument, and the P-SID has its own function code. Now, the main topic of my presentation is the flag problem we want to discuss: how to efficiently signal to nodes that this specific entry is a path identifier and not a standard routing SID.
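The locator/function/argument structure described above can be sketched roughly as follows. This is a hypothetical illustration: the bit widths chosen here (64/16/48) are assumptions, since real SRv6 deployments choose their own locator and function lengths.

```python
# Illustrative sketch of packing/unpacking a 128-bit SRv6 SID as
# locator | function | argument. The 64/16/48 split is an assumption,
# not taken from the draft or from RFC 8986.

LOC_BITS, FUNC_BITS, ARG_BITS = 64, 16, 48

def pack_sid(locator: int, function: int, argument: int = 0) -> int:
    """Return the SID as a single 128-bit integer."""
    assert locator < (1 << LOC_BITS)
    assert function < (1 << FUNC_BITS)
    assert argument < (1 << ARG_BITS)
    return (locator << (FUNC_BITS + ARG_BITS)) | (function << ARG_BITS) | argument

def unpack_sid(sid: int):
    """Split a 128-bit SID back into (locator, function, argument)."""
    argument = sid & ((1 << ARG_BITS) - 1)
    function = (sid >> ARG_BITS) & ((1 << FUNC_BITS) - 1)
    locator = sid >> (FUNC_BITS + ARG_BITS)
    return locator, function, argument
```

Under this scheme, a P-SID would simply be a SID whose function field carries the (assumed) P-SID function code; the flag question discussed next is how other nodes learn that the last entry should be interpreted this way.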

Okay. First, I want to thank Bruno for the in-depth discussion with us; we finally came up with three high-level approaches for community review. Option 1 is the dedicated P-flag, Option 2 is the generic G-flag, and Option 3 is the no-flag mechanism. Now let's look at each option in detail. Option 1a is the dedicated P-flag. The basic mechanism is that we define a new bit, a P-flag, in the SRH Flags field, as depicted in this picture. The semantics: when the P-flag equals one, it explicitly indicates that the segment list's last entry carries a P-SID. The pros of this mechanism: it is simple and unambiguous, with direct logic that is easy to implement. But there are cons. One is resource consumption, because it consumes one of the only eight available SRH flag bits for a single function, which was mentioned by Bruno in the mailing list discussion; that is why we want to modify this mechanism. Also, with one P-flag used just for the P-SID, it has limited extensibility: it is specific to the P-SID and does not accommodate future metadata types.
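A minimal sketch of the Option 1a check, assuming a hypothetical bit position for the P-flag; as noted later in the discussion, no such flag has been allocated by IANA, so the value here is purely illustrative.

```python
# Option 1a sketch: one dedicated bit in the 8-bit SRH Flags field.
# The bit position (0x10) is an assumption for illustration only --
# there is no IANA allocation for a P-flag.

P_FLAG = 0x10  # hypothetical, not an allocated code point

def has_psid(srh_flags: int) -> bool:
    """True if the flag says the segment list's last entry is a P-SID."""
    return bool(srh_flags & P_FLAG)

def set_psid(srh_flags: int) -> int:
    """Return the flags byte with the (assumed) P-flag set."""
    return srh_flags | P_FLAG
```

The appeal of 1a is exactly this one-bit test on the fast path; the cost is that the bit can never be reused for anything else.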

Next we have Option 1b, the P-flag for generic data, which solves the future extensibility problem to some extent. The mechanism: we still define a new bit, the P-flag, in the SRH Flags field. The semantics: when P equals one, it explicitly indicates that the segment list's last entry carries 128 bits of generic data, and the P-SID is just one of the possible kinds of generic data. The pros: it has future extensibility, since one flag supports multiple extensions and addresses the flag scarcity concern. The con: just like Option 1a, it consumes one of the only eight available SRH flag bits.

Okay. Next we have Option 2, the generic flag. We also need to request a generic flag bit, the G-flag, in the SRH Flags field. The semantics: when G equals one, it signals that the segment list's last entry contains 128 bits of data with a SID structure, and we use an op-code to distinguish different use cases. For example, when the op-code is 01, the entry is the path segment ID, the P-SID; if the op-code is 02, it may be some in-situ OAM trace data. The pros of this mechanism: it has future extensibility, because one flag can support multiple extensions, addressing the flag scarcity concern. But there are cons: it also consumes one of the only eight available SRH flag bits, and it is a bit more complex than Option 1b because the SID structure needs to be defined. Okay.
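The G-flag-plus-op-code dispatch just described could look like the following sketch. The op-code values 01 and 02 match the examples in the presentation; the function name and the "unknown" handling are assumptions.

```python
# Option 2 sketch: a generic G-flag gates the interpretation of the
# segment list's last entry, and an op-code inside that entry selects
# the use case. Only op-codes 0x01 and 0x02 come from the talk; the
# rest of the layout is invented for illustration.

def classify_last_entry(g_flag: bool, opcode: int) -> str:
    if not g_flag:
        return "routing-sid"          # ordinary segment, no generic data
    if opcode == 0x01:
        return "path-segment-id"      # P-SID, as in the draft
    if opcode == 0x02:
        return "ioam-trace-data"      # possible future use case
    return "unknown-generic-data"     # reserved for future extensions
```

This is where the extra complexity relative to Option 1b comes from: the SID structure carrying the op-code has to be specified before any of these branches can exist.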

Next we come to Option 3, the no-new-flag options, and here we have three sub-options. Option 3a is using the O-flag: RFC 9259 already defines the OAM flag, and we would just reuse that existing flag to signal the P-SID presence. Although this mechanism does not consume any new SRH flag bits, it has cons: the O-flag implies slow-path, sampled OAM treatment, but the P-SID often requires fast-path, per-packet handling for accurate end-to-end metrics. Such a mismatch in the processing model risks underserving key use cases.

Okay. Then we have Option 3b. It is also a flagless mechanism, relying on a P-SID convention. The mechanism is that the SR endpoint nodes do not read the Flags field; they just inspect the segment list's last entry by convention, without any flag being defined. The pro: there is no SRH flag consumption. But there are cons. Firstly, without a flag, the SR nodes on the path need to read the 128-bit P-SID, which introduces more complexity than just reading one flag. There is also a potential risk: without a flag indicating that there is legal data in the segment list's last entry, the packet may be treated as illegal and discarded by some strict implementations.

Okay. Finally, let's go to the last option, Option 3c. It is also a flagless option, with a dedicated P-SID prefix. The mechanism: the SR endpoint nodes inspect the segment list's last entry and recognize the P-SID by a prefix match. Here the P-SID has a different structure, like in this picture: the prefix is a reserved, well-known, non-routable IPv6 prefix, and the rest of the P-SID is the payload. The pro of this mechanism is that there is no SRH flag consumption. The cons are basically the same as for Option 3b: first, without a flag, the SR nodes on the path need to read the 128-bit P-SID, which introduces more complexity than just reading a flag; second, without a flag indicating legal data in the segment list's last entry, the packet may be treated as illegal and discarded. There are also other cons: with such a structured P-SID, an extra mechanism is needed to ensure that P-SIDs are unique across domains, because the egress node's locator is not used, and an extra mechanism is also needed to indicate which SR endpoint node triggers the corresponding operation.
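The Option 3c prefix-match recognition can be sketched as below. The prefix value and its length are made up for illustration; an actual reserved, well-known, non-routable prefix would have to be allocated.

```python
# Option 3c sketch: flagless detection of a P-SID by matching a
# reserved prefix on the segment list's last entry. PSID_PREFIX and
# PREFIX_BITS are hypothetical placeholders, not allocated values.

PSID_PREFIX = 0xFC0F  # assumed 16-bit well-known non-routable prefix
PREFIX_BITS = 16

def is_psid_by_prefix(last_entry: int) -> bool:
    """True if the 128-bit last entry starts with the reserved prefix.

    Note this forces every inspecting node to read the full 128-bit
    entry -- the high processing cost called out in the talk.
    """
    return (last_entry >> (128 - PREFIX_BITS)) == PSID_PREFIX
```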

Okay. Now we finally come to this comparative analysis matrix. In total, we have six options, and we compare them along different dimensions. First is flag consumption: Options 1a, 1b, and 2 each consume one flag bit, while the other options do not use any flag. The next dimension is the processing model: Option 3a operates on the slow path, while the other options operate on the fast path, so they differ there. For extensibility, Options 1a, 3a, 3b, and 3c have low extensibility. Option 1b has high extensibility, because the P-flag supports generic data and the data can be self-defined. For Option 2, the G-flag supports data other than the P-SID, but the data must be in the SID format; so Option 1b has high extensibility and Option 2 has medium extensibility. And for processing cost, let's focus on Options 3b and 3c: these two mechanisms have high processing cost, because they need to read the 128-bit SID instead of a single flag.

Okay. Finally, from the authors' perspective, we believe Option 1a is a simple and unambiguous way to realize this function, while Option 1b and Option 2 offer the best long-term balance: they conserve scarce flag space, support future extensions, and maintain performance. This is the authors' recommendation. And finally, we kindly ask the working group to share its views on which direction best meets the operational and architectural needs. We want your feedback, and we will update the draft accordingly. Thank you.

Bruno Rijsman: Thank you for the analysis. Greg?

Greg Mirsky: So I have a question: why do you think that you need the P-SID in SRv6? In SR-MPLS, the labels are disposed of along the path, so yes, the egress would not know which of the paths from ingress to egress the packet traveled. But in SRv6, all the unique SIDs are in the SRH. So why do you need a P-SID in SRv6?

Cheng Li: Can I answer the question? Cheng Li from Huawei, as an author—actually, the first author of this draft. Because if you use the reduced mode, you definitely cannot have the entire SID list. If you use the micro-SID container, you definitely cannot find the entire segment list. You can read the introduction section of this document to find more information. Yep. Thank you.

Greg Mirsky: Okay. Then my second question is: why do you need to put it in the segment list and not use a TLV? Because if you are using compressed SIDs—micro-SID or G-SID—you are definitely concerned about space, and a 128-bit-long P-SID seems to be a waste. So if it needs to be analyzed only by the egress, why not put it in a TLV?

Cheng Li: Actually, let me answer the question. In the beginning, we did consider using a TLV, but think about TLVs: how many TLVs are we using now? Maybe only one, like HMAC. Do you want to use a second TLV? No, because it, you know, costs a lot...

Greg Mirsky: Why not? No, no. Again, it's not analyzed here. If you're using compressed SIDs, you are definitely concerned about the length of the SID list, and you are proposing to use 128 bits for the P-SID for some limited use cases.

Cheng Li: That's the reason why we proposed Option 2: so that the whole 128 bits can be used for multiple use cases instead of only the path segment.

Greg Mirsky: As I understand it, in-situ OAM has its own definition using a generic IPv6 approach, not the SRH. So I think that use case is misattributed.

Cheng Li: Yes, as you can see, we propose it as a future use case; if someone is interested in it, we can discuss it. But right now we are only defining the flag for the P-SID, or for the whole 128 bits, so that we have space for future use cases, maybe more use cases, like in-situ OAM.

Greg Mirsky: It appears that 128 bits for the P-SID is excessive. Okay.

Joel Halpern: I think the question has been heard and we need to move on. Next, please.

Zafar Ali: Can you put—put back to the—yeah. That's Safar, then Cheng Li, then Zafar. Zafar, it's not your turn yet. No, no, he was just answering a question. So it's Safar, then Cheng back, and then you.

Safar: Thank you. So my question is: what is so special about this path segment SID that it needs special treatment? Why can it not rely solely on the End.PSID behavior code, like any other SID that we have defined thus far? You have Option 3b in front of you, and you said high processing cost, but that is the cost of any SID processing. So I do not understand why we need a special flag or special treatment for this SID.

Guangming Zheng: Yes, on the flag problem, compared to Option 3b: if you do not have a flag, there will be some efficiency degradation in the 3b mechanism, because the SR nodes on the path need to read the 128-bit P-SID, which introduces more complexity than just reading one flag. So I think introducing this flag is beneficial.

Safar: So you said that SR nodes on the path need to read this. Why do nodes on the path need to read this? It is only the tail—the last SID, or the locator of the last SID in this policy—that needs to read this. Why do other nodes on the path need to read this?

Cheng Li: Cheng Li as author. We do provide some text in the document, so Safar, if you have free time, you can read the document to understand it better. The main reason is that SRv6 has its, you know, unique advantage over SR-MPLS: when we add a P-SID in the SRH, because it is in the SRH, the intermediate nodes can read it. This way, we can provide the possibility to support multiple use cases, now and in the future. That is the advantage over SR-MPLS. In the design phase, we did consider multiple solutions, such as Option 3, Option 2, and Option 1, and in the end we thought Option 1 might be the easy way while providing some, you know, extensibility for the future. And after discussion with Bruno and other participants, we think Option 1b might be a, you know, better choice, so that we can use this extensibility not only for the P-SID but for other use cases, and not waste a single flag. So that's the answer. But we can continue the discussion offline. Yep. Thank you.

Safar: But the question is why you even need a flag, and why you have to do processing on any of the transit nodes. And if you need to do something at a transit node, you are running on an IPv6 fabric, so you use the hop-by-hop header or the core IPv6 fabric facilities. You don't hijack a flag to say there is something in the packet that you need, and you're dealing with classic nodes—classic meaning IPv6-capable-only nodes—versus SRv6 nodes. We have never had anything where a transit node whose locator is not matching needs to read something. So it's all funky; it's all not correct. You need to justify what is so special about this SID that it needs this special operation or special flag.

Cheng Li: Well, it's not for the transit node; it's for the endpoint node to read the field. And we did have some discussion with the 6man chairs, Bob Hinden and the other chairs, a long time ago, and the conclusion was that we need to get consensus here and then come back to 6man in the working group last call to check again. Yep. That's the conclusion.

Safar: Yeah, I remember that, and that was an individual draft, not a working group draft. Long time ago. And the same comments were made at that time as well. I think you need to justify that. And why can it not work on a locator? I receive my SID; it's just like any other SID. It matches my locator; it's my SID; it's my P-SID. I process it based on the pseudocode and I process the next SID. And that's it. So I still don't understand why you even have to keep a special location, or why you cannot treat it as any other SID.

Joel Halpern: We can have more discussion offline because it's a long story. Sounds good. Thank you.

Zafar Ali: Can you go back to the slide, the second slide? Just one. Yeah, here. I just want to respond to the comment from Greg. You know, here they list the possible scenarios for this P-SID field: bidirectional path binding and end-to-end path protection. These scenarios are not limited to the egress point; the transit nodes also need to act on this P-SID. So I think it is reasonable to put the value in the segment list, not in the TLVs.

Jiayuan: This is Jiayuan from China Telecom. Thanks for sharing the idea. From my perspective, Option 2 seems better: this solution avoids dedicating a scarce SRH flag bit to a single function, and it can be reused for the P-SID and potential future extensions. I think it achieves a better balance between processing efficiency and operational complexity. That's just my opinion. Thank you.

Bruno Rijsman: So thank you for the discussion; very interesting. I would encourage you to continue the discussion on the list, because we have not had a lot of discussion there, and it would be good to close all the points on the list. I do have a comment on slide 4. We're kind of discovering that there are two implementations, yet there is no implementation section in the draft, although I asked for it in July. And it is an issue, because there is no code point allocated: there is no IANA allocation for the flag, no early allocation for the flag, and no experimental code point for the flag. That means the two implementations are squatting on the flag, on the code point, which can create interop issues between vendors and in networks. So I'd like the working group in general to consider not squatting on code points, and to only implement and indicate code points in a draft if and only if they have been allocated by IANA. Thank you.

Balazs, you're up next. [Link to slides: SRv6 for Redundancy Protection]

Balazs Varga: Yes, thanks. My name is Balazs Varga, and on behalf of the co-authors I will present an update on where we currently are with the draft dealing with redundancy protection. Redundancy protection is a generalized protection mechanism targeted at achieving high reliability in segment routing networks. The mechanism uses packet-level active-active forwarding, and the draft defines a new SID for that—a Redundancy SID—which points to the redundancy functionality of the node. What algorithm the node uses for redundancy is out of scope in the document; the draft just provides the possibility to implement redundancy protection on the node. The R-SID also has a related redundancy policy, and this redundancy policy contains all the configuration regarding the redundancy functionality: what the service protection action is—whether you would like to do replication, elimination, or any combination of them on the local node—and how these actions should be executed on the packets belonging to a given flow that is using this protection service.

In the draft, we have also included an appendix for illustration. It describes in detail a quite complex scenario showing how this Redundancy SID and the redundancy functionality can work, and it is practically one of the use cases that DetNet also intends to use. So the draft covers a wide range of use case scenarios and protection topologies. There have been discussions from the beginning between the DetNet and Spring working groups about how to use this redundancy protection, because in the DetNet working group we have PREOF, the Packet Replication, Elimination, and Ordering Functions, which intend to use this function as well.

So the document is in a quite stable state. There was only a single editorial fine-tuning. Before we made it, we sent the proposed changes to the list; they affected only the description of the upper-layer header processing, to make the text very similar to the RFC 8986 format. That was the only change we made to the document, and it does not affect any technical part; it is more of an editorial change. The document is fully in line with the terminology and the DetNet architecture we are using. So we as the author group think that the technical content is quite stable, and we would like to ask for a working group last call on this document. Thank you. Questions? Comments?

Joel Halpern: Does anyone have any comments? Questions? So this is one of those drafts that is a working group document and has been around for a while. There hasn't been a lot of traffic on the list; in fact, the document was expired for a while. So Balazs, as we discussed before, we need to generate working group engagement on this: some interest, some, you know, comments, whatever. We need to work on that before we can start a working group last call. In other words, we're going to put you on the list, but nothing is going to happen until we have working group engagement.

Balazs Varga: Okay, thank you for that comment. In the DetNet working group, we are definitely using this stuff, but yeah, let's have some discussion on that.

Joel Halpern: Liuyan? [Link to slides: SRv6 for Inter-Layer Network Programming]

Liuyan Han: Good afternoon, everyone. I will present this work on SRv6 for inter-layer network programming. In operators' networks, we have multilayer networks: usually Layer 3 uses IP technology, and we also have other Layer 2 and below technologies in the network, for example optical technology. SRv6 provides the ability to do network programming by encoding network instructions in the packet header, and we want a unified way to do network programming across the different layers. So this work proposes a new SRv6 behavior for inter-layer network programming.

The new behavior is called End.IL. It is defined for inter-layer programming across IP and Layer 2 or Layer 1 technologies: for example, using this SID to instruct a network node to send a packet through a non-IP underlay link or connection to a remote node. We think the new behavior has three characteristics. The first is easy provisioning: it doesn't require IP addresses or routing protocols for the underlay connections or their interfaces. The second is guaranteed performance, since packets can be directly steered onto the underlay TDM or optical paths; these paths use dedicated resources, so they can provide stronger performance guarantees for the traffic. The third is flexibility and scalability: underlay connections can be set up between any two remote nodes without affecting the current Layer 3 topology or the Layer 3 IGP path computation.
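As a rough illustration of the End.IL idea, an endpoint would map the active SID to a bound non-IP underlay channel instead of doing an IP lookup. The table, SID, and channel names below are invented for this sketch, not taken from the draft.

```python
# Hypothetical End.IL sketch: steer a packet onto a provisioned
# underlay channel bound to the SID. The binding table contents are
# purely illustrative; the draft does not specify this data structure.

UNDERLAY_BINDINGS = {
    "2001:db8:a::100": "mtn-channel-7",   # assumed SID -> underlay channel
}

def process_end_il(active_sid: str) -> str:
    """Return the underlay channel the packet is steered onto."""
    channel = UNDERLAY_BINDINGS.get(active_sid)
    if channel is None:
        raise KeyError("no underlay binding for this SID")
    return channel
```

The point of the sketch is that the underlay side needs no IP address or routing state of its own: the binding is provisioned, and the SID carries the instruction.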

After the draft's adoption, we updated it according to the comments and discussions. The first update clarifies how the packet is encapsulated over the underlay connection. The second clarifies, for Layer 2, the approach for obtaining the destination MAC address. We also added some text on the use of SRv6 flavors with the new behavior.

To verify this function in practice, we ran a hackathon project at this meeting, with the topology you can see in this figure: a multi-vendor topology using devices from three vendors, Huawei, ZTE, and FiberHome. With the links connected, we used the End.IL behavior to direct the traffic onto the underlay MTN (Metro Transport Network) and FG-MTN channels. For the traffic flows, we configured the bandwidth to 600 Mbps with different packet sizes, and we also configured some background flows on top. As you can see in the table, the traffic flows have different source and destination nodes. We also captured screens from the different vendors' controllers showing the SID list, with the End.IL SID used in this traffic.

During the project, the results verified the feasibility of this function, since the traffic was directly steered onto the MTN channel. We also tested the transport performance of these channels: even with congestion from the background flows, there was no packet loss, and the latency and jitter remained almost unchanged.

After this verification, we will continue to improve the draft, and we welcome any comments and feedback on this work. We also plan to work on the control plane protocol extensions for this inter-layer programming in other related working groups. Thank you.

Joel Halpern: Any comments on this one? Thank you for letting us know about the hackathon experiment; it's good to see implementations going. Okay. Thank you.

Susan? [Link to slides: Linking BGP SR-TE to Spring concepts]

Susan Hares: Thank you for letting me speak. Alvaro's talk at the beginning of this session was important for you to understand: there's a link between the definitions you do here and the drafts you want out in BGP. If you don't help with forming the concepts, the BGP mechanisms cannot adhere to them. I have several drafts in BGP that have no firm link to Spring, and in many of the drafts I can't really tell where the link is. So I'm left with a problem that you don't want: I can't publish these drafts because the cross-working-group work hasn't been done. So help Alvaro, help me, help you get your drafts into an RFC and your work into the network.

So, to make it easier—because, believe it or not, I have 30 drafts on the docket right now, in various forms, that have been asked to do BGP SR-TE or BGP. In order to publish each one with the IESG, I have to find where it fits a Spring construct: I have to find either an RFC, a working group draft, or an individual draft that tells me what it is. I then have to go and look at PCE, in case the PCE work has been used, and I have to look at SRv6-ops. I'm starting to attend these sessions regularly; you will see me in the back scribbling notes. So, to make it easier—because, Alvaro, Bruno, I would like you to go fast in getting your additions into the network if there are implementations—I tried to think: what can I do? How can I make it easier for you who have an idea and an implementation? So we're going to go to, perhaps, Solution 1, which says we'll require in every IDR SR draft a section we'll call Cross Working Group. Okay? That will have a link to the Spring document, a link to the PCE document, and a link to the SRv6-ops document if we can find it. And I'll continue to do the slower thing, which is posting reviews for BGP documents that specify SR or SR-TE—SR I say for BGP with a Prefix-SID attribute, SR-TE for BGP with SR Policy, and of course you know BGP-LS.

Okay. So this is a plea: help me help you. Why is that? Here's the problem. That all sounds like process, but how do we know—did you want to interrupt now? Please go ahead.

Dhruv Dhody: Yes, just on the last slide—can you go back once? This is Dhruv. Basically, you were saying SIDROPS, but do you mean SIDROPS or SRv6-ops?

Susan Hares: Excuse me, I said SIDROPS and I meant SRv6-ops.

Dhruv Dhody: Yeah, but SRv6-ops is a little different, because that's for after the standardization is done; then you come to the operational side and write something. I'm confused; I don't think we need to put this at the same level as Spring and PCE. SRv6-ops is mostly work done after we have standardized things.

Susan Hares: Yes—and if I'm not saying it right, I'll say all sorts of things—but the purpose is always: what do we care about as the IETF? What we should care about, from the IDR viewpoint, is what operators need. What they need goes to the top of the queue, and other things fall behind. So if I hear in an SRv6-ops presentation from an operator that they need something, I'm going to listen really hard, because if they need something, we should pay attention: they're trying to deploy this thing. Okay? That's why, if there's a link—if I listen in SRv6-ops like I did earlier this week and I hear, "Oh, this person needs this sort of tracking"—well, it would make sense to make sure we listen all the way back up the chain: that we listen if that feature comes up in IDR, and that we listen if that feature comes up here in Spring. Because those features matter; they help operators make their networks work. I think that's a basic fundamental principle of interoperability and running code: you want to find where the need is and make sure you meet it. Good question. A-plus. We have another question from Ketan.

Ketan Talaulikar: So actually not a question, but to answer the—Can you speak closer to the mic? Yeah. Can you hear me okay now? Not really. I'll have to look. Okay, fine. I'll put it in the chat.

Susan Hares: Oops. I've been working really hard. Every cycle, I've reviewed all the SR, all the SR TE, and all the BGP drafts—whether they're working group drafts in IDR or just proposals—because I realize this is an important area, and I've taken time to review all of them. You'll find them all on the list. So I'm doing Solution 1 and Solution 2. Solution 2 is me giving you the information. Solution 1 is you giving me the cross-working-group information.

Okay. This is about why I'm taking these steps. The problem is we have to do this check first to make sure we're doing the right thing for the operators. We have to know the BGP mechanism aligns with the concept, we have to know whether it's going to help or hurt the SRv6 policy, and we have to follow it back to the people who are using it to get feedback, right? So these checks are all about: is it going to be used?

Okay. How do I find all of this out? Well, I track the tunnel encaps and I track all the things that are there. So before we adopt anything, we've been trying to find out: is it useful? We do the same thing for BGP SR with the prefix-SID.

Okay. So right there in the yellow—and I tried to do this so you could just watch it—you see which of those are already existing working group drafts that are about to go toward working group last call. I don't know if all of those features are useful. I don't know if the Path MTU ID that's in the sub-TLV is useful. It would be wise to ask those questions so that we could do a good job of following through. There's also the MPLS label. All the things in red are proposals. Okay? There are a lot of proposals. If they all are beneficial to some operator, I'm happy with that, because it's information we're passing that's being used. But if it's not useful, or it isn't well-defined in Spring, then you're wasting your time proposing it in IDR, because I'm going to have to go back, as the IDR chair, checking all of this: go to Spring, go to SRv6-ops, and listen to make sure it works well.

Okay. Here are some more. You can pull down this list in the slides, but you can see how many of these things conflict. So here are some thoughts I'd like you to think about, to help me help you help your individual drafts come out. Are all the identifiers helpful? You know, we have names, bindings, NRP segment list ID, path segment ID information. Do they all make sense? Do they interoperate? How many of the metrics are useful? There's a whole list: weight, metric, B-list; a segment list may have a path segment and MTU. Are all of these necessary? How and why does color interact? There are five generations of color: there's a TLV, a sub-TLV, a sub-sub-TLV, then there's the extended community, then there are NLRIs with color. Is this all useful? It doesn't have to be useful in one place; we're just looking at the whole range of this and asking that question. And do any of the head-end actions interact or conflict? Now, maybe they don't; they can all be there and they can all be wonderful. But I, as an engineer standardizing all of this and spending the time, should ask you that question, and you should have an answer when I come to review your draft. Other drafts are much more complex. For example, I've talked to the segment-list optimization people and said, "Do we have enough to make that decision at this time? Is this a good optimizer?" Might be. I'm not doing deployment. I've talked to the composite-list people and said, "Do we have the right understanding of what a composite list will do to BGP?" And we had wonderful discussions this week about that. I think it helped them, and it helped me understand where they're coming from and where the limits are.

Okay. I've got a lot of people looking at me like they haven't thought of this, but the chairs actually do, and the reviewers actually do. So we will be putting a lot of these drafts through the BGP directorate during their working group process, and I want to make sure they have the tools to review your drafts and help you. The whole purpose behind the IETF, from the IDR viewpoint, is that we get good specs out that help the operators do their jobs and make this technology a wild success. Questions?

Joel Halpern: Can you please say your name, since you didn't get on the queue?

Liuyan Han: My name is Liuyan Han from China Mobile. Thank you very much for giving this information. As a participant from an operator, I think I and my colleagues are very willing to give the IETF some use cases and requirements, and to give feedback on which technologies have really been deployed in our network. But I also want to make a suggestion—I'm not sure if it's a proper one: whether we can give the technologies that are really deployed in networks a higher priority in the IETF work. Thank you.

Susan Hares: Thank you. I must commend the operators in China and the operators working with SR. You are some of the best responders I've seen. So thank you, and please keep making those comments, and keep realizing: if you need something, really tell us—tell us again and again—and we will work hard. That's all I can say, but we promise to work hard to make sure you succeed. That's our goal. Ah, did I miss a question? Alvaro?

Alvaro Retana: Zafar has a question.

Zafar Ali: Zafar. Yeah, just one suggestion. Is it the right time to select an expert from the operators to help you gauge these drafts? You know, I have experienced several times that the final aim of an RFC should be deployment in the service provider's network, but currently the output of the RFC cannot be controlled—or cannot be selected—by the operator experts. This is a big issue, I think.

Susan Hares: Yes, and Zafar's been very good about making lots of comments, staying in here, and telling us what he needs. And we work hard to get there; it may take us time. Okay. So please listen to Alvaro and Bruno and Joel when they ask you to work on the concepts, because right now we're stuck behind you doing what Alvaro asked. You're not going to get your BGP mechanisms out until you pay attention and—I'm sorry, I sound like I'm a "me too".

Joel Halpern: So, as Sue just said, listen to us. Right? All of you, I know, have read the charter for Spring. Spring is chartered to define mechanisms to steer packets across the network. We're not chartered to change any protocol. So what that means is that the discussion on how things happen in the network happens here—the generic discussion. The specific implementation—where do you put the bit, is it a TLV or a sub-TLV or whatever—happens somewhere else: in IDR, in LSR, wherever. There's a lot of work that we do with sixman as well. We have an active working relationship between the chairs and sixman, IDR, and other working groups like SRv6-ops and LSR, where many times they come and ask us, "Is this something that Spring wants?" And the only way we have to answer is to look at our own documents and see whether those procedures are defined here. Some of the questions that Sue asked in her list—for example, all the colors in the different places: should they match?—are not always clearly defined in the Spring documents. So we as chairs can't give the other working groups an answer that the working group hasn't agreed on. This takes me back to the beginning of the session: please engage and participate, etc. Because even if there are documents that you came and talked about here, if there's no engagement from the working group, all we can say is, "Well, we don't know. The working group doesn't seem to care."

Susan Hares: I started this because I have a list, starting with segment list ID, that we got halfway through on the call, and I want to raise these questions and get back to them in IDR quickly. But if asking Spring takes a long time, it will take me a long time to get through all the adoptions. So anyway, this is a plea: help me so I can help you.

Joel Halpern: Thanks, Sue. Please engage, review other people's drafts, review so that people review your drafts, etc. Right? This is a working group. Okay. Thank you.

Fan, you're up. Now we're in the part of the agenda that we label as "if time permits". So please try to stay within your time. If we get through everything, that would be great. If not, then we'll have to do it next time. Thank you. [Link to slides: SRv6 Path Verification]

Fan Yang: Hello, this is Fan Yang from China Mobile. The topic is SRv6 path verification. The reason we are doing this is that we think SRv6 matters not only for the operator's network, but also for SRv6 adoption in enterprise scenarios. As we know, a lot of SRv6 security considerations and issues have been put into the security document. So here we come up with a solution that aims to address part of those issues.

A brief history of this individual draft: at the 122 meeting we proposed a solution using a linear combination of the authentication codes of the traversed nodes. After a long discussion on the mailing list, we improved it for the 123 meeting and came up with a recursive authentication algorithm. A comment in that meeting was that some people think the operator's network is good enough and this kind of solution is not needed. But we think it also matters for enterprise deployment cases, because the security boundary there is not as clear as what we have in the operator's network—although I am from China Mobile.

So, the problem and solution recap: look at this picture. The problem is that we cannot strictly check whether the packet was actually forwarded along the SID list carried in the packet. That is because today's solution only authenticates the SID list in the SRH; it doesn't add any information as the packet is forwarded across the network. For example, suppose a malicious user wants to do some illegal VPN access. The expected traffic goes from PE1 through P1 and P3 to PE2. The malicious user just injects traffic at P2 with the correct SRH—that is, P1, P3, PE2—and we will not be able to identify that case.

So the solution here is to have information injected at every node. That is, we do authentication at every node. For example, the authentication code at PE1, at the start of the SRv6 path, is empty. At P1, authentication is done with two inputs: one is the authentication code coming from PE1, and the other is authentication against the DA, which comes from the SID list. The result is put into the authentication code and goes to the next node, which does the same thing again, all along the path to the tail node. So at the tail, we can know whether the path strictly matched the SID list in the SRH. That's the solution.

So we think there are several benefits. The first, of course, is that we can check whether the real forwarding path follows the SID list. The second is that the authentication code in the packet has a constant size, regardless of the length of the SID list. And the third is that the computational complexity is constant regardless of how many hops the path has, because the work is distributed across every node. And because the computational complexity is constant, it is hardware-friendly; it can be implemented in hardware easily.
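
The recursive scheme described above can be sketched roughly as follows. This is a minimal illustration only: it assumes a network-wide shared HMAC key and SHA-256 as the hash, neither of which the draft fixes. Each node folds its own destination address (the active SID) into a constant-size code, so the size never grows with the hop count:

```python
import hmac
import hashlib

def next_auth(key: bytes, prev_auth: bytes, dest_addr: bytes) -> bytes:
    """One hop of the recursive authentication: combine the code carried
    in from the previous node with this hop's destination address. The
    output is always 32 bytes, regardless of how many hops came before."""
    return hmac.new(key, prev_auth + dest_addr, hashlib.sha256).digest()

def path_auth(key: bytes, traversed: list[bytes]) -> bytes:
    """Code accumulated by a packet that visited `traversed` in order.
    The head-end (PE1) starts with an empty (all-zero) code."""
    auth = b"\x00" * 32
    for addr in traversed:
        auth = next_auth(key, auth, addr)
    return auth
```

At the tail-end, recomputing `path_auth` over the SID list in the SRH and comparing it with the code carried in the packet would detect, for example, traffic injected at P2 that never traversed P1: the injected packet's accumulated code is missing the P1 step.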

So that's the solution. We think this is important for promoting SRv6 in many domains and many areas. As a next step, we would like to ask for working group adoption if there is no objection. Any comments?

Joel Halpern: Yes, Ketan. Go ahead.

Zheng Sun: Zheng Sun from ZTE. Thank you for your presentation. I think it is a useful method to address security concerns such as modification attacks and packet insertion attacks. But I have two suggestions for your consideration. The first is that you may need to make sure the process and the algorithm are simple enough to be deployed across all the nodes to support hop-by-hop verification. And the second is that maybe you also need to consider capability negotiation and/or backward compatibility, because some of the nodes may not support the verification method. So, just two suggestions for your consideration. Thanks.

Fan Yang: Yeah, thank you. I think that should be considered in this document.

Joel Halpern: Ketan, before I go to you, this reminds me that we should be putting operational considerations in our drafts. So the operational consideration of backwards compatibility is something that is going to be important—not only for your draft but for every other draft that we're going to be working on. Thanks, Sun. Greg.

Greg Mirsky: Yes, hi. It seems like what you are trying to solve is proof of transit. Have you looked at the work that has been done in SFC on IOAM proof of transit?

Joel Halpern: So the question is that there is some work going on on proof of transit. Did you look at that?

Fan Yang: Yes. Yeah, I'm not—

Joel Halpern: It's work that went on in the past. It's not going on right now. But it is historical work on this topic.

Greg Mirsky: Yes, it's been done, but I think that you can look at it and see if there is anything that will be reusable and applicable to your work. Thank you.

Fan Yang: Yeah. Do you have any pointer? Maybe we can have some mail conversation—I'm not aware of that at this moment.

Joel Halpern: It's work that happened in the past in the SFC working group for NSH proof of transit. So go take a look at the SFC documents. And Greg or Joel, if you have a pointer, can you send it to the list? That would be great.

Greg Mirsky: I put it—I put it in the chat.

Joel Halpern: Yep, he's put it in the chat. Thank you.

Anyone else? Okay. Thank you. So you're still on. [Link to slides: SID as source address in SRv6]

Fan Yang: So this is the second presentation. The topic is SID as source address in SRv6. Why do we do this? Several years ago, when we were working on an enterprise solution, we found that SRv6 could not pass through the firewall—legitimate SRv6 traffic was being dropped. So the initial approach was to try another encapsulation, a firewall-friendly encapsulation like IPsec or L2TP. Finally, about three years ago, we found a solution, and we presented it at IETF 116 in Spring. We defined two mechanisms: one is how to make SRv6 compatible with the firewall, and the other is how to verify whether there is a falsified destination VPN SID. At the 123 meeting, the comments were mainly that verifying the destination VPN SID is rather theoretical. So in this version, we accepted those comments, removed the verification of the falsified VPN SID from the draft, and kept only the text on how to make SRv6 compatible with the firewall.

So, a recap of the problem: think of a scenario of an SRv6 network with a stateful firewall in between. The problem is that today, the SRv6 packet's source address is a loopback address. On the left, the source address would be the loopback address of PE1; on the right side, the source address would be the loopback address of PE2. That creates an address asymmetry, which is not friendly to the session table in the stateful firewall.

So the solution is to use a SID as the source address. This has several benefits: one is zero overhead, because we just replace the source address with the service SID. Another is that it is hardware-compatible; there is no need for any hardware change. And the third is that it simplifies the solution, without needing an additional encapsulation like IPsec or L2TP.

So there are some changes. One change is that we have specified when to use the service SID as the source address. This mainly specifies that only unicast traffic should use a service SID as the source address, for all of the service SIDs such as End.DX and End.DT, excluding the multicast ones. The second update: people had been asking how to determine which service SID to use. We have made some clarification on this, because all of the service SIDs are allocated by the node itself. So when traffic comes into the node, the node knows which service it belongs to and can determine which service SID to use. We have updated that. And update 3 is that we removed the checking of whether the destination VPN service SID is falsified.
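
As a rough sketch of the source-address selection rule just described—the table layout, names, and addresses below are all hypothetical, since the draft states the rule rather than a data structure—the head-end picks the service SID it allocated for the matching unicast service, and falls back to its loopback otherwise:

```python
import ipaddress

# Hypothetical per-PE service-SID table. The PE allocated these SIDs
# itself, so it can map incoming customer traffic to the right one.
SERVICE_SIDS = {
    ("vpn-a", "ipv4-unicast"): ipaddress.IPv6Address("2001:db8:1::100"),  # e.g. an End.DT-style SID
}
LOOPBACK = ipaddress.IPv6Address("2001:db8:1::1")

def outer_source_address(service: str, traffic_type: str) -> ipaddress.IPv6Address:
    """Unicast service traffic uses the service SID as the outer source
    address, so both directions through a stateful firewall carry SIDs
    and the session table stays symmetric. Anything else (for example,
    multicast) keeps the node's loopback address."""
    if traffic_type.endswith("unicast"):
        sid = SERVICE_SIDS.get((service, traffic_type))
        if sid is not None:
            return sid
    return LOOPBACK
```

With this rule, the firewall sees the PE1 service SID and the PE2 service SID as the two session endpoints in both directions, instead of a loopback on one side only.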

So we would like to have a working group adoption call, as Informational, if there is no objection. Any questions?

Joel Halpern: Zafar, you're on the queue.

Zafar Ali: Ah, yes. So you have on your slide SA equal to the VPN SID. What you need to consider is that if there's an ICMP error on the packet and the packet comes back to the source, it would be processed in the context of the VPN SID, which will be incorrect.

Fan Yang: Yeah, we have already considered the ICMP cases in the document.

Zafar Ali: Okay, so I'm curious to find out how you process it when the incoming packet's DA is pointing to this—but it's fine. Thank you. We can talk offline.

Fan Yang: Yeah, if we are doing ICMP ping for the tunnel, we will not—I think—I'm not sure. We have put some words on that in the document. We can check that in the doc.

Zafar Ali: I'm not talking about ICMP ping or trace. I'm talking about an ICMP error on the data packet that you are sending—you have to handle that. It comes back with the VPN SID, so it gets processed incorrectly at the ingress node.

Fan Yang: Yeah, yeah, yeah. Thank you.

Zheng Sun: Zheng Sun from ZTE. I think it is a simple and workable solution. I'm not a firewall expert, but I have used firewalls a little bit. About setting the SRv6 service SID as the source address—kind of related to the first comment—another use case is that it can be used for ICMP error handling in SRv6 networks. There's also an individual draft, I think in sixman, presented at this IETF, that also wants to set the source address to a VPN SID. So you may want to look at that. But I think it's a valid point to work on. That's all.

Fan Yang: Yeah, thank you, thank you.

Bruno Rijsman: One question: is it specific to VPNs or do you have a broader applicability?

Fan Yang: Today, I think that's only applicable to the SRv6 services like VPN services.

Bruno Rijsman: Have you presented it in BESS working group?

Fan Yang: Not yet.

Bruno Rijsman: What I would like to suggest to either present or send an email to BESS working group because SRv6 services has been defined in BESS primarily. So we'd like to have their feedback.

Fan Yang: Okay. Thank you.

Bruno Rijsman: I'm yet to read the draft and better understand what you're trying to do, but if you are going to work on it, please extend it to non-VPN use cases. There are actually some use cases that might be interesting. Okay, make it generic, please.

Fan Yang: Okay. Thank you.

Joel Halpern: Thank you.

Fan Yang: Thanks everyone for comments.

Joel Halpern: Guozhen, please. [Link to slides: 4map6 Segments for IPv4 Service delivery over IPv6-only underlay networks]

Guozhen: Okay. Good afternoon, everyone. I'm Guozhen from China Telecom, and my presentation is "4map6 segments for IPv4 service delivery over IPv6-only underlay networks"; the draft is at revision 05.

The background: the working group framework on IPv6-only proposes a framework for deploying IPv6-only as the underlay in multi-domain networks. In this framework, an address mapping rule is used by the ingress PE to generate the IPv6 source and destination addresses from the IPv4 source and destination addresses when traffic enters at a given PE, and vice versa. As shown in the figures, it is a multi-domain underlay network. 4map6 segments are a new type of segment for segment routing. They run on PE nodes and provide support for implementing the IPv4-IPv6 conversion function, based on address mapping rules, in a multi-domain IPv6-only underlay network. In this draft, we define a new SID. The SID consists of locator, function, and argument: the locator is encoded in the L most significant bits of the SID, followed by F bits of function and A bits of argument. As shown in the picture, that is the SID structure. The locator field has a routing function and is unique in the SR domain; the function field identifies the behavior bound to the 4map6 SID; and the argument field indicates whether stateless encapsulation or translation is performed, and points at the IPv4 address associated with the PE node. In Section 3 of this draft, we define a new BGP Prefix-SID attribute extension TLV in the SRv6 Service TLVs to implement SID signaling for the 4map6 service. The TLV type is used to identify different TLVs—it is eight bits—and the TLV length, the total length of the TLV, is 60 bits.
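
The LOC/FUNCT/ARG layout described can be sketched as a simple bit-packing exercise. The field widths in the usage below are illustrative only, since L, F, and A are left to the operator's SID allocation scheme:

```python
def build_4map6_sid(locator: int, function: int, argument: int,
                    l_bits: int, f_bits: int, a_bits: int) -> int:
    """Pack a 128-bit SID: locator in the L most significant bits,
    then F bits of function, then A bits of argument; any remaining
    low-order bits are left zero."""
    assert l_bits + f_bits + a_bits <= 128
    sid = locator << (128 - l_bits)
    sid |= function << (128 - l_bits - f_bits)
    sid |= argument << (128 - l_bits - f_bits - a_bits)
    return sid
```

For example, with a 48-bit locator, a 16-bit function, and a 32-bit argument carrying the PE's associated IPv4 address, the IPv4 address lands in bits 32–63 of the SID, where the egress behavior can extract it.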

About the behavior in general: 4map6-capable nodes operate in pairs for a particular data stream. One node is the ingress PE, denoted PE1, and the other is the egress PE, denoted PE2. Each PE maintains a mapping rule database, whose structure is shown in the figure: the table entries consist of an IPv4 address prefix, an IPv6 mapping prefix, and the processing mode (encapsulation or translation) for IPv4 packets. As shown in the figure, before transmitting an IPv4 packet from PE1 to PE2, the address mapping rule corresponding to its IPv4 destination address needs to be transferred from PE2 to PE1.
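
A mapping-rule entry as described—IPv4 prefix, IPv6 mapping prefix, processing mode—lets the ingress PE synthesize IPv6 addresses statelessly. The embedding position below (the IPv4 address placed immediately after the mapping prefix) is an assumption for illustration; the draft defers the exact handling to the per-entry encapsulation/translation field:

```python
import ipaddress

def map4to6(v4_addr: str, v6_mapping_prefix: str) -> ipaddress.IPv6Address:
    """Embed the 32-bit IPv4 address immediately after the IPv6 mapping
    prefix, leaving the remaining low-order bits zero."""
    net = ipaddress.ip_network(v6_mapping_prefix)
    v4 = int(ipaddress.IPv4Address(v4_addr))
    shift = 128 - net.prefixlen - 32   # bits remaining below the embedded IPv4 address
    return ipaddress.IPv6Address(int(net.network_address) | (v4 << shift))
```

PE1 would apply this with the rule learned from PE2 to build the IPv6 destination, and PE2 would reverse it at egress to recover the original IPv4 header fields.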

And that's all of my presentation; thank you for listening. Due to the time limit—I need to catch my flight at five o'clock—I will stop here, but I very much hope to receive more comments to update our draft. We authors would like to request working group adoption of the draft. If you have any comments or questions, please email me. Thank you.

Joel Halpern: Thank you. Go, run. Any questions or comments or anything else, please send them to the list. Yeah, that's all I'm going to say right now. Everything else we go to the list.

Xiaoming. [Link to slides: Encapsulation of BFD for SRv6 Policy]

Xiaoming: Hello, everyone. I'm Xiaoming from ZTE. This presentation is on the encapsulation of BFD for SRv6 policy. Here is the introduction to the draft. As we all know, the BFD mechanism can be used for failure detection of an SR policy, and this draft describes the encapsulation method of BFD for SRv6 policy. For encapsulating BFD packets on the SRv6 data plane, two modes of encoding are defined in this document: insert mode and encap mode.

This slide shows the encapsulation formats for insert mode and encap mode. For insert mode, an SRH is inserted after the IPv6 header of the BFD packet. This mode reduces header overhead and detection-packet bandwidth. In this encoding mode, PSP is not supported. The other mode is encap mode. In this mode, an outer IPv6 header with an SRH is added; this mode preserves the original complete BFD packet and modifies only the outer IPv6 header. With encap mode, both IPv6 and IPv4 BFD packets can be encapsulated.

Joel Halpern: Give me a second. Joel, do you have a question about the slide?

Joel Halpern: Yes, specifically on the first half of this slide. And when you talk about insert mode, is the BFD packet that you are inserting this into originated by the same node that is inserting the SRH, or are you claiming you are inserting an SRH into a BFD packet that you received?

Xiaoming: You mean for the insert mode, is that the same node to encapsulate the BFD and data packet?

Joel Halpern: If the node which is producing the BFD packet and the node which is inserting the SRH packet—the SRH header—in your insert mode, are those the same node?

Xiaoming: I think that should be the same node.

Joel Halpern: Okay, you need to make that explicit in your draft, because if it is not the same node, this is a violation of the IPv6 standards.

Xiaoming: Okay, thank you very much. Thank you for your comments.

Okay, this slide shows how to ensure BFD packets reach the tail-end of the SRv6 policy. In the SRH, the first element of the segment list contains the SRv6 SID or IPv6 address of the tail-end node. So when the last SID in the SRv6 policy does not belong to the tail-end, an additional tail-end IPv6 address must be included in the SRH to make sure the BFD packet can reach the tail-end.

This slide shows the special handling of the UDP checksum. On the left side is insert mode: the UDP checksum is calculated using the source address of the IPv6 header and Segment List[0] of the SRH as the destination address. On the right side is encap mode: the UDP checksum is calculated using the source and destination addresses of the inner IPv6 header.

Running code: yes, we have already done a lab interop test for the BFD. BFD is an important failure detection mechanism for SRv6 policy—we all know that. In 2022, an interop test was hosted by China Mobile, and many vendors joined that test. This is the draft history: the document has been presented several times, and there have been some revisions to improve it. So, next steps: maybe consider working group adoption. Thank you.

Bruno Rijsman: Have you considered how it would work with G-SID or compressed SIDs?

Xiaoming: Yes, we have already considered that—G-SID, uSID, yes. I'm not sure whether the current draft covers all the cases, but we have discussed that. If any specific part is missing in this draft, please send us comments on the list.

Bruno Rijsman: I would think it would invalidate insert mode. It violates 8200.

Xiaoming: Understand. Thank you.

Zafar Ali: Yeah, my only comment is that the way this is encapsulated—insert mode and so on—is the obvious thing. Do we really need to document it? If the working group feels we need a document, that is fine. But these probe packets would be done this way; I think PM and all the other probing packets are crafted like this.

Xiaoming: Do you mean that only one mode is needed or?

Zafar Ali: No, no, I was not saying that. I was saying that this method you're describing is how anybody would do it. So do we need a document for that? But if the working group feels we need a document for it, that's fine.

Xiaoming: Yeah, I think the working group can discuss whether a document is needed for BFD encapsulation. You know, this draft gives some recommendations on how to encapsulate BFD for SRv6 policy. We think that may be valuable and helpful. But we can discuss it, right.

Joel Halpern: Have you found interop issues, or is it a local choice on the sender?

Xiaoming: No, no interop issue.

Joel Halpern: Yeah, so Ketan put in the chat that, if this is how people already do it, maybe this is something that belongs in SRv6-ops. Dhruv is not paying attention to me, so I'm going to say that we are going to talk to the SRv6-ops chairs and figure out—you know, as Zafar said—whether we need a document; maybe it's something operational already. So we'll figure it out.

Xiaoming: Okay. Thank you very much.

Joel Halpern: I'm impressed that no one moved, even though I said before that this was the last presentation—still no one moved. That makes me feel so good that you pay attention to the things that I say. If you heard anything, remember what Sue said: pay attention to us. Right? That's the most important part. So thank you for staying—last session of the last day. Thank you for the commitment of the people who stayed here, until she almost missed her flight. And we hope to see as many of you as possible at the next IETF in Yokohama. Thank you.

Bruno Rijsman: Thank you all.