Session Date/Time: 18 Mar 2026 06:00
OPSAWG Meeting - IETF 125
Joe Clark: Hello, everyone. Good afternoon. I hope everyone is doing okay, had a good lunch. Welcome to the OPSAWG (Ops Area Working Group) meeting here at IETF 125. I am Joe Clark. With me, as always, is my co-chair Benoît Claise. Say bonjour. And helping us with our minutes is our secretary Chongfeng Xie. Thank you, Chongfeng, for everything you do to help out Ops Area Working Group. Let's get right on into it. This, I'm sure you've seen. This covers all participation here at the IETF. We have been asked to give you a pause and let you read it and digest what it's saying here. If you need more details, there's a QR code.
Again, we want to be respectful. So when you do come to the mic, when you do make comments in Zulip or in chat, keep them respectful, keep them about the technology. And that brings us to the technology that we are using here to participate both in this room physically and remote. We use MeetEcho. Even if you are in this room, there are two forms of the MeetEcho tool. You probably have used them already at least once. One of which is the light tool, that's really designed for in-room participation. The other is the remote tool. I'm sure that the, I don't know, 24 or so people, if they're remote, they're using that. Regardless, make sure you put yourself in the queue. You raise your hand in the tool before you come to the mic. That serves two purposes. One, it helps us manage a consistent queue between the people who are remote and the people who are here. And two, it allows us to clearly see the name of the person who is about to speak. But it's still a good idea before you speak to state your name at the microphone. So hopefully you've had a chance to at least try out MeetEcho while you've been here. For those of you remote, and I see Michael, who is one of our remote presenters, has just joined, please make sure you have a good headset and that you hear clear audio. Let us know in chat if you can't hear clear audio. And please, if possible, use a wired headset if you're going to ask questions or present. But if it's a good Bluetooth headset, that can work too.
The agenda for this meeting was published. There's the link there. You can also see it on the agenda page at ietf.org. We already talked about MeetEcho and preparing. Well, we're in the meeting now, so there's not much additional preparing you can do. And if there are any issues, MeetEcho diligently watches the chat. So you can like @meetecho in the chat and get them, or there is a way to report general issues about IETF meetings at that link there.
So, as I mentioned, I'm Joe, this is Benoît, Chongfeng is our secretary. He's going to be helping with the minutes. But if you go to the chat, you'll see I pasted a link to the minutes. One thing Chongfeng asked to remind anyone who comes to the mic, please, if you could, once you sit down, go to the minutes page, make sure your comment is accurately reflected in our notes page. We will be going through several presentations today. All the slides have been uploaded. You can see them at the meeting materials page. The chat room is either built into the full MeetEcho tool or you can go to Zulip directly there at the link. And then that MeetEcho link there will take you to the full tool for being able to participate in the meeting. Just a quick note, if you are using the full MeetEcho tool in the room, please turn off your audio, mute the microphone, mute the speaker so that we don't have feedback.
So where are we—where do we stand from last meeting? Well, when last we met, we were pushing through TACACS+ over TLS 1.3. I am pleased to announce, we are pleased to announce that that has been minted as RFC 9887. So congratulations to the working group and authors. And some good news: if you pay attention to the list, the authors are getting ready to submit a new draft related to TACACS+ and SSH. So we look forward to discussion on that and seeing if that can become a working group document. We do have several documents in the RFC Editor queue, including secure-tacacs-yang, which is very close to being submitted. I appreciate the authors being responsive to the RFC Editor there. We also have ipfix-on-path-telemetry, prefix-lengths, and something that, again, I almost feel deserves a round of applause for all the work: oam-characterization. Hats off to all the contributors, the working group, the authors, the shepherd, for making sure that this made it over the line and is now in the RFC Editor queue. With the IESG, we have pcap-linktype, and the two other pcap documents were just brought back. They slightly expired, just a little bit dead, but they're alive again. And Michael Richardson added some additional fixes, so we hope that we can get pcap, the historical one, moving. We need a shepherd write-up from Michael Tuxen on that, and we hope to get that into the IESG bucket. But right now pcap-linktype is moving along. Collected-data-manifest is in AD review. ipfix-gtpu is also now in AD review, and the scheduling-yang module just got RFC'd at Netmod, so we expect good things now for the ACL module. I don't know if our—Mahesh, if you want to say anything about the AD review? No? So, things are moving. In working group last call, but it kind of just fell over that bubble, was the discard-model. A lot of work has gone into that with the authors, reviews in the working group, the shepherd write-up was excellent. The authors submitted a revised ID.
So we're looking to just send that—I think they just submitted a revised ID. We'll push that to the IESG. So everything is set there. It was just that we had a timing issue with the cutoff window. We have some other drafts, and you can see the status there at the link. Some of which will be presented today. Others didn't make the slot. There's one more, the one that Qin commented on; it just flew out of my head. We've asked the authors to provide an update on the list so we can move it through. And I'm sure if I just looked at the drafts real quick, I would see which one that is. But definitely pay attention to the list. And that brings up a point that I would like Benoît to talk about. Well, actually, I'll get back to this, but I think it's a good lead-in to this next thing. Speaking about the list, speaking about reviews.
Benoît Claise: Right. So something that I've mentioned in the past, and which is close to my heart, is that we don't get a lot of reviews in this working group. And maybe this is due to the nature of this working group, which became somehow a dispatch for everything that is Ops-related. A dispatch, or a default bucket. So we have sometimes an IPFIX one, sometimes a TACACS one, you've seen the whole list, sometimes OAM—hopefully that's done. And we consider this somehow a problem, because for some of those documents, for the last call, we got Joe's review, Chongfeng's review, my review. Is this sufficient? Now, some of them are IPFIX, so they don't need a lot of reviews. For some of them, this is fine. But we don't see a lot of discussion on the mailing list. And what I want people to understand is that it is also the responsibility of the document authors to try to get feedback. Right? It should not fall only on the chairs and our secretary Chongfeng. Because we see documents sometimes where the authors tell us, "Oh, it's ready." And then there's last call and there is not much feedback. So what shall we conclude? Two options. We let it sit and wait until we've got feedback—not fair. Or we put more on our shoulders. So far we're doing the latter, but that's a problem. And along with that, we're going to put a timer on presentations. The timer covers both presentation time and Q&A. In a previous session, NMOP, we had some issues—not issues, but people were presenting right up to the time and we had to rush the Q&A or the feedback, which is somehow the most important part according to me. So pay attention to that point.
Joe Clark: Thank you, Benoît. And just back to this: I talked about the PCAP already. It was alt-mark, ready for working group last call. No, sorry, it was scheduling-oam. We expect an update from the authors to summarize the changes they made. And work was done—Thomas provided an update on gpon-gem. They did some work here at the IETF 125 hackathon. You want to say a sentence on that, Thomas? A sentence. Maybe two.
Thomas Graf: I will be quick. So there were slight modifications in the—in the document itself. Now it has a notion of ingress and egress. And yes, we already have an implementation and so far it looks good.
Joe Clark: And you plan to—what's the next step in the document?
Thomas Graf: So the next step is we request an early allocation for the code points and then basically we are ready for last call.
Joe Clark: Okay. Thank you.
Benoît Claise: So, on the alt-marking, I quickly discussed that with Giuseppe. I started to review it. This is good. Now, this document is actually the consequence of multiple documents: the alt-marking RFCs, then there is deployments, then there is a YANG module, and then there is IPFIX. And the question we have to ask ourselves—that's why it's taking a little bit longer—is: what do we do? Because if we just define an IPFIX information element called "period," because it's called "period" in the context of the alt-marking YANG module, do we want the two names to be the same? In the context of IPFIX, it doesn't mean a lot if you export "period" in a template record. Right? So we have to discuss how we align the IPFIX data model with the YANG model. So I'll take it to the list.
Qin Wu: Yeah, I have a connection problem, I cannot click the raise-hand button. For OAM test scheduling, I just posted a new update and made a summary. A quick summary here: we received comments from Daniel King and also from you, Joe Clark. So we got together to address these comments. I think the fundamental issues relate to, for example, conflict reporting and how to provide fine-granularity reporting. So we came up with a proposal, and we also discussed sequence semantics. For that issue we also came up with a proposal to address it. Feel free to review, and we invite more people to take a look at this.
Joe Clark: Thanks, Qin. Yeah, I will review. And just for everyone, in general: if someone makes a comment on your draft and you release a new version, follow up, because that can generate more discussion. And to Benoît's earlier point, that's what we're looking for. Oh, looks like Kent is now in the queue.
Kent Watson: Kent Watson. It was reported in the WG Chairs forum that connection problems are occurring because lots of people have hotspots running on their cell phones. So please, if you're in the room, you have a hotspot running on your phone, turn it off.
Joe Clark: Yes, we strongly encourage: try out the Wi-Fi here. It's decent. This room did fill up a little bit and a lot of hotspots in here will cause interference and will cause bad connectivity. And I'm wireless, so if it gets bad up here, we're going to have a bad time overall. So please, if you can, turn off your MiFis or hotspots. Onto the agenda. You can see we've got a lot to discuss. I'm not going to read everything, but I want to know if there's any bashing to the agenda. No bashing, but I see a lot of IPFIX documents on the agenda, Benoît. Is there anything you'd like to say about that before we kick off?
Benoît Claise: That's a good question. Thank you. So let me say something about this. Yes, we see more and more of these IPFIX documents—this is like a small game, right? But we see a lot of IPFIX documents. And actually, if you look in IANA, the registration procedure is Expert Review. Which means that you could go directly to IANA to request your IPFIX information elements. So we're wondering: why do we have so many drafts? In the past, we said that if you want a draft for IPFIX, it's because there is more than just the IPFIX information elements. It's because, for example, you're explaining the use case. You're explaining which key fields or non-key fields are used for specific use cases. It's because there is a link to a YANG module, like Italo's alt-marking. It's because there is something more than just the IPFIX information elements. So if your draft is only about the IPFIX information elements, then maybe you don't need a draft. So the question we're going to ask for all the IPFIX presentations is: do you need a draft? And do you have implementations?
Thomas Graf: Yeah, Thomas Graf. Maybe one feedback. I remember when I was drafting RFC 9160 originally, I went first directly to IANA and requested those code points. And the answer I got is, "Ah, yeah, we can do that, but we would like to see a document and ideally an IETF document would be best." So that's why I ended up at OPSAWG. That might be the reason.
Joe Clark: That's—that's great feedback. And if you do that and you get that feedback, then that's a good thing: okay, we need a document. If you don't, then great—hey, you got your IE.
Thomas Graf: Exactly.
Joe Clark: All right. On to our first presentation. I see Jeff. Oh, sorry. I'm sorry, Jeff. I missed you real quick.
Jeff Haas: Yes, real quick. So Benoît, by comparison in GROW for BMP, the statistics there, same sort of thing. They're very boring, very easy on the protocol, but having the eyeballs actually review the semantics of the counters sometimes is handy. We don't have a good way to do that if we just simply ask for code points.
Joe Clark: Um, yeah, Jeff, the audio broke up a little bit. If you wouldn't mind sending that to the list if there's something you want us to take action on or better understand. Okay.
Ana Minaburo: So, we are going to present some of the updates to this draft on applying COSE signatures for YANG data provenance (slides). First of all, in Montreal we got some suggestions for correcting the JSON examples: we had put our signatures at the top level of the document, and we changed that to the end of the YANG data documents we are signing. It's better for us because the code development of the reference implementation is easier that way. And second, because of the question Rob Wilton asked us about future implementations that process streamed data, it could be easier to put the signature at the end of the content. For now, though, our code keeps the data in memory first and then signs it. The most important update for this version of the document is that we have added the CBOR part: how to enclose these signatures in data that is encoded in CBOR. And we added the SID file for the YANG module that defines the signature, which is ietf-yang-provenance. We added examples for all the enclosing methods, in two different flavors. First, CBOR diagnostic notation, which is very similar to JSON, but instead of a base64 signature string, the signature is represented as hexadecimal bytes. And then we also put the examples using SID numbers instead of the module names. We are going to ask IANA whether this is correct; we have been talking with several people at this meeting, so that is in process. All of this draft evolution is aligned with the reference implementation. We presented this code at the hackathon.
And what we presented is the whole workflow, as an example using a Kafka message broker, which now uses byte serialization and manages these bytes as CBOR objects. For the canonicalization of the signatures we have used a library that implements the deterministic encoding of CBOR, which is defined in RFC 8949. We have also implemented the last improvement we made in Montreal, which was signing taking into account the name of the augmentation module—the name of the YANG module that we are going to augment. That is already in the code. I put an example of what the hackathon demo shows and how it is presented, and the reference implementation works correctly. This is the process. Two hackathons ago, in Madrid, we did the whole workflow with Kafka, and in Montreal we presented the schema validation. Our co-author Alex is connecting the two things. The schema validation happens before sending and consuming the message, and it gives another layer of security before signing the data. And then—
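[The workflow described here—encode the YANG data deterministically, sign it, and append the signature at the end of the document—can be sketched minimally as follows. This is illustrative only: the draft uses COSE Sign1 over RFC 8949 deterministic CBOR, while this sketch substitutes canonical JSON and an HMAC from the Python standard library, and the leaf name is hypothetical.]

```python
import hashlib
import hmac
import json

# Stand-in for RFC 8949 deterministic CBOR encoding: a canonical
# serialization (sorted keys, fixed separators) so signer and verifier
# produce byte-identical input. The actual draft uses CBOR.
def canonical_bytes(data: dict) -> bytes:
    return json.dumps(data, sort_keys=True, separators=(",", ":")).encode()

# Stand-in for COSE Sign1 (which would use a real asymmetric signature):
# HMAC-SHA256 with a demo key, purely to show the shape of the workflow.
KEY = b"demo-key"
SIG_LEAF = "example-provenance:signature"  # hypothetical leaf name

def sign_document(yang_data: dict) -> dict:
    sig = hmac.new(KEY, canonical_bytes(yang_data), hashlib.sha256).hexdigest()
    # Per the updated draft, the signature goes at the END of the document,
    # which also eases future stream processing.
    return {**yang_data, SIG_LEAF: sig}

def verify_document(signed: dict) -> bool:
    remainder = dict(signed)
    sig = remainder.pop(SIG_LEAF)
    expected = hmac.new(KEY, canonical_bytes(remainder),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

[A consumer (e.g. after deserializing a message from the broker) would call `verify_document` before any further processing; any change to the signed content makes verification fail.]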
Joe Clark: Anna, real quick. Carsten is in the queue. Do you want to take questions now or do you want to wait until the end?
Ana Minaburo: This is my—
Carsten Bormann: Wait until the end.
Ana Minaburo: Okay. And then what comes next: most of the YANG that we talk about in the draft is done. So we want to ask for a review by the YANG doctors and the security area. I don't know if it's updated in the slide, but we have also requested an IANA review, and we have exchanged some emails with them. So we want to request this; maybe we can do that on the mailing list. The next thing we have to work on is addressing multiple signatories. Right now we are using COSE Sign1, so we are trying to find a way to have one signature with several keys, or several issuers. And then, we have also been looking at how to add these signatures to other YANG models. For example, in a draft on security policies that my colleague is working on, we are adding the provenance signature as an optional leaf. There have also been some talks in the GREEN framework drafts about the datasets they need and the metrics they work on in GREEN. And the notification envelope is the second example that we put in our draft. We have also been testing this with YANG push and the YANG schema registry that they are working on in NMOP, I think. And we are doing all this to keep the code and the reference implementation aligned with the draft. So we are still working on this. And that's it.
Joe Clark: All right. Carsten.
Carsten Bormann: Yeah, hi. Interesting work. I have a pretty deep question, but maybe it's important to find the right people who can help answer it. You have a section 3.3 on canonicalization. And for JSON, you are using RFC 8785, which is an Independent Submission (ISE) document. I see that this document is trying to be standards track, so this will be a downref.
Joe Clark: Yeah, in—in the—in the light that we're just about out of time, Carsten, maybe this could be discussed on the list. But thanks for bringing that up.
Carsten Bormann: Yes. I just wanted to see who in this room actually cares about things like this, so we can discuss this with the right people.
Joe Clark: Yeah, I think even beyond this room. So if you could raise that in OPSAWG, that would be—that would be nice.
Carsten Bormann: Will do. Thank you.
Joe Clark: Thomas real quick.
Thomas Graf: Just a quick one. There is a section where you describe normatively when the verification should happen. You could put a reference to the message broker draft, where we do the validation at the consumer: it could be applied after the deserialization and before the schema validation.
Ana Minaburo: Okay. And we also do it before signing—you mean the validation of the schema?
Thomas Graf: So basically at the message broker consumer we do schema validation. And I think the COSE validation could happen before that—so basically after the deserialization and before the schema validation, as an example.
Ana Minaburo: Okay. I think you have—
Thomas Graf: I will put some text in the mailing list, yeah.
Ana Minaburo: Perfect. Okay.
Joe Clark: Thank you. As for the reviews, we will kick off the directorate reviews for you.
Ana Minaburo: Okay. Thank you.
Joe Clark: Who is next? I think I know who is next. You.
Benoît Claise: All right. So let's discuss RFC 5706bis, which is "Guidelines for Considering Operations and Management in IETF Specifications" (slides). I'll put a timer on. So this is joint work by a couple of us. I will be quick on this slide, which explains that the initial RFC 5706 was published in 2009, so the content was stale, it was targeting multiple audiences, the guidance was lost in a mass of technology background, etc., etc. So we decided—initially it was an AD-sponsored document—to do this new document. Let me explain the rationale. We don't want you to delay thinking about how to deploy and operate your new protocol or protocol extension. This is the idea. So we want to encourage discussion early on. And for all of these points, by the way, we've been adding specific text in the latest versions of the draft. Now, what's important to say is that the working group in the end decides if they want to have this operational considerations section. If they don't need to have this section, that is fine, but we're asking them to document the rationale for why they don't need it at the time of designing the protocol or protocol extension. We also stress in the text that we don't mandate a format, a solution, or specific content. We don't say, "You shall have an IPFIX-related document in OPSAWG or wherever you want, or a data model," but think about it—that is what's most important. And we also stress, because we heard that feedback, that in some situations there is a base specification that already exists, for which there were no operational considerations. Fine. Now if you do an extension, you don't have to delay the extension to first do the operational considerations on the base document. So this is the high-level approach. We stress that in the text in multiple places, starting with the abstract. So let me read this, because it contains a couple of points I mentioned before.
"It introduces a requirement to include"—so first, a requirement, we're going to come back to that—"it introduces a requirement to include an operational consideration section in new RFCs in the IETF stream that define new protocols or protocol extensions or describe their use, including relevant YANG module, while providing an escape clause if no considerations are identified." So this is like something we stress in the abstract already, this escape clause. If you don't need to have this section—well actually, document why. Oh, sorry, I won't say that. If you don't want to have the operational consideration, you can just express why in the operational consideration section. So and for this, in the appendix we have some help for you. We've got a checklist of things that you should be thinking about. So operational fit, fault management, config management, performance management, security management. And these are all the questions in there that are pointing to specific and relevant sections in the draft.
So, the authors have been meeting on a weekly basis, and we documented all of the issues in GitHub and resolved them one by one. Since the last IETF we published four revisions. Now, a point I want to stress: initially this document was AD-sponsored. In order to benefit from more reviews, it was decided that it was better to have this document in this working group. Now here is the small issue we have—small. Joe and I have been authors since day one. When it moved—and you see the two lines there in the figure—when it moved to being a working group document, well, we cannot be authors and chairs at the same time. That's why we have Alvaro here as document shepherd and delegate in this working group. And we're happy about that, because Alvaro knows about routing, and routing was the first area to which we presented this document.
Since we know this document impacts multiple areas, we have already requested 11 directorate reviews—11 different directorate reviews, right? SecDir, RtgDir, OpsDir, etc. And we've been addressing the feedback. That's why we've got four different revisions. I would say that, in general, the feedback is very supportive. Now, there is one comment that I want to address, on a later slide: the compulsory aspect of that new section. Let's discuss that in a few minutes.
Something that I want to stress, and that makes me happy, is that since we've been working on this document, this draft has already been having a positive effect. If I look at the IESG telechat from last month, 57% of the documents already had this operational considerations section. Great. And I like the 57.14%—it's like the IESG is handling thousands of documents in one telechat. At the last telechat we went to 71.43%, and thank you to the person who gave me the slides. But the point is this: we already see the effect of this document. And this is for all the different areas: security, operations (but okay, here), routing, WIT, INT, etc. And regardless of whether it's PS, Experimental, or Informational. So this is great.
Some more changes in the last four revisions. We update RFC 2360, "Guide for Internet Standards Writers," which a long time ago said, "You must define a MIB," right? So okay, we had to update that old RFC. We have also been discussing SecOps, and that discussion sparked a new draft that will be presented today by Michael, which is dedicated to SecOps. Because if we kept the entire SecOps content inside our draft, it started to get a little bit too big. And we believed it deserved its own draft.
So this draft might be perceived to provide instructions to the reviewers like the Ops Directorate, but actually we moved out that part to a specific wiki for the Ops Directorate.
So one thing I would like to discuss here and get your opinion on. While we addressed all the directorate reviews, and most of them were positive, we had one telling us, "Well, actually, we don't want to have a required new section in there." You can read in blue what's written: "If a spec needs one, there is ample opportunity during the IETF and the IESG review to flag missing operations coverage and have it added." The authors discussed this, and we believe that the recommendation still holds: we would like to have the compulsory, the mandatory section. And the example I gave is the IANA one, right? You are required to have an IANA considerations section, and sometimes you write "there is no action for IANA," and it takes like 30 seconds. All right. Now, if you think about Ops, maybe you say, "We don't believe there is a need for Ops because blah." Or maybe it's too early, or maybe it's going to be addressed elsewhere. So, if you remember one of the first slides: just justify, if you don't need it, why not? What is the rationale? The feedback was, "This document intends to put an enormous burden on all IETF stream authors of technical documents." And our guideline is still that we believe it should be required. Now, what I would like to hear from the audience here is this: is there any objection to this conclusion? Feel free to speak up.
Mahesh Jethanandani: Mahesh speaking as a contributor. The IANA considerations and security considerations sections, even when they're there, usually just have a line that says, "There's no consideration to be had," without any justification. So could the operational considerations section also say, for example, "There are no operational considerations to be had," without requiring a justification for it? It's the justification part I was wondering if we could soften.
Benoît Claise: Right. So let's recall why we're doing this. So when I was an AD like a long time ago, then it was always difficult to arrive at the end of the process and say, "What about operation?" And say, "Oh, you know what? This is the end of the process. I just don't want to do it. Leave me alone." And I've been fighting to try to get people to think about it early, even sometimes when there is new charter, to say, "What are the implications of doing this?" And now we're slowly solving that. Now if we go back to, "Okay, by default there is nothing," how do we know the people have been thinking about that? Because actually this is a little bit of, you know, think early on about it.
Adrian Farrel: Adrian Farrel. I think two things. One, to Mahesh: I challenge you to write a security section that says only "There are no security issues," because that, I'm sure, will get bounced back saying, "Well, show me that you've thought about it, because the security directorate are going to actually come up with security issues." Similarly with Ops. And I believe what we are trying to do is, as Benoît says, get people to, in a way, fear those late reviews and start thinking about the problem earlier, and maybe talk to people who understand the space. On which point, Benoît, I think you're asking this question in the wrong room. Because I would be really surprised if all these Ops guys said, "No, no, you don't need to consider operations." We've got to find the people who are actually objecting to putting this section in and have this conversation with them.
Benoît Claise: You have—you have a very good point. Now, at this point in the life cycle, this is a working group document in OPSAWG. I believe that what you're asking will be done during the IETF last call. So at this point in time, should I go to all the routing area meetings, security area meetings, int area to ask them, or do we follow the process of the document? If you have the willingness to contact the broader community at this point in time, be my guest, Adrian. I would like to proceed in steps, like: do we agree about the requirement? There's someone—
Joe Clark: You want Dan's question? Dan.
Dan King: No, I was just going to—you were asking for advice and feedback, and as somebody who's spent a good chunk of time, multiple years, trying to get protocols deployed, I think this is a great step. And I think if you don't make it mandatory, people are just not going to do it. So I would fully support you continuing to have it compulsory. To Adrian's point, this room is not the right room to ask that question, but I would agree with it.
Benoît Claise: Thank you. So what I hear is that this room is not the right room, but to me it's the right room to start with, because if we had pushback here, maybe we would have a conclusion already. But I believe that we've got—okay. Thank you. Let me go back to the last slide. Yes, the last slide is next steps. So let me summarize. We had feedback from 11 directorate reviewers. We posted four versions. We still need, and I want to make sure it's mentioned, to address a couple of points that were sent to the OPSAWG mailing list—the one from Eliot comes to mind—but we were first trying to address the directorate ones. Now, if we look at the list of open issues, since we posted a version this week—thank you, Joe—we can close even the couple of issues that are remaining. Right now I believe we are left with eight issues, minus a couple that will be closed, so we're almost done. After that, it's going to be time to go to IETF last call and ask the same question to the wider audience. Thank you.
Joe Clark: Any other questions for Benoît? Okay. Thank you very much. Where did we land? I think this next one—someone already read the agenda. SAV and IPFIX. Chao, is that you? There you go. Now we'll—
Chao Chen: Good afternoon. Thanks to the chairs for giving me this opportunity to deliver my presentation. I'm going to talk about export of source address validation information in IPFIX (slides). This is Chao Chen from Zhongguancun Laboratory, and this is collaborative joint work. First, I'd like to give the updates to the draft since the previous version. We mainly incorporated the feedback from IANA. Following their suggestions, we added the corresponding sections of the draft itself as references for the new sub-registries, and we added a statement reserving values 2 to 255 for the newly defined information elements. That is the main part. The motivation of this draft is simple but significant. Source address validation, SAV for short, is a fundamental defense against IP spoofing, but current implementations lack operational visibility. A router deployed with SAV determines whether packets are legitimate or spoofed according to their source prefix, but what happens in the data plane is often hidden from the operators. That in turn makes operators reluctant to deploy SAV to protect their networks from source address spoofing attacks. Before we dive into any solutions, let's go through some background on SAV. SAV is a simple check in network routers: the router checks the incoming packet's source prefix against the interface, or vice versa, according to an allow list or a block list. That forms four canonical modes in the general SAV capability framework, and each mode can be represented by a different type of SAV table. Let's look at the SAV table for mode 1, for example. This is the interface-based prefix allow-list mode, and the SAV table contains the prefixes that are valid for a specific interface.
When a packet arrives on the interface, the router checks whether the source prefix matches any of the prefixes in the allow list. If yes, the packet is permitted. If not, the packet is detected as spoofed, and the router may take actions on these packets, like dropping them, rate limiting, or just sampling them. This is when operators want to be informed about what happened in the data plane: what SAV mode was triggered, why it was triggered, and how the packets were handled by the routers. Now the problem becomes how we can use IPFIX, the standard telemetry protocol, to represent a SAV event. We designed four new information elements here. The first two work together to identify which SAV mode was triggered, and the fourth one, the last one, tells us how the routers handled the spoofed packets. The third one carries the most detail: it tells the operator why the enforcement was triggered, that is, which specific SAV rules were matched in the validation process. As we know from the previous slides, a SAV rule is structured data containing an ingress interface, a source prefix, and a prefix length. So we use a structured data type to encode a SAV rule: we chose the subTemplateList from RFC 6313, with each entry representing a complete SAV rule. Here's an example of an IPv4 prefix allow-list non-match SAV event. The source prefix of the packets doesn't match any of the prefixes that are valid in the allow list, so we export all of the SAV rules in these data records. We also implemented a demo at the hackathon a couple of days ago to demonstrate how the new IEs in IPFIX can help operators get a deeper understanding of SAV events. Here we use two different templates and export the data records respectively. In the first record, we see that on interface 5102, 255 packets were detected as spoofed. Why?
Because their interface doesn't match any of the interfaces in the allow list, so we exported all of the SAV rules that were involved. And in this record, we can see exactly what these packets are: we know their source IP address, destination IP, transport ports, and the protocol they use. This provides the operators with more detailed information about SAV enforcement. In conclusion, we address the issue of making SAV information visible with IPFIX. We designed four new information elements for SAV enforcement, and specifically we used the subTemplateList structured data type for SAV rules. We implemented a hackathon demo, and we believe this enables a deeper understanding for effective monitoring and security analysis. And that's it. Thank you.
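[Secretary's note: a minimal sketch of the "mode 1" interface-based prefix allow-list check described in the presentation. The table layout, function name, and addresses are illustrative, not from the draft.]

```python
# Hypothetical sketch of a SAV mode-1 check: an interface-based
# prefix allow list, as described in the presentation above.
import ipaddress

# Allow list: per-interface list of source prefixes that are valid
sav_table = {
    "eth0": [ipaddress.ip_network("192.0.2.0/24"),
             ipaddress.ip_network("198.51.100.0/25")],
}

def validate_source(interface: str, src_ip: str) -> bool:
    """Return True if the packet's source matches an allowed prefix,
    False if it would be treated as spoofed (a non-match SAV event)."""
    src = ipaddress.ip_address(src_ip)
    return any(src in prefix for prefix in sav_table.get(interface, []))

print(validate_source("eth0", "192.0.2.7"))    # True: permitted
print(validate_source("eth0", "203.0.113.9"))  # False: spoofed
```

On a non-match, a router implementing the draft would then export the matched (or, here, the full set of) SAV rules via the proposed subTemplateList IE.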
Thomas Graf: Yeah, brief comment. Thanks a lot, this is great work. I was just thinking about 5706bis and I looked at the document references: there is a SAV document where the capabilities are described, there is an IPFIX document, and there is a YANG document. I would love to see in the SAV document an operational considerations section describing how the configuration is done via YANG and how your IPFIX document is used to actually monitor it.
Chao Chen: All right. This draft is written to comply with the YANG document. I didn't show it here, but we have some statements in the draft, and we are also going to implement the end-to-end flow, connecting the YANG document with our IPFIX work.
Thomas Graf: That's perfect. I just want to mention that in your document you have a reference to the capabilities document, right? Right. So maybe you can reach out to those authors, mention operational considerations, and maybe even propose text in their document on how your IPFIX document could be applied there.
Joe Clark: He's saying talk back to the SAV authors to see if you can influence their operational considerations based on your work.
Chao Chen: Oh okay. Right. We have co-authors of the SAV capabilities.
Thomas Graf: Even better. Perfect. Thank you.
Chongfeng Xie: Sorry, this is Chongfeng, with a challenge. I think this draft is very interesting. I also notice that this draft is about verifying the effectiveness of the SAV mechanism, right? Okay. But the effectiveness of SAV is mainly based on the SAV technology itself, right? There may be some cases where the SAV mechanism is misconfigured at the network level, so some events could be missed. The device would then not report this information to the collector, right? So how do you deal with this case?
Chao Chen: You mean when SAV is not enforced, so nothing is in fact detected?
Chongfeng Xie: When it is not effective, and there's no information to export to the operators.
Chao Chen: That's where IPFIX can help the operators. If we detect that some packets were dropped, we don't know why they've been dropped: is it because SAV is not effective, or because of a network error or a policy drop? With IPFIX the operator can be informed; if we don't have IPFIX to export the information, we don't know at all. We wouldn't even know that packets are being dropped.
Chongfeng Xie: So how to deal with this situation?
Joe Clark: I think we probably have to take this to the list, Chongfeng.
Chao Chen: Maybe we can—we can come up with some use cases next time.
Joe Clark: Okay. I have one real quick chair's question: is your intent to do more revisions and then call for adoption or you think it's ready to consider for adoption now?
Chao Chen: Oh, I think we can get more use cases and maybe proceed to get adopted.
Joe Clark: Okay, perfect. Thank you.
Xin Xin: Good afternoon. I'm Xin Xin from China Unicom, and I'm glad to present the draft on requirements and information elements for application-layer information export in IPFIX (slides). This is the first presentation of the draft, so I will start with the motivation behind it and the problems we aim to solve. In current networks, for a long time we have relied on traffic information from layer 3, the network layer, and layer 4, the transport layer. That provides only basic information about the routing, the ports, and the transmission status of the data packets. However, networks are now getting faster, covering more areas, and needing more detailed management, so that basic information cannot meet the requirements of refined operation and intelligent decision-making. We may need the business attributes, user behaviors, and application intentions behind the traffic, so the solution may need to obtain more information. Application-layer information in IPFIX enables accurate identification of traffic business types and user behaviors, which provides a critical foundation for network planning, resource scheduling, and user experience optimization. So we propose this draft to solve this problem, and we define the export of application-layer information in IPFIX, specifically for HTTP and HTTPS. We must note that HTTP and HTTPS information is exported per this draft, but other application-layer information is not within its scope. Let me share two use cases. The first is CDN traffic scheduling optimization. There is a large amount of HTTP or HTTPS traffic in the backbone network, but traditional IPFIX can only state the IP, port, and traffic size, and cannot identify the business type and content.
If we can get more application-layer information, we can identify and analyze the types and frequencies of the resources accessed by users. Then we can schedule the business with higher traffic and a higher access proportion to local CDN nodes, to reduce cross-network traffic, improve user experience, and so on. The second use case: there is a draft about IPv6 network deployment monitoring and analysis, and it suggests that in order to improve the end-to-end connectivity and service quality of IPv6 networks, we need to identify where the bottleneck of IPv6 networks lies. It may be in the user terminals, the network nodes, or the access to applications. So we may need to get more application information based on IPv6; then we can do IPv6 end-to-end identification and analysis. Based on this background, we propose to export the application-layer information, and we define some new information elements, shown here. Elements 4.1 to 4.8 relate to the request packets and can be used for business identification and access behavior analysis. The next ones relate to the response packets and can be used for service quality analysis and optimization. And the last ones are HTTPS handshake related fields, enabling client feature identification and detection of abnormal network behavior without decrypting traffic. That's all, and I welcome any comments and suggestions. Thank you.
Alex: Alex. I've quickly checked the draft, and some of the proposals already exist in the IANA registry. Maybe I am missing something, but please, authors, check whether the proposals already exist in the IANA registry, so that we are not allocating new codes for already existing information elements.
Xin Xin: Okay, okay, I will check that. Next.
Joe Clark: Yeah, we're out of time, but I will take some comments to the list. So thank you for the presentation.
Xin Xin: Okay, okay, thank you.
Shunsuke Song: Okay, hello everyone. This is Shunsuke Song from ZTE. My presentation is about exporting ECN (Explicit Congestion Notification) information in IPFIX (slides). Before the presentation, I'd like to answer the question from the chairs, since this is my first presentation on this draft: why did we put forward this draft? First, the use case for exporting ECN information in IPFIX is based on the L4S service. As we all know, the L4S standards have matured and have broad deployment, and under those requirements operators may need to monitor network performance: L4S needs to support low latency, low loss, and scalable throughput. The second point is that IPFIX provides very flexible information export; it can provide a set of information elements to support access to this information in the network. And the third point is that there is still a standards gap. So we drafted this work, and through this discussion we would like to get your comments and questions to improve the draft. I don't know whether that answers the chairs' question. Let me introduce the draft. This draft provides a set of IPFIX information elements to monitor the L4S ECN capability. Through this work, operators can gain benefits such as visibility into the underlay network and the network congestion status, to improve or optimize their network. As for the monitoring logic, IPFIX may involve several aspects: first the observation point, second the observed flow definition, and then some other process communication to help export information to the collector.
In the metering process, in this draft we use the ECN field as a flow key and define an L4S ECN flow as a set of packets that share common flow properties and also carry an ECN field value such as ECT(1). In the exporting process, the IPFIX message is encapsulated, which may include the ECN-related information, such as the field status and some other statistical data, and the exporter exports this information to the collector. Through analysis at the collector, outcomes can be provided to operators to help investigate or assess network performance. For the ECN information element design, on the right side of this slide you can find the list of information elements designed: the IPv4 header ECN, IPv6 header ECN, and MPLS header ECN, covering the ECN status carried in the IP header or the MPLS header. The ECN field has several values to indicate its status: Not-ECT, ECT(0), ECT(1), or CE, where CE indicates congestion experienced in the network. The EXP field is used in the MPLS header to carry the ECN information, but in the existing RFC the EXP status may only have two states, CM or not-CM, so that information may not be enough to show whether L4S traffic is carried in the network. We also designed two other information elements for the tunnel control plane, one for IPsec and one for the L2TP tunnel. And for TCP, because ECN is an end-to-end technology, the TCP sender and receiver need to signal whether the ECN capability is supported or enabled, but that field is already covered in RFC 9565.
After this draft was posted, we received much positive feedback. For example, we received a question asking for clarification on the observation location or observation point; we have made the corresponding updates in the current version. There were also reference corrections, and we made the corresponding updates there as well. And there were the comments from Johen Pan, which involve L4S traffic identification, that is, how to distinguish L4S traffic, and also how to reduce the overhead on the network. The first question is not addressed in the current version, so we plan to address it in the next version, adding some filtering measures with reference to the operations section. There were also comments that the CE-marking probability and ratio IEs had some conflicting interpretations; we have made corresponding updates. For the next steps of the draft, we would like to make further updates, as I just introduced, incorporating the feedback and some other changes. We would also like to receive your comments, questions, and any other feedback. Okay, thank you.
Benoît Claise: Can you please go back to slide 5? Because I must be missing something. You want to export bits 6 and 7 only, right? Now, if you look at the entire byte, in IPFIX this is the IP class of service. It's an unsigned8, right? And what I see is that you want to have an unsigned8 which contains just bits 6 and 7, right? That's what you want to do. You're going to ask a router to look at the byte, basically discard the first six bits because you don't care about them, and put the result in another byte to export it, right? So why don't you just ask less work from the router and export field number 5, which is called IP class of service, and ask the collector to look only at bits 6 and 7? See what I mean? We already have one information element for the entire 8 bits, and you want to have your own with just bits 6 and 7.
Shunsuke Song: Yeah, bits 6 and 7 are used for the ECN field. Two bits. Actually, they are code points. Yeah.
Benoît Claise: I get that, but does it make sense? Does it make sense, given that you could be exporting the 8 bits directly? I mean, I was working for a router vendor, and sometimes we forget that routers still have to route packets, right? So you're going to ask them to take an unsigned8, remove some bits, and just re-export it. But okay.
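[Secretary's note: a minimal sketch of the alternative Benoît describes, with the collector masking the two ECN bits (bits 6 and 7, the low-order bits of the TOS/Traffic Class byte) out of the existing 8-bit ipClassOfService value instead of the router exporting a separate ECN-only element. The function name and values are illustrative.]

```python
# Collector-side extraction of the ECN codepoint from the exported
# 8-bit ipClassOfService byte (DSCP in the top 6 bits, ECN in the
# bottom 2 bits). A hypothetical sketch, not router or draft code.
ECN_CODEPOINTS = {0b00: "Not-ECT", 0b01: "ECT(1)", 0b10: "ECT(0)", 0b11: "CE"}

def ecn_from_class_of_service(cos: int) -> str:
    """Keep only the two ECN bits of the TOS/Traffic Class byte."""
    return ECN_CODEPOINTS[cos & 0b11]

# DSCP EF (46) in the top six bits, with CE marking in the ECN bits:
print(ecn_from_class_of_service((46 << 2) | 0b11))  # CE
print(ecn_from_class_of_service(46 << 2))           # Not-ECT
```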
Shunsuke Song: Yeah, okay. I will talk with you offline. Thank you. Thank you.
Xiao Min: Yes, the following drafts are written based on a problem we face, with different aspects in the same network, basically. And they are under implementation, so we want to see if we are doing it right or whether there are better solutions. The first is about "Export of Encapsulation Layer Information in IPFIX" (slides). Packets with multiple layers of encapsulation are becoming more and more common in the network. Typical scenarios include IP-in-IP or even IP-in-IP-in-IP. When monitoring these packets, you may have different requirements: you may want to export the information of all the headers, or only part of the information of part of the headers. The gap is that when receiving an IPFIX message with a certain information element, for example a source IPv6 address, the collector is not able to tell which encapsulation layer this IE belongs to. There's no layer information. Solution option 1 was presented at the last IETF: you define new information elements for each layer, just for the encapsulation-layer indication. For example, the information element encapsulationLayerTop can be used to indicate that the information elements following immediately after it, until the next encapsulation-layer IE, belong to the outermost network encapsulation layer. You can also define encapsulationLayer2, layer 3, and so on. The pro is that it seems quite straightforward and workable. But the cons, as we heard at the last presentation: first, the semantics of one IPFIX IE rely on the content of another; and there are also scalability problems, because if you have more than three layers of information, you need more information elements, for example to indicate the fourth layer. Option 2: you only need one information element and use a uniform structured data encoding. This information element uses the abstract data type subTemplateList.
This information element indicates that header fields of different encapsulation layers will be exported. Each top-level element in a subTemplateListMultiLayerInformation carries a template ID and length, and zero or more data records corresponding to the template ID. The template IDs carried in the information element, from top to bottom, correspond to the encapsulation layers of the packet flow, starting from the outermost layer. The right part is an example. First, you need a template for this new information element. Then, for an IP-in-IP-in-IP packet, if you only want to export the outermost IPv6 header and the third IPv6 header, you need a template ID for the outer header and a template for the third header. In the data set, you include the template IDs of the top layer and the third layer, and set the template ID of the second layer to zero to indicate the absence of information for the second layer. The pro is that it seems a more generic and standard way, but you can see it's kind of complex, and there seems to be no wide implementation, or we could say no implementation, of the abstract data type subTemplateList. So that's option 2. And option 3: always export the whole encapsulation. This is based on RFC 7011: if an information element is required more than once in a template, the different occurrences of this information element should follow the logical order of their treatment by the metering process. Using this option, for a packet with three IPv6 headers encapsulated, even if the monitor only wants to collect the DA of the innermost header, the exporter needs to export all three DAs, one for each layer, in order from the outermost to the innermost, so it won't be misunderstood. The pro is that no new information elements are required and technically it's easier to implement. But the con, as you can see, is the redundancy in the IPFIX messages.
As with the requirements B, C, and D we listed: even if you only need part of the information, in this case you export all of the information of all the layers. For this option, one thing to consider is whether the "should" in RFC 7011 should be updated to "must", but we may discuss that later. We welcome feedback, comments, and cooperation. Which option is better? Are there any other, better options? We wouldn't say we have an implementation yet, because we don't know which way to implement it.
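[Secretary's note: a minimal sketch of how a collector could consume "option 3" records, where the same IE appears once per encapsulation layer, ordered outermost to innermost per the RFC 7011 "should", and the collector indexes the occurrence it needs. The record layout is illustrative, not a real IPFIX encoding.]

```python
# Hypothetical decoded record: one destinationIPv6Address occurrence
# per encapsulation layer, in outermost-to-innermost order.
record = {
    "destinationIPv6Address": [
        "2001:db8:0:1::1",  # outermost header
        "2001:db8:0:2::1",  # middle header
        "2001:db8:0:3::1",  # innermost header
    ]
}

def da_of_layer(rec: dict, layer: int) -> str:
    """Layer 0 is the outermost header; -1 selects the innermost."""
    return rec["destinationIPv6Address"][layer]

print(da_of_layer(record, 0))   # outermost: 2001:db8:0:1::1
print(da_of_layer(record, -1))  # innermost: 2001:db8:0:3::1
```

This only works if the exporter really honors the ordering, which is why the discussion turned to strengthening the "should" to a "must".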
Thomas Graf: I hope I don't say the same thing. My suggestion is actually options 1 and 3. Basically, option 1 covers the case where not all the data for all the layers is being exported; exporting everything would already be covered by option 3. If only a subset of the data is being exported and the metering process knows about all the different layers, then option 1 makes perfect sense, so both 1 and 3 in my opinion. And on option 3, you bring up a very good point: there are many implementations following that "should", and that "should" is not really helpful for an operator; it should be a "must".
Benoît Claise: So last time I advised you to go with IPFIX structured data, right? Which is your option 2. I discussed this with one of the IPFIX doctors, and we arrived at the conclusion, after some time, that even if you use this, it doesn't guarantee the ordering, because there is the order of the export and the order of the metering. So we still need to do a bit of homework there, but that solution doesn't guarantee in any case that the data is ordered. I need to do my homework and go back to the list.
Xiao Min: Okay, thank you. So let me move on to the next draft, "Export of BGP VPN Information in IPFIX" (slides), a different aspect of the same network, also under implementation. Background: BGP VPN is widely deployed. For MPLS VPN, you associate a particular MPLS label with the BGP service and advertise it via a BGP VPN route, and the next hop is normally set to the egress PE address. For SRv6 VPN it is similar, but the VPN service is related to an SRv6 service SID, and again the next hop is set to the egress PE address. So when we are monitoring traffic flows on the ingress PE in a network with BGP VPN deployed, we want to know which egress PE the flow is forwarded to. We checked the existing information elements and think they might not be enough. If you want to get the next hop address advertised by the egress PE via the BGP VPN route: the existing information elements bgpNextHopIPv4Address and bgpNextHopIPv6Address define the IP address of the next adjacent BGP hop, but when there's more than one type of BGP route in use in the network, for example a BGP VPN route used together with BGP-LU, it is not clear which type of BGP route the BGP next hop carried in the existing information element belongs to. And for SRv6 VPN especially, if we want to get the SRv6 locator of the service SID on the egress PE: the existing information elements srSegmentIPv6 and srHSegmentIPv6LocatorLength enable the calculation of the SRv6 locator, but there's no mechanism yet to solely export segmentList0, which is the location where the SRv6 VPN SID is placed in the SRH. So we couldn't get that information either. Hence the new information elements to solve this problem.
The first two are more generic, to obtain the egress PE information: bgpVpnNextHopIPv4Address and bgpVpnNextHopIPv6Address. They can be used to carry the next hop address carried in the BGP VPN route, which is normally an address on the egress PE. There are some limitations: in multi-AS backbones, if Inter-AS Option A or Option B is used, you can't get the egress PE address, only the address on the ASBR, but it seems we will accept that as it is. For SRv6 VPN especially, another choice is to export the locator information of the SRv6 service SID, because the locator of the SRv6 service SID, the so-called VPN SID, is normally well designed on each egress PE. So you have information element 3, srv6ServiceSid, and information element 4, srv6ServiceSidLocatorLength. When you use them together, you can get the locator information, or if you want to know the detailed SRv6 service SID information, they can fulfill that too. We presented this draft in the BESS session this morning and gathered one piece of feedback about the co-existence between the new bgpVpnNextHopAddress elements and the existing bgpNextHopAddress elements; we will consider the co-existence further. Feedback and comments are welcome. That's all.
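[Secretary's note: a minimal sketch of what the proposed srv6ServiceSid and srv6ServiceSidLocatorLength elements would let a collector derive: the SRv6 locator is the top locator-length bits of the 128-bit SID. The SID value and locator length below are illustrative, not from the draft.]

```python
# Collector-side derivation of an SRv6 locator from a service SID and
# its locator length. A hypothetical sketch, not draft-defined logic.
import ipaddress

def srv6_locator(service_sid: str, locator_len: int) -> ipaddress.IPv6Network:
    """Mask the 128-bit SID down to its top locator_len bits."""
    sid = int(ipaddress.IPv6Address(service_sid))
    mask = ((1 << locator_len) - 1) << (128 - locator_len)
    return ipaddress.ip_network(
        (ipaddress.IPv6Address(sid & mask), locator_len))

print(srv6_locator("2001:db8:1:1::100", 64))  # 2001:db8:1:1::/64
```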
Joe Clark: Xiao, due to time, I would say pose all your questions to the list and let's try to jump up some conversation there. And we'll move on to your last presentation.
Xiao Min: So this one, the last one, is quite simple; I saved the easiest for last (slides). Same network, different aspect: "Export of IGP Flexible Algorithm Information in IPFIX". An IGP Flex-Algo allows the IGP to compute constraint-based paths. Using IGP Flex-Algo, you can divide a physical network into different planes, or implement network slicing: you can have one plane for low latency, another plane for maximum bandwidth, and so on. IGP Flex-Algo can be used with SR-MPLS, SRv6, and pure IPv6 prefixes; these are all in existing RFCs and working group drafts. I believe IGP Flex-Algo is also widely deployed; our network has deployed it. So when monitoring a traffic flow in this network, the question is: which Flex-Algo does the SR-MPLS SID, the SRv6 locator, or the IP prefix belong to? Because they are used with different IGP Flex-Algos, they belong to different logical planes. So this draft defines only one new information element, the Flex-Algo, which describes the Flex-Algo number related to an SR-MPLS SID, an SRv6 locator, or an IP prefix. Operational considerations: when monitoring SR-MPLS flows, this IE indicates the IGP Flex-Algo of the active SR-MPLS SID. When monitoring SRv6 flows, it indicates the IGP Flex-Algo related to the locator of the active SRv6 SID. And of course you can use it for pure IP flows, where it indicates the IGP Flex-Algo related to the destination IP address or the IP next-hop address. These three use cases won't exist at the same time, so they can share the one element. That's all.
Thomas Graf: Just a quick comment. You have references basically just for the SRv6-related and MPLS-related things, but I think this Flex-Algo generally relates to the OSPF and IS-IS Flex-Algo extensions. So my suggestion, and I will write it on the mailing list, is to put in the document the two specific references that are mentioned in the IANA registry, so it's clear what it is pointing to.
Xiao Min: Okay, thank you.
Thomas Graf: And one comment on the document before. I was just checking vendor implementations, and I see that currently it's being abused: for the VPNv4 and VPNv6 next hop, they're actually using the IPFIX entities for the IPv4 and IPv6 next hop. So what you're proposing makes perfect sense, and I'm looking very much forward to the adoption of the documents.
Xiao Min: Thank you, thank you for the comments.
Joe Clark: I think the queue is clear. Thank you very much, Xiao. Take a breath, you've earned it. And thank you for giving us a minute or so back. We are going to move on, I think our remote presenter is next, if I'm not mistaken. Hey, Michael. Say some—
Michael Richardson: Can you hear me?
Joe Clark: Ah, we can, great. I'm going to give you slide control. I've got a little bit of back-feed from the room, but we'll manage I think. Okay, you should be able to control the slides.
Michael Richardson: Yep, got that. Thank you. Hi everyone, I'm Michael. I'm here to present this new draft on security operations fundamentals and guidance (slides). Benoît mentioned this work earlier, so thank you for allocating some time today to talk about the motivation, the goals, and some of the content of the draft. The idea behind this draft is that it's meant to be an informational draft to increase the understanding of security operations among IETF protocol designers. This is guidance, coming from a background of working in security operations and bringing that expertise to support a community that might be less familiar with it. It's not a mandate to include text, as Benoît discussed regarding 5706bis; it's just meant to prompt protocol designers to consider impacts and to mitigate and document them if they can. It's about giving designers the information and the tools they need to make informed choices about what to include. This document is very much inspired by the work that's gone into 5706bis, so thank you to the authors of that work. As mentioned earlier today, there's some text in there on security operations, but I acknowledge that this is a larger topic, so it fits in a separate draft as well. And thirdly, I hope that we can improve the security landscape by helping designers consider security operators and their effort to mitigate cyber threats. I do want to highlight that this isn't security considerations; this isn't looking at the security of the protocols or how they are configured, designed, or chosen. It's about looking at the practicalities of what security operators do, what they require, and the things that might impact them, and hence I think it fits well with this working group.
So when I first started writing this and was discussing it with some people at the IETF, I got a comment that was roughly, "But what do you mean by SecOps? What do you mean by security operations?" That comment made me think two things. Firstly, it highlights the value of this informational draft, but it also means I should start this presentation with some definitions. So just to be clear about what I'm talking about: security operators are responsible for detecting malicious activity, responding to threats, and defending their networks and systems from cyber attacks. Those who work in security operations may have different roles or job titles, including cyber security analyst, incident responder, security engineer, security operations manager, and so on. In the industry there are lots of different people who work on this, but in this document we just use the term security operator to capture all of those roles. Security operations are commonly run from a SOC, a security operations center: a centralized team that includes both cyber security analysts and operational engineers who together protect and defend the network. I think the common comparison is between a NOC and a SOC: a NOC ensures network availability and performance, so the focus is on stability, while a SOC protects against malicious activity, so it provides the security side of things. The term SecOps is commonly used for an approach that combines operational and security teams with shared tools and processes to ensure the protection and reliable operation of networks. There can sometimes be tension between security and operations where priorities differ, and joining these functions and co-locating them helps ensure that both security and operational priorities are considered holistically. I think this is particularly relevant for the protocol design and guidance that the IETF produces.
And so that's the motivation behind this draft.
Some of those definitions can maybe feel a little abstract, so I think what's more informative is looking at what security operators actually do, and that's the picture the draft tries to give. I'll caveat that some of these responsibilities may differ depending on the system, the network, the enterprise or organization, and the set-up, but we try to capture a common set of things. The first is a focus on threat intelligence. Threat intelligence here refers to knowledge of a cyber attacker's activities, so that might be the techniques or tools that they use, an understanding of the indicators of such malicious activity, but also perhaps even the motivation behind some groups, and that can inform how you defend against them. Security operators can produce their own threat intelligence and build that information, but they can also consume it from other sources and share it around to stay ahead of new attacker techniques. They also ensure that the data they have is deployed across the network to support detection of malicious activity. I'll talk a bit more about that later, but I think that's a good example of joining security knowledge and operational practices. The second responsibility security operators focus on is conducting security monitoring. This means monitoring all parts of the environment that they're managing, whether that's infrastructure, the traffic, the endpoints, the data flows, or the log sources, and having that complete picture in order to establish a baseline of normal activity. Once you've got that baseline, it's easier to identify deviations that may look suspicious and then, after investigation, work out if they are malicious. Security operators aren't just passive here; it's also about being proactive, not just reactive.
Threat hunting is a term used for targeted analysis of the network, investigating for previously unknown indicators of malicious activity, so going out and searching for it essentially. And thirdly, security operators are responsible for responding to cyber incidents. That includes an incident response function: investigating suspicious activity to identify if it is malicious and, if so, returning your system or network to its safe state and dealing with the problem. Part of incident response is being in a position to do that, so that might include developing your own tooling that will enable you to jump into action, and again that relies on engineering and operational experts working with security experts.
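The baseline-and-deviation idea described above can be reduced to a toy statistical sketch. The metric, numbers, and threshold below are purely illustrative and not taken from the draft:

```python
# Sketch only: establish a baseline of "normal" activity, then flag deviations.
# The metric (daily outbound MB per host) and the 3-sigma threshold are
# hypothetical examples, not anything specified in the draft.
import statistics

baseline = [120, 130, 125, 118, 122, 128, 124]   # observed normal values
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_suspicious(observed, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    return abs(observed - mean) / stdev > threshold

print(is_suspicious(126))   # within normal variation
print(is_suspicious(900))   # large deviation worth investigating
```

Real monitoring pipelines are far richer than this, of course; the point is only that a baseline turns "suspicious" from a gut feeling into something measurable.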
So in order to provide guidance to protocol designers, I wanted to highlight what security operators need to do their jobs. We just talked about what they do, but what are they reliant upon? In the draft there's a focus on asset management. You can't protect and manage what you don't know about, so security operators use tooling to maintain an accurate source of information about the assets that they're responsible for. We often use the term "shadow IT" to refer to assets that aren't accounted for, so that could be devices that are not officially onboarded onto the network, or that are misconfigured, but it also includes things like services, tools, and accounts that have access to the system. Again, that's a combination of operational and engineering focus, not just security, but of course it has big security impacts. Security operators rely on indicators of compromise to identify and defend against malicious activity. Very briefly, indicators of compromise, or IOCs, are observable artifacts relating to a cyber threat actor or their activities, such as the techniques and procedures that they use, their tooling, or their attack infrastructure. That might be common files or hashes of files that they use, IP addresses of command and control infrastructure, or deployed tooling, that kind of thing. There's more information in RFC 9424 if you're interested in finding out more. But again, it's not just about understanding the information and the threat; it's about deploying these indicators of compromise and managing them across the network. Security operators also need digital forensics from a variety of places, whether that's the network, endpoints, hosts, or applications. That might be things like details of authentication or authorization events, but it might also be details of network traffic or endpoint detection events.
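To make the IOC idea concrete, here is a minimal sketch of matching observed artifacts against an indicator set, in the spirit of RFC 9424. The feed contents and event fields are hypothetical examples, not from the draft or the RFC:

```python
# Sketch (hypothetical data): check an observed event against known IOCs.
# 203.0.113.7 is from the documentation address range; the SHA-256 value
# is just the hash of the empty string, used as a placeholder indicator.
iocs = {
    "ip": {"203.0.113.7"},                     # e.g. known C2 addresses
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
}

def match_iocs(event):
    """Return the list of IOC types this event matched, if any."""
    hits = []
    if event.get("dst_ip") in iocs["ip"]:
        hits.append("ip")
    if event.get("file_sha256") in iocs["sha256"]:
        hits.append("sha256")
    return hits

print(match_iocs({"dst_ip": "203.0.113.7", "file_sha256": "abc"}))  # ['ip']
```

The operational point from the talk is the part this sketch leaves out: distributing, updating, and retiring these indicator sets across the whole network.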
So again, to provide guidance to protocol designers, I wanted to highlight in the draft some of the tooling that security operators rely upon. There are lots of terms that get thrown around, like EDR, NDR, XDR, SIEM, SOAR, that kind of thing, so I wanted to give a bit more detail about what they are for and how they are used operationally. I'm not going to go into that right now, but there's some detail in the draft if you're interested.
So as mentioned at the start, this is not mandated; it's not a requirement to add text on this, but a prompt of things to consider and mitigate when you're designing a protocol, and if you can't mitigate something, then document it so that it's clear for operators. Again, inspired by 5706bis, I thought it was important to provide a list of specific guidance to focus protocol designers on. There are a few categories of that, with more information in the draft. I think the summary of that section is that when protocols are developed, consideration should be given to the current techniques employed by security operators; where possible, it would be great if those practices and techniques could remain consistent, and if they can't, then having some documentation to ensure that security operators are not adversely affected, or can adapt their approaches, would be really helpful.
So this is a 00 draft. We've been working on it for a little while but published it back in February, so I'm really keen to get comments from this community to start building on it. Thank you to Jeff and Nalini, who have already provided feedback on the list. I've got a couple of suggestions from them: to expand the privacy considerations section and to add guidance on defining detectable error conditions. I'll look to incorporate those in the next version. I just wanted to open it up; I'm really keen to hear from the working group what you think about the content at the moment and what you think the next steps should be, either now or on the list. Thank you.
Minsong: Hi, Minsong from Huawei. Great work on giving initial guidance. I think it may be useful to give a specific example to illustrate when an incident should be handled only by the SOC, or by both the SOC and the NOC. I think that will help. Thank you.
Michael Richardson: Okay, thank you. Yeah, that's good advice, to distinguish when they're separate and when they're joined and try to identify that. I'll look to include that in the next version. Thank you.
Joe Clark: Thank you, Michael. We're running a little short on time, so we're going to move on to the last presentation. I do have some comments on this that I will send to the list, though. Thank you.
Michael Richardson: Thank you.
Mahesh Jethanandani: Okay, this is going to be a continuation of the discussion from the previous couple of meetings. Let's just get some of the admin stuff out of the way. There is a draft which is published; it's experimental in nature, and it states the reasons why it's experimental and what the plan is. Go read it; I'm not going to give you updates on exactly what the document contains. Instead, I'll just say that if this is the right amount of content, I would suggest that maybe it should be considered for working group adoption.
So with that, I'm going to jump directly into review comments that I received on the draft. I want to make it an interactive session, so feel free to start asking questions; I'm not going to wait, and if we run out of time we stop and continue on the mailing list.
All right. The first question: is this the right place to discuss this document? The history is that this was an AD-sponsored document to begin with, but we felt that, much like 5706, this would be a good place to have the discussion and get feedback rather than going AD-sponsored. So for now at least it's an OPSAWG working group document. I hope there are no objections; if you have any, just come to the mic or state your objection.
Should the YANG module share the repo with the draft? Personally, I don't have a stipulation. I am more in the spirit of not trying to make too many of these things requirements. I do it that way for my own drafts, but if others feel differently, feel free to do it the way you feel is right.
On the question of tree diagrams, I think the general consensus is that tree diagrams do help to explain the module, and since most of the text for the YANG module is in what I call the document, it should at least be included there. You can generate it wherever else you want, but it should be included in the draft itself. At least that's the suggestion that I would give, but if others feel against it, just raise your hand.
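For context, a YANG tree diagram (per RFC 8340, and typically generated with `pyang -f tree`) is a compact text rendering of a module's data nodes. The module and nodes below are a hypothetical sketch, not from the draft under discussion:

```
module: example-module
  +--rw interfaces
     +--rw interface* [name]
        +--rw name       string
        +--rw enabled?   boolean
```

Including this rendering in the draft lets reviewers grasp the module's structure without reading the full YANG source.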
All right, GitHub versus GitLab. Personally, I feel that the RFC Editor has already started using GitHub, and GitHub does seem to have a better tool set, whether it's the CI/CD pipeline they support or otherwise. But if someone is willing to work on a GitLab setup that has similar support, so be it. I'm going to make that a working group decision.
The fifth question/comment was about stronger guidance on the tagging mechanism, and I want to pause here because I think this is an important part of Velos. Overall, I agree with the premise that yes, we need stronger guidance on how we're going to tag particular versions of the module. At a very simple level, yes, I can say a "should" should be a "must" in the document. But ultimately the question comes down to what option we are recommending. Right now what I have in the document is a Git tagging mechanism to tag a particular version of the module when it is referenced from the document. Now, there is a more secure option that gives you a hash that tracks that particular tag. Again, I would open that for discussion if needed. Overall, I do agree with the premise, but I think the devil is in the details, and I'm more than happy to discuss that if need be.
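The tag-versus-hash distinction being discussed can be sketched as follows. This is not the Velos procedure itself; the repository layout, commit message, and tag name are hypothetical, and the point is only that a tag is a movable human-readable label while the commit hash behind it is an immutable reference:

```python
# Sketch (hypothetical names): tag a published module revision in Git, then
# resolve the tag to its commit hash for a tamper-evident, immutable pin.
import subprocess
import tempfile

def run(args, cwd):
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

def pin_module_revision(tag="module-v1.0.0"):
    repo = tempfile.mkdtemp()
    run(["git", "init", "-q"], repo)
    run(["git", "-c", "user.email=demo@example.com", "-c", "user.name=demo",
         "commit", "-q", "--allow-empty",
         "-m", "publish example module revision"], repo)
    run(["git", "tag", tag], repo)                 # human-readable label (can be moved)
    return run(["git", "rev-parse", tag], repo)    # immutable commit hash behind it

commit_hash = pin_module_revision()
print(commit_hash)
```

A document that records the hash alongside (or instead of) the tag name gets the "more secure option" mentioned above: even if the tag is later moved, the hash still identifies exactly one revision.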
Review question number six: why two years? I think the feedback from the Any-Ops workshop was that it takes too long to develop a module in the IETF, and that's why a time frame was suggested. Now, if time is not the right criterion to determine whether we are doing it the right way, maybe there should be some other criterion. Is it the ease of making changes or updates? I don't know if that is the criterion, or if timing really is the criterion for Velos. And I'll pause, since I see Jeff in line.
Jeff Haas: Yeah, a comment to take off on this one: as we both know, it's not necessarily the time that a document spends living as a draft that's slowing this down; it's participation. So the question you should be asking here is: does the split that we're talking about help with participation? To some extent I think it does, but that's not necessarily a given.
Mahesh Jethanandani: Good observation. I would tend to agree that things would move faster if we got review comments. The other thing: I would agree that the criterion I set, two years, may not apply equally to new YANG modules versus bis versions of modules. I will admit that new YANG modules do take a little longer to churn out than bis versions. So maybe the criterion is that a bis version should be faster, maybe a year, while new YANG modules should churn out within a two-year period. Any disagreement? I don't see any. Ah, maybe—
Joe Clark: As a contributor, I don't necessarily disagree. First of all, on GitLab versus GitHub: both do CI/CD really well, but GitHub seems to be the canonical one that I've used. On question 5, I strongly agree the "should" should be a "must". Michael Richardson's suggestion of the hash is nice; I haven't really thought much about it. But on six, and overall, I wonder what the magic is here. Your experiment is around time, but if it's an easy-to-agree-upon module, did Velos help that, or was it just that, like some of these IPFIX things, modules move quickly because there's not a lot of contention? So I wonder: why is Git, in and of itself, why is this approach going to be generally quicker at producing some of these modules? And to Jeff's point, is it because it's better for participation? These are the things I would like to capture as part of the experiment: what was the overall experience with module development because of this?
Mahesh Jethanandani: Okay. I have my opinion, but I'll let-
Benoît Claise: I think, Joe, everything that you framed is exactly what we need to agree on about this experiment: what are the goals we are targeting with this effort? Is it to have something quicker? Is it to have it better? Is it to ease things so that other profiles can contribute to the work we are doing here, separating the skills of people who usually run drafts from people who write code and so on? So I think there's a bunch of criteria that we need to take into account, and the goal of the experiment may not be doing it faster but doing it differently; we need to find the balance there, and we need to frame these goals in this experiment. And actually this brings me to the comment I wanted to make earlier: when I see the two years, it depends what it is for. Is it for running the experiment, or for the outcome of the experiment itself? This is also something we need to discuss as part of this project. I am not expecting us to have the answer to all of this today. We just need to start, sit together, onboard people to play with this process, and try to tweak it: start with something and then iterate until we have something which is really useful for the community, and then declare whether it's a success or a failure. One other mention: Mahesh is presenting as an individual contributor here. I will be the one shepherding this work. I started it earlier, but I think Mahesh, given his hat around management and so on, has the energy to bring this forward. That's just a clarification I wanted to make.
Mahesh Jethanandani: So, Joe, to go back to your question, I think the premise was that YANG generally tends to be more code than prose, and in that sense easier to manage in a source control mechanism.
Joe Clark: Totally agree with that. I just don't know if that in and of itself will get a YANG module to the point of standardization or implementation any faster. But I totally agree with what you just said.
Mahesh Jethanandani: Uh sorry.
Italo Busi: Italo Busi, Huawei. I would like to focus on what the pain point is as a YANG author of some drafts. What scares me in the current process, and is driving me crazy, is the bis. Let me be very honest. The bis is a big issue, especially on the types. Sometimes I need to add one leaf or one data type to a hundred-page RFC, and I have to do a bis for that. It's a huge process, and after I finish this work, I receive a lot of comments on the 99 pages which have not been changed. That makes the bis very slow, and what I see is people saying, "No, no, no way to do a bis." Honestly speaking, having been through the process: no more bis, because in my opinion it is unnecessarily slow to add an attribute to an RFC. So I see this process as very good for a bis: if I want to add an attribute to a YANG model, I can write a one-page RFC saying I want to add this attribute, and then somebody puts the attribute into the YANG. That's for a bis. For the first version of the YANG, I'm not sure, because I tend to agree with you, Joe: you need to get consensus, you need to get all the technical work done. The work goes as fast as the people who contribute to the first YANG model, so maybe this is not changing much. One advantage I can see is that since people are very scared of the bis, maybe they are too scared to publish a document until they are sure that everything is done. With this process you can also say, "Okay, there are many issues, but we have good confidence that we can address them in a backward-compatible way. Let's go and publish a first version of the YANG which is not fully complete, but at least it helps people start deploying and developing it, and then we know that the bis will not be a painful process." So I see advantages to this. But on the other side I see a risk.
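For scale, the "one leaf" case being described is tiny in YANG terms. One backward-compatible way to express such an addition is an augment from a small companion module, sketched here with hypothetical module and leaf names; whether that avoids a full bis in practice is a process question, but the technical delta really is this small:

```yang
// Sketch with hypothetical names: adding a single leaf to an already
// published data model via a small companion module, rather than
// respinning the entire hundred-page RFC.
module example-ext {
  yang-version 1.1;
  namespace "urn:example:ext";
  prefix ex-ext;

  import ietf-interfaces { prefix if; }

  revision 2026-03-18;

  augment "/if:interfaces/if:interface" {
    leaf oper-cost {
      type uint32;
      description
        "The single new attribute being added; everything else in the
         base module is untouched.";
    }
  }
}
```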
We need to make sure that the YANG models we produce are still of good quality and reflect the consensus of the people.
Benoît Claise: Yeah, just on this one I will clarify that this work is anchored on the requirements we have in the NMOP working group, and in the NMOP working group there is a requirement which is: do it quick, but well. Those are the two parts for which we really need to find the balance. Not to be perfect, but to provide something that answers the main function we are targeting and then build on that. But you are right about the bis, and this is something we need to cover in the description of the experiment and the target. As soon as we clarify the target, I think we will get more authors of the draft to join us, and we also need to reach out to the other working groups. But Mahesh will touch on that.
Mahesh Jethanandani: Okay, so in the interest of time I'm going to skip to the last slide. As far as next steps are concerned, I'm going to send out information on the regular meetings we're going to set up, and we are essentially looking for volunteers who want to work on this experiment, primarily on either a new YANG module or a bis version. We need at least one of each to see what it takes to go through the Velos process. Thank you.
Joe Clark: Thank you, Mahesh. Yeah, let him know, and I'd be happy to join the meetings; I think this could have good potential. Thank you, everyone, for attending OPSAWG at IETF 125. I hope you have a great rest of the IETF and a safe trip home. Thanks.
All: Thank you. [Applause]
Michael Tuxen: Thank you.