**Session Date/Time:** 16 Mar 2026 03:30 **Benoit Andrew:** Okay, it seems to be resolved. Yeah, okay, it seems much better. Well, let's see if this works better. We're not there yet. **Andre Lomonosov:** Welcome, welcome. We are starting the session, but we are having some network problems, so I'm running on the IETF network and Benoit is running on my phone; we'll see which one works better. So, Benoit. **Benoit Andrew:** Thank you. So, welcome to DNSOP session 1 on Monday. We start with the chairs' slides, present the agenda, and give an update on what we have done. I'm Benoit; Andre is on my left. Shumon and Peter are our secretaries; Shumon is remote, Peter is over there. Our Area Director is Matt; he's in the room, over there. Thanks. And, importantly, Paul offered to take notes. Thank you, Paul. This is an IETF meeting, and the Note Well applies here, to the working group and to this session, so we assume you're aware of the Note Well and have read it. Part of the Note Well is, of course, the IETF Code of Conduct guidelines. I won't go over the individual points here, but we participate as individuals and we treat each other respectfully. So be kind and respectful to each other. If there are any concerns, please reach out to Andre, myself, or the Ombudsteam. We take this seriously. Good. Meeting tips: I ask everybody in the room to register themselves via the QR code at the front, or just use the one on the screen; it's important. Use your phone or the full client on your laptop. The same goes, of course, for the remote participants, but they are obviously already registered. It's important for the IETF process. Not everybody may remember the blue sheet, but we need a record of the people in the room attending the session. Good. Next is the agenda for today. We will give an update on the current work.
Then, oh yeah, the hackathon updates: we will give those on Thursday, very briefly. We will go over the current working group business, then drafts for consideration and, time permitting, presentations. Thanks. **Andre Lomonosov:** Yeah, document update. So we will give an update on where all the working group documents stand. Good. We have published three RFCs in the past four months; they're all related to each other. Thank you, Wes and the other authors. There are two drafts in the RFC Editor's queue: Operational Guidelines for DNS Transport and the Clarification of CDS and DNSKEY. The Operational Guidelines just entered the queue, and the CDS consistency draft has been there for 60-plus days, but no problems are expected; it's just making progress. Good. Submitted for IESG publication: not yet, but one document will be submitted soon. We had working group consensus on `draft-ietf-dnsop-ds-automation`; we are waiting for the shepherd write-up and will then submit it to the IESG, to our AD. Another document in Working Group Last Call is Structured DNS Error. Let's start with the previous year: we had an IETF Last Call for this document, and it was sent back by the IESG to the DNSOP working group. There was some feedback we had to incorporate, and it was more than just editorial changes, so the IESG sent it back. We had an interim in February on the document, together with the DNS Censorship Transparency draft by Mark. The goal of the interim was to discuss and then incorporate the community feedback from the IETF Last Call, and also to make sure there were no conflicts between Structured DNS Error and Mark's draft, which was just recently renamed to the Censorship Transparency draft. During the interim, it was clear there were no conflicts, and the community feedback was included.
There was some additional feedback, so the authors incorporated all the latest comments in February and submitted a last revision, and we started a Working Group Last Call. The Working Group Last Call actually ended this week, but I want to extend it until early next week. I really want to ask you to read the document and give your feedback on whether it's ready or not. People in the interim were positive, so please, those of you from the interim, just send an email to the mailing list saying that you think the document is ready to proceed. Thanks. Oh yeah, and we got a very good DNSDIR review, so thank you, DNSDIR team, for being on top of this. Good. Another document that is waiting on us to move forward is `draft-ietf-dnsop-ns-revalidation`. There was a Working Group Last Call at the end of November. The chairs are waiting for the authors to respond to the working group's feedback, and then we will decide whether there is consensus and how to go forward. `draft-ietf-dnsop-domain-verification-techniques`: there was a lot of feedback. Two new sections were added in the past four months: Threat Model, and Supporting Multiple Accounts and Multiple Intermediaries. We think it's ready for Working Group Last Call. I talked with one of the authors, who also thinks it's ready, but we have to discuss this with all the authors and then go forward. We really want to finish this document. So if you have any comments in the meantime, look at the latest revision and share your ideas and comments on the mailing list, because we really want to push forward on this document. The document is now about four or five years old; we really want to finish it. Another document that needs some attention is `draft-ietf-dnsop-ds-automation`. We are in discussion with the authors about how to go forward. Good. Next slide. Can you go forward otherwise? Yeah, there it goes.
Other documents in the working group simply continue to be worked on; I won't enumerate or discuss them. There are two new documents: Delegation Management (version 02, or 03?), RFC 9364bis, and `draft-ietf-dnsop-delext`, a document related to work in DELEG, but the work itself belongs in DNSOP, so we accepted it as a working group document. Good. Right. One other new document is `draft-ietf-dnsop-dry-run-dnssec`, and `draft-ietf-dnsop-integration` needs more working group discussion and feedback. We asked for that on the mailing list and got a DNSDIR review, but we think more input from the working group is necessary to move the document forward. So again, have a look at the document and give your feedback; I will also ask for that on the mailing list. Okay, that works. Right. So altogether, before we go to the agenda: I think we cleared up quite a lot of work in the past months. We finished some work and pushed it to the IESG, and we adopted quite a number of new drafts, so I think the working group is in good shape now. But if you think things could be sped up or done differently, please reach out to the chairs. For today, the agenda: a regular working group document update will be given by Johan on `draft-ietf-dnsop-delegation-mgmt-via-ddns`. Then there are two drafts for consideration: Clarification to the DNS Ranking Data by Willem, and Considerations for Protective DNS Server Operators by Mingxuan Liu. And then, time permitting, Avoid Large Records with Wildcard Owner Names; the author is Peng Zhou, and the presentation will be given by Bashan Zhou. Thank you. Any comments or questions about the agenda? Otherwise, we'll start with the presentations.
**Andre Lomonosov:** Okay, so Benoit asked for feedback on whether you think there's an improvement, but we would also like to hear whether you like the increased velocity of DNSOP; we chew through more drafts, and things are moving now, I hope. So if you have any feedback and you like what we are doing now, we would also like to hear that. Thanks. **Benoit Andrew:** Okay, thank you. Then we'll start with the first presentation by Johan Stenstam. Okay, I can't do anything. No? All slots are... maybe you need to stop the slides. **Johan Stenstam:** Okay, am I close enough to the microphone? Oh, thanks. Yes, thank you. And that's forward. Okay. So this is a presentation of a document that has been kicked around for, I think, close to two years, but it has now changed from an individual submission by us to a working group document, so we need a little bit of recap and a statement of where we are. After adoption, we resubmitted with essentially no changes. Then, before this IETF, we made changes, because the document had been not exactly dormant, but we had worked more on the code than on the actual document for a while, and we needed to bring the document up to where we were. So we changed a couple of sections, rearranged things, and tried to make the document better by clarifying what wasn't sufficiently clear. I don't think we're done, but at least we're moving in the direction of improving the document. So a number of things have been improved. But what is this about? Well, it's essentially about the alternative of doing automated delegation management with a push model, where the child pushes the change to the parent through a signed dynamic update, as opposed to automated delegation management through a pull model, which is what we do with various scanners. Obviously, push has advantages from an efficiency point of view, and it has new issues from an authentication point of view.
So there are pros and cons, but if we get it to work, it's obviously an efficient and good model. It also has the very attractive benefit of working as well for unsigned child zones as for signed ones, which is something we cannot do with parent-side scanners, because they rely on the DNSSEC signature. The child discovers where to send the update through a lookup of the DSYNC record in the parent, now standardized. It signs the update when it needs to change something and sends it to wherever the parent wants the update delivered. On the recipient end, the parent applies signature verification, obviously, and after that it performs exactly the same set of policy verifications it would have done had we gone the other way around, through a CDS or CSYNC scanner. So there is no change to the parent-side policies; they are exactly the same regardless of the push or pull model. Likewise, there is no change to what happens after the update has been accepted: it goes into whatever the parent wants to do with changes it discovers in children, be it a registry database or some other system. There is no requirement, from the draft's point of view or from the prototype software's point of view, that this has to go straight into the parent zone, because that is typically not what you want. And then we get to the interesting part: how do we gain trust in the key that the child is using to sign the update? This is probably where we've spent most of our time. I mean, dynamic updates as such are not difficult; the question is how to trust them and how to gain trust in the key that signed them. And right now we have a whole bunch of models for how to make the parent trust the child key. The initial model was: let's just publish the key at the apex of the child zone, and the parent can look it up there.
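The parent-side pipeline Johan describes (verify the signature first, then run the same policy checks a CDS/CSYNC scanner would) can be sketched as follows. This is an illustration only; the `Update` record, the `signature_valid` flag, and the policy-check functions are hypothetical names, not the draft's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Update:
    """Hypothetical model of a child-pushed change, for illustration."""
    zone: str
    rrtype: str            # e.g. "NS" or "DS"
    rdata: list            # list of rdata strings
    signature_valid: bool  # outcome of verifying the signed dynamic update

def parent_accepts(update, policy_checks):
    """Signature verification first; then exactly the same policy checks
    that would apply in the pull (scanner) model."""
    if not update.signature_valid:
        return False
    return all(check(update) for check in policy_checks)

# One example policy check, reused unchanged from the pull model:
def no_empty_ns_set(update):
    return not (update.rrtype == "NS" and len(update.rdata) == 0)

ok = parent_accepts(
    Update("child.example.", "NS", ["ns1.provider.example."], True),
    [no_empty_ns_set],
)
print(ok)  # True
```

The point of the sketch is that the push model only changes how the update arrives; the acceptance policy itself is the same code path in both models.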
And if the zone happens to be DNSSEC signed, we're done; everything is fine. If the child zone is not DNSSEC signed, we're not fine, because then that published key, which is not signed, could presumably be spoofed. And then we're essentially in the same space as with CDS bootstrapping, where RFC 8078, I think, suggests that we query multiple times from multiple vantage points, etc., and eventually we gain trust in this key. So we can do the same thing in the unsigned case. However, there is also another model, as advocated by RFC 9615, which is: let's publish the key, as in the RFC 9615 CDS bootstrapping model, under a name which is in a signed zone. And presumably, in most cases, provider zones are signed. So if we publish the key somewhere under the name server name, that would be in a signed zone, and suddenly we could do DNSSEC verification immediately and we're done. As soon as we can find the key in a signed zone and DNSSEC verification works, the initially hard problem becomes an easy problem. So we support both, and obviously we can also do various manual things. In the end, the parent informs the child which of these mechanisms for bootstrapping the key it supports, and it does that by publishing an SVCB record at the target of the DSYNC record. This is the same mechanism that we use for generalized notifications: we tell the consumer of the information in the DSYNC what mechanisms the parent that publishes the DSYNC supports. And in this case, we're proposing a new SVCB parameter called "bootstrap" that is used to announce which bootstrapping mechanisms are supported. Another change: just as the information from the child to the parent is crucial and important and has to be signed and verified, the information back from the parent to the child is in some cases also crucial and important and ought to be signed.
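The two key-publication models can be illustrated by constructing the candidate lookup names. The `_dsboot`/`_signal` labels below are borrowed from RFC 9615's CDS bootstrapping naming; whether this draft reuses exactly those labels is an assumption here, so treat the second function as a sketch of the pattern, not the specified names.

```python
def apex_key_name(child_zone):
    # Model 1: key published at the child apex. If the child zone is
    # unsigned, trust has to be built by repeated probing from
    # multiple vantage points.
    return child_zone

def signal_key_name(child_zone, ns_name):
    # Model 2: key published under the (usually signed) provider zone,
    # following the RFC 9615-style signaling-name pattern. The exact
    # "_dsboot" / "_signal" labels are an assumption for illustration.
    child = child_zone.rstrip(".")
    return f"_dsboot.{child}._signal.{ns_name}"

print(signal_key_name("child.example.", "ns1.provider.example."))
# _dsboot.child.example._signal.ns1.provider.example.
```

The attraction of model 2 is visible in the name itself: the lookup lands inside the provider's zone, so if that zone is signed, the key can be validated with ordinary DNSSEC immediately.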
So we've switched to a model where we also publish a key for the update-recipient end, so that the update recipient is able to sign the responses. That's typically an easier problem, because the parent zone is in most cases signed, so we can just publish the key, the child will verify it, and everything is good. It's an easier problem than in the child-to-parent direction. But even given that, one of the crucial problems with this kind of system is this: if I change the parent-side delegation information very infrequently (for an unsigned zone, perhaps I only change my name servers once every two years; it's not a frequent operation), what if it doesn't work? How will I discover when it stops working? It worked two years ago, and now suddenly it doesn't; something has broken in the meantime. That is one of the reasons we have the EDNS0 key-state option, which lets the child and the parent communicate. We have several operations here, but one of them is that the child can inquire what the state of the key is from the parent's point of view. So the child is able to say, "This is the key I'm using; what is its state? Is this a key that you still trust, so everything is fine, or has something broken?" And I can do this before I have an urgent need to actually make a change; I can do it whenever I want. I can just ask the parent, and the parent will say something like, "Yes, this key is trusted; everything is good. If you send a dynamic update, it will work." Or the parent can say, "No," as in: the algorithm is not accepted, or I'm in the process of bootstrapping trust for this key, or whatever has happened. And then the child obviously has the opportunity to re-bootstrap.
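The key-state exchange just described can be modeled as a tiny state machine. The state names and method names below are illustrative inventions, not the wire format or the draft's terminology.

```python
from enum import Enum

class KeyState(Enum):
    UNKNOWN = "unknown"            # "Sorry, I don't know that key."
    BOOTSTRAPPING = "bootstrapping"  # trust is being established
    TRUSTED = "trusted"            # "Send a dynamic update; it will work."

class Parent:
    """Toy model of the parent's trust store and the key-state inquiry."""
    def __init__(self):
        self.states = {}

    def key_state(self, key_id):
        # The child may ask this at any time, long before it has an
        # urgent need to change the delegation.
        return self.states.get(key_id, KeyState.UNKNOWN)

    def receive_self_signed_update(self, key_id):
        # Bootstrapping starting point: triggers the DNS/DNSSEC checks.
        self.states[key_id] = KeyState.BOOTSTRAPPING

    def complete_bootstrap(self, key_id, checks_passed):
        if checks_passed:
            self.states[key_id] = KeyState.TRUSTED

parent = Parent()
print(parent.key_state("key1"))        # KeyState.UNKNOWN
parent.receive_self_signed_update("key1")
parent.complete_bootstrap("key1", checks_passed=True)
print(parent.key_state("key1"))        # KeyState.TRUSTED
```

The value of the inquiry operation is exactly what Johan emphasizes: the child can poll `key_state` periodically and re-bootstrap long before a real delegation change is needed.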
So this is a parallel process that you can run separately from actually changing the delegation information, to make sure that when you do have to make changes, it works. You don't want to discover after two years that this was a good theory, but in practice it doesn't work when you need it. So this is more or less the complete flow: the child publishes the key in various places. It sends a query to the parent asking whether the key is trusted or not, what its state is, and gets the response back: "Sorry, I don't know that key." It sends a self-signed update to the parent, which is the bootstrapping starting point. The parent does the verifications: over DNS, DNSSEC verification, whatever. It promotes the key to trusted, and now the child knows that the key is trusted, and we can send updates for whatever we want to update, be it NS records or DS records or something else. The prototype implements essentially everything in the draft, and it works. We have done a lot of work on it recently, again to keep the draft and the implementation in sync. That said, there are always more things to do, and there are a couple of things in our backlog that we intend to fix as soon as possible, probably before summer. And that brings us to the draft. I think it is close to being technically complete. We've hashed it around several times; we have an implementation, the implementation works, and we've found a bunch of issues that needed refinement. We've spent a lot of work on the key bootstrapping, and we think we now have a really good solution for that. So it's technically quite done. But that said, I'm not sure it's done from an editorial point of view. There have been lots of changes recently, and changes to a draft obviously introduce new problems.
So we need to work on that: improve the readability, make the specification part more precise to make it easier to implement, etc. Obviously we absolutely welcome all kinds of feedback and input on how we can improve the draft. Any questions? Peter. **Peter Thomassen:** Peter Thomassen. On slide 5, you said this is where the child authenticates the parent; maybe we can go to slide 5. You said the parent is usually signed, and that's why the child can just query the key record and verify it. Now, this procedure is supposed to work with unsigned children, you said. **Johan Stenstam:** Yes. **Peter Thomassen:** And then you might want to have sub-delegations, so the parent actually might not be signed. **Johan Stenstam:** That's true. **Peter Thomassen:** So what do you suggest in that case? **Johan Stenstam:** There are essentially two methods, and they are the same two methods we have in the child case. You can do it RFC 8078 style and poke at it from different vantage points, or you can have the parent also publish under the underscore-signal magic name that RFC 9615 proposes. So the problem is essentially the same. The only difference is that the probability of the parent being signed is higher than the probability of the child being signed; otherwise, it's the same problem. **Peter Thomassen:** Okay. Then maybe there should be one joint description that applies to both the parent and the child. **Johan Stenstam:** I agree. That's a good point. **Peter Thomassen:** And then of course you only use the name-server-based signaling. **Johan Stenstam:** Yeah, I agree with that. That's a good point. More questions? No questions remote. Okay. So we ask the working group, of course, to read the latest revision and give comments on the mailing list. Thank you. **Willem Toorop:** Yes. So this work was presented exactly one year ago by Kazunori Fujiwara, the co-author of this draft.
And there was a fair bit of feedback on the DNSOP list as well, but we let it slip a bit, so we're trying to restart it now. The idea of this draft is to, I think we can still say, obsolete Section 5.4.1, Ranking Data, of RFC 2181 and replace it with directives whereby the source of the data determines for what purpose it may be used. So what's the problem with Section 5.4.1 of RFC 2181? It sort of assumes a name server where everything is mixed together in a single database, with just one entry for every name, where the data associated with that name is replaced depending on the source of that data. It also assumes, for the authoritative name server function, for example, that all data from zone files and zone transfers is merged together, and this is no longer current practice. On the right side is one of the scenarios supported by RFC 2181, where everything is in a single database. Also, the list only specifies priority of data, not validity; it doesn't say that certain data in a packet should be ignored. It assumes that all resource records in responses, wherever they come from, are stored in the cache, just with different priorities, or replacing existing entries if the source indicates higher priority. This, too, is no longer current practice: unnecessary data should actually be discarded, and packets should be laundered if they are the result of a resolution process. So we propose to instead replace that section with a list of directives. Authoritative servers must not merge zone data: for example, when returning referrals, the glue must come from the same zone and not from other zones that happen to be served by the same authoritative name server. Name resolution results (answers, NXDOMAIN, no-data) must be authoritative responses with data from authoritative servers that have authority through delegation.
Non-authoritative responses, like referral responses from authoritative servers, must only be used to query the delegated authoritative server during name resolution. We still have some work to do on the wording here, but for name resolution, the non-authoritative glue can only be used to get to the authoritative server, so to say, while towards the stub, only the authoritative data must be returned. Name servers and IP addresses of authoritative name servers for zones that are built in or loaded from hint files must only be used for priming. And this is new in version 1: there are name servers with multiple functions, which act as an authoritative server, recursive resolver, and/or forwarder depending on the namespace to which the query belongs, the server IP address or the query's source IP address in split-horizon configurations, or the recursion-desired bit (is it addressing the authoritative function or the recursive resolver function?). For those name servers with multiple functions, the data handled by each function must be completely separate and may not be mixed. We think some additional considerations should be added to the draft. Recursive resolvers should only accept the following data from authoritative servers: from referral responses, only the NS and DS RRsets in the authority section and only related glue in the additional section. From NXDOMAIN and no-data responses, only the SOA RRs and the NSEC and NSEC3 records (we need to add that too) in the authority section. In the answer section, only data that matches the query, plus the signatures. And data from the additional section only if it relates to the queried-for name. Anything else should not be accepted, or actually should be discarded.
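The acceptance rules Willem lists could be sketched as a simple lookup table in a resolver. This is an illustration of the proposed directives, not text from the draft; the "relates to the queried-for name" test is crudely approximated by an owner-name match, and real glue-relatedness checks would be more involved.

```python
# Which RR types a recursive resolver should accept, keyed by
# (response kind, message section). Everything else is discarded.
ALLOWED = {
    ("referral", "authority"): {"NS", "DS", "RRSIG"},
    ("referral", "additional"): {"A", "AAAA"},  # related glue only
    ("nxdomain", "authority"): {"SOA", "NSEC", "NSEC3", "RRSIG"},
    ("nodata", "authority"): {"SOA", "NSEC", "NSEC3", "RRSIG"},
}

def accept(kind, section, owner, rrtype, qname):
    """Sketch: should this record be accepted from an authoritative
    response, per the proposed ranking-data directives?"""
    if kind == "answer" and section in ("answer", "additional"):
        # Answer data must match the query; additional data must relate
        # to the queried-for name (approximated here as an exact match).
        return owner == qname
    return rrtype in ALLOWED.get((kind, section), set())

print(accept("referral", "authority", "example.", "NS", "www.example."))  # True
print(accept("referral", "authority", "example.", "A", "www.example."))   # False
```

Anything that falls outside the table returns `False`, which models the "discard everything else" directive.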
The draft still needs discussion, I guess, and we currently have this as an additional consideration: the additional section returned as the result of name resolution must be exactly the same as the additional section that came from the authoritative response from the authoritative server, or from a separate authoritative response resulting from name resolution. But I guess what should also be added here is that non-valid data is discarded. So that's it. I would really like to restart this work, and I'm open to feedback now. **Jim Reid:** Jim Reid. Willem, this is great work. I think you and Fujiwara deserve a great deal of credit for getting this started. The whole issue of ranking DNS data is something that's been left dangling for a long, long time, so more power to your elbow. I think this is a great idea. One concern I have with the existing draft, though, is that we seem to be mainly focusing on what authoritative servers are supposed to do when they're handing out data. I think we need stronger guidelines on what recursive resolvers are going to do with the answers they get when it comes to ranking the data. There is some text in there, but my impression from reading the document is that there's more focus on the authoritative side than on the recursive side, and perhaps that can be fixed in the next release. **Willem Toorop:** Yes. Okay. **Andre Lomonosov:** Andre, with my ISC implementer's hat on. I think this is useful, and I would be willing to provide feedback, but my question is: with DELEG being around the corner, would it make sense to wait a little bit and incorporate DELEG into this as well? Because to handle DELEG, we would basically have to update the document. **Willem Toorop:** Yes, I think that would be possible as well; it could fit in the additional considerations, for example. **Andre Lomonosov:** Thanks a lot for doing this, and I would certainly be willing to support it and do reviews.
There's one thing, which I think we've discussed already, that I have a problem with, and it's this one: resolvers usually answer from cache, and they don't store any additional data in the cache. So I don't think the "must" there is something you could get resolvers to agree on in the real world. **Willem Toorop:** Yes, yes. Let's see how we can formulate this so it works, so that additional data that should not be there will not be returned to the stub resolver. Okay. Thanks. **Jim Reid:** I just want to pick up on what Andre mentioned a few moments ago. My preference would be to get this document out the door as soon as we possibly can and not wait for DELEG to be finished. Once DELEG is finished, we can perhaps have another document that describes the criteria that apply to DELEG-style delegations. I think we need to get this work progressed as quickly as we can, which is easy for me to say, and I know it's going to make work for Andre and his colleagues, but hey, that's the nature of the business you're in. Says me; talk is cheap. **Benoit Andrew:** Okay. Thank you all for your comments and your feedback. I do hear positive feedback, and I've also seen positive feedback on the mailing list yesterday. So we will schedule your request for adoption, but first I have to ask Peter; he always keeps track of all the drafts that are to be scheduled for a Working Group Call for Adoption. Yeah. Thank you. Next up. **Mingxuan Liu:** Hello everyone, I'm Mingxuan from Zhongguancun Laboratory. Today I'll present our draft, Considerations for Protective DNS Server Operators. First, we want to illustrate why we need protective DNS. Sending DNS requests for domain names is the start of navigating the internet, and unfortunately, domain names are also frequently abused for various malicious activities, such as malware communication.
Therefore, blocking DNS resolution of such domain names can effectively curb cyberattacks. To block resolution of these domain names, protective DNS (PDNS) was proposed. It is deployed on recursive resolvers. When a PDNS server receives a query for a domain name that appears on its blocklist, it prevents access by rewriting the DNS response to return a safe result, such as resolving the domain name to a reserved IP address. Given these lightweight defense benefits, although PDNS has not been around for long, it has already gained support from dozens of well-known DNS vendors, such as Cloudflare. In addition, countries including the US and Canada, as well as Europe, are releasing initiatives for deploying national PDNS infrastructure. Despite the increasing demand for PDNS, due to a lack of guidance there are significant discrepancies in the current PDNS ecosystem, and even some potential security risks. Therefore, our draft aims to provide specific operational and security considerations for protective DNS providers to make their services more usable and secure. In our draft, we present operational considerations from four aspects. First is blocklist selection: vendors should define the types of domain names to be blocked based on their own intended use cases, and they should verify the correctness of these domains to avoid the impact of false positives. They should also select an appropriate blocklist source and deployment approach based on their own operational context, such as device resource constraints and network access patterns. Then there is rewriting policy construction: based on their selected blocklist, they should choose an appropriate rewriting approach according to their application requirements, such as using secure IP addresses under their control or just returning an empty answer section. They should also consider the impact of the rewritten records' TTL configuration on caching.
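The two rewriting approaches just mentioned (answer with an operator-controlled address, or return an empty answer section) can be sketched as a front end to an ordinary resolver. The blocklist entry, the sinkhole address, and the TTL below are illustrative values chosen for the sketch, not recommendations from the draft.

```python
BLOCKLIST = {"malware.example."}  # illustrative single-entry blocklist
SINKHOLE_IP = "192.0.2.1"        # documentation address (RFC 5737), stands in
                                 # for an operator-controlled "secure" IP
REWRITE_TTL = 60                 # short TTL limits how long a block is cached

def pdns_resolve(qname, upstream):
    """Sketch of a PDNS front end: rewrite responses for blocked names,
    otherwise pass the query through to a plain resolver function."""
    if qname.lower() in BLOCKLIST:
        # Policy A: point the blocked name at a controlled address.
        return {"rcode": "NOERROR",
                "answer": [(qname, "A", REWRITE_TTL, SINKHOLE_IP)]}
        # Policy B (alternative): return {"rcode": "NOERROR", "answer": []}
    return upstream(qname)

passthrough = lambda q: {"rcode": "NOERROR", "answer": [(q, "A", 300, "198.51.100.7")]}
print(pdns_resolve("malware.example.", passthrough)["answer"][0][3])  # 192.0.2.1
print(pdns_resolve("clean.example.", passthrough)["answer"][0][3])    # 198.51.100.7
```

The TTL choice shown is the trade-off the presentation raises: a short TTL on rewritten records keeps false positives recoverable, at the cost of more repeat queries.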
They should also consider the performance impact of PDNS, because matching against the PDNS blocklist adds some extra time. And they should offer an explanation of the blocking actions caused by PDNS, because a PDNS action is a total black box for the end users. More importantly, the deployment and implementation of PDNS needs to avoid potential security risks in five areas. First, to address shortcomings in the rewriting policy flows, redundant RDATA and missing RCODE types should be avoided to prevent bypassing, and the rewriting policy coverage should always be maintained, for example for encrypted DNS and IPv6. Second, if PDNS operators use third-party network resources for the rewriting policy, such as cloud resources, they need to regularly check the effectiveness of those resources to avoid dangling-resource risks. Then, excessive blocking should be approached with caution: on the one hand, when building the blocklist, avoid over-generalized target domains, such as plain keywords or wildcard domains; on the other hand, consider that aggressive blocking could lead to a denial-of-response threat. Finally, they should consider the interaction with data integrity protection, such as DNSSEC compatibility, because PDNS changes the recursive results, and they should prepare fallback mechanisms for any possible fault during PDNS operation. Our draft has now been updated from version 0 to version 1 based on recommendations from the DNS community, and we still highly welcome feedback from the DNS community and PDNS vendors. Thanks for listening. Any questions? **Andre Lomonosov:** Hi, this is Andre, ISC, with my chair's hat on. Before I say what I have to say: it's up to the working group whether we take this work.
But I think this work is not a good fit for an RFC, because the landscape changes a lot, and very quickly, and by the time DNSOP finishes with this and it goes through all the processes, the DNS landscape will be different. So I would suggest what Peter was saying a couple of meetings back: there's now a DNS-OARC best practices initiative that might be better suited for documents that change a lot, because DNSOP is not known for fast work. But it's up to the working group to decide whether enough people want to work on this. My opinion is that this is a complex document, and I don't see a reason why it should be an RFC. **Mingxuan Liu:** Okay, thank you. But I want to add some comments. We have already done measurements across the whole network for PDNS operators, and we find that there are more and more PDNS providers in the network, but there are operational flaws and even some security risks that we have already found. So I think it can be a draft that recommends to PDNS vendors how to deploy a PDNS service. Thank you. **Stéphane Bortzmeyer:** Stéphane Bortzmeyer. I agree that it's not a good idea to have such a document in DNSOP, because many things here are political issues: protective DNS can also be used for other things such as censorship, which raises a lot of issues. There are technical questions about the DNS which are in the DNSOP domain, but there are also things that are completely out of scope, such as management of the blocklist. I don't think the IETF has anything to say about this. Also, the current tone of the draft is very much like an advertisement, simply calling this "Protective DNS," while the official DNS terminology RFC talks about a "policy resolver," if I'm correct. "Protective DNS" seems like an endorsement of this practice, which is very questionable. So I agree it's not a good idea for DNSOP today to work on this document.
What could be interesting for DNSOP would be a document limited to the DNS technical aspects of such lying resolvers. **Mingxuan Liu:** Thank you. I don't know if I followed all the content, but I think that our draft gives some technical recommendations, am I right? Just due to the time limitation, we didn't present all of the draft's content. If you have questions, we could communicate offline. Okay, thanks. **Ben Schwartz:** Hi. Yeah, I don't think we should move forward in this case, because I don't think that there's any real possibility of getting IETF consensus for a document like this. Any RFC, any document that we adopt in the working group for publication, ultimately needs to reach consensus not only within the working group but from the entire IETF community. And I really don't think there's any likelihood of that with this material. I also want to point out that the draft as currently specified conflicts with a number of other existing DNS specifications. It conflicts with the Extended DNS Errors draft, it conflicts with DNSSEC, it conflicts with the guidance for resolver operations generally. So I think any version of this that we could adopt would have to be pretty substantially changed so that it no longer, for example, recommends synthesizing wrong answers in the RDATA for A queries. I agree that there are venues out there where advice of this kind would be appropriate, but I don't think that the IETF Standards Track is the right track. And notably, the current document describes itself as Standards Track. **Benoit Andrew:** Thank you. Thank you, Ben. And yeah, you might also send an email to the mailing list or directly to Mingxuan. So for the others, for Vittorio and Gianpaolo: can you just state yes or no, because we have nearly no time. Do you support the document or not? Very briefly. **Vittorio Bertola:** I would need to say more than this, sorry.
I just wanted to thank the presenters for bringing this document. I think it's not very kind of us to meet them by just saying go away, since they are newcomers to the IETF. So I agree this is not fit for the Standards Track of the IETF, but I'm happy to work with you to make it either an informational RFC or to take it somewhere else, to another venue. Thank you. **Benoit Andrew:** Thank you, Vittorio. **Gianpaolo Fasoli:** So very quickly, it cannot be a yes or no, but I think the part about adding transparency to the protective DNS is very interesting: specifying that a protective DNS solution has to show the reason for the blocking, so that the customer can protest. Thank you. **Benoit Andrew:** Thank you. Okay, then we have the last presentation. Thank you. **Bashan Zhou:** So thanks for the opportunity. I'm Bashan Zhou from CNNIC. I'm presenting this on behalf of Peng, and this draft is about avoiding large wildcard records. The problem is fairly simple and there's no camel involved. DNS does not have explicit size limits for TXT records, so operators can publish large TXT records under wildcard names. Because of wildcard names, queries can bypass the resolver cache and keep triggering large responses, and that can lead to high bandwidth costs and raises operational concerns. So the goal of this draft is not to define strict limits for TXT records, but to provide operational guidance on how limits should be applied, specifically for wildcard owner names. Here is a simple example. We put a set of TXT records under a wildcard name, we send a small query, and we get 34 answers back; the message size is hitting the limit, and that's pretty much the most that you can get from one query. The whole thing creates a huge amplification factor. It's quite easy to generate such a query, and those queries can be sent through public resolvers or from compromised hosts.
Or they can be triggered from a web page using JavaScript or DoH. So if a DNS server keeps receiving queries that require large responses, there will be a lot of TCP connection overhead, and the bandwidth cost can become quite large. Resolvers may also consume a lot of memory if those large responses are cached with long TTLs. All of this can result in slower response times in shared hosting environments, and that's really bad for DNS hosting providers. And what do they do in practice? Well, we bought a few domain names from different providers and tried out their services, and it turns out that most of them don't have limits on TXT records, while most of them do have limits on A and AAAA records. I didn't list all of them here. We reached out to some of them about this risk, and some of them actually made changes, such as setting limits, returning a smaller response, or applying rate limiting for queries over TCP. Others don't seem to care, or they don't see it as a problem. At the same time, we tried to propose something that is concrete, or more deterministic. There are two questions for TXT records, and probably for other record types: what is the minimum size requirement, and how large is too large? We don't know, and probably the providers don't know either, even though they have already set a value there. So in this draft, we start by providing operational guidance rather than setting strict limits, so that DNS hosting providers have something to follow in the first place, and the exact limits can then depend on the situation. We suggest that DNS hosting providers should avoid large TXT records under wildcard names, or more generally, they should avoid large records under wildcard names.
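As a rough illustration of the amplification described in this presentation, here is a back-of-the-envelope sketch. All byte sizes are illustrative assumptions, not figures from the draft; only the 34-answer count comes from the example above.

```python
# Back-of-the-envelope estimate of the wildcard TXT amplification factor.
# All byte sizes below are assumptions for illustration; only the answer
# count (34) comes from the example in the presentation.

HEADER_SIZE = 12         # fixed DNS message header
QUERY_SIZE = 40          # assumed: short TXT query for <random-label>.example
ANSWER_COUNT = 34        # answers returned in the presenter's example
TXT_RDATA_SIZE = 256     # assumed: one 255-byte character-string + length byte
PER_RR_OVERHEAD = 14     # assumed: compressed name, type, class, TTL, RDLENGTH

response_size = HEADER_SIZE + ANSWER_COUNT * (PER_RR_OVERHEAD + TXT_RDATA_SIZE)
amplification = response_size / QUERY_SIZE

print(f"response ~= {response_size} bytes")   # roughly 9 KB per small query
print(f"amplification ~= {amplification:.0f}x")
```

With responses near 9 KB per small query, even a modest query rate through open resolvers can turn into substantial outbound bandwidth at the hosting provider, which is the operational concern the draft raises.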
And this can be done either by refusing to accept oversized records from their users, or by returning smaller responses if they don't want to give their users explicit limits on any records. Next, I just want to briefly mention that there are some established practices and discussions that all try to reduce response size. This draft follows a similar idea, but we focus more on wildcard owner names. So what do you think of this idea? Is it a good proposal or a bad one? Please let us know on the mailing list. Thanks. **Benoit Andrew:** Thank you. Okay. Gianpaolo, your hand's still raised—was this for this session? Okay. **Andre Lomonosov:** So I'm the guy that says no. This is a question for the working group, but is this idea—limiting TXT records—worth an RFC by itself? There's Fujiwara-san's draft on limits in DNS; this might just be folded into a paragraph in that draft, and that might be a better use of our time. **Warren Kumari:** Warren Kumari. Yeah, I mean, I guess it could fold in somewhere else. But it also feels like this is DNS operations, and so we should be able to do really short one-page RFCs along the lines of, "Here's some bad ideas, here's ways to make them less bad." Ideally, it should be easy to do things fast. Not saying it is, but... **Johan Stenstam:** Johan Stenstam. Agreeing with Warren. I think we really must have a mechanism to deal with simple things fast. Adopting something should not necessarily be a three-year process. **Warren Kumari:** I did. You're giving a bad example—no, oh yeah, you did. Sorry, sorry. So, because we don't seem to be able to do things short and quickly, there's talk about starting up competing series in other venues, right? Like DNS-OARC and stuff. I think that's a failure on our part. **Benoit Andrew:** No, thank you. And it's good that you mention that.
Some documents should proceed fast, and we also discussed that with our AD and between ourselves. So that's one of the things we are really on top of: distinguishing drafts that can be pushed forward fast—I shouldn't say simple—from drafts that need more attention and discussion in the working group, and finding that balance. **Andre Lomonosov:** But that depends on the working group members, not on the chairs. So if you want the drafts to go fast, there need to be people who review the drafts, provide comments, and push things forward. We can only do so much; we are not kindergarten teachers. We are the DNSOP working group chairs, and it's up to the working group to process the drafts fast. **Benoit Andrew:** Okay, thank you. So back to your draft. We did hear some positive feedback and some considerations, so I think it's good to continue the discussion on the mailing list. I don't see people here in the queue. Thank you for your presentation, and I also ask for comments and feedback on the draft via the mailing list. From there we will continue. Thank you. We're almost there. I want to give two announcements. Tomorrow morning at 8 o'clock, there's the Post-Quantum DNSSEC side meeting; Peter Thomassen is chairing that. This afternoon at 2 o'clock, there will be the Ops Area meeting on DNS at the IETF, where Wes Hardaker will give a presentation. He did a community consultation about the DNS work in the IETF, and specifically also about the workload in DNSOP, how to manage that, and maybe splitting it, etc. That will be at 2 o'clock at the Ops Area meeting. Anything else? I look at Matt. Anything else? No. Then can we close, Andre? We close this session. Thank you. Thanks for your attention. And see you on Thursday at DNSOP session 2. --- **Session Date/Time:** 19 Mar 2026 08:30 **André Surikov:** This is DNSOP, Session 2. We will start in a minute. **André Surikov:** All right.
Welcome to Session 2 of the DNS Operations Working Group. I'm André, this is Benno. We have Peter, our secretary, conveniently standing. Shumon is our second secretary, who is not here. **André Surikov:** Okay, sorry. I heard an echo, even this far away. Matt is our Area Director, and Jim, who is also standing right now and coming with a coffee, is our technical advisor. Paul will be doing minutes again. Thank you very much for doing this. **André Surikov:** So, this is the Note Well. It is a reminder about the processes and policies—including those about conduct, privacy, and intellectual property rights—that you agree to follow when you participate in the IETF. Please read it carefully. Behave respectfully. We take this seriously. You are encouraged to read the source documents to which the Note Well refers, and if you have any questions, please talk to the working group chairs or area directors. And I should give you enough time to read this. **André Surikov:** All right. So, meeting tips. Make sure you sign into the session via Datatracker or via the QR code in the session. Use Meetecho, usually Meetecho Lite, to join the queue and show hands, and keep audio and video off if you are not presenting. For remote participants, make sure your audio and video are off unless you are presenting, and use of a headset is strongly recommended. Please disable your Wi-Fi hotspots if you have them enabled; this has been a problem at this IETF. And state your name each time you begin speaking in the queue, even for repeated appearances in the queue. **André Surikov:** So, today's agenda. First, we will talk about the Root Cache and Local Root drafts. The discussion will be after these two presentations. We will be locking the queue for the first presentation, and we will have the discussion after Wes is done with his presentation. Then there are some drafts for consideration.
Well, these are also for consideration, because they are not working group business yet. Two other drafts for consideration: DNS Filtering Transparency and Ordering of RRsets in DNS Message Sections. We also have two work items from DNS Dispatch: the Dynamic DNS Update Protocol and AI Discovery using DNS. And time permitting, there's a draft on DNS Extensions to Energy Efficiency as a Service. **André Surikov:** We also had a hackathon. I worked on parent-centric delegations in BIND. This is ongoing work, so I just helped the other people from my team. And Willem worked on ETag support for fetching the root zone over HTTP. Both resulted in merge requests. Johan and Stefan were also at the table, but I didn't get an update from them. Without further ado, we can continue with the first presentation. **Paul Hoffman:** Hi, my name is Paul Hoffman. I'm given 15 minutes on the agenda. I'm probably going to take less, because this is actually the two presentations together. If you look at the slides, you'll see my last slide is a little bit provocative, which normally would cause people to run up to the mic line. And as André said, we're not doing that, because the provocative questions actually apply to both. So I'm going to go through this, Wes will go through his, and then I'll probably come up for the Q&A. **Paul Hoffman:** So this is an overview of a document that I've published called Root Cache. You've seen lots of discussion on the mailing list so far about Local Root. This is similar but different. We always love crap like that in the IETF, especially in the DNS. So I'm just going to cover this, and then the discussion will follow later. **Paul Hoffman:** So, what Root Cache is, is a possible successor to 8806. Both Root Cache and Local Root are meant to be updates or replacements of RFC 8806. This is a brief description of RFC 8806, which says you get a copy of the root zone and then you serve it locally.
What Root Cache is proposing instead is you get a copy of the root zone and you just stuff it into your cache. By the way, this is all for resolvers; it has nothing to do with authoritative servers. So the idea being that instead of you serving it as if you were authoritative, which is what 8806 has, Root Cache says just stick it in your cache, treat it as if you had made a zillion queries and you filled up the cache. So that's the main difference between what my document has as a proposal and RFC 8806. **Paul Hoffman:** So here are the major differences. There's a bunch of minor ones, but the major one is that with 8806, there are many reasons why you might not get a good copy of the root zone. You sort of went crazy and you're asking in the wrong place, someone's DDoS-ing you, whatever. After a while, if you end up having, you know, a clearly old copy of the root zone, you need to stop acting as if you're authoritative for it. So that's the way 8806 has it; it's got some good words in it about that. With Root Cache, if you have the same failure, it doesn't matter. You just aren't stuffing new things into your cache; you keep going. So the main reason that I'm proposing this at all is that the failure cases for Root Cache take less engineering to do correctly relative to 8806. And again, it's just because you've got failures, whatever: you haven't gotten a good one in a while, your HTTP client went crazy, whatever, you just aren't getting anything new. You aren't getting any of the benefits, but there are no other changes to the protocol, whereas 8806 makes you have something that says, if you're about to be serving something you really shouldn't be serving, you have to turn that off and then go back to normal service.
Both proposals require that when you get the root zone, you actually also have to verify it with ZoneMD, and you have to do DNSSEC validation on any of the signed parts. And in both of them, you know, the web exists, HTTPS exists, we're just starting to notice that now. So it is very likely—even though it was only a little bit mentioned in 8806 that you might do this—that with all these CDNs out there, you should just be pulling it from something where no matter how much you hit it, they don't notice, because the root zone is smaller than many advertisements. **Paul Hoffman:** So the next two slides are differences between what I'm proposing here and what Wes will be proposing in a moment, which—again, I'm just going to say this—Local Root allows a resolver to run as authoritative for the root, Root Cache is for cache filling only. Now, to be clear, the current version of Local Root says when you get the root zone, you can either do it 8806-like or you can do cache stuffing like what I'm proposing. So Local Root proposes that you can do either; Root Cache is just the latter. And again, the reason I'm proposing that is it feels operationally safer to me in failure cases. **Paul Hoffman:** Local Root is creating a new IANA registry for where you might get the root, and my Root Cache thing is: you know what? We all get all of our configuration from our implementers anyway, we just trust them. No extra registry, nothing like that, it'll just come. And if you want more, you know, someone will have a GitHub repo listing all the places, whatever. The Root Cache proposal is much more informal about it. Local Root has this thing where IANA has to make a registry and such like that. **Paul Hoffman:** Local Root reduces the role of root server operators as critical infrastructure. And it says it in many places.
Warren and I just had a very good discussion about it 20 minutes ago and such like that. So the motivations behind Local Root are different than the motivations behind Root Cache. Root Cache doesn't do anything about reducing the role of the current root server operators as critical infrastructure, because it assumes that there might be mistakes and you might be falling over to it anyway. Also, Local Root has as a motivation, you know, longer-than-desired round-trip times to the closest DNS resolver, which is stuff that we said in 8806, and—I know this might surprise a lot of people, but I now am of the belief that our emphasis on that in 7706 and 8806 was wrong. Like it was a lot of hand-waving. We've got absolutely no measurements to indicate that the lower time would affect anybody. So I've taken that out of Root Cache as a motivation. And Local Root still has it as a motivation, partially because of that first bullet as well: you don't want to be relying on critical infrastructure for things that you know have certain round-trip times. **Paul Hoffman:** And so this is my last slide, and this is the one where I think it might get people excited and want to run to the mic. Don't run to the mic yet, Wes gets to do his. So really, how much—and this was brought up on the mailing list, I think actually, Florian, I don't remember if it was you or if it was Philip, but somebody said why are we bothering with this? This—okay, it wasn't you, okay. You know, why are we bothering with this? The amount of extra privacy you get is so small, and DNSOP has plenty else to be doing. Why do this? And Jim, can you take yourself out of the queue? Thank you. And then there's also the question of: if we do one of the two, what should it be? The Local Root folks are saying that this should be a Best Current Practice. I'm leaving it as "Nah, it doesn't matter," because we don't have a lot of practice on it.
And if the working group adopts one or the other, the working group gets to say in the document what the motivations are. So I've got a set of motivations in mind, the Local Root folks have a set of motivations in theirs, but at the end of the day, it's the working group that gets to say what the motivations are for the people who are going to read this document five or ten years from now. And therefore it comes down to: is this worth spending time on in the working group? **Paul Hoffman:** Okay, so I'm going to hand you the clicker—I don't know if you need the clicker for Wes, or if he has his own virtual clicker. So I'm going to sit down, Wes will do his, and I'll come back. Okay, thanks. **Wes Hardaker:** Oh, while they're pulling up the slides: hello friends. It's good to see you all. I wish I was there in person, but I'm not. So this is about Local Root, as Paul just talked about. Essentially the way to think about it—I assume you're going to hand me slide control at some point—is that the four documents that I'm going to outline are sort of a superset of Paul's. They're actually very aligned in a bunch of places. I think motivation-wise, which Paul just spoke to, we will get into—thank you—greater discussion about. To me, the motivation doesn't matter. It seems like we're in agreement that something should be done. And to be perfectly honest, as long as one of these documents is published, I'll actually be happy. So, you know, I mean... **Wes Hardaker:** This work, by the way, has a bunch of co-authors: Geoff Huston and Jim Reid, as well as Warren, who was on the original 8806 as well, have all helped contribute a lot of the discussion and ideas that went into these. So why are we doing this, right? As Paul indicated, I put critical-infrastructure-related stuff into these slides.
That doesn't need to be a motivation at all. But to me, it kind of is, because when I was rewriting a lot of this, I came to the question in my head of: what would it take? What would it take so that my local resolver was not dependent on as much critical infrastructure? Paul phrased it as, you know, reducing the critical-infrastructure role of the RSS. I actually view it as the inverse: it's reducing your dependency on the critical infrastructure. It's critical infrastructure for positive answers, certainly; there's always a large debate about negative answers. But in the end, my question was: does it have to be? And, you know, certainly not if everybody has a copy of the root. **Wes Hardaker:** So how do we do this? We define implementation and deployment semantics. One of the big differences that I finally sort of came upon when I was trying to phrase what we were going to do is: it really shouldn't matter how you do it, right? That's an implementation-dependent thing. So the document actually talks about doing essentially what the Root Cache draft that Paul just talked about is already doing. But, you know, there are existing things that do pre-fetch. Some of the earliest Local Root implementations did split-view, which actually would answer authoritatively. We'll get back to the authoritative bit—we're not going into that today. That's little tiny detail semantics that we don't need to worry about. It's more, you know, what do we want to progress with at this point? But we do talk about sort of one pseudo-way of implementing things, just to give guidance for those that want to do something. But we're not spelling out details; we're instead spelling out requirements.
**Wes Hardaker:** So the requirements sort of boil down into the things that you would have to do to implement, you know, the full Local Root concept: you have to identify where to get the root zone from, you have to fetch and re-fetch the root zone on a regular basis once you have a list of where to get it from, and you have to integrate and serve the data. And I broke those down into three separate parts to try and make it logically cleaner. So the first one: where do you get the root zone data from? The options are currently outlined in the document. What I recognize is that the same way that certificate authorities have a wide set of places that people might trust, and the same way that root.hints can sometimes come from the operating system and sometimes come from built-in configuration, we shouldn't try and overly prescribe it, because, as Paul said, software will do the right thing, operating systems may do the right thing, but operators might want to have more control as well. So I basically said: you can get it from the operator configuration, you can get it from built-in compiled or OS-distributed sources, or you can get it from IANA. And so I did kind of come to the conclusion that it would be nice if IANA had a list of everywhere you might want to get it from, with no requirement that you try all of them—you can pick and choose. A starting hint, if you will. **Wes Hardaker:** So when you then have the list of where to fetch it from, you would go through a number of steps to try and fetch it. You try and fetch it. If you get it, great, you use it, right? We'll come back to security in a minute. If you fail, you go down to the next item in the list and try and fetch it. And if you fail, you keep iterating until you try all sources.
And if you try all sources and you can't get a current copy, then you fall down to using regular DNS. So the root server system is still required. It can't go away, but it does become more of an emergency backup for you. And, you know, Paul's document sort of said the same thing: if you fail to get a copy of the zone, you've got to fall back to regular DNS. **Wes Hardaker:** Once you have it, of course, you then have to re-fetch it; you've got to keep it up to date. I used timers that are based on some discussions with other people and, I think, earlier stuff on the mailing list, trying to use timers straight from the existing standards rather than reinventing timers—so use the zone's SOA record and things like that. I won't get into the nitty-gritty here. But the other thing we recommend is to use very efficient checking mechanisms. So you might do an SOA query if you're using a DNS source, or you might do an HTTP HEAD request or caching semantics if you're using an HTTP source. **Wes Hardaker:** And then for integrating the data, as I indicated before, my goal was to allow all the existing implementations to keep doing what they're doing, and new ones can pick and choose, as long as they meet the behavior of being functionally equivalent to how a resolver operates today. We got into a little bit of a TTL discussion on the mailing list. To me, that doesn't matter, right? If you're getting the answers for the data you need and you have a local copy, great. If you don't have a local copy, great. They should appear to be sort of similar. So 8806 is fine, although I think the conclusion from 8806 is that nobody implemented it that way, which is one of the reasons there are new proposals coming out in the first place. But you could do pre-caching, or Root Cache as Paul called his, whatever. I don't actually care too much.
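The fetch-and-fall-back loop described above can be sketched as follows. The source URLs and fetcher functions here are hypothetical stand-ins for illustration, not names or schemes from the drafts:

```python
# Sketch of the source-iteration logic: try each configured root zone
# source in order; if every source fails, return None so the caller
# falls back to ordinary recursive resolution via the root servers.

def fetch_root_zone(sources, fetchers):
    for source in sources:
        scheme = source.split(":", 1)[0]
        fetch = fetchers.get(scheme)
        if fetch is None:
            continue                 # unsupported scheme: skip to the next source
        try:
            zone = fetch(source)     # ZoneMD/DNSSEC verification would follow here
            if zone is not None:
                return zone, source
        except OSError:
            pass                     # fetch failed: try the next source
    return None                      # all sources failed: use regular DNS

# Stub fetchers standing in for HTTPS and zone-transfer transports:
def https_fetch(url):
    raise OSError("connection refused")      # simulate an unreachable CDN

def xfr_fetch(url):
    return ". 86400 IN SOA ..."              # simulate a successful transfer

sources = ["https://zones.example/root.zone", "xfr://ns.example/."]
result = fetch_root_zone(sources, {"https": https_fetch, "xfr": xfr_fetch})
print(result)  # the first source fails, so the second one is used
```

The key property, matching both drafts, is that exhausting the list is not fatal: the resolver simply reverts to querying the root server system as it does today.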
The one thing you probably do want to do is NSEC aggressive caching, because otherwise you're going to leak all the negative answers. So even if you're not a validating resolver—and there's nothing in this that says you have to be a validating resolver—for queries, you should probably at least use the NSEC records to figure out where stuff doesn't exist. **Wes Hardaker:** So then there's a bunch of implementation requirements that are sort of: okay, what would it take for you to behave like a regular resolver would, just with a copy of the data? So it must be functionally indistinguishable. I actually don't think there's a "must" in there that defines it that way; that's sort of semantics. Security-wise, you must have a copy of the IANA trust anchor, you must verify the ZoneMD record, and you must verify the DNSSEC signatures on the ZoneMD record. Nowhere does the document say that from then on out you have to validate every record; it's up to local configuration whether you actually validate everything else. So there's a minimal set of DNSSEC that you'd have to implement to make sure that your data has not been modified before using it. You must ensure freshness—that should not be a big surprise—and then you should fall back to non-local-root-based DNS on any sort of failure. I do give the option of doing a SERVFAIL as well. Again, it's implementation specific, in my opinion. **Wes Hardaker:** There are currently four documents, and the only reason for dividing it into four was allowing the separation of the discussion that we will end up having, as well as allowing them to progress at different rates. I actually don't care if they're four; I just divided it into four because I thought it was a cleaner separation and allowed them to progress independently. Having said that, you know, if they all want to merge back to one because that's what the working group wants, I'm good with that.
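The mandatory verification step Wes lists (check the fetched zone against its digest before using it) can be sketched like this. Note that this is a deliberate simplification: real ZoneMD verification (RFC 8976) hashes the canonically ordered wire-format records, and the ZoneMD record itself must validate under DNSSEC; the example below only shows the digest-comparison shape of the check.

```python
import hashlib

# Simplified stand-in for the ZoneMD check: hash the fetched zone data
# with SHA-384 and compare it to the expected digest. Real ZoneMD
# (RFC 8976) hashes the canonically ordered wire-format RRs, and the
# ZoneMD RR is protected by a DNSSEC signature chained to the IANA
# trust anchor.

def zone_digest_ok(zone_bytes: bytes, expected_hex: str) -> bool:
    return hashlib.sha384(zone_bytes).hexdigest() == expected_hex

zone = b". 86400 IN SOA a.root-servers.net. nstld.verisign-grs.com. ..."
expected = hashlib.sha384(zone).hexdigest()

print(zone_digest_ok(zone, expected))         # unmodified zone: load it
print(zone_digest_ok(zone + b"!", expected))  # tampered zone: discard, try next source
```

A zone that fails this check is treated exactly like a failed fetch: discard it and move on to the next source, or fall back to regular DNS.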
So, a couple of things: two are about IANA registries and one is about an XFR scheme. We'll come back to that in a minute. **Wes Hardaker:** So the two about the IANA registries basically ask IANA to publish a list and start the discussion about what that list format might look like. I put in a straight text list, knowing that we're going to fight about that if we get there. My guess is, you know, JSON or signed JSON or something like that might be what we want to pick in the long run. And obviously we should ask IANA for their opinion. I've had a bunch of discussions with Kim. He's fine with hosting that, and he did say some guidance would be good to hear about what types of things the list should include, which is what the second document is about. And his guidance to me was: make it minimal guidance, but not overly prescriptive. We don't want to make it onerous for decisions to be made. And that has a lot of discussion still to happen, if we go down that route. **Wes Hardaker:** And then finally, the last document is about XFR schemes, because if we have a list of URLs, which I think is the right way to go personally, that includes HTTP-based URLs, then we should also have URLs saying what the target of an AXFR is, for example. There isn't a URL scheme like that, so you can see that the first line under the basic format is an example out of the document, basically saying do XoT for ns.zone.example at the port, slash, and then the zone name—very simple. Again, detailed semantics are left to be handled after adoption. **Wes Hardaker:** Finally, there's been discussion about AXFR versus HTTP. Should we do everything in-protocol or should we do everything out-of-protocol, over HTTPS? In my opinion, that's implementation and operator dependent, as long as it works.
I have personal preferences toward HTTP because of the existing CDNs, and there are already likely a couple of CDNs willing to serve the root zone, based on conversations. Then there are also questions of names versus IPs for bootstrapping. But again, the reality is that different software today supports different things. Why is that a bad thing? Why should they all be forced to use DNS, or why should they all be forced to use HTTP? The IETF of course could mandate one or the other, but to me it's implementation and operator dependent. So in the end, if the client gets the answers they need and there are fewer dependencies on external stuff, then that's a win. That's about it. **Wes Hardaker:** So my question for today—and I'm sure Paul's question for today—is: should we adopt it? Today's presentation is again much, much higher level. I don't think we ought to argue about semantics now, about whether the AA bit should be allowed or not; that's sort of a post-adoption type of thing. The only other things... Oh, so Paul's discussion about reducing your dependency on the RSS—I talked about that already, it's not going away. The lower-latency one is interesting; I actually left that text in from 8806. Paul hinted at status. I think both Paul and I have come to be aligned on Proposed Standard as probably the right way forward, based on conversations. Originally, one of the document names has BCP in it, but the world has already sort of aligned behind Proposed Standard as a better way to go forward. So with that, I think we'll go to questions, and Paul can come back up, or I don't know how the chairs want to handle everything. **André Surikov:** The queue is open. So, Joe. **Joe Abley:** Hi there. Can you hear me? **Wes Hardaker:** Yep. **Joe Abley:** All right. Awesome. So, I was egged on in the chat to say something.
So, I should preface this—I've said this to the authors of various of these proposals: I don't think this does any harm. I think it's fine. It's stuff that people have been doing for a long time. But I would like to see a problem statement for this, because, as was perhaps more obvious in Paul's slides, the problem statements seem to be a bit woolly, and I'm a little unconvinced. I don't think the privacy argument is very strong. I don't think the single-point-of-failure argument is very strong. We're talking about a root server system that's rarely queried and is about the best-provisioned DNS service there has ever been, with 100% uptime. So I'm a bit curious as to why we put a lot of effort into something that, a, people have already been doing for decades, and, b, doesn't seem to have a very clear problem statement. I would like to see those things clearer. But like I say, I'm not objecting to the idea of doing this or the idea of spending time on it; I'm just a bit curious. **Wes Hardaker:** No, that's fair. I will note that you have the pen to one of those documents, and you're welcome to edit it to add your own perspective. The one thing I'll mention about the RSS infrastructure dependency: the RSS has never gone down. Your access to it has probably gone down at times. So there is some difference there, because individual resolvers may lose connectivity while the RSS itself is fine. **Dwayne Wessels:** Yeah, Dwayne Wessels from Verisign. I'm certainly not here to object to Local Root or local cache, because we've had those RFCs for many, many years. But what I'm very, very concerned about is making it the default in resolver software, or making it much more widely deployed, because I think that could lead to preventing evolution of the root zone. Ossification, I think, is the word you're looking for. Ossification, yes. Excuse me.
And I think to the extent that Local Root, or whatever, gets more and more widely deployed, that increases the chance of ossification. It is interesting to do thought experiments on how you might evolve the root zone today if Local Root had already been widely deployed years ago, especially in the context of things like DELEG, right? Because the DELEG draft even says things like: you must ensure that all authoritative name servers are DELEG-aware before you deploy it to the zone, and so on. I think this is, at least as described so far, a one-way street. Once you turn this thing on, it sort of runs amok, and you have no way to rein it back in. It might be worth thinking about some kind of mechanism where you could do that if it became necessary, too. Thanks. **Paul Hoffman:** So when we had dinner the other night, Dwayne, and you said the second one, I realized: oh, that's really interesting, because that means if someone is running 8806, or doing it as a local resolver, they have to act DELEG-aware as well at the point that DELEG comes in. And of course a lot of those won't, and such like that. So I hear the ossification argument. **Wes Hardaker:** So, to the chairs: I think I'll try not to talk most of the time and let the queue run, rather than respond to each one. **Paul Hoffman:** I guess that means I shouldn't either. That's very unnatural for me. **Ralf Weber:** I'm okay with you guys answering. So, Ralf Weber, Akamai. There are already implementations of this, at least as I understand it, and they've been running fine. However, discussing this this week with people who actually run root servers, there are some intricacies that maybe we have not thought about enough. So what I would think is: we should make any or all combination of these documents working group documents and try to work through these. **Ted Hardie:** Ted Hardie.
I appreciate the comments so far. I will say that we have been doing things like this in the world for a very long time. I remember running something similar more than 25 years ago when I was at Equinix, as a very small child. So I think it's always useful to update the advice to the people who are likely doing things like this on how to do it as safely as possible and avoid as many pitfalls as possible. Whether that's a standards-track update to 8806 or a BCP doesn't bother me much. I will say I found the ossification argument a little surprising. But if there is an ossification risk, I think Root Cache, as opposed to Local Root, has better protection against it—inadvertently—because it just says: whatever you get, you shove in your cache the way you normally would. So it's not asking you to behave as if you were a root server; it's asking you to behave as if you know something you know. And I think that's entirely sensible, and it would be surprising if people didn't do something like this already, hint-hint. **Peter Thomassen:** Hi, Peter Thomassen. I think this ossification problem is already visible in the discussion about ETags, expire interpretation, and refresh. And I don't know what happens if in 10 years, for some reason, we'd like some way of talking to the root service with an additional EDNS option for some feature we can't think of today, but unfortunately it won't work if you get it through some CDN. So I think that's something that is unforeseen. And it was said that it's better if clients can get the root information with fewer dependencies. I don't see how that's really fewer dependencies if we add a mechanism that then also needs to be considered in any future evolution.
And then finally, I believe root server people have said a few times on the list that they are not really concerned about traffic, so I think I agree with the people who would like to see a problem statement. **Wes Hardaker:** For the record, I don't think anybody has said this is to solve traffic problems. No, I don't believe anybody—root server operators or either of the authors—has said that. **Jim Reid:** Jim Reid. I agree with the comments Peter has just made, and I think we need to focus on getting some clarity around the use cases, the requirements, and a problem statement. That needs to be worked on, quite clearly. The only other comment I'd like to make here is: if we're going to make a decision at some point in the future between Local Cache and Local Root, I think we need to pick one of them. I don't think it would be a good idea to try to do both. **Mark Nottingham:** Mark Nottingham, doing my best impression of Microsoft Clippy. Hello. This working group appears to be considering adoption of a draft that uses HTTP. Oh, I'm sorry, I'm trying to close the window. Yeah. I didn't say it was just the good parts of Clippy; it could be all the parts. Consider getting an early review from the HTTP Directorate. It may seem obvious what you're doing, but there are a lot of new things in HTTP, a lot of new nuances. We can help you make it better. Thank you. **Paul Hoffman:** He didn't say it on the mic, but he said working groups are disappointed when they don't, and that is very true. **André Surikov:** Okay, Rob. Oddly, we can't hear you, Robert. Okay, we will wait a little bit. He's having mic trouble; he'll text, and I'll read it. Great. We still have four minutes for the discussion, so I'll lock the queue, but there are still some questions. So, Ray, please go ahead. **Ray Bellis:** Yeah, hi.
I'm Ray Bellis, ISC, and full disclosure, I am a root server operator, so take that as you will. I've seen commentary in the chat where certainly some people have said they consider the current root server model unsustainable. I'd like to understand on what basis they think that's the case, because as an RSO, we find it completely sustainable. None of the other RSOs I've spoken to, with the exception of the folks from B-Root who are proponents of this proposal, seem to have any concerns about the scalability of their root systems. It's never come up. **Wes Hardaker:** Ray, please don't put words in my mouth. Specifically, I have never said that the RSS is not sustainable or stable. Never. **Ray Bellis:** I say that only in the sense that you're the authors of this document, and some of your co-authors have said that. And I don't understand on what basis they're making that claim, because it's not something I feel the RSOs have any concerns about whatsoever. **Paul Hoffman:** I'm going to step in here, sort of being the hippy peace person. I think this comes back to what a few people said: we need to start with a problem statement. If that's part of the problem statement, great; if it's not, great. But... **Ray Bellis:** Yes, I agree entirely. A problem statement is definitely required. I actually have no problem with an 8806-bis that clarifies things, because the technical implementation suggestions in 7706, and subsequently 8806, were too prescriptive about how the method works. I think that's fine. I agree entirely with Dwayne that this is not something that should become default behavior for anybody, or certainly not in default configurations. **Peter Thomassen:** Hi. I want to point out that one of the documents, specifically the URL definitions for zone transfers and such, is basically unrelated to the question at hand.
And it seems useful, because right now, if you define where to get the zone, every implementation has a different syntax for that. So perhaps it can be singled out and worked on separately, because it's basically unrelated: Local Root might use it if we have it, but it in no way depends on URI schemes for zone transfers. So perhaps that can be progressed completely separately, and I can see value in it even without any Local Root or anything else. It might be useful in other contexts. **André Surikov:** I think it is a useful suggestion and should be discussed on the mailing list. We have Victor; I think we still have time, so Victor, go. **Wes Hardaker:** Yeah, that is one of the reasons I separated that out, by the way: I figured it would be the easiest one to get through. **Victor Dukhovni:** So, I'm just back from Real World Crypto and getting involved in Merkle tree certificates in the web and other spaces. And one of the possible outcomes, some years down the line, is that we don't find any post-quantum algorithms for DNSSEC that work with traditional signing approaches and are small enough to be usable. The only path forward, if quantum computers still look realistic, might be something like Merkle tree signing, which might affect how the root is operated. If, for example, one wants not only Merkle tree signing but transparency and witnesses and all of that, then the way the root zone is signed and delivered may change. So some thought may be appropriate as to whether that has any influence on designs in this space. It may be too early to settle the ossification question if things like that affect the architecture. Otherwise, I'm not seeing that DELEG is particularly an obstacle for the root zone, but maybe I just don't understand DELEG well enough. **Wes Hardaker:** Can I ask one clarifying question, Victor?
So if there were an alternate signing mechanism—I believe one of the benefits of having a copy of the entire zone is that you actually don't need the signatures, except for the one on the ZoneMD record. **Victor Dukhovni:** To import the zone, yes. But you will, of course, need them to serve the root zone to your clients, right, if they are also validating. So your downstream... **Paul Hoffman:** You would possibly need to pull it more often with some of the ways they're talking about Merkle trees. **Victor Dukhovni:** Right. Because with Merkle trees, the signatures will tend to get recycled in ways that are interesting and new. And again, there may be multiple signers in some scenarios, and it's unclear where you collect all the various signatures from. All of that is still being designed. So there's some uncertainty as to how this will work, let's say, five to ten years down the line, if Merkle trees are the only way forward in DNSSEC. TBD. **André Surikov:** Rob. **Rob:** Oh yeah, microphone check. **André Surikov:** Yes. Yep. We can hear you. **Rob:** Thanks very much. As I posted in the chat, a couple of points. First, I agree with the points made by many that we need a problem statement here, because, as I've said to the authors on more than one occasion, this does seem to be a solution in search of a problem. Secondly, though, I agree with the idea that resolvers should have an option, or multiple options, for how they obtain root zone data, and if they cache it locally or run their own local authoritative, I don't have any issue with that. I am concerned, however, that we'd be pushing something that said they must do it or, to use the words of the draft, that they should do it. I don't see a justification for those statements, but that comes back to what problem people are trying to solve.
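The point about only needing the ZoneMD signature can be illustrated with a toy sketch. This is a simplified stand-in for the ZoneMD idea (RFC 8976), not its actual algorithm: real ZoneMD defines a canonical RR ordering and digest scheme, and the published digest comes from the zone's ZoneMD record, whose RRSIG is the one signature that still needs DNSSEC validation. The records, sorting, and digest handling below are invented for illustration.

```python
import hashlib

def zone_digest(records):
    """Toy digest over a full zone copy. Sorting is a crude stand-in
    for ZoneMD's canonical ordering; real ZoneMD hashes wire-format
    RRs and excludes the ZoneMD record's own digest field."""
    return hashlib.sha384(b"\n".join(sorted(records))).hexdigest()

zone = [
    b". 86400 IN NS a.root-servers.example.",   # made-up records
    b"example. 86400 IN NS ns.example.",
]
published = zone_digest(zone)  # would be read from the zone's ZoneMD RR

# An importer recomputes the digest over its copy and compares;
# only the RRSIG covering the ZoneMD record then needs validating.
assert zone_digest(list(reversed(zone))) == published  # order-insensitive
print("zone copy verified:", published[:16], "...")
```

This is why a full-zone importer can skip per-RRset signature checks, while anything it serves onward to validating clients still needs the original RRSIGs, as Victor notes.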
And that leads me to my last point, which is—and obviously I'm not a technical person, I'm a lawyer—those who are saying that this is some kind of solution to geopolitical stress: I know that at least one of the authors has said that publicly, and I don't want to put words in anybody's mouth, so I won't say which one. I fear that's a rather naive view of geopolitical stress. If you think the problem you're trying to solve is geopolitical, number one, I question whether this is the right forum to solve that problem. But secondly, and more importantly, I don't think this solves whatever problem you're trying to define in that space. Thank you. **André Surikov:** Thank you. You want some closing words now? No? Okay. Can I sit? You're allowed to sit. Yeah. So, we will definitely look for volunteers for the requirements document or requirements specification. We need volunteers—maybe that could be the set of the authors, or anyone else. We will follow up, or contact us to coordinate that. Okay. **André Surikov:** Are you ready? **Mark Nottingham:** Hello. This is mostly, I think, a reprise and an update on what we discussed at the last interim—what was it now, about a month ago. So this is the draft currently called draft-nottingham-dnsop-censorship-transparency. We're now calling it DNS Filtering Transparency.
There's been a bit of back and forth about the title, which we'll get to in a minute. **Mark Nottingham:** So, problem statement, briefly. DNS-based, legally mandated censorship is becoming more prevalent. We're seeing it happen in a number of jurisdictions now. Often there's now a court order to block certain names from being resolved by certain resolvers. Currently, the problem is that when you do that, it's indistinguishable from a technical error: it looks like a resolution failure in browsers and applications using the DNS. So all of a sudden you've got a degraded user experience; people are confused, trying to fix it, not knowing what's going on; you get support load for various actors in the ecosystem, depending on who the users think caused the problem; and they're not really aware of the real cause of the issue. And one could argue that's a democratic deficit: if something's being legally blocked, you should know why it's being blocked. So the general solution space we've been talking about for a while now is: how do we surface the nature of the error to users in a way that they know what's going on, so we don't have this confusion about the nature of the error? **Mark Nottingham:** Complicating all of this is trust. Many DNS resolvers are interposed by an untrusted third party. You've got coffee shops and airport Wi-Fi and whatever else. And even when the resolver is trusted, the nature of that trust is somewhat limited. I get on an airplane, and I'm implicitly trusting the pilot with my life, but I'm not trusting them with my wallet. So trust is contextual. And when you add another party to an interaction, there are lots of different considerations you need to take into account. And this would effectively be doing that.
So this is kind of a third-rail issue for a lot of implementers, and especially web browsers: displaying text from a third party on the network path, without any authentication of that party, is a real problem, so some sort of guard rails, limitations, and mitigations are necessary. **Mark Nottingham:** So initially, when we started talking about this—what was it, the middle of last year or so?—we talked about conveying a link to more information about some sort of filtering incident, to show to users; pop that up with the appropriate context in the web browser. That was defined as an extension to the Structured DNS Error draft. And to mitigate that trust issue, we focused on the identity of the resolver as the choke point. Resolvers had to register in an IANA registry to be eligible to show up in browsers, and they'd publish a URL template; anybody can register one of those resolvers, but applications may or may not decide to do something with that information. The issue that came up in discussion was that this has the potential to privilege some resolvers over others, and people didn't like that property. **Mark Nottingham:** So we went away, had a think about it, and—pressing the button... there we go—came up with a slightly different focus. We shifted to registering not the resolvers used in this thing, but what they're pointing to: what we're calling in the draft a Filtering Incident Database. It's a website with resources on it that describe what's happened—a list of legal demands, for example, that result in filtering. The most widely known example of this is the Lumen Database, which used to be Chilling Effects. It was set up by Wendy Seltzer, who is now, coincidentally, our LLC chair. I think? No—Trust chair, sorry, yeah.
And it is widely regarded in the industry. It's now homed, I believe, at Harvard Law School. So in the current approach you still use a URL template to limit what can be linked to, but potentially any website could be registered there as one of those databases. Applications can still decide which databases they'll show to users, but any resolver can use one of the registered databases and say, "Hey, there was a filtering incident, and these are the details of what just happened." So in practice, the idea is: there's a court order in some jurisdiction saying, no, you have to block this DNS name. You get the information in the DNS response in the structured error, and then the browser can say, "Okay, you've been filtered. There's more information here," and that would lead to, for example, a web page on the Lumen Database with a copy of that court order. **Mark Nottingham:** Public Resolver Errors was the original name of the draft. That was really misleading, so we got rid of it and changed it to something else, and now we're having a discussion of filtering versus censorship. We can still talk about that and about the scope of the use cases here. And—I forget what's on my next slide, but in the hallway discussions I've had with folks this year... sorry, this week; it does feel like a year—people have expressed interest in other use cases and in expanding the scope here, and I think, without wanting to put words in implementers' mouths, there's interest in figuring out how this can be reused in a responsible way.
I think the focus on public resolvers is the initial area we've been looking at, both because there's a certain amount of urgency there—there's a lot of activity, shall we say, around public resolvers and legal demands—but in principle, as long as the issues we have around trust can be mitigated, it can be used in other cases. And so it may be that they'll open up the number of databases that they trust. **Mark Nottingham:** Implementation status. The Chrome team has merged their initial implementation. It doesn't have a user experience yet, and they're aiming for the next release of Chrome, which is in N weeks, where N is a relatively small number: M148. That will be behind a flag; it won't actually be deployed, it's for testing purposes. And Cloudflare already has a test site up for this, where we can play with it. I know other browsers have expressed interest and have talked about implementation, but this is the most concrete thing I can report right now. **Mark Nottingham:** And regarding the Structured DNS Error draft: we're just using the JSON mechanism; there's no incompatibility. I think we still have a little bit of feedback about that draft which we can talk through, but generally speaking it should layer onto it pretty easily. Yeah. So, adopt? **André Surikov:** Thank you. Any questions? Here or... Yeah? Any questions? Yeah, there are questions. Please—Florian. **Florian Obser:** Florian Obser, RIPE NCC. This just came to my mind: what do you do when the filtering database gets, you know, filtered? **Mark Nottingham:** We can't solve all the problems. I thought that's why we were here. Yeah. **Paul Hoffman:** Paul Hoffman, really briefly. Just to be clear, this is not just the large public resolvers who have this issue.
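The "JSON mechanism" being layered on can be sketched as follows. This is a heavily hedged illustration: the Structured DNS Error draft carries a JSON object in the EXTRA-TEXT field of an Extended DNS Error (RFC 8914), but the field names below (`"c"`, `"j"`, `"i"`) and the incident URL are invented for this example and do not reproduce the draft's actual schema.

```python
import json

# Hypothetical structured-error payload as it might travel in the
# EXTRA-TEXT field of an Extended DNS Error. All keys and values
# here are illustrative, not the draft's real field names.
payload = {
    "c": ["abuse@resolver.example"],          # contact (assumed key)
    "j": "Blocked by court order 2026/123",   # justification (assumed key)
    "i": "https://lumen.example/incident/456" # incident-database link (assumed key)
}
extra_text = json.dumps(payload)

# A browser would parse EXTRA-TEXT and only offer the link to the
# user if it matches the URL template of a registered incident
# database, which is the trust choke point discussed above.
decoded = json.loads(extra_text)
print("incident details:", decoded["i"])
```

The registry-of-databases model means the thing the browser decides to trust is the target of `"i"`, not the resolver that sent the response.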
I know that's near and dear to your heart, but for any of you who read TorrentFreak, which is a newsletter for people in the torrenting world: you will know that in many countries they actually don't even go after the big public resolvers, because those have bigger lawyers. They go after a lot of the smaller ones on exactly this issue. **Vittorio Bertola:** Vittorio Bertola. So, from the point of view of a DNS software vendor, and maybe also putting myself in the shoes of a DNS operator: we can only take what browsers are willing to implement, but I think this is a productive solution. So I think this is a step forward, and I support adopting it. Of course, its success, and whether it meets the use cases we have in mind, will depend a lot on the actual implementation by browsers and their policy choices. I would, if possible, keep the protocol as broad as possible, so that it can in principle support any use case—not just legally mandated filtering, but malware filtering, whatever. And I hope this really gets adopted widely, because users really need better transparency on what's happening. **Ralf Weber:** Ralf Weber, Akamai. So, on adoption: I wish the working group would adopt this; I would like that. And I think going from registering resolvers to registering these incident databases is a really smart move. I like that. I would again like to extend it to security use cases, but we can talk about that when we adopt the draft and work on it. Thanks. **André Surikov:** Remote: Robert. **Robert:** Yes, hello. I just want to comment briefly that I think any time you try to map a social or legal problem onto technical architecture, that mapping always becomes difficult or cloudy. I think it's a very, very difficult task to get right in the way that you hope to.
I think a cautionary tale comes from the attempted launch and use of the 451 code in HTTP land. From a legal perspective, I've seen that code, in my opinion, abused by sites claiming they're being censored when in fact what they're doing is refusing to comply with human rights law. So we end up in this weird situation where they're broadcasting a censorship message when in fact, in my opinion, they're abusing human rights. I'm talking here about people who geofence to avoid GDPR compliance. So I would urge caution in terms of how this would actually be implemented, and about the value of the database going forward. **Mark Nottingham:** If I could just respond to that briefly. I was chairing HTTP when we adopted 451, and personally I very much agreed with your assessment at the time: it felt more like nerds trying to make themselves feel better than actually improving the world. But I was in the rough on that one. I don't think that's the case here. This is very targeted, and it's not so much about keeping or exposing a database; it's about making sure that people understand why DNS requests are failing. **Giampaolo Scalone:** Giampaolo Scalone, Vodafone. We run DNS services for some hundred million customers across different jurisdictions, under laws that are very varied and very scattered. We also have customer-facing products for parental controls, where transparency is essential. So I think the public-resolver definition of the first draft was not going in the direction of a correct solution. Having a different approach that somehow allows addressing the issue for the resolvers used by our customers is the right direction—the direction of transparency.
We have seen, for example, in Italy with Piracy Shield, how a law that is sovereign but not well written leads to blocks, even overblocking, without the possibility for the user, the citizen, to make an informed protest, because they are not aware. And in many countries the public resolvers are not used by the majority of customers; the majority stay with the operator's DNS. So a method to show the reason for blocking transparently, and to allow a protest if there is a false positive or a wrong reason, is really needed. **Mark Nottingham:** So, would you say you support adoption? **Giampaolo Scalone:** I'm supportive of the fact that this has to be broad, taking care of security, but also of finding a way to give the, let's say, safe resolvers used by the majority of customers the possibility to show this. So: supportive, if it is open also to ISPs. **André Surikov:** Okay, thank you. What I've heard was support for this, so we will take it to the mailing list and do it formally there. Okay, thank you. **André Surikov:** Joe, you're up. Do you want to run your slides yourself? **Joe Abley:** If it's obvious how to do it, that would be good. But if not, there are only three slides, so. **André Surikov:** You're now in control. **Joe Abley:** Very good. All right. I'll do this backwards. Earlier this year, on the 8th of January, we released a code change at Cloudflare on 1.1.1.1 that changed the ordering of the resource records packed into an answer section, in particular for responses constructed with CNAME processing. And this caused people in the world to have problems; there's an example there on the screen. And so we wrote up this draft.
There's also a blog post we put on the Cloudflare blog describing how we think the spec is a little ambiguous, or at least not clear, about the ordering. It's an old document, we know this. So 1034 and 1035 didn't say we shouldn't do what we did, but it caused a problem anyway. So we feel we should write down the practical requirement for how you build this kind of answer section. So we wrote this 00. We actually adapted a document I'd written 10 years earlier in response to another example of exactly the same problem. That 00 proposed that the answer section must be an ordered list—it doesn't say any other section needs to be ordered—and it clarified the language in 1034 and 1035 to say that when you are building that list, you append rather than just include. **Joe Abley:** So we sent that to the mailing list and got some feedback. The main feedback was that although it seems very attractive to make a very general statement about the entire answer section being an ordered list, and to treat every instance of "add to the answer section" as "append," that is in fact far too broad. It would make all kinds of implementations incorrect when there's no actual reason to call them incorrect; people have been doing it this way since forever. So in fact we should be more specific: we should just talk about CNAME processing, and about the ordering of records in the case where constructing the answer section involves CNAME processing. There was also a comment about DNAME processing: 6672 also has kind of ambiguous language that we think could be cleared up. So we got some good feedback on the mailing list about what we could do in this document to make it implementable and to reduce its scope to the actual problem that's been observed. **Joe Abley:** So we think this is worth writing down.
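The narrowed rule being discussed—when CNAME processing is involved, records are appended in resolution order, so each CNAME precedes the records for its target—can be sketched like this. The zone data and record tuples are invented for illustration; a real resolver deals in full RRsets, TTLs, and DNSSEC records.

```python
def build_answer(qname, cname_map, addr_map):
    """Follow the CNAME chain from qname, appending records strictly
    in resolution order, so every CNAME precedes the records for its
    target. This is the ordering some old stub code depends on."""
    answer, name, seen = [], qname, set()
    while name in cname_map and name not in seen:  # guard against loops
        seen.add(name)
        target = cname_map[name]
        answer.append((name, "CNAME", target))
        name = target
    for addr in addr_map.get(name, []):            # terminal A records last
        answer.append((name, "A", addr))
    return answer

# Made-up example: www.example.org is a CNAME to a CDN name.
ans = build_answer(
    "www.example.org",
    {"www.example.org": "cdn.example.net"},
    {"cdn.example.net": ["192.0.2.1"]},
)
print(ans)
```

A stub that reads the answer section top-down can then follow the chain without re-sorting, which is exactly what the affected glibc-era clients assume.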
We have three examples we know about that were somewhat publicized at the time. The outage we caused in January resulted in all kinds of weird things; it even resulted in Cisco switches going into reboot loops, because they had NTP servers configured that couldn't be resolved properly with the local resolver. The idea of saying this is actually a problem with the client code, and perhaps just updating the client code, is very attractive, and in general I think that's a good answer. But unfortunately, some of the affected client code is very old. It's glibc, it's extremely widespread, and I think it's reasonable to imagine this code is deployed in devices that will never be updated. So in fact, what we have is an actual requirement in this particular case. It's a bit ugly—it's kind of special-casing just CNAME processing—but it does stop things breaking on the internet, so let's write it down. We also think this is not really much work for anybody, because any recursive resolver that actually works on the internet today already operates this way. This is cautionary guidance for new implementations, or for future changes that might forget this is true and cause their own outage. We don't think it would take much to take the advice we got on the mailing list and spin up a 01 that is fairly brief, uncontroversial, and narrowly scoped to just the CNAME problem. So our suggestion is that we should work on this, and we're interested to hear what the working group thinks. **André Surikov:** Thank you. Any questions? **Stuart Cheshire:** I'm Stuart Cheshire from Apple. I understand your motivation here.
The thing that just occurred to me is: let's try not to send the message that client implementations shouldn't be updated to be more robust, because you've just told us how to crash Cisco switches on demand, and now they're dependent on not being fed bad data. I understand that in some constrained environment, where you're in a data center and there's no outside source of packets, that might be okay. But broadly speaking, if I were a vendor of a router that could be remotely crashed on demand, I would want to get that fixed. **Joe Abley:** And I mentioned Cisco only after they'd mentioned this themselves. They published the vulnerability and they've fixed it in their codebase. There are always old devices that have old security problems, but it is certainly not the case that we are letting some cat out of the bag; this is known behavior at this point. **André Surikov:** André, with my BIND hat on. This was before my time at ISC, but there was a case where BIND made a change with uppercase and lowercase in the code, and again some hardware phones started failing, and there was no push to publish that as an RFC. So I'm just wondering: should we publish a clarifying RFC for every bug that's out there, or rather finally rewrite RFC 1034 and 1035 and include this in the rewrite? Might that be time better spent? That's a question. **Joe Abley:** Well, I agree with you that it would be lovely if we had a clear, updated, and accurate specification for the DNS. Maybe Paul Hoffman has some comments on this from the nightmares that no doubt he still has around this question. But I think in this case the pragmatic answer is: it's a shame that we need to write this document. It would be a much better world if we didn't have to write it. Shane Kerr said something like this to me over the weekend; I thought it was a good comment.
In this case, the actual cost-benefit of this document: the cost is pretty low and the benefit is potentially non-zero. So I don't see that it does any harm to publish it. I'm not saying this will save the world, and I'm not saying it's better than having a full specification of the DNS. But we might wait forever for a full, clear specification of the DNS, whereas we could probably push this out pretty quickly. **André Surikov:** Sure. There are no more questions, so we should discuss this on the mailing list afterwards. Thank you, Joe. **Paul Hoffman:** Paul Hoffman, wearing my minute-taker hat. This next section is listed on the agenda as DNS Dispatch. Is that an official name, or was that just something that you threw in? **André Surikov:** No, it's an official name. **Paul Hoffman:** Thank you. Okay, so I will leave it in. **André Surikov:** Thank you. And that also reminds us: this is really a dispatch function now in DNSOP. So it's important for us all to determine where this work should be continued, not necessarily in DNSOP. The next two presentations are the DNS Dispatch, with the dispatch function. Thanks. And next is Joe, presenting for Andrea. **Joe Abley:** Yeah, I'm back again. Andrea couldn't join us to present this himself. These are not my slides, and I don't have a lot of knowledge about what this particular implementation of this kind of protocol is actually saying. But this is a dispatch conversation, so it's not about the actual protocol; it's about whether this work could go somewhere and see some progress in the IETF. **Joe Abley:** So, since a company called Dyn existed a long time ago, there has been a protocol, which I think is a generous word to use in this case, a mechanism that allowed a device that might receive a dynamic address to register that address somewhere and have it reflected in the DNS.
And the original use case of this was a dorm room in New Hampshire, as I understand it, where people would have their servers plugged into the Ethernet port on the wall; they wanted to run servers of some kind and have a DNS name associated with them. And then there's a whole bunch of implementations that have come around this since. The basic approach is that the device, the routery-type device, the CPE-type device, sends some sort of message to a known place. It's usually HTTP, it's usually REST-ish, and the source address of that request ends up being the address that's associated with a name. The payload of the HTTP request says which name you want to update. So that's roughly what we're talking about. It's related to the DNS in the sense that there's a DNS name involved and there's an address involved, but the actual protocol used has nothing to do with the DNS. These kinds of protocols have existed for a long time. They've rarely been written down. If you look at an average TP-Link-type device that you buy off the shelf and stick on your home network, you can configure one of about 20 different protocols here. They're all a bit different. The idea here is that if we just wrote one down, perhaps taking as a starting point one that already exists, and then a working group works on it, then perhaps we could come up with a standard one, which could then be a focus for different implementations and different devices to support. **Joe Abley:** So the DynDNS2 protocol, as I said, is old, it's a bit crusty, it's not a modern HTTP protocol, and it doesn't approach questions like authentication well. When this was mentioned on the mailing list, someone, Mark Andrews I think, pointed out that you can do all this with DNS UPDATE. I actually remember, a long time ago, Mac OS X with Wide Area Service Discovery allowed me to do this very nicely. It's an option.
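The shape of the mechanism Joe describes can be sketched as follows. The `/nic/update` path and the `hostname`/`myip` parameters follow the common DynDNS2 convention, but treat every detail here as illustrative; the deployed variants differ in exactly these places, and the server name and credentials are made up. The request is built but deliberately not sent.

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request

def build_dyndns2_request(server, hostname, myip, user, password):
    """Build (but don't send) a DynDNS2-style update: an HTTP GET whose
    query string names the host to update, with HTTP basic-auth credentials."""
    query = urlencode({"hostname": hostname, "myip": myip})
    req = Request(f"https://{server}/nic/update?{query}")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = build_dyndns2_request(
    "dyn.example.net", "home.example.com", "203.0.113.7", "alice", "s3cret")
print(req.full_url)
```

Note how little of this touches the DNS: the DNS name and the address travel as URL parameters, and the server side turns them into records by whatever private means it likes.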
But we are not necessarily talking about whether this could happen in the DNS; we're recognizing the fact that many devices deployed and sold today choose to do it in a different way, and asking whether the mechanism should be standardized. **Joe Abley:** I'm going to skip over what this protocol in particular defines, because I think the interesting thing is the overall goal of the protocol rather than how it's implemented. But it's worth noting that there are a bunch of implementations of this particular protocol. Andrea's proposal, I think, is not to ask the working group to standardize a particular protocol written by a particular person or company. It's that this is a starting point. Is there interest in a working group in the IETF taking this work up as a starting point, understanding that it might give them a head start or might need to be thrown away, but otherwise working on the problem space and coming up with an equivalent protocol, or a modification to this protocol, that is a standard, so that in the future we have one way of doing this which is nice and flexible and gives us lots of interoperability? That's the outstanding question, as I understand it. Does this belong in DNSOP? And if not, do people have opinions about where it might go? **André Surikov:** Thank you, Joe. Stuart. **Stuart Cheshire:** Stuart Cheshire from Apple. I'm going to be an old curmudgeon, so I will only say this once, and I will not stand in the way of consensus if other people want to do something differently. I feel like DNS has a protocol for doing updates. It's very compact, it's very lightweight, it doesn't require multiple round trips. And that's why in Mac OS X 10.4, 22 years ago, we said, "Oh, there's a standard for this, we'll implement that." I actually talked to Dyn and said, "Will you support standard DNS update?" and they refused, which was very frustrating.
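To make the existing-standard argument concrete: RFC 2136 dynamic update is an ordinary DNS message with opcode 5 (UPDATE), so the whole operation fits in one small packet rather than an HTTP exchange. The sketch below packs just the 12-byte header and the zone-section name; a real client would use a library (e.g. dnspython's `dns.update`) and sign the message with TSIG or SIG(0), which this sketch omits.

```python
import struct

OPCODE_UPDATE = 5  # RFC 2136

def encode_name(name):
    """Encode a dotted name into DNS wire-format labels."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def update_header(msg_id, zocount=1, prcount=0, upcount=1, adcount=0):
    """Pack the 12-byte header of an UPDATE request (QR=0, opcode=5)."""
    flags = OPCODE_UPDATE << 11  # opcode sits in bits 11-14 of the flags word
    return struct.pack("!6H", msg_id, flags, zocount, prcount, upcount, adcount)

header = update_header(0x1234)
zone_section = encode_name("example.com") + struct.pack("!2H", 6, 1)  # SOA, IN
print(header.hex(), zone_section.hex())
```

Everything else Stuart mentions, the update lease for garbage collection and the TSIG/SIG(0) signatures, layers onto this same one-message format.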
I don't see any reason to have 25 different HTTP ways of doing DNS update when the DNS community already defined a better way of doing it. Now we have the RFC published for the lease option for DNS updates; we have automatic garbage collection, DHCP-style, where if you don't renew, the record gets cleaned up; and we support it with TSIG symmetric keys and with SIG(0) public/private key signatures. I feel like this has been done in a very well-specified, secure, compact, efficient protocol, and I have very little appetite for doing it again, but worse. In case it wasn't clear what I thought. **Joe Abley:** Great sales pitch. Thank you, Stuart. And thank you for 10.4; I enjoyed using what you did there myself over many years. **Joe Abley:** Follow-up to Stuart's comment. Apple at the time did an underscore name for redirecting the DNS update to a third-party server; somewhere or other IANA has lost that information, but it was defined at one point. I don't see the need to do anything over HTTPS, but if we do it over HTTPS, do it as a DNS-encoded message. There really doesn't seem to be anything beyond that. If somebody really wants to do DNS over HTTPS, we've got all the mechanisms to do it already with DoT or DoH. **Ralf Weber:** Ralf Weber, Akamai. Should this be standardized? Maybe. Should DNSOP do it? Absolutely not. This might be something for the Applications Doing DNS working group that I and some others proposed on the mailing list. **Ted Hardie:** Ted Hardie. A typical dispatch question is whether the community of implementers currently in this space would use the result. And I think the argument in the draft and in the presentation is that there's a whole bunch of variants of DynDNS2 out there that are subtly different and don't work across providers. Finding a way to resolve that problem seems to me like a very useful thing to do, if the people who currently use those 25 variants are willing to do it.
As much as I feel Stuart's intervention had a great deal of merit, the fact that there are 25 different variants of the DynDNS2 thing means that if we can get them onto at least one variant, that seems like a good interoperability success. But I don't think we have evidence in the draft that all of these variants, or at least a substantial number of them, would be willing to adopt it. So it would be useful to gather more data. Somebody in the chat said BoF; dispatching this to a focused BoF in Vienna, then going to the authors and others and saying, "Hey, if you've got an implementation that's a variant of this, would you adopt a standard if one were created?" would be a very compelling BoF outcome. **André Surikov:** Thank you for the suggestion. Yep. **Peter Thomassen:** Peter Thomassen. So I agree this is not something for DNSOP, because it's an application-layer thing. And I think it is probably possible, in a different working group, to standardize the DynDNS2 protocol in a way that is liberal in what it accepts. For example, at deSEC we have implemented the DynDNS2 protocol, and the different clients we see speak the 25 variants, but the variants mainly differ in, for example, what exactly the URL path is, and it actually doesn't matter, so we just accept all URL paths after the slash. If one specified it like that, in an interoperable server-side manner, that would be progress, I think, but it doesn't require coming up with a new protocol with, say, JSON payloads. There's also a bunch of obvious extensions: for example, the DynDNS2 protocol allows just one IP address, but a comma-separated list would be the obvious extension, which some clients actually support and we do too. So I think, yes, this should be done, but not in DNSOP; somewhere else.
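The liberal server-side approach described here can be sketched briefly. The parameter names are the conventional DynDNS2 ones; the ignore-the-path behavior and the comma-separated `myip` extension are as described in the session, not a published spec, and the URLs are illustrative.

```python
from urllib.parse import urlsplit, parse_qs

def parse_update(url):
    """Parse a DynDNS2-variant update URL liberally: ignore the path,
    read the conventional parameters, and accept one address or a
    comma-separated list in `myip`."""
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    hostname = params.get("hostname", [""])[0]
    ips = [ip for raw in params.get("myip", []) for ip in raw.split(",") if ip]
    return hostname, ips  # parts.path is deliberately not inspected

print(parse_update(
    "https://dyn.example.net/v3/update?hostname=h.example.com&myip=192.0.2.1,2001:db8::1"))
```

Because the path is never inspected, clients speaking any of the variant paths (`/nic/update`, `/v3/update`, and so on) interoperate with the same handler.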
The starting point, however, should be the existing protocol, and then we should see how it could be made more interoperable, more future-proof, and more flexible. **André Surikov:** Okay. Thank you for your input. I hear the suggestion from Ted, and it doesn't conflict with what you say, Peter. Joe is here, and Andrea is probably also listening in. I think that's some advice for the next steps, and if we can help you in the process, we're available; we're happy to help. Time is up, I see on my screen. **André Surikov:** Next is Nick. **Nick Williams:** Hello everyone. My name is Nick Williams. I'm speaking on behalf of a co-author group, but regarding the problem space, there's been a fair amount of discussion among various stakeholders that we're going to get into first. Essentially, some folks within IETF side meetings and working groups have been trying to determine what role, if any, DNS can or should play in the AI application space. To identify whether that is an option, we started by trying to identify a problem space for discovery. Essentially, we took the term "agent," took a bunch of stuff out of it, and are just trying to identify what we would describe as entities. These entities are typically proofed to organizations, and this is a problem space that exists today. Some people have local versions of tools that are MCP servers; they have various pieces of infrastructure that live internally, and almost all of those solutions use gateways. There isn't yet a unified discovery mechanism between open-source tools like that, and when you look at some of the enterprise tools, like Anthropic and Tines and Glean and some of these other tools, it's hard to get those registry operators to build something that will allow interop between them.
So essentially we wanted to answer a couple of questions: what functionality is required, and if there's a subset of that where DNS might play a role, great. What communication methods might there be, and where would this information likely live? We want to be explicit, though, about what's out of scope, which is a whole bunch: registration, trust and attestation, capabilities negotiation, task management. Again, we are not the big AI lab developers, so we are not trying to be prescriptive to them about how they should think about or describe their applications. **Nick Williams:** When we're talking specifically about DNS, we really just want to use it as a stable beachhead to signal to other organizations where AI agents might live. We are not trying to take application data out of AI agents. We are not trying to semantically describe them using DNS records. We are not trying to put a bunch of records where they otherwise shouldn't be. So essentially you have, potentially, an index, and that is something yet to be specified, but you can think of it as a landing page that tells you all the agents or AI tools an organization might have. The co-author group and the problem-space authors believe that most organizations are only going to list a handful of agents externally. Internally there may be a ton; externally we don't yet see a wide use case for hundreds or thousands of agents or anything like that. So, more often than not, we believe that many tools are going to know where they want to go and what they might want to discover, which we believe DNS is great for.
**Nick Williams:** Things that DNS is not a great candidate for include, of course, as mentioned earlier, describing all the capabilities of agents and trying to pick which agent is best, whether within an individual domain, like having to enumerate a zone, or across different domains. This is something our co-author group has been very explicit about: we are trying to identify just the use case of "I know what I'm looking for." By putting it in a well-known place, this could be the substrate that facilitates discovery between all the other tools I described, the enterprise tools and the build-it-yourself open-source tools. **Nick Williams:** So our specific draft talks about that specific capability: if you know both the service and the domain, that really is what we believe to be the largest use case on the internet today. Most of the time, when you ask an agent or a local MCP server, enterprises have trusted relationships, so they know they're going to want to talk to Concur, they know they're going to want to talk to Copilot and not DeepSeek, for example. If that's the case, then DNS is good at solving that problem. Where it's bad is when you ask, "Tell me all the agents that Microsoft has" or "Tell me every agent out there that does image-to-text." Those are areas where we want to avoid DNS having a role. So really, we are asking if there's appetite in any DNS working group at the IETF to signal to application owners and developers: here's a place to make discoverable where your well-known agents live within your current namespace. In this case, `_agents.example.com`. Within that, SVCB records with minimum viable parameters, potentially a URI to a model card, or extensible characteristics over time as the big AI labs continue to develop and innovate in that space.
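On the wire, the discovery lookup described here is just an ordinary DNS query for SVCB records (type 64, per RFC 9460) at the reserved label. The `_agents` label is the one presented in the talk; what the records contain is whatever the draft ends up specifying. This sketch only builds the query packet, to show that the client side is plain DNS.

```python
import struct

QTYPE_SVCB = 64  # RFC 9460
QCLASS_IN = 1

def build_query(qname, qtype, msg_id=0x0001):
    """Build a standard DNS query packet for (qname, qtype)."""
    # 12-byte header: id, flags (RD=1), QDCOUNT=1, remaining counts zero
    header = struct.pack("!6H", msg_id, 0x0100, 1, 0, 0, 0)
    question = b""
    for label in qname.rstrip(".").split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00" + struct.pack("!2H", qtype, QCLASS_IN)
    return header + question

packet = build_query("_agents.example.com", QTYPE_SVCB)
print(packet.hex())
```

Sending this to any resolver over UDP, TCP, DoT, or DoH works unchanged, which is the interoperability argument: no new query machinery, only a reserved owner name.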
Not every DNS server supports SVCB, so there is a TXT fallback. **Nick Williams:** The salient point I thought was really interesting, if you haven't been tracking the side meetings, is that there are five or six extremely similar proposals describing this same thing: TXT records or SVCB records. Some people are talking about using plain SRV records rather than SVCB records. But ultimately, there is a growing group of folks who understand that DNS is going to be good for the use case of "I know what I want and I know where to go." And again, this provides the interop for the registry operators and the search-engine operators and everyone else to start identifying in one place where it should be. There are complementary drafts in the problem space as well, not just on what is required to facilitate discovery. Even the pieces I described, like the second one here, the index, cover the problem space of "Microsoft, what are all the AI agents that you have? Tell me everything you have," which can't really be enumerated in DNS. So ultimately, we're looking for questions and feedback. We've heard in various side meetings that the IETF may not have the expertise, because the big AI labs aren't here. But we also don't really have a natural landing place for this work. So we wanted to poll the DNS folks here on whether it is sensible to standardize where AI agents might be listed, and then hopefully, over time, start building out a place for application developers to share some of their insights and coordinate with other standards development organizations, because this problem space is large. There are going to be things that aren't agents: data stores and tasks and, potentially, in the future, embodied AI and robots you want to discover.
So a lot of that is not a good candidate for DNS. But for a problem that exists today, for something that can be solved today, within that problem space and narrowly focused on just listing what's out there, our co-author group, and again, there are plenty of others proposing something similar, hasn't yet found a natural landing place within DNS. So thanks. Questions? **Speaker:** This is D from ZDS. As we all know, the DNS service discovery that Apple and Google use has existed for a pretty long time. What do you think DNS-SD, I mean RFC 6763, cannot do to meet this end in terms of discovery? An AI agent is a kind of service. Maybe we just need a service type that we can map to an instance. What would you say? **Nick Williams:** Yeah, this is essentially a special use of DNS service discovery. So instead of doing, say, `_443._tcp.example.com`, we're just picking a specific label. In our draft, compared to others, some other drafts specify `_a2a` or `_mcp.example.com`. But every time we want IANA to set aside a leaf attribute zone like you'd see in DNS-SD, it requires this type of discussion. That's ultimately why these drafts exist: we're just trying to say, "Hey, it is DNS service discovery, but it is a special use of it, and we just want this specific label reserved for that purpose." **Paul Hoffman:** Paul Hoffman. I know we're almost out of time, and this is a dispatch question. Not DNSOP. I don't think it should go into DNS-SD either, because of the problem space. And you're going to have to answer the question, if you go back one slide, of why you are using a TXT record in the DNS instead of a well-known URL over HTTP. That's part of the problem-space question, and that's why I think a different working group, a BoF, would actually be good for this, especially because you already have a long list here. Thanks.
**Nick Williams:** Thanks, Paul. **Speaker:** Nan Gang from Huawei Technology. Just a question: what's the relationship between your proposed mechanism and a centralized registry mechanism, for example Google's A2A registry? **Nick Williams:** Ultimately, our co-author group believes strongly that centralizing both where you must register an agent and where discovery happens can create opacity and actually prevent end users from knowing what services they are unable to discover. Service operators like Google could potentially omit results and de-list agents in an anti-competitive manner. So our goal is to provide interop for a tool like that to go and discover others, but also to provide local, individual operator sovereignty: list your own services and go and discover others. We are explicitly, even in our draft, avoiding centralization of the search and registration capabilities. That doesn't prevent somebody from doing it and offering it as a service. **André Surikov:** I'm so sorry. We are running over working group time, so only speak if you have a recommendation for where this should go; please don't discuss the contents of the draft. **Speaker:** So this is dispatch. **Speaker:** OK. At the end from Alibaba Cloud. I heard a lot of discussion during this meeting about agents and discovery. The discovery scenarios still trouble me a lot: what is the real problem you want to solve? It looks like you want to find somebody you're not familiar with and talk to them without knowing who they are. But we don't use the DNS like that. We don't use DNS to find somebody; you already know who you want to reach, right? You just want to connect to them and... **André Surikov:** I'm sorry. We are over time, and if you have a recommendation for where this should go... this is a DNS dispatch function; we don't discuss the contents of the draft.
**Speaker:** Yeah. Yeah, that's my opinion. **André Surikov:** OK. Thank you. Jim, be brief please. **Jim Reid:** Thanks very much. The short answer is I don't know where this should go at the moment; I would suggest the next step is probably to have a discussion with the Area Directors and find a path that way. I think that's the most sensible approach. One other small piece of advice: some of the suggestions made at the mic about using SRV records or well-known URLs are worth considering. But I think it's very important that you don't go down the route of abusing TXT records. If you want to use a DNS resource record type, define a specific one for that, or use an experimental one until we find a more appropriate solution. Agree. **André Surikov:** Thanks, Jim. Ralf, very brief. **Ralf Weber:** Ralf Weber, Akamai. Not DNSOP, and I agree with what Jim said. And one thing: you are using SVCB, which is used a lot on the internet, and I don't understand why you fall back to TXT. So please, no TXT. **André Surikov:** Thank you. Okay. This concludes our session. Thank you very much, everyone, and until next time. **Speaker:** Vienna. **Speaker:** Yeah. Vienna. **Speaker:** Schnitzel.