**Session Date/Time:** 16 Mar 2026 08:30 **Gavin Brown:** Welcome everyone to the RPP working group session. Just a reminder that this meeting is taking place under IETF rules and is covered by the Note Well. If you're here in the room, even if you haven't signed into the MeetEcho tool, you are still subject to the requirements of the Note Well. It's a reminder about the processes and policies, including conduct, privacy and intellectual property rights, that you agreed to when you registered to participate in this meeting. If you need more information, the source documents are available, and if you have any questions you can ask me or Marco as working group chairs, or one of the area directors. Okay, an overview of our agenda for the day. I think we already have a note taker, thank you. If anyone else would like to contribute notes, that would be much appreciated; you can do so through the meeting page on the Datatracker. We can have multiple people producing notes, and the more people on that, the better. We will have a quick look at our milestones and deliverables for this working group and see how we're doing. Then we'll get into the working group business, which is looking at the documents that form this protocol; that will be a tag team between Pawel and Maarten. And then, if we have time, any other business at the end. These are the milestones we have for this working group. The first was finalizing our requirements. We now have consensus on that, so well done everyone for all your efforts in achieving that milestone. The next things to work on are the core architecture with the extension mechanisms, and then the actual specification for provisioning of domain names. We also have a fourth deliverable, the mapping between RPP and EPP. There is some uncertainty on the part of your chairs as to which document, if any, meets the requirement for that milestone, so I think it's worth us as a working group thinking about that. Our milestones are not set in stone; we can change them if we think they are no longer applicable, or if something is missing that needs to be closed as a gap. I'll be looking to working group participants to give us guidance on that, which would be appreciated. When we originally talked about producing a RESTful provisioning protocol, there was some concern to ensure parity with EPP and some sort of interaction with EPP, and I think that's where this milestone came from. So, looking at Maarten in the room and Pawel remotely: just let us know where you think this milestone might be met. It doesn't need an answer right now; it's just worth thinking about. With that, we can move on to the first of the session presentations, which is Pawel talking about the architecture draft [draft-ietf-rpp-architecture]. **Pawel Kowalik:** Right, waiting for the slide control. Yeah, I have it. Hi everyone. Sorry for not being with you in Shenzhen; I hope you're having fun. I will be presenting the changes we've made to the RPP architecture document [draft-ietf-rpp-architecture] since the last update, so this covers version 02.
A quick recap of what the document is about: it defines the architecture for the whole protocol by defining the layers of the architecture, following resource-oriented architecture principles, and trying to define which elements of HTTP and related standards we will use, and how. I presented this last time, but there have been updates in the meantime, so I think it will also be helpful for today's session to locate the presentations and drafts in their places in the architecture. We have the HTTP transport layer, where the core draft [draft-ietf-rpp-core], presented by Maarten later, plays a role. We have a missing draft in this layer on authentication/authorization with OAuth. We may include it in the core, but I think it can be beneficial to have a separate draft about authentication so that this module is pluggable separately. Then, on the data structure side, we have the data objects draft, which I'll also be presenting later [draft-ietf-rpp-data-objects], defining the data elements and the operations. And we have the RPP JSON draft [draft-ietf-rpp-json], which shows the mapping of the data elements and operations onto JSON; that is also in this session today. As you can see, the picture is filling in, with the contents of this architecture also implemented in the early drafts we'll be presenting. On version 01 that was published — it's 01 because the numbering was reset after the draft was adopted by the working group — the main activity was to compare the final version of the requirements document [draft-ietf-rpp-requirements] that reached working group consensus and check which elements from the requirements were still missing in the architecture. We put a lot of effort into better defining the extensibility mechanism, discovery and profiles, and security and authentication; some elements from the requirements, such as the TLS requirement and credential management, we simply translated into architecture. We dealt with collections, bulk operations and filtering from the architecture perspective, on the pure HTTP layer. We defined canonical addressing and the usage of status codes. And finally the resource definition and data model: the relationships, required and optional data elements, server-managed resources — all the things that were in the requirements we added to the architecture as well. The poll message definition was added, and message validation handling was changed from lenient to strict, according to the requirement. I have also brought up a few issues that appeared during this work where we would like to hear opinions from the working group. Before I go on: is anyone in the queue so far? No. I propose that if you have questions or opinions about these issues, please put yourself into the queue for each one individually; I think it will be more productive to discuss each issue separately so we don't mix up the topics. I think we have enough time to cover this. Jim, is it related to this topic already, or to the previous one? **Jim Gould:** It was the prior slide, yeah — the last item, about message validation handling changing from lenient to strict. In my review it looked like there were still references to lenient validation; is that a mistake, or what's your intent there?
It says the default is strict, but then the architecture specifies the ability to support lenient validation as well. **Pawel Kowalik:** Yeah, so what we changed: we kept both of them in the draft, but we changed the default. The previous version said the default would be lenient handling, and now we are saying the default should be strict — at least that's what I remember. **Jim Gould:** Is that how we did it in the requirements? I'm pasting the requirement into the chat here. We probably should have modified the requirement language if the intent was a strict default with the ability to support lenient, because I'm still trying to understand when a client would want a server to use lenient validation. It may be worthwhile to hear from a client on this — a registrar. **Pawel Kowalik:** Yes. As I said, we kept lenient in for now, but we haven't yet had a broad discussion about this new version, so I think it's absolutely valid to ask whether we want to keep this optionality at all, or remove it completely and just have strict. I think the requirement is still fulfilled in the current version, but this optional feature may or may not be needed; we probably need some implementation experience as well. Let's bring it to the list — this is a good point. **Jim Gould:** Yeah, I'll go ahead and do that. I apologize for not reviewing the draft earlier. **Pawel Kowalik:** No problem. All right, one of the issues we stumbled upon concerns the discovery document [draft-ietf-rpp-core]. What the architecture document now says is that the discovery document is considered static between server reconfigurations: the client is not expected to perform discovery for every single call, but rather sporadically, to check whether the server changed its configuration in the meantime. What we built in is a versioning mechanism for the service discovery document, so the client can easily know that something changed. But we are asking how strict the handling — the notification of clients about changes — should be. One option would be mutual signaling: the client tells the server "this is the version I know", and the server tells the client "this is the most recent version of the discovery document I'm publishing". That way both sides know, and the client can fetch the new version, either automatically or by having an operations team look into it. It doesn't have to be completely automatic, but at least the client will know something is happening, and the server can even warn a client using a version so outdated that its operations would become incompatible, now or in the very near future. The other option is much simpler: the client polls for this document on some regular cycle, and the HTTP cache mechanism should be enough to avoid overloading the server with requests for this resource. Here there is almost no overhead for the server, because it doesn't have to do processing with every request, but the server would not know about outdated clients.
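*A minimal illustration of the polling option Pawel describes, using standard HTTP cache validators; the path, header values and body below are invented for illustration and are not taken from any draft:*

```http
# First fetch: the server returns the discovery document with a validator.
GET /.well-known/rpp HTTP/1.1
Host: rpp.example.net

HTTP/1.1 200 OK
ETag: "discovery-v42"
Cache-Control: max-age=86400
Content-Type: application/json

{ "version": 42 }

# Later polls revalidate; an unchanged document costs almost nothing,
# and the client learns immediately when the version has changed.
GET /.well-known/rpp HTTP/1.1
Host: rpp.example.net
If-None-Match: "discovery-v42"

HTTP/1.1 304 Not Modified
ETag: "discovery-v42"
```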
**Pawel Kowalik:** So I would love to hear some opinions from the working group — but I don't see anyone in the queue. Okay, I will bring it to the list anyway. So, Jim. **Jim Gould:** You know you're always going to get a comment from me, so I'll jump in. On the caching: I still don't get the caching in a provisioning protocol. Can you describe where the caching would be of benefit? I'm thinking maybe the check command, but for the transform commands I have a concern about the benefit of caching. **Pawel Kowalik:** Here we are talking about the discovery document [draft-ietf-rpp-core]. This is the server exposing its current configuration — for example, the set of supported extensions and their versions, and whatever else we put in the discovery document, because the architecture does not define exactly what is in it. I think the core spec already has some elements of this, if you are interested, but this is a document which is not related to provisioned objects in any sense. So it's basically equivalent to what EPP — **Jim Gould:** This is features and policies, correct? Features and policies. We have experience with that in EPP, and it wound up not getting very far. It's very, very challenging to accomplish. **Pawel Kowalik:** Yeah, but even in EPP you have the initial handshake in the session, and here you have no session. This is roughly the equivalent of the RDAP help resource, which doesn't change that often; that's why it can be cached very well at the HTTP level. You don't need the client to receive the whole content every day, or on whatever cycle it queries this resource, if it doesn't need it. **Jim Gould:** Is there a plan to have a lightweight way of negotiating the extensions, similar to EPP? Have you envisioned anything? **Pawel Kowalik:** Of course, but that is something Maarten will cover better in his presentation, I think. **Jim Gould:** Okay, all right. Thanks. **Pawel Kowalik:** All right, the second issue we faced: one of the requirements says that RPP should carry over the login security extension functionality from RFC 8807 [EPP Login Security Extension]. What we are saying now is that the policies related to authentication and authorization are delegated to an authentication scheme which may be external to the protocol — for example, OAuth. There is also an element of metadata about the client — it's even called User Agent in the RFC — with information about the application, operating system, and other technical details. I think the most straightforward solution for this would be to use the User-Agent header. The question we were asking ourselves is whether the data set in RFC 8807 is still relevant: should we carry over everything, or should we review this list of attributes when defining this user agent? That was one question.
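*A sketch of the suggestion above — collapsing the RFC 8807 client metadata (application, technology, operating system) into the standard User-Agent header. The product tokens and path are hypothetical:*

```http
# Hypothetical request: client software metadata carried in User-Agent
# instead of a dedicated login-security payload.
POST /domains HTTP/1.1
Host: rpp.example.net
User-Agent: AcmeRegistrarClient/2.1 (Java/21; Linux)
Content-Type: application/json

{ "name": "example.com" }
```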
**Pawel Kowalik:** The second question is the login security event element, which is currently defined as part of the response to the login command. In RPP we won't have a login as such, so it's somewhat complex to fulfill the requirement of carrying over this information. If authentication is delegated to a different protocol, then that protocol would have to fulfill it. For basic authentication, which roughly corresponds to the EPP login today, it may not be practical to deliver such an event with every request, because it would be overhead on every response — and Basic Auth is, from the RPP perspective, considered a legacy bridge solution rather than a target solution. On the other hand, what we have defined in the requirements and in the architecture is a generic mechanism for sending warnings or informational messages with responses. So maybe that is enough to transport whatever the server wants to tell the client; only the cadence is something to decide on, because it's not useful to have it with every response — perhaps with one response per hour or per day, whatever the server sees fit. All right, anyone in the queue on this topic? If not, we will bring it to the list. And the last one: we had requirement R9.3, that we may want to support an alternative transfer process which would be quicker, where the approval would be inline, replacing the shared-secret approach of EPP. We have now come to the point where we don't have good ideas on how to approach this. Delegated end-user authentication and transfer authorization is the core of the problem. I think we had a similar discussion on a proposed extension to EPP, which so far didn't lead to any approachable solution. So, at least right now, the proposal from the editor team is to park it, unless there is someone willing to take the lead on this and work out a proposal. This is a MAY, so we don't see it as a priority right now, but if someone thinks they have a good idea about it, we are fine with that. **Jim Gould:** Yeah, I agree to park it. I think less is more at this point. **Pawel Kowalik:** Great, thank you. I see Rick commenting in the chat along similar lines, also raising the business policy issues. So right now we don't have many more open issues on the architecture — just these three that I presented. I think this draft now needs more eyeballs and more reviews to get more feedback, and if no new issues appear, I think this document should be ready for working group last call. The one open question that I raised last time is about early reviews from other working groups or directorates. The HTTP directorate is likely who we should actively seek a review from, maybe before we proceed with last call. I don't know if the chairs have any magic wands to ask for such a review; if it's possible, it would be helpful for sure. **Gavin Brown:** Yeah, we can do that, but as document authors we need you to request that we do it. **Pawel Kowalik:** All right, will do. Jim. **Jim Gould:** Yeah, I had a couple more items of feedback based on my recent review. The first — I'm going to paste it into the chat here — is that in reviewing BCP 56 I noticed this language. I was really reviewing this in relation to EOH, but in looking at the use of status codes in RPP, I wasn't sure whether this approach of overlaying RPP status codes on top of the HTTP status codes would be compliant with BCP 56. So I wanted to get your thoughts on that.
**Pawel Kowalik:** I think what we are doing is exactly that: we want to use the standard HTTP codes that correspond to RPP codes, and if there is no such code in HTTP, we take the generic code which fits. So I strongly believe we are aligned with this requirement. If we get different feedback from the review, we are open to discussing that as well, but at least from our perspective we should be fully aligned. **Jim Gould:** You may want to think about bringing this up with, say, Mark Nottingham, to see whether your interpretation matches, ahead of time. **Pawel Kowalik:** Yeah, we may seek some direct review, but I think the HTTP directorate is exactly the group we'd refer to. **Jim Gould:** Right, right. And the second one is related to the message body in queries — I've pasted my comment into the chat — whether we should look to leverage the draft being worked on in HTTPBIS [draft-ietf-httpbis-query], to use the QUERY method that supports a message body in queries. That would allow for extensibility of checks and infos, which has been done a lot within EPP. **Pawel Kowalik:** I think it is still open whether we want to use this method, or whether we have a good use for it, because even for queries in HTTP there are query parameters which can equally be used. I think the current approach is to go with GET until we figure out a use case where it doesn't work. **Jim Gould:** Well, take a look at the registry fee extension [draft-ietf-regext-epp-fees]; I think that would be the place to start. **Pawel Kowalik:** I think we've done that, and so far it was the response side, not the query side, that caused any worries. **Jim Gould:** Oh, so you've already addressed the query — the extension to the check — in your approach? **Pawel Kowalik:** In the response we addressed this already. I think it's addressed more in the architecture — there is probably more text than needed about the HEAD versus GET approach — and in the core spec we already have a proposal where you have both: HEAD for queries where you don't need any payload in or out, and GET for anything where you want a more complex response, for example with prices. **Jim Gould:** Well, the issue is having a complex command or request, versus the response. If you look at the registry fee extension [draft-ietf-regext-epp-fees], it has a complex command. So it would be a lot more extensible to be able to use a message body in the query — that's all I'm saying. I would look at the registry fee extension and see how it would work in the command, not the response. Thanks. **Pawel Kowalik:** Okay, thanks. **Gavin Brown:** Sorry Pawel, I think Maarten's got something to say. **Maarten Wullink:** We have had an issue in GitHub about the QUERY method for a while. I haven't followed it recently in the HTTP working group, so I'm not sure what the state of implementation is in commonly used servers and clients. That might be an issue: if you want to use the QUERY method but it still has limited support, then you might still go for the GET method as an alternative.
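*To make the trade-off concrete: a sketch of the two query styles under discussion, with hypothetical RPP paths and parameters; QUERY is the method referenced above from [draft-ietf-httpbis-query]:*

```http
# Current approach: a check-style query as a plain GET, inputs in the URL.
GET /domains/example.com/availability?currency=USD HTTP/1.1
Host: rpp.example.net

# Alternative: a safe QUERY request carries a structured body, giving an
# extension (e.g. a fee extension) room for complex query input.
QUERY /domains/example.com/availability HTTP/1.1
Host: rpp.example.net
Content-Type: application/json

{ "fee": { "command": "create", "currency": "USD", "period": 1 } }
```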
**Jim Gould:** Well, I was just going to jump in here. The question is the timing of RPP versus this particular new method. I believe it'll take a long time before you see broad deployments of RPP, so my recommendation is to leverage the right set of methods that will allow for the extensibility needed in a provisioning protocol. Thanks. **Pawel Kowalik:** Yeah, I think the approach we can take is to look at the extension you mention, see how it would model within the current framework we defined, and see whether we face any particular issues with it. **Gavin Brown:** Okay, so next on the agenda is data objects [draft-ietf-rpp-data-objects], which is you again, Pawel. Are we changing the order, or? **Pawel Kowalik:** It's fine — I was just too fast clicking myself away. All right, this is the update on the data objects draft [draft-ietf-rpp-data-objects] that I presented for the first time during the last IETF session. Here we are, within the architecture, defining the data elements and the operations without defining how they look in the representation — without saying how they are represented in JSON or anything else — so it's an abstract model. Quite an amount of work has happened since version 02, which was published last time. One element is that we fully incorporated the DNS data model that was presented in the draft from Christian Simon. This was useful because it automatically brought us support for the DNSSEC extension. We fully integrated the RGP set of statuses, transitions and processes. The restore operation is now defined both as a one-step process and a two-step process. We also defined the extensibility of the data objects: how to deal with standardized or private extensions, what the extension points are, and how to extend the operations as well. In the course of integrating RGP, we actually realized that we need a new kind of entity type in the data objects, called a process object — I have a more detailed slide about it. We also have some new primitives and a common provisioning metadata component, and we now consider that for the core set of provisioning objects — domain, host, contact — we have a full set of objects and operations in the document. Now a deeper dive into the new elements in the draft. The DNS data model, if you recall, is structured as follows: we have a DNS data object which can contain zero or more records, which are basically DNS resource records with a name, class, type and RDATA. The type is limited to NS, A, AAAA, DS and DNSKEY for EPP compatibility. But depending on where the resource record is used, different types are allowed: if the DNS data object is attached to a domain object, you're allowed to define DS and DNSKEY for DNSSEC, and you can also use NS, A and AAAA if you are in the equivalent of the host-attribute model; but in the equivalent of the host-object model, those three types are attached to the host objects instead of the domain resource. And there is a second, controls object, which currently contains the TTL definition per record type and the maximum signature lifetime, which today lives in the DNSSEC extension of EPP.
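*A hypothetical JSON rendering of the DNS data model just described; the draft defines the abstract model only, so every field name below is invented for illustration:*

```json
{
  "dnsData": {
    "records": [
      { "name": "example.com.", "class": "IN", "type": "NS",
        "rdata": "ns1.example.net." },
      { "name": "example.com.", "class": "IN", "type": "DS",
        "rdata": "12345 13 2 <digest>" }
    ],
    "controls": {
      "ttl": { "NS": 3600, "DS": 3600 },
      "maxSignatureLifetime": 604800
    }
  }
}
```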
**Pawel Kowalik:** Process objects, as I mentioned, are a new thing, with somewhat different properties compared to data objects. A process object can contain data representing the process's internals — input, state, output — which is distinct from the object the process operates on. It can contain its own operations as well: if there's a long-running process — imagine a transfer — you have interactions with that process to approve, cancel or reject it, and these are the operations a process object can contain. Importantly, as defined now, processes are created and operate in the context of an owner data object: the process is created upon a main provisioning object, and the lifecycle of the process is bound to that object. So if you delete a domain that has a running transfer, the transfer also disappears with it. Right now we have two usages defined in the document: the transfer process, as I mentioned, and the restore process. But we will likely need more — renew will probably be defined the same way, and maybe others; this is something we'll be looking into. On the common provisioning metadata: we realized that this set of data is always the same no matter which provisioning object is in question, so we defined it as a common component object which is referenced from all three. This has the benefit that if you ever need more generic provisioning metadata, you can extend this one structure and it automatically applies to all provisioning object types. In this work we again stumbled on a few questions. One comes from the DNSSEC extension: it contains the urgent flag. If you ask what that is: it allows a client to signal to the server that some DNSSEC update is urgent and the server should roll it out sooner rather than later, for operational reasons. Support for it in EPP is optional. The question we asked ourselves is whether we should carry this functionality over: does it have any operational significance these days, when zones are published on a cycle of minutes rather than hours or days? Because, at least as I learned it, the reason for this flag was that when zones were published on, say, a 24-hour cycle and there was an urgent update because of a DNSSEC key problem or the like, people wanted to signal it for immediate action. Right, I see Jim in the queue. **Jim Gould:** Yeah, I recommend just keeping it optional. You'd have to go out and see whether there are any servers in the industry using this particular optional attribute. We don't use it, but that doesn't mean others don't, and if they do, it would be a lot easier to transition to or leverage RPP if it remained. Thanks. **Stefan Botzmaier:** I'm wondering, if you keep it, how much of a burden it will be to maintain everything and keep everything working; if you just skip it, you don't need to maintain this part of your document, and so on. **Pawel Kowalik:** Yeah — keeping it means we won't get rid of it any time soon; if we skip it now, we at least have a chance. And I think there are quite a few voices for that.
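*A sketch of a process object for a transfer, matching the properties just listed (its own data, its own operations, and a lifecycle bound to the owning domain); all names here are hypothetical:*

```json
{
  "@type": "transferProcess",
  "owner": "/domains/example.com",
  "state": "pending",
  "input": { "gainingClient": "registrar-b", "period": 1 },
  "output": { "requested": "2026-03-16T08:30:00Z",
              "actionBy": "2026-03-21T08:30:00Z" },
  "operations": [ "approve", "reject", "cancel" ]
}
```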
**Pawel Kowalik:** Let's move forward, because I have a few more issues on the list. We have currently carried over the auth info as part of the standard data model for objects that can be transferred. And we are asking ourselves: is it still best practice to send credentials in clear text with every response? Let's put aside whether it has to be clear text, because that's a separate question. We thought that for this kind of information we could define a sub-resource, or sub-object, which carries just this information and can be queried on demand; someone doing the equivalent of a domain info would not receive this information unless the query is flagged with a special parameter, or the sub-object is queried directly. That was at least one of the ideas we had about it. Any opinions on this? **Gavin Brown:** Seeing no one else, I'm going to put myself in the queue, with my hat off — it's on the table here. RFC 9154, EPP secure authorization information for transfer, is standards track, so I would say it can be considered the consensus of the IETF that putting secure credentials like auth info codes in every info response is a bad idea, and I think that's something this working group should consider. **Maarten Wullink:** Yeah, I strongly agree with Gavin here. We should have a strong security and privacy focus in mind, and this is clearly a bad idea nowadays. **Jim Reid:** I agree with the comments that have just been made. But I think it might be helpful, Pawel, rather than posing open questions here, to make a statement of what you think is the right thing to do and challenge people in the working group to disagree with you. Open-ended questions sometimes make it difficult for people to respond; I think it's easier to respond when given something to react to rather than just a question. **Gavin Brown:** That's a really good point, Jim, thank you. I'm sure you don't disagree, Pawel. **Pawel Kowalik:** Yeah. All right, so in this sense the proposal was to move it away and not send it with every response, and I see people agreeing with that.
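*A sketch of the sub-resource idea the group just agreed with: the plain info response omits the credential, and a client fetches it explicitly when needed. Paths and field names are hypothetical:*

```http
# Ordinary info: no authInfo in the representation.
GET /domains/example.com HTTP/1.1
Host: rpp.example.net

# Credential fetched on demand from a dedicated sub-resource.
GET /domains/example.com/authinfo HTTP/1.1
Host: rpp.example.net

HTTP/1.1 200 OK
Content-Type: application/json

{ "authInfo": { "value": "<secret>", "expires": "2026-04-16T00:00:00Z" } }
```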
**Pawel Kowalik:** Next, we have the restore report object, which is part of the RGP process. Right now we basically imported everything from EPP — the data set that the EPP extension defines — and I think that is a valid approach. But because we defined the restore process to be generic for every kind of object that can be restored — I think there are some BCPs about the possibility of restoring host objects as well — the question was whether this model fits every use case, or whether we should define it very generically: saying the restore report is just a blob, and for domains it would be this kind of representation, and for something else, something different. But as I said, our proposal is to keep it as is. **Gavin Brown:** Pawel, you've got two minutes. **Pawel Kowalik:** Right. Not seeing anyone in the queue. This is more of a document structure question: when integrating the RGP draft, we carried over the process part — the business process description, the state machine of RGP — which is obviously beyond data elements and operations. But we think this is the best draft to cover it, so I was wondering whether anyone would object to including this kind of description in this document, or whether people think it should be a separate document, with this document focusing only on data elements and operations. Right now we want to keep the business description there as well; it currently sits in the constraints part of the draft. No one in the queue. **Gavin Brown:** Sorry — Maarten's put himself in the queue, I'm assuming to talk about your last slide. And I was going to say, Pawel, we've got more time in the schedule, so you can have a few more minutes. **Maarten Wullink:** Yeah, exactly, you have some time left at the end. Okay: I'm not opposed to putting business logic in the same document, but then maybe we should think about changing the title of the document so that it covers more than just data objects. **Pawel Kowalik:** Right. Okay, next: the associations, which essentially render as arrays, are currently defined as ordered arrays. This had a few reasons: by envisioning that this will be a JSON representation of the data, this definition fits how JSON defines arrays — the elements are considered ordered — and it may also allow us to use JSON pointers to array elements. We don't have a use for that yet, but it may turn out to be useful at some point. This is, however, a constraint that EPP does not define. For example, the host object element within the NS element is not defined to appear in any particular order, so it can be in any or random order. On the other hand, it is probably not a big burden, even if you are proxying from RPP to EPP or the other way around, because you just have to impose some defined order on the list and keep it stable. So my proposal would be to keep it this way — an ordered array — unless someone sees a real operational issue with that. Okay, no one in the queue. One of the last items is the localization of contact data. The EPP model defines that contact data may have two elements: one for the internationalized address, which is considered to always be an all-ASCII version of the address, and the localized form, which can contain a non-ASCII character set. We were asking: should this all-ASCII limitation remain? Is it still a good definition for an internationalized address to be all ASCII — which is a very Western look at the thing — or should we take a universal-acceptance-first approach and say we have one definition with the whole Unicode set, or the subset of Unicode which is accepted, and remove this limitation, probably with a compatibility break with EPP?
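*For reference, a sketch of the EPP-style dual-address model being questioned, rendered in hypothetical JSON: an all-ASCII "internationalized" form alongside a localized form in the native script:*

```json
{
  "postalInfo": [
    { "type": "loc",
      "name": "李四",
      "address": { "city": "北京", "countryCode": "CN" } },
    { "type": "int",
      "name": "Li Si",
      "address": { "city": "Beijing", "countryCode": "CN" } }
  ]
}
```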
**Pawel Kowalik:** Stefan, go ahead. **Stefan Botzmaier:** It's good to have both, because it allows you to handle all the cases, but I disagree with the terminology: calling something "internationalized" which is only for English is not a good idea. **Gavin Brown:** I put myself in the queue just to answer that question. It's not anyone's fault; it's a legacy thing from EPP. We can rationalize that in the document. But one thing I would say — again with no hat on — is that there is a reason why EPP supports two different types of address information. If you're in a country that doesn't use the Latin script, you can't necessarily write a letter to someone in your country using an address in Latin script; and if you're outside that country, you can't necessarily get a letter to them using an address that's not in Latin script. So there is a need for both, and an EPP server which supports both at the moment but wants to move to RPP would have to make a choice. They might lose data that they later find out they need when they switch to RPP, if it no longer supports both types of address. **Pawel Kowalik:** Mm-hm. Do we still have people in the queue? **Gavin Brown:** We have Werner. **Werner Staub:** Werner Staub from CORE. I would say the same as Gavin, but add something: the all-ASCII limitation on the internationalized version is probably obsolete nowadays — everybody can display Unicode. And for some renderings in Latin script, it may be good to be able to add accents, to make clear what kind of character it is. But it is quite useful to have the ability — even though few people have used it so far — to write something in Chinese script and, at the same time, write what that is supposed to be in Latin script. **Gavin Brown:** Okay. We've cleared the queue, and we are now running a bit behind time. So... **Pawel Kowalik:** Yeah. This is just the last slide, saying we have a few elements still to work on in this draft, so it's still under development. Reviews are welcome — as I said, we already have a good portion of the scope covered — and any help with any of the points above is welcome. And the question mark is: is this document already at the stage where the working group would consider adoption? If people say yes, we could ask the chairs for an adoption call. **Gavin Brown:** Okay, no more time for further questions, so we'll move on to the next item, which is Maarten. Thank you, Pawel. **Pawel Kowalik:** Thank you. **Gavin Brown:** I think you need to stand here, actually. Oh, yeah. There. How cozy. Hold on a second — oh yeah, there's an X here. It was quite dangerous to walk on stage there. **Maarten Wullink:** Okay, let's see if I can cram all the content I have into the last 30 minutes, because Pawel took his time — which is good; interesting discussions. This is about the core document [draft-ietf-rpp-core], covering the update from version 03 to 05. We — or rather Pawel — published version 05 this morning, because we had an unfortunate merging issue in version 04 where something was dropped that wasn't supposed to be dropped. So we published two versions between the last meeting and this one. This update is about the bootstrapping mechanism, discoverability, how to version different elements, the RPP profiles that we defined, and potential IANA registrations for RPP in the core document. Bootstrapping is the process a client can follow to locate the RPP discovery endpoint; the discovery endpoint is the URL on a server where the discovery document is located.
We briefly mentioned this in the last presentation, when Jim was asking about caching — why we want to use caching — and the discovery document is a particular example of something you could cache. We defined two options for locating the RPP server. One is to create an IANA registry for RPP servers, where you would register your server and link it to one or more TLDs that the server supports, register the URL for the well-known discovery document endpoint, and include a brief description. With that, anybody could look up the list of registrations in the IANA database and figure out which endpoints they need to use for the TLDs they want to connect to. Of course, you would probably already know the URL anyway, because there's also a separate onboarding process when you connect to a registry, so it's not the only place where you can find it. The second mechanism is to use the DNS, with a service (SRV) record. We could define a special name — _rpp._tcp under whatever zone — where an SRV record points to the hostname where you can find the well-known endpoint for the discovery document. The nice thing about the SRV record is that you could have multiple endpoints — multiple hostnames — with a priority for each one; it's similar to how email servers work. You could also say "I definitely do not support RPP at this name" by using a single dot as the target.
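*A zone-file sketch of the SRV idea just described; all names below are placeholders:*

```dns
; Hypothetical bootstrap records: priority, weight, port, target host.
; Lower priority is preferred, similar to MX records for mail.
_rpp._tcp.example.   IN SRV 0  0 443 rpp1.registry.example.
_rpp._tcp.example.   IN SRV 10 0 443 rpp2.registry.example.

; "I definitely do not support RPP at this name": a single record with
; target "." (the convention from RFC 2782).
_rpp._tcp.example2.  IN SRV 0 0 0 .
```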
**Maarten Wullink:** And then the discovery part: now I know where the server with the discovery document is, and I need to know the capabilities of the server. The client uses the hostname to construct a URL for the well-known endpoint, which lives in the well-known directory — a standard location; I'm not sure which RFC specifies it, but it's specified somewhere. This discovery document then includes information that's useful or even required by the client to be able to function, such as: the base URL of the RPP server; the RPP version in use; the TLDs that are supported; the extensions, and their versions, available on this server; the profiles that are available; the object types that are supported — are all the objects the client expects available, for instance hosts, domains and contacts, or are there even additional objects; and the authentication type, so the server can let the client know how it should authenticate. There is also a list of available endpoints for specific processes. It's an open question whether we need the endpoints: if they're already documented in the core RFC, then they're probably fixed already, or they use URL templates, and the client might not need them in the discovery document — so we might choose not to include them. I forgot one: the server maintenance notices, which the server can use to let clients know that planned maintenance is coming up — a kind of free text for the server operator to announce upcoming events. There is an example of a discovery document on the slide; it's probably too small to read in the room, but if you have it on your laptop, you can probably see it. It's basically a big JSON document with all the elements and properties I just discussed. It's an early example, so it will probably change: maybe we need to remove the endpoints, as I mentioned, and add other interesting, useful or even required information for the client. So we need help from the working group to determine what the client actually needs in this document. In summary, the process would be like this: the client optionally bootstraps from IANA or the DNS — optionally, because if the client already knows the discovery endpoint, it doesn't need the bootstrap. It can then discover the capabilities of the server using the well-known endpoint. This is also optional, because if you already have the document as a client, you don't need to fetch it every time; as Pawel briefly mentioned, there is some interval you can use, or maybe the server can at some point indicate to the client that the discovery document changed and it should fetch a new version. And then the client extracts from the discovery document the base URLs and endpoint URLs that it needs to send actual provisioning operations to the server.
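*The example slide is not reproduced in the transcript; a hypothetical discovery document covering the elements listed above — all property names invented — might look like this:*

```json
{
  "version": "1.0",
  "baseUrl": "https://rpp.example.net/rpp/v1/",
  "tlds": [ "example", "example2" ],
  "objects": [ "domains", "hosts", "contacts" ],
  "extensions": [ { "id": "fee", "versions": [ "1.0" ] } ],
  "profiles": [ "rpp-base", "registry-custom" ],
  "authentication": [ "oauth2" ],
  "notices": [
    { "type": "maintenance",
      "start": "2026-04-01T02:00:00Z",
      "end": "2026-04-01T04:00:00Z",
      "text": "Planned maintenance window." }
  ]
}
```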
**Gavin Brown:** We have a couple of people in the queue. Jim's up first. **Jim Gould:** Thanks, Gavin. Is it this Jim or one of the other Jims? **Gavin Brown:** The Jim that's in the queue, which I assume is you. **Jim Gould:** Okay, thank you. Maarten, this is fascinating work; I think this is a great idea. I have one or two concerns or reservations about it, though: to what extent do you think registrars are going to be interested in doing all this discovery to find out what kind of features and facilities are on offer from a particular RPP server? My suspicion is that a lot of them are going to do the barest minimum, and maybe they won't be all that interested in all these other fancy bells and whistles you're offering here. Another part of this is: how far can this go, say in an ICANN setting, in getting embedded into registrar and registry contracts? I realize this is much, much further down the line, but to what extent are we going to need a carrot-or-stick approach to getting this stuff fully adopted and supported? **Maarten Wullink:** When you say "this stuff", do you mean RPP, or what? **Jim Gould:** All these discovery features you were describing, about what facilities the server offers. **Maarten Wullink:** Okay. So, the discovery document contains the minimum the client needs to know to be able to do something useful. There may be other things in there that are also useful but not required for the client to be functional. And I'm not sure we need a way to force registrars to use it; I think we need a way to make it useful for registrars, so it's in their best interest to use it. Otherwise, as you say, they probably won't. **Jim Gould:** Exactly. My concern is just that if we have this overarching document with all sorts of fancy bells and whistles and features, we might find that almost none of it gets much interest from the customer base. **Maarten Wullink:** Yeah, I'm not sure I totally agree with "all fancy features" — I think most of them are actually very useful and required by the client. **Jim Gould:** I agree they're useful, but they seem a bit fancy. **Maarten Wullink:** Well, that's the way we like them. No, just kidding. But I'm not sure about the whole ICANN carrot-and-stick approach; I think we need more input here from registrars. **Jim Gould:** Yeah, that would be very helpful. Thank you. **Maarten Wullink:** So if there are any registrars here or online, please reach out. **Gavin Brown:** And then we have Stefan in the queue. **Stefan Botzmaier:** About the bootstrapping: I don't like the idea of an IANA registry, because registries are not only the TLDs in IANA's list of top-level domains. But there is a more general reason: I suggest dropping the bootstrapping altogether, because a registry has only a limited set of RPP clients, and you have a relationship with them — typically a business relationship with contracts — so manual configuration is okay for me. **Maarten Wullink:** Yeah, that's also a possible solution: not to do bootstrapping. If we say bootstrapping is nice, but you already have an onboarding process with the registry and already get the name of the server, then this extra bootstrapping process is not particularly useful. That can be an outcome, and then we just remove it from the document, if that's the consensus. **Stefan Botzmaier:** And one remark about discoverability: in the example you gave, there is an announcement of a maintenance at the end. I don't think it's a good idea to put very temporary information in this document; it should go into the message queue, because RPP has a messaging system, and the discovery document should be for things that are more or less static. **Maarten Wullink:** Yeah, the discovery document is more static. But the advantage of having maintenance notices here is that you have them in a single location and don't have to put a message on the queue for every registrar or client. But that could also be a solution, so that's something we can discuss. Thank you. **Gavin Brown:** And Jim — Jim Gould. **Jim Gould:** Yeah, I agree with the lack of a need for bootstrapping; there's a contractual relationship between the registrars and the registry. Related to discoverability, my recommendation is to keep it high level — to be able to negotiate extensions — and leave the discoverability richness to an extension itself. An example is the registry mapping we attempted to do in EPP. I lived that for a couple of years, and it's exceedingly difficult to do; one example was a registry that wanted to add indications that they didn't follow the RFCs. I think it's a rabbit hole that you're going to get distracted by. Thanks. **Maarten Wullink:** Thanks. **Gavin Brown:** We have three minutes left. **Maarten Wullink:** That's never going to work. Okay, versioning: there are a bunch of elements you can use versions for, such as the extensions and the profiles. I'll skip this and go to the more important part.
So, profiles. This brings us to a point Jim also mentioned: is there a way for a client to simply say to a server, "I want to use this set of configurations"? You can have a combination of extensions, and versions of extensions, and it would be a hassle for a client to have to send a lot of parameters. So we decided to define something called a profile, which is just a name for a set of protocol features and versions that the client can send to the server as a single string. Then it is clear to both server and client which set of features the client would like to use. Profiles can inherit from a base profile: RPP could provide a base profile, and a server implementor — if you're a registry — can create its own profile that inherits from the base profile, overriding or adding to what's in it. And here is an important question I still have: the client needs to signal which profile it wants to use. We defined two possible options. You can use an HTTP header — an RPP-Profile header that says "this profile, this version". Or you can use a media type parameter in the Accept and Content-Type headers of the HTTP request, which uses the media type's parameters to say the same thing. We're not sure which one is best, so we need input here from the working group. And this is an overview of the IANA requests we currently have in the draft: a discovery registry, an extension registry, a profile registry, and the result codes. We were also thinking we might want a URN namespace for things like IDs for extensions or profiles. We're not sure — we're still discussing this among ourselves — so it would be nice to get some feedback on this. **Gavin Brown:** Pawel has put himself into the queue, and I think we actually do have time for him if he wants to say something. **Pawel Kowalik:** Yeah, on the URN issue: URNs look kind of nice as such, but I'm really questioning whether we need URNs to identify the profiles or extensions, because they will make all the requests very long with no information content — all those URN prefixes with every request, for every extension and every profile that is requested and used to construct the response. So I would opt for a simple naming scheme, because these names have no meaning outside the context of RPP. **Maarten Wullink:** Yeah, I agree. If we cannot find a good reason to use URNs, then we should not use them. So, as I mentioned, we need input on which is the best way to signal profiles. The next step is: the other documents — the JSON document and data objects — are a little bit ahead of the core document already, so we need to sync with those, and then we might go back to the working group and ask whether we want to adopt this core document.
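*The two signaling options described above, side by side. The RPP-Profile header name comes from the discussion; the media type and profile name are invented for illustration:*

```http
# Option 1: a dedicated request header naming profile and version.
GET /domains/example.com HTTP/1.1
Host: rpp.example.net
RPP-Profile: registry-custom-1.2

# Option 2: a media type parameter on the standard negotiation headers.
GET /domains/example.com HTTP/1.1
Host: rpp.example.net
Accept: application/rpp+json; profile="registry-custom-1.2"
```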
**Gavin Brown:** Stefan, you have a comment? **Stefan Botzmaier:** About the way to signal the profile: profiles are not just about the body of the request and response; they can specify many other things. So the media type parameter is not a good idea, which leaves the HTTP header. **Maarten Wullink:** Okay, thanks. I'm done — it says 18 slides, and it says thank you. So, thank you, Stefan. **Gavin Brown:** Okay, can we squeeze the next one in — 11 minutes, 12 minutes? **Maarten Wullink:** I can do a speed presentation. And Pawel stole all my time. **Gavin Brown:** No, I don't blame Pawel, I blame the chairs. **Maarten Wullink:** Okay, so this next one is an update on the JSON document [draft-ietf-rpp-json]. If you remember the architecture diagram that Pawel showed, where there is this mapping component in the architecture: this is the mapping from the data components — the data objects — to JSON. This document previously used the EPP XML as its source for converting to JSON and JSON Schema. We updated it to use the data objects, now that we have this nice document. So we used the data objects, created a set of rules, and from those rules derived the set of JSON Schemas and JSON examples that are in this document. In short: the input is the data objects, then a bunch of rules, and the outcome is JSON and JSON Schema. I'll skip this. We chose to use JSON Schema even though it's not an IETF standard, although there is now a working group that's working on it — so who knows. The rules in this draft create schemas for the shared common objects, but also for high-level resources such as domains and hosts. But it's not always possible to capture everything in JSON Schema: there are situations where properties might be required or not, depending on the state of the process you're in. For RGP, for example, a date is sometimes supposed to be absent from the output when you're in a particular state of the process — report required or not required, report received or not received — and then certain dates are either returned or not. That's very difficult to capture in JSON Schema, so we're still thinking about the best way to do this. It might be that, even with the JSON Schema, you need some additional validation later on in your implementation. On the other hand, JSON Schema will nicely allow us to enforce strict validation for JSON. This is the mapping of the types in the data objects to JSON; I'll skip this, it's pretty simple.
The data objects specify cardinality: for a property, you can have exactly one value, zero or one value, and so on, and depending on the cardinality in the data objects, the generated JSON and JSON Schema will be different. For the first case — exactly one — this renders into a schema with a required property: by default, JSON properties are all optional, but because you need exactly one, the required keyword from JSON Schema is used. I'm not going through all of them, because we don't have that much time. Then there are the cases where the cardinality gets higher: the last one, for instance, is one-plus — you need at least one item or more — and this turns into a required JSON array with minItems set to one. This is pretty nice; you can capture all of this quite easily in JSON Schema. Then we have associations, which are basically the relationships between objects. There are two types. One is aggregation, where an object refers to one or more other objects, and all of these objects have an independent lifecycle — they don't depend on each other per se. Composition is different: the relationship is more like parent-child, where the child cannot exist without the parent, so the child is completely included in the parent's JSON. Then we have labeled associations. These are converted to JSON as a list with a fixed label property that describes the actual object type being referred to, and an object key property that contains the actual object. Here you can have multiple objects with the same label — for example, if these were contacts, you could have one or more tech contacts if you'd like. And there's also the dictionary, which is similar to the previous example except that the names are unique; in this case you cannot have more than one techC. Objects are the base for everything in the data objects catalog, and these are converted to a plain JSON object where we included a JSON-LD-style @type field. We're not totally sure this is actually useful; we kept it in for now, so feedback would be nice. We think it might be helpful when the schema cannot validate everything, or when you want to use other ways of validating. I'm skipping this one. The data objects draft discusses two different kinds of objects. There are the shared objects — objects that are reused by other objects, basic things such as a period, a status or postal info — and these are defined using the $defs keyword of JSON Schema. This is an example for the period object type. I'm sorry, I have to go a little bit quickly through all of this now.
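*A sketch tying these pieces together — cardinality rendered with the required and minItems keywords ("name" is exactly-one, "period" is zero-or-one, "ns" is one-plus), and a shared "period" object defined under $defs and reused via $ref, as in the examples being described. The property names are hypothetical:*

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "name":   { "type": "string" },
    "period": { "$ref": "#/$defs/period" },
    "ns":     { "type": "array", "items": { "type": "string" },
                "minItems": 1 }
  },
  "required": [ "name", "ns" ],
  "$defs": {
    "period": {
      "type": "object",
      "properties": { "unit": { "enum": [ "y", "m" ] },
                      "value": { "type": "integer" } },
      "required": [ "unit", "value" ]
    }
  }
}
```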
And here you can see a high-level resource object. The resource objects are the actual things you're interested in, the objects you want to provision, for example a domain name. This is an example of a domain create schema, and you can see that for multiple fields we're using references to shared components. You say the contacts are an array of contact objects, and the same for the name servers or the period: it all refers to the shared component. This is nice, you can reuse a lot of stuff. And when you render this to actual JSON you get something like the instance shown after this presentation, and if you receive such a document you can validate it against the previous JSON Schema to see whether it's correct or not, quite similar to what you would do with XML and XML Schema. I have a few questions. We mentioned profiles in the previous presentation; they're interesting, but they also make things a little more challenging if you want to validate. For instance, you can have additional material in your profile: you can extend the profile, and a server could put custom fields or whatever in there. So that's going to be a challenge: how are we going to validate this? Does the server need to create a custom JSON Schema, or do implementations need to do additional checks? We're not sure there. Another one is about contact data. We now have the more or less EPP-ish style of contact data, but it would maybe also be nice to see if we can fit in jCard somehow, because then we would be more in line with what's being done in RDAP. It would be a pain for developers to have to use a different format for putting data in than the one they see when they get it back out through RDAP. So if you have any comments or feedback on this, please let us know. For next steps, this document needs to sync up with the other documents, especially the data objects draft, where we already have things like dedicated process objects that are not yet converted or included in the JSON document; that will be in the next version. And then we might also go back to the working group and see if we can adopt this document as well. Ah. Done. On time.
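The rendered instance Maarten refers to is only on the slides, but a domain create request body produced by these rules might look roughly like the following. Every property name here is an illustrative assumption, shaped after EPP domain create semantics and the @type and label conventions described above, not text from the draft:

```json
{
  "@type": "domain",
  "name": "example.org",
  "period": { "unit": "y", "value": 2 },
  "ns": ["ns1.example.net", "ns2.example.net"],
  "contacts": [
    { "label": "registrant", "value": "sh8013" },
    { "label": "tech", "value": "sh8014" }
  ]
}
```

A receiving server would validate this against the domain create schema, with `period` resolved through the shared `$defs` entry sketched earlier; anything the schema cannot express, such as the state-dependent RGP dates, would still need a separate validation step.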
**Gavin Brown:** Thank you, Maarten. So we have just under three minutes left, if anyone has any general comments they want to make on what's been discussed today. Pawel, I see your hand up. **Pawel Kowalik:** Mm-hm. Yeah, this is a general comment about this last draft, because it said a lot about the JSON Schemas, which are a very useful facility. But the draft itself doesn't require the use of JSON Schema, because it defines the rules and the structures. The working group has to decide whether normative usage of JSON Schema is something we want to do from a process perspective, because for that we would probably need to wait for JSON Schema to be standardized by the IETF at some point, which the IETF has not really been successful at for the last several years. So it may be a hard dependency that we don't want to have. **Gavin Brown:** Okay. Anything else? Oh, Werner. **Werner Staub:** Yeah, just one thing we might want to think about now. The experience with RDAP has shown that it is quite painful not to have a recommended order of object elements. Everywhere we look, everyone puts things somewhere different, which puts enormous stress on the users. So I wonder if the standard could contain a recommended ordering of object elements, written down somewhere. We might also hope that JSON Schema would do something about it, but in XML we had the comfort of a certain order of things, and now it could be anywhere. So if we want to avoid those problems, now would be the time to consider a solution. It might also help if some of this material were to be signed; then you would have to have a recommended order anyway. **Maarten Wullink:** Thanks for the comment. I'm not sure about a recommended order, because it's basically a machine-to-machine protocol. I know developers look at it, but if it's not causing any additional problems then we could do it, I don't mind; if it causes any problems with validation or whatever, then I would not be a big supporter. Thanks. **Gavin Brown:** Andy. You've got minus nine seconds. **Andy Newton:** Okay, minus nine seconds. Andy Newton, as an individual. Werner, were you talking about ordering of the contacts? Is that the issue? **Werner Staub:** No, ordering of the keys. **Andy Newton:** The keys? My general comment on that would be that RDAP specifically did not do that, because had you done that, you would have been limiting the use cases available for it. I think what you want in that case is a profile, which is what you do see in RDAP: if you need a specific ordering, that's what the profile says to do. Otherwise you're limiting the protocol. **Gavin Brown:** Okay. So we're out of time now. Thank you everyone for your contributions. I just wanted to say thank you and apologies to Stefan Botzmaier; I had asked him if he wouldn't mind reporting on the hackathon, and we didn't have time for that. So if you have prepared something, please send it to the mailing list so we can have a look at it there. Otherwise, good work, and keep it up, going to the hackathon and working on that stuff. So thank you everyone, see you on the mailing list.