Session Date/Time: 16 Mar 2026 01:00
Shuping: Okay, so it’s time. Let’s start. Welcome to this session, the ART/SEC Joint Dispatch session. I am one of the co-chairs, Shuping, and joining me are my other co-chairs, Jim and Rifaat, who are remote this time. Please keep in mind that this session is being recorded.
Okay, so this is the Note Well. If this is your first IETF, please do read it carefully, because it specifies the IETF processes and policies that you need to follow when participating in the IETF. Please behave in a professional manner and be aware of the IPR disclosure issues. You also need to follow the standards process and the working group guidelines and procedures, and be aware of the IETF privacy statement. But you are in good hands. If you have any questions, please talk with the chairs and also our ADs.
And some meeting tips, because this is the first session of the IETF meeting. Please scan the QR code or sign in to the session via the Datatracker. If you want to join the queue, please use the onsite tool. This time we also have some remote participants, so please make sure to mute yourself unless you are speaking, and please use your headset. And if you want to make a comment, please state your name clearly.
These are the resources. Okay, so this is the most important part: this is the Dispatch session, and we need to focus on answering the dispatch questions. Here are a few options. Should we direct the work to an existing working group, if one exists? Should we propose a new one? I'd recommend the options above. Or this draft could be published as AD-sponsored, if we can find an AD who is willing to do so. Or more discussion or development is needed. Or the work is not appropriate for the IETF. Because today we have a full agenda, if you want to comment, please first state clearly which dispatch outcome you want to suggest. Thank you very much.
Okay, so here is our final agenda, after a few updates. We have been tracking each request, and we have ten of them. Each one gets ten minutes: five for the presenter, five for the community. We have a slides template for the presenters to use, so please get straight to the point. Does anybody want to adjust the agenda? No? Then we can start now. Our first speaker is Christian.
Christian Grothoff: Okay, thank you. So I am presenting the Donau scheme. It’s a very small URI scheme. Next slide, please. Basically, the idea is: you make donations to some charity, and you want to deduct them from your taxes, which in many countries is possible if the charity is recognized by the respective authority. We don't like filing taxes on paper, so we could have a QR code to reduce the paperwork. The URI scheme we built basically includes the amount you donated, when you donated, your taxpayer ID, and of course a cryptographic signature by the respective authority showing that you made the donation.
Next. The approach is very simple: you include this in your tax filing, and then the tax auditor can use their smartphone app to check it. Now you might say, okay, that's extremely simple. Yes, it is quite simple. Next. But we had some good initial input from Ted Hardie on the IETF URI review list. The draft also includes an OCR-friendly Base32 encoding. The idea is that if you print out those QR codes, somebody might end up manually typing the URI, and that should not cause confusion between 0 and O, or V and U. The same encoding is already used in RFC 9498. And in the review process, somebody suggested this might be separable if desired.
Now to give some larger context: we’re not just doing these donation statements—next slide—but we are providing a way to do anonymous donations, where charities have to be approved to sign the donation statements, but the total amount received per charity is transparent to the authority, so the charities can’t just make up donation statements where they didn't actually receive a donation. We built the whole thing as free software, so that we can have donations where the state knows that I donated, but doesn’t know to which charity. This can be important if the charity is, for example, a medical charity, and I might have the respective disease or underlying condition and don't want to disclose that information.
And this is again part of a larger effort where we’re also enabling anonymous digital cash payments overall. So not only are your donation statements private, but also how you paid for the donation in the first place. There again, it's a much larger protocol with change, refunds, age restrictions and so on. And we already have another RFC, the Pay-to scheme, that came out of that larger effort.
And so basically we’re asking—now, asking—Yeah?
Shuping: Christian?
Christian Grothoff: Yeah?
Shuping: We lost a great deal of your presentation. Meetecho was messed up, so we need you to back up a little bit, probably start over. Sorry about that.
Christian Grothoff: Okay. Well, you want to go back?
Shuping: Yes, please.
Christian Grothoff: Yeah, in the slides. Then you should go back to, I guess, the second slide? Okay, let’s try again. Sorry about that.
So, the idea is you want to file your taxes. You made some donation to some charity; that’s a tax-deductible event. We want to avoid filling out lots of paperwork. So the idea is that you instead have a QR code that you can put with the tax records you file, and it would include how much you donated, when you donated, your taxpayer ID, and of course a cryptographic signature proving that you made the donation. Next.
The flow is then pretty simple. Next slide, please. You have received this URI somehow, somewhere. You can put it into a QR code or transmit it by other means, like as a URL, and just send it to your tax auditor. They can scan it, validate it with their smartphone app, and see that this taxpayer donated this amount in that year. Next.
So—next slide, please—we had an initial review on the URI review mailing list and got some good input, which we have all addressed. The draft also includes an OCR-friendly Base32 encoding. It's inspired by Crockford but allows O=0 and U=V. The idea is that you don't want problems when you read a URI and can't recognize the characters, such that things like cryptographic signature checks then go wrong. It's also used by another RFC, and it was suggested that this might be a separable thing if people are interested in an OCR-friendly Base32 encoding. I know we already have lots of Base32 encodings, but none that are OCR-friendly yet.
Next. So, this overall thing is part of a larger effort. We have also built a protocol enabling anonymous donations: charities that were approved by the authority can create the respective signatures. The total amount you donated is then disclosed to the tax authority, but not who you donated to. On the other hand, the charities can be held accountable for how many donation statements they have created, so they can't just make those up. We have free software implementations for all of this. And this is again part of an even larger effort to also allow anonymous payments, for which we already built another RFC.
Next. Basically our question now is: do we leave it as a draft where we document the scheme and that’s it? Is it worth sending this to the informational track at the ISE, with or without separation of the appendix? And if the appendix is separated, is anybody else interested in it? Or, my favorite option, do we create a payments BOF and possibly charter a working group for payment protocols? I think that's something that should be of broader interest, and this might just be a first small contribution in that direction. That's it. Happy to receive comments.
Shuping: Okay, so Mark first. Please state your dispatch outcome suggestion first.
Mark Nottingham: Oh, lovely. Mark Nottingham. From a dispatch standpoint, I think you should go and chat with the folks at the W3C. They have lots of stuff going on about payments. From a technical perspective (sorry to the people in the W3C), I do not understand why this is a URI scheme. This seems like it would be much more naturally expressed as a format. I understand that sometimes people try to shove things into URI schemes so they can hook into browsers without actually getting browsers to change, but this seems like a really bad path to go down, to start shoving formats into URI schemes just to do that. Thank you.
Eric Rescorla (ekr): Yeah, I want to agree with Mark. This does not belong here. It seems to be part of some larger effort, which we decided not to do, and we should not do this either.
Bron Gondwana: I was going to say that the Base32 format might be something the IETF should take on independently of this, if it's useful for other things. Separate that out. I agree about the rest of it...
Ted Hardie: Ted Hardie. Thanks very much for bringing the work to the attention of this larger group. During the URI review, there were actually comments very similar to Mark's, that this probably didn't need a URI scheme since it was simply HTTPS access to a REST API. So the bar for provisional registration has been met and it has been registered, but I don't think any additional work on the URI side of this is going to push it toward full registration. And I think the rest of the work, unless there's separable work as Bron points out, is beyond the scope of the work we’re currently doing here at the IETF. In particular, I don't think anything in this would justify a payments BOF or a charter for a working group, and we’d need to see a very, very different presentation to consider that. Thank you.
Richard Barnes: Yeah, I’m a plus-one on dispatch to /dev/null. This seems like a fairly niche thing entailing a bunch of stuff that the IETF shouldn’t necessarily sign off on. And as Ted says, it’s not broadly used enough to justify all the effort of a BOF. So yeah, there’s no work to do here for the IETF.
Shuping: Sorry, it wasn't very clear. What is your suggestion for the dispatch outcome?
Richard Barnes: Suggestion is dispatch to nothing.
Shuping: Okay, thank you. So, our queue is clear, and it seems people think we don't need the URI scheme, no additional work is needed, and this is beyond the IETF scope; maybe talk with the W3C. Is that correct? Okay. So, our next presentation. Can we ask those who are remote to leave your microphone off unless you're actually talking?
Speaker 1: That’s not going to work, chairs, because we can only receive audio with mute off.
Shuping: Regrettably, you’re going to get a bunch of noise until Meetecho gets things fixed. Okay, so Yusuf. Yusuf, are you online? Okay, let's move on. Mauro, are you online? Okay, it seems we fixed the audio issue. People online, you can hear me? Please also mute yourselves. Mauro, are you there? Chairs, what are we doing with the last presentation? None of us could hear it. Yes, it seems people just lost connection. We are waiting for them to reconnect. The next one? Peter. Wow, cool. Welcome to our first onsite presenter.
Yaroslav: Hello everybody. Can you hear me? Great, that's a good start. My name is Yaroslav, and I am bringing to your attention an AI Agent Authentication and Authorization framework, an informational draft proposal that we've put together with a group of co-authors.
So the motivation for this work: agentic AI usage is exploding, and we need guardrails as soon as possible, especially when it comes to identity, authentication, authorization, audit and all the other lovely things. There is a tendency to invent new AI protocols just because AI is new and trendy, and we believe that we should instead be leveraging several decades of learnings from existing standards in identity, authentication and authorization. These topics are very treacherous; it’s very easy to make non-obvious mistakes that are then very, very hard to fix.
So instead of inventing new AI protocols, we explore how existing frameworks can apply to agentic AI workloads. There is already a wide spectrum of standards involved, and the vast majority of them actually live at the IETF, such as OAuth and WIMSE. During this work we may locate missing pieces in existing standards and then motivate the relevant working groups to rectify that and build those missing pieces.
A few things on the background. First of all, we think that agentic AI nicely presents itself as a workload, so we can treat it as one. It communicates with users and systems, with large language models, with all sorts of tools, services and resources, using existing, new and upcoming protocols, but at the end of the day, agentic AI agents are workloads. Now, unlike traditional workloads, which are pretty well defined and supposed to behave in a predictable fashion (such as your database: you send a request and for the most part you get responses), agentic AI is a little more chaotic. You don't know up front what it is going to do, and that's kind of the point of agentic AI. But in terms of how you reason about its identifiers and how you authenticate and authorize it, it projects very nicely onto a workload.
So we've put together this proposal, where we present this identity concept as a number of layers. We propose a framework where everything starts with identifiers; then we have credentials that can be used to cryptographically prove that you hold an identifier. You have attestation; you need a way to provision those credentials, a way to authenticate yourself, and a way to prove that you are authorized to do something. And of course, a human in the loop is supposed to play an important role here. On top of that we have monitoring and observability, so when something interesting happens with your agentic AI, we have an audit trail for it.
As I've mentioned, the key building blocks already exist at the IETF. They might not have agentic AI in their working group charters, or even spelled out in the relevant drafts, but again, they project onto agentic AI nicely. So we propose to use WIMSE identifiers for agentic AI identity, WIMSE credentials and authentication for agentic AI credentials and authentication, and the wide, lovely spectrum of OAuth standards for authorization.
Speaker 1 (Community): There are lots of problems here, and you say you want to define many, many aspects at the IETF. So my first question is: how do you see the IETF's role, and your role, and how do you coordinate with others, like the open source community, on the security part, especially on authentication and authorization? Many other open source communities and standardization groups are doing similar things; for example, the W3C defined DID. I think the IETF does not usually concern itself with what other SDOs define, but is there any way you can offer a choice, to make it easier for people, like vertical industries, to pick an appropriate security protocol focused on authentication and authorization? In what way do you think we should coordinate with the larger scope outside the IETF? That's the first question.
The second one is: how do you see the work streams within the IETF? Do you want to focus on authentication and authorization first? Many other security problems also exist, for example DNS-based security problems. And because today is also SEC Dispatch: how do you prioritize this work? So, the first question is about the IETF's role with other SDOs, and the second is about work streams within the IETF. Thank you.
Yaroslav: Okay, that's a lot to unpack. First, I don't think we define that in this draft. Well, in general, the IETF does not force people to use certain standards. Following IETF standards is always optional; you can choose to do so or not, and we don't certify implementations. This framework document suggests using what could be seen as best practices, according to decades of experience in authentication and authorization. If you believe there is something better that we should be recommending instead of what's currently in the framework, please reach out to us and we’d be happy to adjust the text if required.
Hopefully that answers the first question. When it comes to priorities: again, we’re not defining all those mechanisms; we’re pointing to where existing mechanisms are already in place, and it's down to the individual working groups to keep working on them. Hopefully that answers the question.
Aaron Parecki: Hi, Aaron Parecki. I think there's some misunderstanding about what's going on in this draft exactly. I appreciate the need for guidance on how to use the collective set of specs in a lot of these IETF working groups. I think that didn't come through in your presentation quite right, based on the chat I'm seeing in Meetecho. So just to clarify: this seems to be more like showing the menu of options available from OAuth, WIMSE and others as they relate to people who are working on things they call AI. And I agree there's a need for that. I'm not sure it fits neatly in OAuth or in WIMSE, because it spans across both pretty broadly. But again, it's not creating a new protocol, so it's not necessarily a new working group either. So I'm stuck on the dispatch question, but I did want to clarify the scope of what's in this and say that I think there is a need for it.
Brian Campbell: Brian Campbell, co-author on this document. I was going to stand up there with you, Yaroslav, but I was intimidated by your striking good looks, so I stayed down in the audience. I actually came to say much of what Aaron just said. I'm not entirely sure about the dispatch question here either, but we do feel there's a real need for this work: both as a conceptual framework to guide people toward existing work that already solves problems people are trying to address today, and as a conceptual framework to help unearth and identify areas where the existing building blocks might not be fully sufficient. Like Aaron, I don't feel it fits neatly into the existing working groups. But I am concerned that a new BOF/working-group formation process might be too heavyweight and take too long, and I don't know quite what to do with that. It's definitely not meant to invent new protocols itself; it's meant to be a map of existing structures, pointing to them. I hope that helps people think about the dispatch question. Thank you.
Richard Barnes: Yeah, so I missed the entire presentation and most of the Q&A, given the audio issues for remote participants. So I'm going to suggest that the dispatch decision is no decision here, because it hasn't gotten proper discussion with the full community. That said, based on the written materials, like, I think it would be okay—
Chair: Obviously we’re having a lot of technical difficulties, and we’re going to have to figure out how to get past this, maybe with another session. That was Eric, right? Oh, was it Richard? I think what he said was that we should not be making dispatch decisions, which sounds correct. So we can continue with the presentations. Maybe—I'm back now. I’m sorry. Yes, Richard, go ahead.
Richard Barnes: Yeah, no dispatch decision. I'll only say it briefly because I only have a second here: no dispatch decision, but if the ADs wanted to consider a BOF on a focused topic, maybe.
Chair: Let’s just continue with the next one. Any other questions for this presenter? Thank you.
Shuping: Mark.
Mark Nottingham: Mark is fine. Good morning. I'm going to speak about the registration policy for Well-Known URIs. Yet again, I'm doing this with my registry expert hat on, so I’ll try to speed through these slides.
So this is about the Well-Known URI registry, most commonly used for HTTP and HTTPS URIs. A while back, we reserved a special directory called .well-known for standards-based uses. This was when we realized that people liked the pattern that robots.txt set up, and we didn't want that kind of thing blotted all over people's web servers, so we gave it a little corral to put things in. The most recent iteration of that specification is RFC 8615.
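As a quick refresher on the mechanics being discussed, RFC 8615 forms a well-known URI by appending the fixed path prefix `/.well-known/` plus a registered suffix to an origin. A minimal sketch (using `security.txt`, a suffix registered via RFC 9116, purely as an example):

```python
# Sketch of how a well-known URI is formed per RFC 8615: the
# registered suffix lives under the fixed "/.well-known/" path
# prefix on a given origin. "security.txt" (RFC 9116) is used
# here purely as an example of a registered suffix.
from urllib.parse import urlsplit

WELL_KNOWN_PREFIX = "/.well-known/"

def well_known_uri(origin: str, suffix: str) -> str:
    """Build the well-known URI for a registered suffix on an origin."""
    parts = urlsplit(origin)
    if parts.scheme not in ("http", "https"):
        raise ValueError("this sketch assumes an http(s) origin")
    return f"{parts.scheme}://{parts.netloc}{WELL_KNOWN_PREFIX}{suffix}"

# e.g. well_known_uri("https://example.com", "security.txt")
#   -> "https://example.com/.well-known/security.txt"
```

The registry under discussion is the list of allowed suffixes; the path prefix itself is fixed by the RFC.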
In the IANA considerations section of that specification, it says that this is a Specification Required registry. And we put in this text; this was most of the focus of 8615. The experts' primary considerations in evaluating registration requests are: first, conformance to the requirements in Section 3, which we’ll get to in a second; second, the availability and stability of the specifying document; and third, basically, security considerations. This was a very purposeful change: we wanted to make sure that we opened up the registry and allowed it to be used for a lot of purposes, even if it was debatable whether they were really appropriate uses of Well-Known URIs.
Section 3, first of all, gives a syntactic requirement, which is fairly easy. It then says that registered names for a specific application should be correspondingly precise, and that squatting on generic terms is not encouraged; we’ll talk about that a bit more in a minute. And then, a registration will reference a specification.
So, this is a human-readable namespace, and some names are more desirable than others. For example, if you have some incredibly specific product name and you want to register it, that's not really going to be a problem. If you want to register something like, I don't know, "ai", then other people might want to use that name too. As I said, we explicitly reduced the restrictions in the latest revision of the RFC, and it is now almost a first-come, first-served registry: if you happen to be the one to grab a desirable term like "ai", by the current terms you get it.
We are seeing an increasing number of requests for those names. What has concerned me, and what brings me as expert to the community for some guidance, advice, and potentially further action, is whether that's a problem or not. We’re seeing these requests from folks who don't have much evidence of community discussion, much less buy-in from people like implementers. They're using references of questionable value. It used to be that when we said Specification Required, we knew what that meant. These days, people can spin up a GitHub repo and turn it into a website with almost no effort, and they use AI to help them, so it's even less effort. Then they buy a cheap domain name, and off to the races they go. All of this consumes these names, especially the desirable ones, and arguably, I would say, gives them a misleading legitimacy. The pattern we're starting to see is people saying, "Look, it’s registered. It’s a standard." And there's a question as to whether that's really a good way for the community to manage these resources.
To give you a couple of examples: we've had a recent registration request for a whole bunch of different entries, all related to AI: agents.txt, agents.json, agent.txt (singular), agent.json and ai.txt. That's all from one person, who has made other, unrelated requests as well. They're all specified in documents in a personal GitHub repository. At least one of them, ai.txt, can be seen as being in conflict with an active working group, and there's no sign of greater community engagement. I forget exactly which one of these it was, but for one of these examples, they went off and opened a bunch of issues in the GitHub repos of implementations that might conceivably implement them, and a lot of those issues just got closed or ignored. So there's no real sign of engagement around these things. A similar one is agent.json. It’s in an organization GitHub repo, which makes it look a little more community-based, but that repo looks like it only has one contributor, and it has a website hosted on GitHub Pages; so you see the pattern here. For another one, there was a request for just "ai". It’s a two-person GitHub repo this time, although they share a last name, so that’s kind of interesting. Oh sorry, technically it's a three-contributor GitHub organization, but one of the contributors is Claude. So again, no signs of greater community engagement, except that they've started talking to the registrant in the first example.
So yeah, to me there's a common thread here: confusion between registration and standardization, this idea that if it's registered, you get some sort of authority from that. It creates an incentive to register stuff first and figure out later whether you can actually get community support and buy-in, and so you get a lot of half-baked proposals coming across the threshold. The squatting concerns in the current RFC kind of hint at this, but as expert I'm a little uncomfortable making judgment calls quite that deep, especially since we were so explicit about opening up the registry the last time we modified this document.
So far we've had proposals on the mailing list (this was my proposal) to give the expert a little more latitude and a little more guidance around requests for common or attractive terms, so that they need to be more community-based. There are a couple of ways we could do that: it was proposed to prepend a random number to the registration, which I think creates a lot of problems, or to create a less restricted subspace, or to use provisional registrations in a different way. That's a whole discussion; I don't want to get too deep into the proposals here, because this is dispatch.
So my questions are: first of all, is this a problem? Should intuitive and attractive names be reserved for greater community buy-in? If so, is an update necessary, or do you think the current language in the document (I can scroll back if you want) is adequate to get us to that outcome? And if we do need to do work, what's the appropriate venue? I'd point out that the last time we revised this document it was AD sponsored, so yay, ADs! And then there are some aspects that are maybe interesting for IANA-bis here. I think that's all I've got. I don't know how I'm doing on time, but: questions, comments?
Speaker 2: Can the remote people still hear I wonder? I can’t see anything.
Speaker 3: I hope the remote folks can hear. I just wanted to come to the mic to say, for the record, and as some folks have mentioned in chat, that obviously we are experiencing some technical difficulties here. We'd like to proceed with the onsite presentations. We’re not going to make any concrete dispatch decisions at this time without the ability for the remote folks to participate, and the IESG is going to look into how to address this issue in the future. Thanks, Mark, for your presentation; I just wanted to say this at the mic for everybody here in the room and for the remote folks, who I hope can hear me. Thanks.
Mark Nottingham: By "in the future," do you mean for the rest of this meeting, hopefully?
Speaker 3: The IESG wants to address this for the future, and we'll try to address what's happening in this room now.
Mark Nottingham: Yeah.
Speaker 3: Yeah. But you can continue asking questions to the presenter.
Bron Gondwana: Hello. I agree it’s a problem. I think AD sponsored seems like a reasonable path for it. I agree with the idea that if it's not an IETF specification, it doesn't get the nice names. They don't have to be beautiful; they just need to be predictable and encodable into systems so that you can find things reliably. If it's domain.name for everyone who doesn't have an IETF spec, that works fine, and they can't squat on "ai" unless they get an RFC for it, and then it’ll only be one of those.
Ted Hardie: Ted Hardie speaking. I actually think you can push back given the guidance in the current document. I agree that it takes a little bit of fancy reading to do so, but until you get an actual appeal on the basis that you don't have that right, I would say that as designated expert you can certainly push back, and if it does go to an appeal, you might need to rev the document. If you do need to rev the document, AD sponsored seems fine to me. But I would actually start pushing back now, since the problem is now, rather than waiting whatever document cycle it takes us to rev the document.
Mark Nottingham: A question for you, Ted. Would it be appropriate, do you think, for me as expert to write down my thinking around this, how I think about it, and use that as a guide? Is that an appropriate thing to do?
Ted Hardie: I think there are two ways that happens. In the URI review process, it takes the form of a set of precedents, which are then referenced by people who wish to register other URI schemes that might touch on the same case law. That seems to work in the URI space, and there is a similar problem there with attractive strings. So I don't think it's necessary to write down generalized principles at this stage; but if you do get to the point of needing a rev of the document, that would definitely be when you do it, and it would go into the AD-sponsored document. But I would say: start pushing back now based on the existing language, and then consider whether you need a new document based on the community's reaction to the pushback.
Mark Nottingham: Okay. That's really helpful, and I'll definitely take that approach. In my mind (I’ll talk to the ADs), it may be that we don't deny a bunch of requests; we just put them on hold until there's more resolution, or give them some advice that they’re not going to get a registration anytime soon. Okay.
Tara Whalen: Tara Whalen. You are the expert; you should be using your expert powers to actually decide all of those things. That's why I think if you believe an assignment is a bad idea, you should just say so. And if they don't like it, they can always replace the expert. I'm in the same situation, because I'm the IANA expert for lots of registries, and if they don't like what I'm doing there, they can always go to the IESG and replace the expert.
Mark Nottingham: Right. So to be clear, you know, part of the reason I'm up here is to get the advice of the community because I've been this expert for a long time and the last time I exercised my powers as expert in a way that made sense to me, I got yelled at in front of a room very much like this. So I want to make sure the community's bought into the way that I'm running the registry. Um, and just to note, I think part of the problem with hearing the remote folks was I didn't hear them on the monitor up here.
Speaker 4: There's echo here too. I can’t hear it either.
Mark Nottingham: All right. Thanks everybody. And if you have more feedback or want to talk about this more, I'm around.
Shuping: Okay, so the next speaker is Henk. This is the last item on the agenda, but since we have the presenter onsite, you go first.
Henk Birkholz: Hi, I’m Henk. I'm presenting something about AI; I'm deeply sorry. Yesterday we tested the waters a little bit at HotRFC, so some of that feedback is already incorporated into this dispatch presentation. We are talking about how you interact with agents. There is content there, a typical conversation, and so we have this mouthful of a title at the moment: Verifiable Agent Conversation Record. And there are certain uses for it. Next slide, please. Oh, I can do the next slide myself? Oh, I can't. Okay.
So, yeah, we have a problem here. The agent tells you, at the end of its tasks, that it did something, even though agentic AI might be a huge mesh of nodes working on a problem. But did it really? So you have to be able to understand that, and those conversation logs are pretty heterogeneous. And they are always model specific at the moment, or at least frontier-model specific. And what you really would like to have, because sometimes these agents only live for a few seconds, is some provenance and some authenticity for their behavior, which you have to capture somehow. And you have the interaction with a human that is also in the mix. So typically you have human-agent interactions and agent-agent interactions, and those explode, at least exponentially. There is no real IETF standard for recording the conversations of AI agents, but we will come to other solutions on other slides. Next slide, please.
So that's the motivation here. The background is that we currently have a draft online that gives you a lot of mix and match on why you want this record. One of the reasons is that you really want the thing to do what it is intended to do and nothing else; that's pretty clear. There are also compliance and regulation requirements, a gazillion tons of those. There's a good overview in the first part of the document that shows why certain members are in the record and what they are about. Bringing them all up here would be a bit much, but the most important part is that you want the record to be verifiable in the end, and for that, authenticity proofs have to be added to it.
From the HotRFC you might remember that we already had some bigger boxes here: believability with RATS and transparency with SCITT. But for dispatch, we want to go with the content first, and I think that's already a good chunk of content that is viable for dispatch. There is an internet draft out there; if you can read it on the slide, it's dark blue on a black background, great choice there. There is an implementation, because we aggregated session logs from all of, well, "all" is a little bit much, from the majority of the big frontier models, saw what's similar and what's different, and created a superset. We are defining all of that in CDDL, as most of the input is JSON formatted, which unfortunately is quite verbose. So another goal is to migrate all of that towards CBOR, and with CBOR-packed on the horizon, hopefully very soon, we can be even more concise. The point is that the conversation stays memory accessible and is not just compressed; compacting keeps it operational in memory. Next slide, please.
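[Editor's note: as an illustration of the superset idea Henk describes, here is a minimal, hedged sketch. All field and vendor names are hypothetical, not the draft's actual CDDL members.]

```python
import json

# Hypothetical sketch: normalize two vendor-specific session logs into a
# common "superset" conversation record. The vendor formats and the target
# field names ("role", "content", "ts") are illustrative assumptions.

def normalize(vendor: str, log: dict) -> dict:
    """Map a model-specific log entry onto the common superset shape."""
    if vendor == "vendor-a":
        return {"role": log["speaker"], "content": log["text"], "ts": log["time"]}
    if vendor == "vendor-b":
        return {"role": log["who"], "content": log["message"], "ts": log["at"]}
    raise ValueError(f"unknown vendor: {vendor}")

record = [
    normalize("vendor-a", {"speaker": "user", "text": "book a flight", "time": 1}),
    normalize("vendor-b", {"who": "agent", "message": "done", "at": 2}),
]

# The record stays an ordinary in-memory structure (queryable, appendable),
# rather than an opaque compressed blob; serialization is a separate step.
print(json.dumps(record))
```

In the draft's terms, this normalized record would then be described by CDDL and serialized as CBOR rather than JSON; the sketch only shows the superset mapping.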
So what we're hoping for is, obviously, to put this work into a new home. There is a preferred option, because in the end we want to add not only the format but also the believability coming from the bottom (on which platform is it running, does the platform provide the right capabilities to give you authentic records) and the transparency on top, as I already said. But there's also the VCON working group, which has a draft with a call container that looks a little bit similar. There is zero CBOR support; there's a CBOR reference in the reference list, but it's not used. And the only CBOR-related piece for VCON, which is absolutely fine by the way if you just want to talk about it in English text, is by Rohan; but there's no CDDL, no machine-readable interpretation of the format. So if this became part of VCON, we would have to ask for a lot of tweaks and changes to VCON. That is from our point of view. But that only matters if there is dispatch interest in the first place, so maybe I'm getting a little bit ahead of myself.
Summarizing all of this is pretty straightforward. Agent conversations derail; they develop schizophrenia; they get dementia; they sometimes even go off track and start other tasks as they explode in the agentic AI mesh. For the security of the outcome, and especially in critical and sensitive applications like IPR leaks, PII leaks and so on, you really need it to be auditable. But if you're fast enough, and this is small and compacted, you can even do it inline, so to speak, as in-situ analysis, like with a security event and information management system: you can find patterns of derailing, even patterns of scheming. I'm using a lot of AI buzzwords, I'm very sorry about that. But the real point is you really need to understand: did it do it? And that's it.
So this is our dispatch question. We have a relatively rich document; if you look at it, I think it's also quite informative and could be a basis for dispatch.
Shuping: Okay, so please keep in mind this is a dispatch session. Let's focus. And for people who want to comment, please clearly state your suggested dispatch outcome first. Thank you. Richard.
Richard Barnes: From a technical point of view, I don't think verifiability in the sense you've got here is verifiability in the sense you put on the objective slide, because just because something is signed doesn't mean it actually happened. But from the dispatch point of view, I think we need to know who the vendors or operators of agentic systems interested in doing something here are. Because if there's no community, there's nothing for the IETF to do.
Henk Birkholz: I think I heard the last part at least: which vendor is actually interested in this? That is a good question. What I didn't put into this slide deck, which we had in the HotRFC, is that there's a single source of truth, which is the original session log. I think that has to be retained at the very minimum if nobody converts to this format. The concept is interesting to a lot of people. We are talking to, I think, two good candidate vendors that are interested. I will not name-drop verticals here, but there's a good chance that this could become the structured output in the foreseeable future. Who's really interested is the user of the LLMs; it's not necessarily the provider of any frontier model, but the users of them. And there is a very strong interest, and I would say normalization helps support from that front. Would that answer your question? Because I really only got the very last part of it acoustically.
Richard Barnes: I mean, the answer I heard was that there are no vendors here today standing up to say they'll support this and be that source of truth that you need if these signatures are going to be meaningful. So I think we need to have some people in the room saying that if something is done here, they'll use it, before there's any juice here.
Peter: Hi, this is Peter from Huawei. Actually, answering Richard's question: usually we could have security vendors providing secure solutions to do the auditing. On the dispatch outcome: I missed the previous authentication and authorization item, but the outcome of that one is still pending, and maybe if we have a thorough AI security discussion, this item would be a good piece to discuss in general in a whole AI security venue. I'll leave the technical suggestions out.
Eric Rescorla: Yeah, I agree with Richard. It's not clear to me what this actually delivers operationally. Most of these things are web interfaces; I don't understand where the verification is happening. But I think the predicate question here is about vendor support. If no one is able to stand up with you and say they're going to do this, then this doesn't do anything useful. So this should be dispatched to press pause until some vendor is ready to come in and say they're going to do it. More than one, I would think.
Henk Birkholz: Could someone in the audience please repeat that? Sorry, I can't hear anything.
Speaker 5: Dispatch to press pause until we have a vendor who's actually going to implement this because there’s no point defining something that nobody will do.
Henk Birkholz: Sure.
Thomas McCarthy-Howe: This is Thomas McCarthy-Howe. Wow, sorry about that. That's fine, I can hear you, that's great. So I recommend that the dispatch decision for this is to put it into the VCON working group. We have several vendors that are working on this exact problem. After I leave this particular session I'm going to Dallas, where there are 120 people, including half a dozen vendors, including mine, VCONIC, which is very interested in this work. In addition, I think your comments about the lack of CBOR support for VCONs may be a gap that VCONs have anyway. I'm a big fan of JSON, as you know, but I also think this is very, very close to what we're trying to do with VCONs.
Henk Birkholz: I think so too, yeah. I read it all, and if I had found something to latch onto, I would have actually started there, but I couldn't.
Thomas McCarthy-Howe: Understood. Yeah. And by the way, there is a community that's working on this right now, and they're substantial.
Henk Birkholz: Yeah, so again, that is why, after the HotRFC, I brought VCON onto this list. And if VCON is able to move and expand to using CDDL and be a little bit more agnostic on the representation, I think there's not a big blocker here. We are leaving out the authenticity part a little bit, but we can talk about that when the format is done. Yeah.
Osama (remote): Hello. Do you hear me now? Yeah. Okay, so Osama at TU Dresden. I agree with Richard and Ekr. I would like to hear some vendor come in and say that yes, we need this. I think Thomas was one of those saying something along those lines, but it needs to be very clear what the goals are and who is interested in actually using this.
Henk Birkholz: I unfortunately didn't get that. Could someone please repeat?
Speaker 6: Same as Richard and ekr.
Osama: What I'm saying is basically that there need to be some vendors. I'm agreeing with Ekr and Richard: you need to have some vendors showing up to say that this work is required in the IETF and not just a waste of effort. Does that make sense?
Henk Birkholz: As a response, I would say I will talk with, or better, align with the VCON folks as the next step, and talk with the vendor pool there to get immediate feedback, maybe even this week. And yeah, that could be part of the interim dispatch we will probably do for the remote presentations that we couldn't do today. Thank you. Okay, we have cleared the queue. Thank you very much.
Shuping: Okay, thank you. So now we go back to the agenda for the remote presentations. Yusuf is still not here, so Mauro.
Mauro: Hello, can you hear me?
Shuping: Yes, please.
Mauro: Okay, do I have control of the slides?
Shuping: No. I'll pass it to you.
Mauro: Okay, thank you. Hi everyone. This is the MTA Hooks presentation. First, what this is: MTA Hooks is a protocol for mail filtering and processing. It hooks into an MTA to make mail processing decisions. Some examples are spam filtering, virus scanning, and enforcing policies. Currently this is done with something called Milter, which was part of Sendmail. It's a very old, binary-based protocol, and the only way to use it is a C library called libmilter. There's no specification for it, and the library is not documented; at least the new version is not documented. And if you want to hook into mail filtering from any language that is not C, you're out of options; you need to reverse engineer the Milter protocol. Milter is currently used by a lot of products, open source and commercial, for example Rspamd, ClamAV, SpamAssassin and so on. MTA Hooks started based on user demand in the Stalwart mail server, because people, mostly organizations, wanted to filter and make changes to the emails arriving at their systems, or redirect them, or whatever they needed, and they found Milter too complicated. That's why MTA Hooks started.
Okay, so MTA Hooks is very simple to implement. It's HTTP based. It uses JSON and CBOR for serialization; the filter can request which protocol version and which serialization format to use. It is in some ways similar to JMAP, so you can request a JMAP representation of the email; there's nothing proprietary here. It also works like JMAP in that you can request modifications by sending patch objects. And it's not only inbound: Milter is limited to inbound, but MTA Hooks also does outbound, so you can perform actions on outbound delivery.
The protocol itself is quite powerful but also simple, because you receive a JSON or CBOR representation of the envelope, the message and so on, and the scanner replies with a set of patches to perform on the message: patches like modifications and also actions. By sending a patch you can also change the routing decision, like reject, forward and so on. You can have multiple scanners. And unlike Milter, where you only receive limited information, in MTA Hooks you get everything: DMARC, DKIM, TLS info and so on.
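[Editor's note: a hedged sketch of the request/response shape Mauro describes. The field names and patch structure below are illustrative assumptions, not the actual MTA Hooks schema.]

```python
import json

# Assumed shape of what the MTA posts to the filter: a JSON view of the
# envelope, the message, and the authentication results it already computed.
request = {
    "envelope": {"from": "alice@example.com", "to": ["bob@example.net"]},
    "message": {"headers": [["Subject", "hello"]], "size": 1024},
    "auth": {"dmarc": "pass", "dkim": "pass", "tls": {"version": "1.3"}},
}

def scan(req: dict) -> dict:
    """Toy filter: reject on DMARC failure, otherwise accept and tag."""
    if req["auth"]["dmarc"] != "pass":
        # An action patch that changes the routing decision.
        return {"action": "reject", "reason": "DMARC failure"}
    # A modification patch applied to the message before delivery.
    return {
        "action": "accept",
        "modifications": [
            {"type": "addHeader", "name": "X-Scanned", "value": "yes"},
        ],
    }

response = scan(request)
print(json.dumps(response))
```

The same exchange could be serialized as CBOR instead of JSON; the point of the sketch is only the patch-object reply, which is what distinguishes this from Milter's binary callback model.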
MTA Hooks was already discussed at IETF 123 during the Mail Maintenance session. The chairs suggested that we start a BOF for this, and then I contacted the AD and he suggested this dispatch. As I mentioned before, MTA Hooks is implemented in the Stalwart mail server. There are also some filter implementations on GitHub, and Rspamd, which is a popular spam filter, plans to implement MTA Hooks. There was interest during IETF 123, this was also discussed at FOSDEM this year, and people were interested there as well.
So the idea is quite new here; it could be a BOF or, if it's an option, AD sponsored. So yeah, that's it. Waiting for your questions now.
Barry Leiba: Hi, this is Barry Leiba. We've discussed this before, Mauro. There was a thing years ago called OPES, Open Pluggable Edge Services, that was an attempt to generalize and standardize the Milter thing. That flopped for a number of reasons. And this is resurrecting that kind of thing again, saying okay, well now we're going to put it over HTTP and use JSON, and you've got the right acronyms there. But I think this is a lot more detail than we're ready for. What we need to do is have a BOF about the problem and what the best approach to solving it is, rather than having a specific answer at this point, because I don't think this kind of thing is going to be what we come up with if we really analyze what we need to do and the best way to do it. So the summary answer: a BOF to talk about the underlying problem and come up with the right path to the answer.
John Levine: John Levine. Here I am; sorry, I had leftover muting from earlier. I'm basically agreeing with Barry. I think Milter is awful; I think this is better. But realistically there are three major open source mail programs, so we need a BOF, and we need to get at least two of the open source MTA people in the room. Because if people are willing to implement this in Postfix and Sendmail, then people will use it, and if they aren't, then we're wasting our time. I hope they are, but I guess I'm agreeing with Barry and channeling Richard here.
Mauro: On the other hand, I don't think Postfix is willing to add an HTTP stack to their server; they haven't implemented DMARC or anything. So I don't think they're a point of reference; they're pretty much stuck in the past. If we wait for them, we won't move anything anywhere.
John Levine: I mean, they don't have to make it part of the core Postfix software, but if this doesn't work with Postfix, it's not going to be useful. So you need to talk to them. You can't just insult them.
Bron Gondwana: I'm definitely in favor of a BOF or something like that. We are doing something similar inside Fastmail for our own mail flow. Plenty of people are doing things like this right now; there's definitely interest in it, and interest in having it be compatible across systems. I'd take it in JMAP, given how JMAPpy it is, but it's outside our charter at the moment. I do think a BOF is the right place to start, and I think a BOF in Vienna makes sense.
Alexey Melnikov: I think I'm going to disagree a little bit with Barry about how close this is to the OPES solution, but I suppose we still need a BOF to figure this out. As an MTA vendor, I need something like this. We have a separate protocol which serves the same purpose but is implemented differently. I'm not convinced by the argument that implementation in Sendmail and Postfix is a gate to pass, but again, this is something we can discuss at the BOF. So yes, let's have a BOF.
Arnt Gulbrandsen: Arnt, speaking privately. This is about picking up implementations, whatever Postfix and Sendmail or Exim are doing. Postfix and Sendmail are not actually a gate. Rspamd is going to implement it, for example, and that is the major open source spam filter now. I would say that perhaps Rspamd is the effective gate. And I know a major ISP that really wants this, uses Postfix, and has a history of contributing patches. So, strongly in favor of the BOF.
Jim Reed: Jim Reed. I agree with the previous speakers. I think we're far, far too early to consider creating a new working group. We must have a BOF before we can proceed, because in my view the problem space hasn't been properly defined. And we also need a clearer indication of interest and support from the major implementers of mail systems, not just Postfix and Sendmail. Sendmail used to be the reference implementation for SMTP, but those days are long, long in the past. We've also got to look at several of the very big commercial vendors; I'm thinking particularly of Outlook, Google and a few others besides. They have to buy in as well before we can come up with a coherent solution. Cheers.
Shuping: Okay, thank you. We have cleared the queue. Mauro, that's it. Thank you.
Mauro: Thank you, thank you.
Shuping: Okay, so our next presentation is AIDP. Ionis, are you ready?
Ionis: Hello, hi. Can you hear me?
Shuping: Yes.
Ionis: Great. So, good morning everyone. I am Ionis from Google, and today I am presenting the Agent Interaction and Delegation Protocol, or AIDP. This is a control plane protocol designed to bring security, auditability and interoperability to the rapidly evolving landscape of software agents. As agents move from experimental toys to production-ready entities, we need a standardized way to govern their actions. Next slide, please.
The motivation behind AIDP is twofold. First, we are seeing real-world adoption of agents in high-stakes sectors, yet there's a significant compliance gap. Regulations like the EU AI Act mandate safety and accountability, but we currently lack the standardized technical tools to enforce these requirements. Second, we must address the intrinsic risks of LLMs. Hallucinations are architectural, leading to non-deterministic execution. AIDP acts as a safety fuse, decoupling this unpredictable reasoning from deterministic system execution to ensure that even if an agent's logic fails, the system's integrity remains intact. Next slide, please.
AIDP provides a comprehensive framework for agentic governance. It defines the intent envelope for governed action requests and a capability-based authority model that supports delegation chains with strict subsetting. Crucially, it introduces the execution boundary as a mandatory enforcement point, and observation binding, which links execution results directly back to the agent's reasoning loop. The protocol also handles replay protection and revocation semantics, and while it includes an HTTP binding, it remains transport agnostic. Next slide, please.
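[Editor's note: the "delegation chains with strict subsetting" idea can be illustrated with a minimal sketch. The data model below is an assumption for illustration, not AIDP's actual envelope format.]

```python
# Illustrative capability delegation: each delegated capability may carry
# only scopes that are a subset of its parent's scopes, so authority can
# narrow down a chain but never broaden. Scope names are hypothetical.

def delegate(parent: dict, requested_scopes: set) -> dict:
    """Issue a child capability, enforcing the subsetting rule."""
    if not requested_scopes <= parent["scopes"]:
        raise PermissionError("delegation must not exceed the parent's scopes")
    return {
        "issuer": parent["subject"],   # the delegating agent
        "subject": "sub-agent",        # hypothetical delegate
        "scopes": requested_scopes,
    }

root = {"issuer": "user", "subject": "agent", "scopes": {"read:mail", "send:mail"}}

child = delegate(root, {"read:mail"})              # allowed: a subset
try:
    delegate(root, {"read:mail", "delete:mail"})   # escalation attempt
except PermissionError as e:
    print("blocked:", e)
```

Capability systems like ZCAP and UCAN, which the presentation cites as background, enforce the same narrowing property cryptographically; this sketch only shows the subset check itself.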
It is important to note that AIDP does not replace existing protocols. Instead it builds on established concepts from GNAP, OAuth, SCIM and capability systems like Zcap and UCAN. While those protocols handle authorization and identity, AIDP focuses specifically on the agentic execution lifecycle. My goal is to bridge the gap between authorization and execution by ensuring deterministic control loops and robust observation binding. Next slide, please.
Regarding my goal for today: I'm looking for the right venue to integrate these principles, perhaps within the GNAP working group or a potential agentic networking BOF. Actually, I didn't know what to write here. My primary objective is not about credits or protocol bindings, but to contribute a standard architectural framework for agent governance to the IETF community. I'm offering this work as a baseline for you to use, adopt and integrate into broader IETF-wide discussions as you see fit, in order to prevent fragmented security models in the age of AI agents. Thank you. I look forward to your feedback.
Eric Rescorla: I fear I'm just a broken record today, but who else says they're going to do this? Do you have any vendors or other people who say they're going to implement this?
Ionis: No, at this moment I do not. I'm looking for feedback, and in the future we'll look for implementations. That's a next step.
Eric Rescorla: Right. Okay, so I think the answer becomes pretty clear: the IETF shouldn't do anything with this until there's more energy behind it. Sure.
Osama: Osama at TU Dresden. On the first or second slide, I think you were talking about the AI Act. Do you know exactly which clause of the AI Act requires that kind of auditability and safety? And you also claimed, I think somewhere in your slides, that agents are already used in some high-risk sectors. Could you give me a concrete example of that which would be usable in the IETF context?
Ionis: Well, the EU AI Act mandates safety, besides data governance, for the use of AI in general and agents maybe more specifically. What AIDP is about is decoupling reasoning from action. An example I thought about is in the health sector: it's quite crucial to prevent an agent giving information that should not be given.
Osama: You're giving the example of health, so just to clarify: I am the co-chair of the Open TRE, the Trusted Research Environments work at GA4GH, the Global Alliance for Genomics and Health, and we are strictly against this kind of AI agent use, because it is not usable in that sense. So I don't really see your point in raising this health example. And I would refer you to the Catalyst BOF happening at this IETF; that's probably a good place to get some feedback and discussion going.
Ionis: Thank you.
Shuping: Pong, please.
Pong: Yes, just a question: is this work related to the AI protocols work, or do you want a new potential BOF here?
Ionis: I'd prefer to leave it to the IETF community. I don't have a specific suggestion; I just need feedback on this work for now.
Shuping: Okay, thank you Ionis. That's it. Thank you. Let's move to the next. David, please.
David: Yeah, I'm David Chaudhry, presenting from Writer's Logic with two drafts that were submitted to RATS, but there was some discussion about which working group they were going to fit into. Before I get into it: can you move to the next slide for me? I can control now? Oh, okay, thanks. Before I get into it, I wanted to address a framing question that was on the list, because it determines where the work belongs. There was some confusion about whether this protocol was about AI detection, which it is not. Detection asks: is this human or machine? And that's NP-hard. I published a paper this year proving that time-based detection is completely broken; there was a 99.8% evasion rate, and if you can type it, you can fake it. What the PoP protocol is about is attestation: did the edit sequence occur in the order and under the hardware constraints tracked, in conformance with RFC 9334? We use behavioral signals, keystroke dynamics, typing workload, air topography, but they're not used to detect AI; they're there to raise the cost of forgery. An analogy is a padlock: a padlock doesn't really keep anybody out, it just makes the cost of getting in higher than the perceived value, so they move on to another target. That's what I was attempting to do with this draft. I don't think we can ever really prove that a human wrote something, but we can prove that there was somebody at the keyboard typing. Let me move on to the next slide.
I think my slides got a little mixed up, because I had submitted some new slides, but I guess they didn't make it. I wanted to respond to two criticisms ekr raised on the list that I hadn't gotten back to yet. On the compute cost: it's one one-second burst every 30 seconds, at autosave, not a sustained load. As for the behavioral tests: I confess that the estimates in the paper are analytical, not from a controlled adversary experiment. Layer one is about temporal binding; it's the formal guarantee and the normative core. Layer two is behavioral analysis, which is informative defense in depth. And the higher levels involve hardware that doesn't currently exist on any consumer device, but we're thinking about something like a YubiKey that people could purchase and add to create the actual functionality.
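[Editor's note: the compute-cost claim above (one one-second burst per 30-second autosave window) can be checked with simple arithmetic; the numbers come straight from the statement.]

```python
# Duty-cycle check for the stated compute cost: one one-second attestation
# burst per 30-second autosave window is a small fraction of wall-clock
# time, which is the basis for the "not a sustained load" claim.

burst_seconds = 1.0
window_seconds = 30.0
duty_cycle = burst_seconds / window_seconds

print(f"{duty_cycle:.1%}")  # roughly 3.3% of wall-clock time
```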
We previously submitted to the W3C, and they thought that C2PA was on track within their current working group, so they put us into a community group. I was, though, brought on to C2PA as a contributing member to ensure that their protocol is built in such a way as to support the PoP protocol and text attestation, because they currently only cover multimedia. So we already have adoption set up in place with C2PA for this. It's actually spread out across several different layers, because we also have a decentralized trust project that's about ready to launch with the Linux Foundation as well. And I think the slides say RATS. I picked RATS primarily because I structured it with layer one fitting the RATS charter: the CBOR tag, the COSE EAT-compatible results, the RFC 9334 entity roles. PSA attestation is a precedent. But layer two... sorry, I'm not really going by the slides, because I don't remember where the details were on these. We have links here to the drafts; the actual working drafts are at github.com/writerslogic/draft-chaudhry-rats-pop, and they've diverged significantly from what's currently on the datatracker. So I would ask that people who would like to see the changes check those out. There's also been significant discussion on RATS; there was discussion about a trust inversion in the attestation, where the user is the adversary. But I feel like a new BOF would have to recreate a lot of the work that's already been done in RATS, and the original charter was set up to adopt other attestation scenarios.
So I'd ask that they consider that. And that's about all I've got. Thank you. Any feedback on this would be welcome.
Shuping: Okay, so Martin.
Martin Thomson: Yeah, I don't think we should do this at all. I think this is a problem that doesn't really have a purely technical solution, and I think David's presented the gross-waste-of-resources version. The alternative is that you have people lose control of their computing devices, and I don't think either of those is the right way to approach this. The problem is real, but we shouldn't be looking at purely technical solutions for it; the IETF is not the right place for that yet.
Eric Rescorla: I largely agree with Martin on the technicals, though I'd note that this actually has both the gross-waste-of-resources and the loss-of-your-computing-device properties. But in terms of dispatch routes: there's absolutely no way this can be AD sponsored; that would be insane. I don't think it belongs in RATS; I agree it may use RATS, but it's very common for work in one group to pick up the output of another. The only way forward for this, if any, is a BOF where you can actually demonstrate that other people besides you want this and that it is a good idea. I personally am skeptical of that, but that's the only way forward that could possibly exist, other than /dev/null.
Shuping: Okay, thank you. Thank you, David. And let's move on to our last one. Russ, please.
Russ Housley: Hello, I'm here to talk about security protocols that are optimized for non-web environments and PQC. There are many operational technology and IT cases where restricted bandwidth and/or computational limitations impose significant restrictions on solutions, and the imbalance between the sending and receiving devices is a significant concern when you're dealing with PQC algorithms that have huge public keys, ciphertexts and signatures. Most of the security protocols we have today are optimized for the web environment, not those kinds of environments. So I'm interested in working on solutions where the overhead associated with the PQC algorithms can be addressed in some way other than the use of long-term pre-shared keys, which is something I've worked on previously. My goal is to have security protocols that are optimized for that situation.
The key management requirements are: it needs to work at layers 3 and 4; we need to be able to asynchronously update the keys; support for PQC; forward security and post-compromise security; and formal analysis. I have a slide on each of these requirements at the back of the deck. I won't be presenting them because of the five-minute constraint imposed on the presenters. So I'll just move along and say that continuous key agreement is an important aspect, especially when you're looking at formal analysis. It means that you start a session once and then perform asynchronous updates by either party involved in the protocol. And I also want to be able to do group protocols, not just peer-to-peer protocols, so any of the senders in that group should be able to update their keys. MLS is an example of a protocol that does continuous key agreement, so it meets all of the requirements on the previous slide, plus it allows much more control over when key updates happen, it gives us key updates after the initial establishment of the session, and it lets us amortize the PQC overhead across many messages. Signal is an example of a protocol that already does that, and it allows us to bring the overhead associated with PQC down to the level we're used to with the classic crypto protocols.
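[Editor's note: the amortization argument can be made concrete with rough arithmetic. The ML-KEM-768 sizes below are the published parameter sizes (public key 1184 bytes, ciphertext 1088 bytes); the message counts are purely illustrative assumptions.]

```python
# Back-of-the-envelope arithmetic for amortizing PQC overhead: a key
# agreement exchange carries a large public key plus ciphertext, but if one
# continuous-key-agreement update covers many records, the per-record
# overhead shrinks proportionally.

mlkem_pk = 1184      # ML-KEM-768 public key, bytes
mlkem_ct = 1088      # ML-KEM-768 ciphertext, bytes
update_cost = mlkem_pk + mlkem_ct

def per_message_overhead(messages_per_update: int) -> float:
    """Key-agreement bytes attributed to each protected message."""
    return update_cost / messages_per_update

print(per_message_overhead(1))     # fresh handshake per message: 2272 bytes
print(per_message_overhead(1000))  # one update over 1000 records: ~2.3 bytes
```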
So, one approach to implementing this would be to make some kind of continuous key agreement an alternative to the handshake that is used today; another would be to design a separate security protocol. Looking at TLS as an example of those two approaches: the first is documented in an internet draft I wrote about using the MLS handshake as an alternative. There's been a little bit of discussion of this on the TLS mail list, and it got some good feedback, so if this approach is taken there are some definite changes to be made there. The second approach would be to use the MLS handshake on its own port followed by the use of the TLS record protocol; that has been documented by Conrad, and his draft is also available.
So my hoped outcome here is that we get a new mail list to continue this discussion, refine the requirements, perhaps pick among the approaches, and then have a full BOF session in Vienna to see if there's consensus for this work.
Jim Reed: Thanks very much. Just a quick plug, there’s a side meeting tomorrow morning on post-quantum crypto for DNSSEC. It’s at 8:00 in the morning for those who can be motivated to get up that early in the day.
Eric Rescorla: Sorry about that. Yeah, Russ, you and I talked about this offline. I have no problem with there being a mail list; in terms of dispatch, that seems totally reasonable. But the purpose of that mail list should be to flesh out the actual argument that you have not made here, which is that there are substantial performance problems, what they are, and how they interact with the current systems. Because that is sadly lacking in all these drafts and in your presentation. An answer to that should be the predicate for a BOF. So, Martin asked in the—huh? That's fair. Okay. I think Martin asked specifically: given that TLS already has a working group item to do basically exactly the thing you're talking about here, namely EKU, I think we really need an analysis of why EKU is insufficient, and it would be very helpful if you provided one.
Russ Housley: Okay.
Richard Barnes: Yeah, so this is Richard Barnes. I'm an MLS enthusiast, but also completely baffled at what the problem is here. I think there are maybe some cases where some of this work could make sense, but we need much, much more sharpness than "non-web and PQ." So yeah, sure, let's form a mailing list; talk is cheap. But let's focus that talk, as Ekr says, on defining what the actual problems to solve are. And in the interest of full disclosure, I would like to note that Cisco has some IPR that applies to several of the drafts here; there have been disclosures on those drafts, so you can check those out.
Speaker 7 (likely John Levine): Yeah, I find this work very interesting. I think it's potentially very useful for several different use cases, but there's a lot to discuss. I think the hoped outcome here, with the mailing list and a BOF, is the perfect dispatch outcome.
Osama: Yeah, I agree with John. I'm speaking specifically to the point Ekr made, which was about the key update draft: I don't see that draft going anywhere. I have been doing the formal analysis of it and I have strict reservations against it, so this work could actually fill those gaps. So I'm in support of this work.
Shuping: Okay, thank you. And thank you, Russ. Thank you all. It's a pity we had some chaos, but we managed all the presentations. The chairs will work with the ADs to deliver the dispatch outcome. Thank you.
[Round of applause for Shuping]
Unidentified speaker (likely an AD): Yeah. So the ADs are going to have a—we're going to consult with each other and try to figure out what to do. There obviously were some people who were not able to participate meaningfully in the session, so we'll discuss and get back to everyone. Thanks.
Transcript concludes