Session Date/Time: 16 Mar 2026 06:00
This is the verbatim transcript of the OAuth Working Group session held at IETF 125.
Mike Jones: All right. Um, we have a full agenda. I'm Mike Jones. I'm here as an in-person proxy for the OAuth chairs. Hannes, a chair, is with us. Deb, our Area Director, is with us. I am going to be brutal about ending presentations on time so that we do get through those that are scheduled. Uh, if any of you finish early, that would be wonderful. Um, with that, please take it away, Hannes, for the chairs' presentation.
Mike Jones: You are muted, Hannes.
Hannes Tschofenig: Thanks, Mike. Hi, everyone. Um, welcome to this IETF 125 meeting. Um, sorry that I couldn't be there; it's job change-related. Mike, can you skip to the next slide? So this is the Note Well, which you have seen before. I would like to request that you act professionally in all the discussions and follow the usual process rules. Next slide. Um, if you are in the room and you want to go to the microphone, please make sure that you use the MeetEcho client to join the queue, so we can identify you and have you correctly noted in the meeting minutes. Oh, speaking about meeting minutes, Mike, do we have someone?
Mike Jones: Uh, we do not yet, to my knowledge. This is the time during which you will be rewarded with the thanks of the working group for going to the meeting chat client and/or the meeting note-taking client and helping us out. Preferably, I'd like to have at least two people doing this so that you can tag-team each other. Who in the room is willing to go to the note-taking tool and participate? We can't have the meeting without this.
Hannes Tschofenig: I can definitely take some notes, so um, that's no problem. But uh, if there's someone else, that would be great.
Mike Jones: I think there's one person in the queue who might be willing to help. Hanling Wang.
Hannes Tschofenig: That's great. Or just use Ecker's AI tool, if only I knew how to do that.
Mike Jones: So, Hanling Wang, you're acknowledged.
Hannes Tschofenig: Okay. Perfect. Uh, next slide. So, a quick update. Um, you may have seen from the list that we had two documents in the RFC Editor queue, which are the browser-based applications document and also the best current practices for cross-device flow security. So those should become RFCs anytime soon. We also have one document in IESG evaluation, which is the token status list, and two other documents in AD evaluation. Next slide, Mike. As you said already, Mike, there's a pretty full agenda, and I won't read this to you; everyone knows who is going to talk next. So let's keep it going.
Mike Jones: Aaron, you are up. Let me try to find your slides. There we go. And do you want to use the clicker yourself?
Aaron Parecki: Can you put a timer on the screen as well for me?
Mike Jones: I don't know.
Aaron Parecki: Okay, I'll try to try to manage it.
Mike Jones: How much time do you want to give him? 10 minutes each. I don't know how to do the timer function.
Aaron Parecki: Thanks. Hi, Aaron Parecki. Um, got a lot to cover today, but I'm looking forward to it. We're going to start with OAuth 2.1, which I'm sure by now you're all tired of hearing about; I am too. Um, that was a photo from one of the metro lines I took on the way over here. So, really quick, the reason for OAuth 2.1 is that the current state of OAuth 2.0 has gotten kind of messy. It's described in a bunch of different RFCs and extensions, and some of them are very nearly complete now. We've collected a set of what we consider to be the core, what we mean by the current best practices of OAuth 2, and that is the path toward OAuth 2.1. So this is the goal: to capture the best practices laid out in these documents and not to define any new behavior.
So, the good news is that two of the ones we've been waiting on for a while are very nearly complete. RFC 9700 was published, I guess, a year ago, and Browser-Based Applications is now in the queue for publication. Um, I wanted to quickly recap a couple of the changes since the last draft and then talk about some of the current open questions I'm hoping to resolve. Thanks, Filip, for a couple of these PRs. A couple of the errata from RFC 6749 have now been applied into this; that seems like a good chance to do that kind of work. There was some effort to sync the language from RFC 9700 around open redirects, so that's now been ported into 2.1. There's also additional context for JWT client authentication and a mention to specifically follow the recommended practices in JSON Web Token Best Current Practices, which is also an in-progress but nearly complete document.
As always, every time we go through any of these, there's always room to improve, so there are lots of editorial clarifications and updates that hopefully make it easier to read. There are a handful of open issues on the draft. Other than the couple I've flagged here, I feel like we've actually talked about quite a lot of these in past meetings, even if we haven't yet completed the text to address them. Please do give these a read and chime in with your thoughts on any that are still tagged as open. But a couple of them I want to call out today for discussion in the few minutes we have here. One of them is issue 233. If you are in the slides, these link out to the issue on GitHub for the full context. I realize this is a lot of text on the slide and I'm not going to read it all, but TL;DR: RFC 9207 defines the iss authorization response parameter, and one of the reasons it so quickly became an RFC is that it's a relatively straightforward mechanism to solve a couple of clear mix-up attacks. The alternative solution is using distinct redirect URIs per authorization server. Essentially, when a client talks to multiple authorization servers, as many do, you need to mitigate this kind of mix-up attack, and you can do it either through iss or through distinct redirect URIs. Some ecosystems that regularly interact with multiple ASs do not have the option to use distinct redirect URIs: for example, clients using OAuth Client ID Metadata Document, or MCP clients, because they don't have a pre-established relationship with the authorization server. So the question here is: should iss be incorporated into 2.1 as a required response parameter for the authorization server? The main justification for doing so is that it's easy for the authorization server to add; it is not a complicated mechanism. And clients that don't need it can just ignore it, the same as they would ignore any other parameter coming back. The main benefit is that ecosystems that need to solve this mix-up attack don't have to make the decision on their own about whether to require it or leave it unresolved. Many of them look to the OAuth working group as experts on how to do OAuth securely and would benefit from a clear stance on this parameter. I'm going to go through the other slides before I invite comments, but please do feel free to queue up; I want to have some discussion about these. But I do want to also flag the other ones so that we don't spend too much time on only one of them.
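[Editor's note: for readers following along, a minimal sketch of the RFC 9207 check under discussion. A client that talks to multiple authorization servers remembers which issuer each flow was started with and compares it against the iss parameter in the authorization response. Function and variable names here are illustrative, not from any spec.]

```python
# Mix-up mitigation per RFC 9207: the client compares the iss response
# parameter against the issuer it started this flow with.

def validate_authorization_response(params: dict, expected_issuer: str) -> str:
    """Return the authorization code only if the response came from the
    authorization server this flow was started with."""
    iss = params.get("iss")
    if iss is None:
        raise ValueError("authorization server sent no iss response parameter")
    # RFC 9207 calls for a simple character-by-character comparison.
    if iss != expected_issuer:
        raise ValueError(
            f"possible mix-up attack: got issuer {iss!r}, expected {expected_issuer!r}"
        )
    return params["code"]
```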
Um, the other big question here I wanted to call out is PKCE. PKCE defines S256 and plain as challenge methods, plain meaning there is no transformation of the PKCE challenge. Are we still online?
Mike Jones: Yes.
Aaron Parecki: Okay. Um, so the question here is: should we forbid the plain challenge method in OAuth 2.1, because it does not provide the full protection that S256 does for PKCE? One benefit would be that there's no ambiguity: clients can just do S256 and not have to figure out whether to do it. The main justification is that one of the primary reasons for including plain in PKCE at all was the constrained environments that OAuth was being deployed in back in 2015, some of which did not have S256 functions. However, it is now 11 years later, and that is not really the case anymore. Anything that can do HTTPS, in particular meeting the HTTPS requirements that OAuth 2.1 imposes, will have a hash function on the device somewhere, so it's not really a burden. Oh, are we totally offline?
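[Editor's note: for reference, a minimal sketch of the S256 transform from RFC 7636 that the argument above relies on: BASE64URL-encode the SHA-256 hash of the code verifier, without padding. Any platform with a SHA-256 implementation can do this.]

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge  # send challenge with code_challenge_method=S256
```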
Mike Jones: We, we were good until a few seconds ago. I was showing the presentation on my screen.
Aaron Parecki: Oh, okay. Um, so that's, that's the spiel on PKCE. I'm going to... Oh.
Hannes Tschofenig: We can still hear you though.
Aaron Parecki: Okay, great.
Mike Jones: Keep, keep going.
Aaron Parecki: I'm going to keep going. Um, but I do need to look at my next slide. Unless anybody has comments, in which case I think the audio is still working. So any discussion on those two is welcome until I get the slides back.
John Bradley: Uh, this is John Bradley. Uh, definitely get rid of plain.
Aaron Parecki: Thank you.
Kaishuai Luo: Kaishuai Luo here from the Chinese University of Hong Kong. Yeah, so for the mix-up question, mandating the issuer parameter: my take is that I support it, but I think we might encounter some difficulties for a client to actually use that returned issuer to mitigate mix-up, mainly because, for the issuer identifier to be trustworthy, the client needs to make sure that identifier is fetched from metadata beforehand. But not every client and server actually uses metadata in the first place, because it is optional in OAuth 2.1 for now. So that may hinder real-world clients from using that mandated identifier for mix-up mitigation.
Aaron Parecki: So you're basically saying that only including it in the response is not enough; you have to also tell clients to start the discovery from that identifier in the first place?
Kaishuai Luo: Yeah, there might be some out-of-band opportunities to make that identifier trustworthy, but that is not documented in any spec. So, yeah, in the real world, that is a big barrier, I think. But for MCP or FAPI, and I believe other ecosystems, since AS metadata is already mandated there, then for those ecosystems, mandating iss as well, the two combined together, you can have a robust mix-up mitigation.
Aaron Parecki: Yeah. So, okay, that makes sense. I realize those were my two issues, so I don't have another slide. Um, that's a good point. So maybe the answer is to include more guidance. My main goal here is to remove the ambiguity and the decision-making process from the communities that will have this problem and give them clear guidance. So if that guidance is the scenario you just described, where when you are driving the discovery from the authorization server issuer, then the issuer response parameter is required and clients validate it, that is a clear end-to-end picture, but it doesn't necessarily make it required for everyone.
Speaker 1: I believe there's a distinction between just using multiple ASs that you hardcode and having discovered authorization servers that the user drives the discovery of. We've been through this with JSON Web Token Best Current Practices, where comparing issuer values only really works if the client goes through RFC 8414 discovery or the OpenID Connect equivalent and, after fetching the metadata document, also confirms that the issuer it gets in the response is the one it expects based on what the input to discovery was.
Aaron Parecki: Okay.
Mike Jones: Okay, one more, and then we're going to switch presentations.
Aaron Parecki: Sorry, was there one more question in the queue?
Mike Jones: I think the queue is now drained; the person didn't remove themselves.
Aaron Parecki: Oh, okay. Um, so those are the two issues I wanted to discuss. Again, there are some other open issues on GitHub; please feel free to chime in on those. With resolution to these two, I am going to post the next version and see you next time.
Mike Jones: All right, OAuth Client ID Metadata Document.
Aaron Parecki: Okay, how do we do this with the room not being able to see the slides?
Mike Jones: Um, I hate to say this, but if you join the MeetEcho client in the room, you can see the slides, but please keep your audio off.
Aaron Parecki: Great point. Yeah, if you use the MeetEcho light client, or the onsite tool as it's called in MeetEcho, you can grab that on your computer and see the slides. Yep. Okay, thanks. Great, and you can see the slides from my... Yeah, and I've got this one too. All right.
Great. So, yeah, Client ID Metadata Document. Um, you can't see the nice photo I took at sunset of the city, but hopefully you can see it on your computers. I want to go really quickly through the... Oh, you have to give the control to the slides again, to the clicker. Okay. Really quickly, I want to recap the motivation for this and what this is before getting into a couple of the questions. So the idea with Client ID Metadata Document is that it's a way for a client to publish metadata about itself at a URL, using the vocabulary that's defined in dynamic client registration. So it's a JSON document hosted at a URL that contains properties like the client name, redirect URIs, etc.
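[Editor's note: an illustrative, non-normative sketch of what such a document might look like, expressed as a Python literal; all values are made up, and the exact requirements are in the draft.]

```python
# A client ID metadata document: JSON hosted at the client's URL, reusing
# the dynamic client registration (RFC 7591) vocabulary. The client then
# uses this URL as its client_id.
CLIENT_ID = "https://app.example.com/oauth/client-metadata.json"

client_metadata = {
    "client_id": CLIENT_ID,  # matches the URL the document is served from
    "client_name": "Example App",
    "client_uri": "https://app.example.com/",
    "logo_uri": "https://app.example.com/logo.png",
    "redirect_uris": ["https://app.example.com/oauth/callback"],
    "grant_types": ["authorization_code"],
    "token_endpoint_auth_method": "none",
}
```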
Aaron Parecki: The clicker is gone from my menu; I can advance the slides manually.
Mike Jones: Okay. No, yeah, the clicker's on the switch, so you'll get it eventually. Oh, it's on the... it's connected to the room stuff, right, which is not online. All right. Next slide?
Aaron Parecki: Yes. Okay. Thanks. Uh, that's what I just said. Oh, yes. So after the client publishes its metadata in a JSON document, it then uses that URL as the client ID in an authorization request. Next slide. Why is this needed? It's needed in cases where pre-registration of clients isn't possible because the client developer does not have a prior relationship with the authorization server it's talking to. For example, open-source apps talking to self-hosted services, or, as we all know and love, AI: MCP clients have the same problem, where the MCP client can talk to servers it has not seen before. So, next slide. What is new since last time we talked? Next slide. This draft is now actually referenced by the MCP spec. That spec has its own standards org and lifecycle, and they now reference this spec as the recommended way for MCP clients to do their OAuth flows. Next slide. Oh, and that was November when that was published.
Changes since this version: the last time we met, the call for adoption was in progress, so I was holding off on actually applying changes to the document until we got through that. It is now an adopted draft, and I've gone back through all the feedback that we gathered during the call for adoption and have applied most of it. A lot of it is guidance and security considerations, such as when an authorization server supports both registered and unregistered clients. There we go. Additional SSRF security considerations, things like authenticating the user before you fetch the metadata, and avoiding fetches to special-use IP ranges. Uh, next. Thank you. An important one is prohibiting following redirects when fetching, in other words requiring an HTTP 200 response, since there should not be a valid reason to follow redirects.
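[Editor's note: a hedged sketch of the fetch-side guards just described: HTTPS only, no redirect following, rejecting special-use IP ranges, and a basic content type check. This is illustrative, not the draft's normative text; a real deployment would also need to pin the resolved address to avoid DNS rebinding.]

```python
import ipaddress
import socket
from urllib.parse import urlparse

import requests  # assumed available

def fetch_client_metadata(client_id_url: str) -> dict:
    parts = urlparse(client_id_url)
    if parts.scheme != "https":
        raise ValueError("client ID must be an HTTPS URL")
    # Reject special-use IP ranges before connecting (SSRF guard).
    for info in socket.getaddrinfo(parts.hostname, parts.port or 443):
        addr = ipaddress.ip_address(info[4][0])
        if not addr.is_global:
            raise ValueError("refusing to fetch from a special-use IP range")
    resp = requests.get(client_id_url, timeout=5, allow_redirects=False)
    if resp.status_code != 200:  # redirects are neither followed nor accepted
        raise ValueError(f"expected HTTP 200, got {resp.status_code}")
    ctype = resp.headers.get("Content-Type", "")
    if "json" not in ctype:  # the exact required media type is still an open issue
        raise ValueError(f"unexpected Content-Type: {ctype}")
    return resp.json()
```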
There are still a couple of open issues, but I wanted to again call out a couple of the key ones here to discuss. I'm going to try to summarize them very quickly and then open the room for discussion on any of these if you feel like it. There is PR number 50, linked in the slides, which proposes that URLs like client_uri and logo_uri have to be HTTPS URIs. One of the reasons for doing this was to avoid issues with weird URI schemes like javascript: or a shortcuts scheme, if the authorization server ever actually renders these links into a web interface somewhere. However, that does prevent optimizations like including a data: URI with an image that's embedded in the metadata rather than linked to. So the question is: is this PR a good idea, or should this instead be more like a security consideration around fetching data from untrusted sources, or should HTTPS and data URIs be the only allowed schemes?
So that's one issue. Then we seem to have lost slides again. The next one to cover is jwks versus jwks_uri in the metadata. There's currently no guidance on this; both are valid properties in dynamic client registration, and there are two ways the client can publish its keys: embed the key in the metadata or link to it. Should there be a recommendation? Should this be left to ecosystems to decide whether they want to make a recommendation or requirement? Or should this be a requirement that authorization servers have to support both so that clients can choose? Essentially, there is no guidance on this right now. I feel like there should be some guidance or requirement somewhere on this; I'm not sure where it actually lands. And then the third one is the content type header, and I am not a content type expert by any means; I know there are people here who are, so your input is welcome. This is on the SSRF question of how we can make it less of a problem when the server goes and fetches the metadata from a URL it's never seen before. If the host serving the document includes a content type header, the authorization server can reject malformed documents early: if someone puts in an image URL as a client ID, that's never going to work, and the authorization server can reject it without having to download and parse the whole document. There's quite a bit of discussion on this already in an issue and PR, but this gets into the hairy details around content types: is it application/json, is it a more specific media type for Client ID Metadata Documents? You can read the details there. I'm definitely looking for support on that one.
And then the last one is client versions. This is also a little messy. There is a software_version property in the dynamic client registration vocabulary, so technically a client can publish a version number in its metadata right now, but there's no discussion of what that means or how to process it. So again, we can say it's not the job of this draft to do anything about this, just like it isn't the job of DCR, or we could try to describe some mechanism to link versions of clients using this property. There are many different ways that could work; some of them are interesting, some of them are odd. So input is welcome there. With our two minutes, if anybody has burning points, fiery points to make on any of these four issues, your comments are very welcome.
Michael Fraser: Michael Fraser from Radium. Um, since there was nothing else in the queue: this isn't to do with any of those four, I'm sorry, so I'll keep it quick. This is the point I made earlier, and I'd like to clarify it because I made it poorly. In OpenID Federation, we have a very similar sort of scenario. I'm not going to talk about policies here or anything, but we have the scenario where a document will carry metadata for a client, using the basic client stuff, but it also includes guidance to go and look at the OpenID Connect Relying Party Metadata choices document for metadata that has a single value but might exist in an ecosystem where a service can support multiple things. So for example... [Mike Jones: Closer to the microphone, Michael.] Sorry, token endpoint authentication mechanism: it's a single value, but it might exist in an ecosystem where a server can support one or another or another. So, while nothing in this document prohibits any of that, it might be worth including some specific guidance highlighting that this is a good idea in such a scenario. Yeah, thank you.
Aaron Parecki: Okay, thank you.
Mike Jones: Anyone else?
Brian Campbell: I queued, I'll use the 36 seconds; maybe it didn't show up. This is on the jwks/jwks_uri question: is there any real reason to support just plain jwks? Some of the original thinking, and I know it's a little bit different in this context, but the original idea was to separate something that's likely to change often, which is keys, from sort of the base set of metadata, and to allow lookups based on changing key IDs or whatever to occur more frequently, independent of the things that might change very infrequently. This kind of smashes those together, and it feels like just sticking to that separation would be better. That's sort of a half-formed thought, but...
Aaron Parecki: Yeah, I agree, it seems cleaner. If I remember the discussion that's happened on GitHub, the reason for it in the first place was that it's a possible optimization and that it's already in the wild in some places. So... yeah.
Brian Campbell: Fair enough. Uh, and just be careful with the media type stuff. I saw bits of that, but don't... if there's optimizations there, great, or things that are helpful, great, but don't do things that are going to cause interoperability problems, like requiring a specific media type, um, to be sent one way or the other. I, I get a bad feeling from that.
Mike Jones: All right, OAuth 2.0 for First-Party Applications.
Aaron Parecki: All right. Uh, we're going to do the same thing here. This, I think, should be a quick one, I hope. So, I'm not going to go into the whole background on why this draft exists, but it is in the slide deck if you are curious. OAuth 2.0 for First-Party Applications is essentially a draft that enables developers who are building native apps, in particular first-party native apps, to have a better experience than the browser-based flow in mobile apps that is currently recommended in RFC 8252. And the reason this is useful for us to solve is that people are doing terrible things without it: very, very wonderful hacks around the solution. So this has been in the works for a while now. We've been discussing it at these meetings since October 2024. It's gone through several iterations, got a lot of feedback, and there have actually been quite a lot of implementations of it as well. Since the last time we met, last fall, we have gone through and addressed and closed 34 of the open issues, which is very nearly all of them. Many of these were slight language tweaks that we've incorporated; some of them were a little more functional, so I've called out the major ones that we've changed since then.
One of them was adding response_type=code as a required parameter, since it's required in 6749 and this endpoint is supposed to mirror that. A significant change is around the requirements in the text that says how the auth session is bound to a device. It was previously a "must"; the comments were basically "you can't make it a must if you don't tell us how to do it," so it's now a "should," and there is a reference to DPoP as a way to do that. Lots and lots of editorial clarification, and an important change that is not really normative is the introduction and the framing of the context for the need for this. We toned down the language and said it's intended for first-party applications, but it can be extended for third-party use cases if you are willing to describe why it's not problematic to do so. That was actually based on the presentation from Yaron last fall around native app-to-app federation, which we will see a presentation on shortly, where he is able to extend this draft to enable that use case.
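[Editor's note: a heavily hedged sketch of the flow shape described above. The native app POSTs directly to the draft's authorization challenge endpoint instead of opening a browser, now sending response_type=code, and may carry an auth_session value across round trips, which the draft now says SHOULD be device-bound, e.g. via DPoP. Parameter and field names should be checked against the current draft revision.]

```python
import requests  # assumed available

def authorization_challenge(endpoint: str, client_id: str, scope: str,
                            auth_session: str | None = None) -> dict:
    """One round trip of the first-party flow; purely illustrative."""
    data = {
        "client_id": client_id,
        "scope": scope,
        "response_type": "code",  # newly required, mirroring RFC 6749
    }
    if auth_session:
        data["auth_session"] = auth_session  # continues an earlier challenge
    resp = requests.post(endpoint, data=data)
    # On success the body carries an authorization code to redeem at the
    # token endpoint; on error it may carry an auth_session plus a hint
    # about which challenge to satisfy next.
    return resp.json()
```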
So there are two profiles of this already in the works. There's the passkey method of using this, which is currently sitting as a pull request and needs to be brought into its own doc; that's going to be its own document as a profile of this. Then there's also the native apps using federation doc, which was created out of the discussion from last IETF, and there's a session on that later today as well. The last remaining issue that we were not able to just close out by making quick changes was the question of whether this draft should actually be defined as an extension of pushed authorization requests or not. After doing some thinking with the group of editors, we've landed on this take: in order to do this, the response from PAR would need to be extended to include other parameters, things like the error codes or the authorization code. PAR does not provide that mechanism. PAR is a very specific, tightly scoped definition of "there are parameters here that get encapsulated in a request URI that essentially references those values." So we recommend that we do not attempt to extend PAR for this draft, for that reason, and also because this draft already has quite a lot of deployments and seems to be successful as is. So with all of that, we feel like it's at the point where we can ask for working group last call. There are already several implementations in the wild; there is a lot of interest in developing interoperability profiles based on this, and we have addressed and closed all but that one issue on PAR. I'm hoping to close out the PAR issue here today and then hopefully ask for working group last call.
Mike Jones: Um, this is where I'm not the chair. Hannes, do you want to say something about the idea of working group last call?
Hannes Tschofenig: Um, if the authors and editors of the document think it's ready for working group last call, we'll issue one. Of course, it would be nice to resolve that open issue on PAR first. And I guess that's why Brian is at a microphone for that.
Brian Campbell: I realize my enthusiastic hand gestures probably didn't go on record, but I very much support the decision that was made in that last issue of recommending against extending PAR, or just not doing it at all. Yeah.
Mike Jones: Thank you, Brian. Anyone else?
Hannes Tschofenig: Doesn't look like it. Uh, Aaron, what are you planning to do on the PAR issue now, in light of moving forward to working group last call?
Aaron Parecki: Yeah, if, if there's general agreement that there's no interest in pursuing reworking this draft to extend PAR somehow, um, that's great. Essentially, no action is needed, and we will close the issue and we are happy with the current state of the draft.
Hannes Tschofenig: Okay.
Mike Jones: Does anyone in the room or online want to speak against that path forward? Seeing no hands in the queue. Okay. All right, it's a plan. Let's... now that I know how to take control back from the clicker, I will end this presentation and we will share the next one, which is the Identity Assertion JWT Authorization Grant, correct?
Aaron Parecki: Yeah.
Mike Jones: And let me give control to the clicker. You're... your show.
Aaron Parecki: There we go. Thank you. All right, last one for me, then I'll stop talking for a while. We'll see. This is a view out my hotel window from earlier this week. All right: Identity Assertion Authorization Grant. Again, real quick on the background, because we don't have time to get into the whole motivation for this: this is an extension of, or a profile of, the identity and authorization chaining across domains draft. I wanted to show you the timeline of how these have been progressing for a while now. The core of the problem is extending single sign-on in an enterprise context to API access in an enterprise. And yes, we all love our AI use cases, so I've used one as an example here. However, it is not only for AI use cases, and that's very important; it applies to any kind of app-to-app connection under an enterprise scenario. Essentially, the problem is that when one of these apps that does single sign-on wants to get data from another app, they do an OAuth flow with the user, and the IDP is cut out of the picture and doesn't see anything. It's also a lot of prompts that the user has to click through in order to agree to these OAuth connections.
Essentially, we extend the idea of single sign-on to API access, so you can think of it as single sign-on for API access. The two main benefits are that it reduces user friction and that it enables enterprise control of data sharing between apps, which is something customers care a lot about. The building blocks for this are Token Exchange and the JWT Authorization Grant, which are combined in the identity chaining document; that's where this combination of them is defined, and then we've profiled that further to define this thing... Oops, you can't see the slides in the room anymore. Well, that screen's down; that one's working, great. This document is the one that defines the concept of the ID-JAG, the identity assertion JWT authorization grant. I'm not going to go through the whole exchange, but here it is; you can review that on your own. This is what the ID-JAG looks like: it's essentially a JWT that represents the enterprise IDP saying that it's okay for one app to access a user's data in another application. So what is new since the last time we met? There are now a handful of new in-progress implementations, some open source, some closed source, and MCP also references this as an authorization extension in that last November update.
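[Editor's note: a hedged sketch of the exchange being profiled, built on RFC 8693 Token Exchange: the requesting app presents its ID token (or, per the change described next, a refresh token) to the enterprise IdP and asks for an ID-JAG audienced at the other app's authorization server. The id-jag token type URN and response details should be checked against the current draft; endpoint and variable names are illustrative.]

```python
import requests  # assumed available

def request_id_jag(idp_token_endpoint: str, id_token: str, resource_as: str) -> str:
    resp = requests.post(idp_token_endpoint, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": id_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
        # The draft also allows a refresh token as the subject token:
        #   "subject_token_type": "urn:ietf:params:oauth:token-type:refresh_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:id-jag",
        "audience": resource_as,  # the AS protecting the app being accessed
    })
    resp.raise_for_status()
    # Per RFC 8693, the issued token comes back in access_token, with
    # issued_token_type naming what it actually is (here, the ID-JAG JWT).
    return resp.json()["access_token"]
```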
A couple of the other changes in the doc are based on the most common feedback we've gotten from implementers as they've been working on this. One is the question of what about DPoP. It's possible to apply DPoP in the token exchange and JWT authorization grant; it doesn't really require very much new in order to do that. It does require mentioning that the ID-JAG can contain the DPoP proof hash thing; I don't remember exactly what it's called; it's in the draft. Another question is: how does this work with Rich Authorization Requests? Again, it's kind of a natural evolution to apply RAR in this way, but it does mean we need to say that it's okay for this ID-JAG to contain a RAR authorization object. There's nothing stopping you from doing that, but now there's an example of how to do it in the draft. One of the bigger changes is adding a refresh token as an input subject token type. So essentially, a client can now either present an ID token, because they just did single sign-on, or present a refresh token that they would have gotten through OpenID Connect single sign-on, as a way to continue to get access through this enterprise IDP even if the ID token has already expired. We did this for two reasons. One: one of the first questions we got was, what happens if I want to get access a day after the user has logged in and the ID token has expired? We don't want people to be validating or accepting expired ID tokens; that seems like a bad path to go down, so we said let's apply refresh tokens. Also, this enables exchanging a SAML assertion for a refresh token, so you get out of the SAML world fast and into OpenID Connect and OAuth, and you can now use this with a SAML SSO connection. Another of the relatively big changes is this mechanism to support multi-tenant systems, which is a very mind-bending concept and takes a long time every time I try to load the issue into my head; hopefully it is now captured in the draft. I should have just gone through these as I was describing that: here's the DPoP example, here's the RAR example, here's the refresh token subject example, and here is the multi-tenant example with this very helpful picture. Find me later if you want me to go through it; it makes sense once you think about it long enough.
So yeah, with that: this one actually defines two new claims. On the subject token input type: again, the refresh token subject token type is not new; it's always been an input to token exchange. The use of it here is new, so it's not exactly a normative change, but it's at least significant. Other than that, the changes have all been editorial or just adding examples. The remaining issues on this are questions about the draft or editorial clarifications, which I admit can always be improved, but at this point we are not expecting any major changes in this draft. I don't have any open issues to discuss here; I wanted to just present the state of things, and if anybody has comments on these recent changes, comments are welcome.
Mike Jones: So, meta question: what do you think the next steps are for the draft?
Aaron Parecki: Uh, next steps for the draft: I would like to do at least one more editorial pass on it, because again, you can always improve the language and the explanations. If anything is confusing about this as you go and read it, please let me know, file an issue, or just come find me; I want to clarify that. But I don't expect to do any more significant mechanical work on the draft. So I'm hoping to do some editorial cleanups and then, probably in a little while, working group last call on this.
Pamela Dingle: Quick question, and it's really early in the morning here, so if I've messed this up, I apologize. Is there some name disambiguation that needs to happen with something called XAA?
Aaron Parecki: Oh, yes. Early in the draft, you'll see this referred to as Cross-App Access, as an informal name for what this enables. The XAA acronym is a shortening of the term Cross-App Access. This is not really well defined in the draft and probably could be mentioned better. It's in the intro, where it talks about the problem statement: what we're trying to do is enable this kind of Cross-App Access in an enterprise. So that's a good call-out. I will take that as an issue to try to better clarify it, and maybe actually use the acronym in there, since it's kind of just evolved as a natural occurrence of people talking about it.
Mike Jones: Great timing. We will move on. Thank you for your timely marathon sequence of presentations.
Mike Jones: All right, Updates to OAuth 2.0 Security Best Current Practice. And I will give you control of the clicker.
Kaishuai Luo: Sure, thanks, Mike. Good afternoon, everyone. I'm Kaishuai Luo from the Chinese University of Hong Kong; I'm a PhD candidate there. Today I'm going to present updates to OAuth 2.0 Security Best Current Practice, on behalf of my co-authors: Tim, Pedram, and Hannes. So here's a bit of background history. This draft documents several new attacks found since RFC 9700 was published. We held interim meetings on these issues last year and also discussed them at OSW and the past IETF meetings. The first issue is the audience injection attack, and the second is a set of security issues on account linking or, as Aaron put it, app-to-app OAuth connections. Last October, the working group adoption call was issued, and here we are after the adoption. The expectation that's accumulated based on previous working group feedback is, first of all, that this is not meant to replace 9700; it's meant to coexist with it, because there can be multiple security BCP RFCs under the same BCP number. And as a best current practice, we probably need not hold publication until all of these new attacks have come out; we can keep a small, focused doc to publish.
So let's go into the details. Audience injection attack. Basically, an attacker-controlled authorization server can reuse a client assertion at an honest authorization server, so as to impersonate the client. These spec updates are partially handled by the JSON Web Token Best Current Practices document, which has been submitted to the IESG for publication now, but that spec only talks about how RFC 7523 shall be updated. It does not mention how the attack actually works, and it does not specify other affected standards or alternative mitigations, for example, not using the issuer identifier as the audience value but using the exact URI of the target endpoint. So this document takes a security-centric perspective on the problem, including both the attack and the countermeasures.
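[Editor's note: a minimal sketch of the alternative mitigation just mentioned, using the PyJWT package: when building an RFC 7523 client assertion, set the audience to the exact token endpoint URL being called rather than a broader issuer identifier, so an attacker-controlled AS cannot replay the assertion elsewhere. Key handling is elided and the helper name is hypothetical.]

```python
import time
import jwt  # PyJWT, assumed available

def make_client_assertion(client_id: str, token_endpoint: str, private_key) -> str:
    now = int(time.time())
    return jwt.encode(
        {
            "iss": client_id,
            "sub": client_id,
            "aud": token_endpoint,  # exact endpoint URI, per the mitigation
            "iat": now,
            "exp": now + 60,       # short-lived to limit replay windows
            "jti": f"assertion-{now}",
        },
        private_key,
        algorithm="ES256",
    )
```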
The second issue is what we call Cross-Toolkit OAuth Account Takeover, or COAT. The background is that existing security guidance on mix-up attacks in RFC 9700 relies on the issuer identifier to uniquely identify an authorization server and avoid confusion. For example, take a look at the right-hand side figure: first, the client stores an issuer identifier, say attacker.com, and now if an honest AS returns the auth code together with iss=honest.com, the client should figure out that something wrong is going on, so it can avoid the auth code being forwarded to the wrong, attacker-controlled AS. But in real-world deployments, we find that the issuer identifier may not be available if RFC 9207 is not used; the stored issuer at the client side may not be trusted if RFC 8414, the AS metadata, is not used and the client instead relies on manually configured AS endpoints; and the issuer identifier also may not be unique. These hinder real-world vendors from applying the issuer-based defense, and in fact, based on our investigation, none of the 20-plus client vendors we investigated did. On the other hand, real-world attacks are actually very prevalent, because the core attack precondition of mix-up, that multiple authorization servers connect to a client and one of them is malicious, can be easily satisfied in certain open ecosystems. For example, certain cloud platforms for app integrations have an open marketplace for SaaS integrations, also called connectors these days. Or, most recently in agentic AI, there are these multi-tenant token vaults where each tenant can equip its applications or AI agents with custom toolkits. So basically, malicious authorization servers can be introduced via these integrations, connectors, or toolkits, causing unauthorized access or account takeover. To this end, we have found more than 20 vulnerable vendors across the industry, including Microsoft, Amazon, Google, and many others.
So in Section 2.2 of this document, we provide a self-contained section that models these common, latest deployment scenarios and provides a generalized attack description under the name Cross-Toolkit OAuth Account Takeover, or COAT. We also provide a handy countermeasure that an OAuth client can readily apply to protect existing OAuth deployments, meaning they don't need any OAuth extension RFCs; clients that are only compliant with RFC 6749 can already deploy it as a mix-up countermeasure. Specifically, it means not using the issuer identifier but instead a client-assigned identifier for each integration, connector, or toolkit, so it is always available, trusted, and unique. We embed that identifier in the redirect URI and then match it during OAuth flows at the client. One minute.
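[Editor's note: a minimal sketch of the countermeasure as described, under the stated assumption that the client can register a distinct redirect URI per integration. The client assigns its own identifier to each integration/connector/toolkit, embeds it in the redirect URI, and on the callback checks that it matches the integration the pending flow was started with, independent of any iss parameter. All names are illustrative.]

```python
import secrets

pending_flows: dict[str, str] = {}  # state -> client-assigned integration id

def start_flow(integration_id: str, base_redirect: str) -> tuple[str, str]:
    state = secrets.token_urlsafe(16)
    pending_flows[state] = integration_id
    # One redirect URI per integration, carrying the client-assigned id.
    return state, f"{base_redirect}/{integration_id}"

def handle_callback(integration_id_from_path: str, state: str, code: str) -> str:
    if pending_flows.pop(state, None) != integration_id_from_path:
        raise ValueError("mix-up suspected: code arrived at the wrong "
                         "integration's redirect URI")
    return code  # safe to redeem at this integration's token endpoint
```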
In the draft, we further delineate in which cases the issuer identifier, namely the existing mix-up countermeasure, is still useful and preferred, and we correct some misconceptions. The third and final issue here is called Cross-User OAuth Session Fixation. A bit of background: session fixation is actually a known threat in a lot of OAuth and OAuth-derived ecosystems, such as OpenID for Verifiable Credentials, OAuth cross-device flows, as well as OAuth 1.0. Basically, an attacker can trick a victim into authorizing an OAuth flow that was initiated by the attacker, so what gets fixated in the victim's user agent is an attacker-controlled auth session. After the victim's authorization, it is actually the attacker who gains access to the victim's protected resources. In theory, the OAuth authorization code grant mitigates session fixation because it does not invent the concept of an auth session in the first place; it only leaves the state parameter in OAuth, which can somehow maintain application state, and even that state should be securely bound to the user agent's session. As a result, in RFC 9700 this session fixation threat is not even discussed; it only mentions CSRF, where the state parameter is used as a CSRF token.
In reality, we see that many vendors introduce stand-alone auth sessions because they need to carry application state, especially to track which end user at the client will eventually get the access token. These auth sessions are introduced either in the state parameter or in stand-alone URL query parameters in other requests in the OAuth flow. It turns out that during this process, clients often fail to bind auth sessions to their user agent session, which causes session fixation. And it's not just careless implementation flaws; we find it's often because the clients are unable to do the binding securely themselves, because OAuth responsibilities and session management are decoupled at the client and handled by separate components: either they are handled in separate web origins, or in different user agents, say when a mobile app with a confidential client wants to request authorization from an external user agent, or simply because, as in the token vault use case, the OAuth responsibilities of the application are outsourced to a centralized party. In these cases, session fixation is feasible by simply sharing the request URL that contains that session with a victim user. And we have found even more, more than 40, vulnerable vendors across the ecosystem, and this is just a lower bound.
So in Section 2.3 of this draft, we similarly try to model these common deployment scenarios to provide attack descriptions of session fixation, as well as propose countermeasures. The crux is that a client needs to validate the binding of the auth session to any existing user session at the client. But in order to do so, the client may further need to first return to the web origin or user agent that holds the client's original session. In the spec, we further delineate the attack's relationship with, say, CSRF, and why PKCE cannot solve the problem alone.
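[Editor's note: a hedged sketch of the binding check described above, assuming a cookie-backed server-side session object. When the flow starts, the client stores the auth session identifier in the user agent's own session; on return, it verifies the callback value against the one bound to this user agent, so a flow started in the attacker's browser cannot be completed in the victim's.]

```python
def begin_authorization(session: dict, auth_session_id: str) -> None:
    # `session` stands for any cookie-bound server-side session store.
    session["oauth_auth_session"] = auth_session_id  # bound to this browser

def finish_authorization(session: dict, returned_auth_session_id: str) -> None:
    bound = session.pop("oauth_auth_session", None)
    if bound is None or bound != returned_auth_session_id:
        # The flow was started in a different user agent: possible fixation.
        raise PermissionError("auth session is not bound to this user session")
```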
One minute. Sure. So here are the document updates since working group adoption. Basically, no new content has been added, but we made several clarifications to the text. And, yeah, path forward: first and foremost, comments on the draft content are greatly appreciated. Here's the DataTracker link, and we also have a GitHub repo where you are welcome to leave comments. So, just to recap the expectation: we plan to collect some feedback on the draft and resolve the other open issues before the next IETF meeting, and hopefully we can discuss any remaining issues there. Then, depending on the feedback, we'll see what the appropriate next steps are. So thank you; that's all from my presentation.
Mike Jones: Hannes, would you like to say a few words?
Hannes Tschofenig: Yeah, I think so. Thanks for the work. I thought it was really interesting when you talked about the mix-up attack. At the time when it was published, it was ahead of its time in some sense, because the use case that it targeted or exploited, with multiple authorization servers, wasn't all that common back then. But then it took us years to work out the solution with the issuer identifier, and from your presentation it sounds like we prematurely came to the conclusion that that is the right solution; you are saying, well, probably not, which is a bit unfortunate.
Kaishuai Luo: I will say it's still the right solution as long as, or if, everybody is standards-compliant. But in reality, if the client is not, or the AS is not, then you will still get in trouble. And so we have a kind of alternative solution for this.
Hannes Tschofenig: Yeah. So definitely we need to have more of a look at this, because, of course, if companies and deployments decide not to use your variant either, then we are also toast, right? So that's a little tricky. I guess we definitely need some reviewers to look into this. Also, I think the title of the document needs a little bit of sharpening, because it has a very generic name that may be missed by many readers, I fear. But maybe there are also some ideas from reviewers on how to find a better, more modern title.
Kaishuai Luo: Thanks, Hannes.
Hannes Tschofenig: Mike, if you find some people, uh, in the room to review that one, I think that could help us move the document forward because there's obviously... this is complicated stuff; uh, there's a lot of details that need to be looked into, so it's, uh, probably not something for beginners, I fear.
Mike Jones: Can people both raise their hands and put their name in the chat so that the note-taker can capture your name as a potential reviewer? Aaron, you seem like you'd be a good reviewer; are you willing? Aaron, uh, succumbed to peer pressure. Thank you. Brian, you seem like you'd be a good reviewer; are you willing to succumb to peer pressure? Okay, thank you. And Antoine, thank you. Anyone else is welcome to do that. Um, I do not have a deck that I can find for the SPIFFE authentication topic. Was none submitted?
Hannes Tschofenig: I approved it, so the deck should be there, but there may be a lag in refreshing it or so.
Mike Jones: Then the title may not correspond. We're burning time now.
Hannes Tschofenig: I can see it at the all the way to the end of the list.
Mike Jones: If you can see it, you, uh, submit it for presentation now. Okay. Hans is doing it, so just approve it. There we go. Okay. All right.
Arnt Richard Johansen: Okay, I can't see it either. I'll try to refresh.
Mike Jones: Okay, I'm going to let you work that out in the background. Let's proceed to the RAR metadata presentation. You're on, you're on.
Yaron Sheffer: Hi, I am with you. Can you hear me, and can I get clicker control?
Mike Jones: I'll reset the timer to 10 minutes.
Yaron Sheffer: Okay, how do I control the slides, or just take me to the next one? Okay, yeah, got it. So hi, everyone, Yaron Sheffer from Vienna, and this is about RAR metadata. We like RAR; we use it in banking. We know it's being used in healthcare; I'll show some examples from HelseID from Norway in my presentation. Takahiko Kawasaki from Authlete wrote about Cedar policies in RAR, so it's a great draft. And the limitation we want to discuss today is that it has very limited metadata. The RAR RFC only provides that the authorization server can say which types it supports, but there's no indication of what these types mean, how they are constructed, and what the relationship to resource servers is: which types do they expect, and for which resources? So we propose adding metadata, through a new authorization server endpoint, that describes those types. Per type mentioned in the types supported, you'd have an instance in the response of this metadata saying, for example, that payment initiation has a certain description, additional documentation URI, a schema either directly or as a URI, and examples.
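[Editor's note: an illustrative sketch of what a response from the proposed type metadata endpoint might contain, with one entry per advertised type. Field names follow the presentation; the exact names are defined by the draft and may differ.]

```python
rar_type_metadata = {
    "payment_initiation": {
        "description": "Initiate a single payment from the user's account",
        "documentation_uri": "https://as.example.com/docs/payment_initiation",
        "schema_uri": "https://as.example.com/schemas/payment_initiation.json",
        "examples": [{
            "type": "payment_initiation",
            "instructedAmount": {"currency": "EUR", "amount": "123.50"},
        }],
    },
}
```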
So this is the authorization server side of things, and we also propose additional resource server error signaling. Through a normative error, the resource server could say, "I am rejecting this as forbidden because of insufficient authorization details," and then, together with the resource metadata URI from RFC 9728, this could lead to additional discovery where the client then calls this resource metadata URI and, in the response, obtains a new attribute that details which authorization details types are expected by the endpoint. So each resource can say, with certain operators, "I require one of these types" or "all of these types" or different constraints, and this way the client can go to the supporting authorization servers, discover the details type metadata endpoint, call it, and understand how to construct those types. So this is the metadata discovery.
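[Editor's note: a hedged sketch of the proposed signaling, again with illustrative field names. The resource server returns a 403 with a normative error code, and its RFC 9728 protected resource metadata lists which authorization details types it expects, with operators combining them.]

```python
# 403 body from the resource server:
error_response = {
    "error": "insufficient_authorization_details",
}

# ...and in the protected resource metadata (RFC 9728), something like:
resource_metadata_fragment = {
    "resource": "https://api.example.com/payments",
    "authorization_details_types_required": {
        "one_of": ["payment_initiation", "bulk_payment_initiation"],
    },
}
```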
And if we look at the example from Norway, which Rune Grimstad was very kind to support, this is their actual requirement: for a specific endpoint that he provided, it would need two RAR types in combination, one identifying the vendor and the second identifying the type of request being made to the healthcare system. They have a specific challenge where they have 300 healthcare providers that connect to this open API platform and write software, and all of them need to know these relationships: how to construct the RAR types and which RAR types each resource endpoint expects.
And then the next part is an optional part. This is a different pattern: what if the resource server, in addition to saying "this is an insufficient-RAR kind of error, here's my resource metadata, and you can discover how to construct the types," would actually provide an informative response body with the actual RAR object, one that's actionable? If the client takes this object, makes a new OAuth request with it, the end user approves, and the token is issued, that is going to satisfy the endpoint on the next call. So this is an optional informative mode for full remediation; the idea is that the client just uses this in the next request. We in Raiffeisen Bank International use this pattern already. We didn't want the clients to deal with learning the resource domain's RAR objects and what they mean; we just say, "You failed, this is why you failed, and that's what you've got to do to go and fix it." We find that it has additional benefits: it enables the resource server to enrich this structure. It might have ephemeral data such as transaction IDs, or flags that guide the authorization server, saying, "the risk level is so and so, so I'm instructing on different ceremonies that I as the resource server attach to that transaction." It can add cryptographic checksums to this RAR payload, or do whatever it wants. In this type of optional pattern, the resource domain wants to control the RAR body. We use it in Raiffeisen, and it has also garnered interest from the MCP working group working on fine-grained authorization. The idea is that an MCP client would attempt to call tools with an initial OAuth token obtained at login time, and then, if the resource being called requires an approval, it will instruct on remediation and say, "You failed because authorization is required," with a hint: "this is the RAR type, and this is what you need to request." So this pattern has already attracted some interest there.
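[Editor's note: a sketch of the optional actionable mode described above, with illustrative field names and values. The 403 body carries the exact RAR object the client should request next; the client copies it into its next authorization request, and once the user approves, the resulting token satisfies the original call.]

```python
actionable_error = {
    "error": "insufficient_authorization_details",
    "authorization_details": [{
        "type": "payment_initiation",
        "instructedAmount": {"currency": "EUR", "amount": "123.50"},
        "transaction_id": "tx-8c1f42",  # ephemeral data added by the resource server
    }],
}
```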
And so, yeah, this is the feedback we got: HelseID from Norway is interested in the metadata discovery use case; the actionable RAR object use case we use in Raiffeisen, and the MCP working group on fine-grained authorization is interested in it as well. That's all I had in terms of slides, and since this is a new draft, I'm requesting feedback on it.
Mike Jones: Has anybody read the draft? Justin, you're acknowledged.
Justin Richer: I have not read the draft, but I have a question for the presenter: have considerations been made regarding this turning into an unwitting oracle, especially when you're returning detail from the resource in what is effectively an insufficiently authorized state?
Yaron Sheffer: I'm not sure I got the question. What do you mean by turning it into an oracle?
Justin Richer: Oh, okay. So, I have software and I'm not allowed to get the data, but by telling me what I need in order to get the data, there's a risk of leaking internal state information and basically giving me additional leverage that I might use to pursue an attack.
Yaron Sheffer: Yes, so the draft has security considerations discussing the external authorization model represented in the RAR objects versus the internal one. The basic idea is not to expose too much, right? To be aware of those risks. And I assume you're referring to this pattern here, where the authorization server actually provides the actionable one. Okay, yeah, then of course the risk is there, but the assumption is that if an ecosystem uses RAR, then there are objects out there and there needs to be discovery of how to construct them anyway. So that level of documentation exists, the first pattern already standardizes the discovery path, and these objects already need to be designed with security in mind. Of course, once these objects present higher risks, then...
Hannes Tschofenig: Did I lose you, or was it Yaron?
Mike Jones: Yaron, we have lost your audio.
Yaron Sheffer: Do you hear me now?
Mike Jones: Yeah, now I can hear you. Continue.
Yaron Sheffer: Okay, sorry. Uh, where did you lose me?
Justin Richer: I think that you sufficiently answered the question that I was after. Obviously, as a new draft, this is one avenue that we can pursue if the working group picks this up. Overall, I do think this is great; it addresses a bunch of stuff that we very intentionally punted on during RAR in order to keep it tighter and simpler. The discovery issue is its own can of worms, so I'm excited to see it addressed in its own body of work.
Pamela Dingle: Hello. Is there a plan to address the common schema issue here? Because the way this works, every client would need to know every custom authorization details payload for every API it calls. So there's a proliferation issue, I think, but has that been discussed?
Yaron Sheffer: It's not been discussed, but I'm happy to explore it further with you; happy to connect. My understanding is that the proliferation issue has existed from the moment RAR was put out there, and each ecosystem has its types. This draft attempts to put forward metadata and discovery for them, so it does not challenge the proliferation itself but tries to put order into things.
Pamela Dingle: Okay, thank you. Look forward to discussing it.
Aaron Parecki: Just really quick. I did review the draft, and generally I think this is something that's needed. One thing that made me a little bit nervous is the whole expression syntax bit, where it's defining the all_of, one_of, and/or combinators in JSON. I'm wondering if there's some other existing thing that can be referenced to do that instead of defining it all in this draft.
Yaron Sheffer: Happy to look for something like that. Uh, will do.
Mike Jones: Okay, Yaron, proceed with your next presentation if you can share that; that would be great.
Yaron Sheffer: Okay. Um, can you share it for me and give me control again, please? I think I found it. Okay, there you go. Do I have control?
Mike Jones: Um, no, that's why I was asking you to share it.
Hannes Tschofenig: You can get him control. I can give him control; then remind me to take it back.
Mike Jones: Okay.
Yaron Sheffer: Okay, thank you. So, as Aaron mentioned previously, this is a profile based on OAuth 2.0 for First-Party Applications, developed together. The recap goes back to the two previous IETFs, where I came with a draft tackling app-to-app flows with federation across trust domains, which resulted in using the browser and a degraded, non-native user experience. Our challenge comes when a client app here on the left side needs to achieve an authorization flow with an app here on the right side, but they're not directly connected; there is some number N of brokers, meaning authorization servers that federate to the next authorization servers. Since these trust domains A through N are not served by any app, the experience is no longer native and the navigation goes through a browser, and this brings a number of challenges. We've encountered these challenges in our setup in Raiffeisen Bank International, but also learned that the Swiss government has them. In our use case, we have 10 subsidiaries here on the right side in Central and Eastern Europe, and we're trying to reuse client apps for various purposes: stock trading, crypto trading, gold trading, whatever. We want to build them once and use them across the countries, but then, to consume the IDPs and the login flows in each app, we use a broker, and our flow falls back to the browser. This was our challenge. While working on it, we got contacted by the Swiss government, and without going into detail, they have the same pattern: apps on the left side, authentication apps on the right side, and a broker. So we're in the same pattern. At the previous IETF, we presented our draft again, and it contains a new endpoint, OAuth as an API. Then Aaron was kind enough to say, "Have you considered first-party apps?" We interacted, and we said, "Okay, but ours is not a first-party use case; it's with federation." But Aaron said, "My draft is already direct OAuth interaction as an API; we could use that endpoint and build a profile upon it," and so we did.
And this draft builds on and extends FIPA with new responses. There's the federate response, where a client talks to an authorization server in a FIPA request, and the authorization server, here in step two, says, "I want you to federate. Here's my OAuth session and here's where I want to get a response, but you shall go to the next authorization server, and here's my FIPA request to it." The client complies and calls the next authorization server on behalf of the previous one, and that one, let's imagine, goes through the actual first-party app ceremonies and authorizes the request, ending up with an authorization code. But this code is intended not for the client but for the previous authorization server. So the client kind of acts like the browser would in a 302 redirect flow, just building those ceremonies onto first-party apps using the security mechanisms in there. Then, when the client provides the code, the original authorization server is happy and can finish the flow. With this option, first-party apps become a native interaction that can federate across authorization servers.
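[Editor's note: A hypothetical sketch of the federate response described above, layered on the first-party apps authorization challenge endpoint; the error code and the federate_to/request members are illustrative assumptions, not confirmed field names.]

    HTTP/1.1 401 Unauthorized
    Content-Type: application/json

    {
      "error": "federation_required",
      "auth_session": "opaque-session-at-as1",
      "federate_to": "https://as2.example.org/authorize-challenge",
      "request": { "client_id": "as1-as-client", "scope": "openid" }
    }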
And then the next command is redirect to app. When the client makes the request, if it's a native mobile app client, it would include the native callback URI, an indication of where it wants to get responses; that's the client's own deep link. Any authorization server that federates must forward this parameter, so it's kept in the context. Then the authorization server, or anyone in the flow, says, "I want you to use the app": I don't want to serve you the challenges directly, I want you to use an app that I'm indicating with the deep link, and the client does so. After that app interacts, it provides the authorization code on this callback URI, and then the client can feed the code back to the authorization server and complete the ceremony with the help of an app. These two flows already achieve our use case and the Swiss government's use case, because we can federate across authorization servers natively and we can interact with additional apps natively, all within the constructs of first-party apps.
And then a third type of interaction we introduced is insufficient information. This is when the authorization server is not sure where to federate to; it has more than one option, and it can challenge the client and say, "I need more information, and I'd like you to prompt the user and ask them for their email," because once I know the email, I know the trust domain, I know which IDP to choose, and I'll know the next instruction. So there is some language here to prompt the user and say "enter your email," or, in the next example, a multi-value choice, "choose your bank," with two values. The client then renders this choice, gets the response, gives it back to the authorization server, and this way we go back to one of the previous responses: "Okay, now that I know where you're going, I know where to federate you to, or which app to use."
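[Editor's note: A hypothetical sketch of the two "insufficient information" challenges just described; the error code and prompt members are illustrative assumptions.]

    { "error": "insufficient_information",
      "auth_session": "opaque-session",
      "prompt": { "type": "text", "label": "Enter your email", "parameter": "login_hint" } }

    { "error": "insufficient_information",
      "auth_session": "opaque-session",
      "prompt": { "type": "choice", "label": "Choose your bank",
                  "options": ["Bank A", "Bank B"] } }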
And another security realization Aaron and I have made is that when a client is federating, as in this flow, the client is calling AS2, which it has no relationship with; it has no standing there. It's a client of AS1, but it's making calls to AS2, and AS2 knows AS1 but doesn't know the client. So how does the client authenticate? Because first-party apps say the client should authenticate, and there needs to be a strong trust relationship. The understanding was that the only thing the client actually has here to show is a PAR request URI. Whenever there is federation, with the usage of PAR, the request URI generated by PAR is an indication that there was a trust relationship and a request was created there. So whenever federating, it must be done using PAR; that's the one thing. And when AS2 obtains a request and knows through some metadata that AS1 is going to federate clients, it should be careful about the challenges it serves to the client. It would be a bad idea if AS2 said to the client, "Please give me username and password," because we don't want the password, for example, to flow through the client. So that is the consideration about these clients.
And yeah, that's all I have for you. Happy to take questions and comments, and happy to ask for reviews.
Mike Jones: Standard question: has anybody read the draft?
Mike Jones: Aaron and Pam, are you willing to review the draft?
Pamela Dingle: Yes I am, absolutely.
Mike Jones: Let the record show. Aaron.
Mike Jones: All right, I found something with SPIFFE in the title; that's probably it. Okay. Proceed.
Arnt Richard Johansen: Thank you. So this is an update on SPIFFE client authentication, presented today on behalf of my co-authors: Stein, Scott, and Peter. This got adopted between the last IETF and this one. A small recap: SPIFFE stands for Secure Production Identity Framework for Everyone. It's part of the Cloud Native Computing Foundation, and it targets an identity framework for workloads. In SPIFFE, our credentials are referred to as SVIDs, SPIFFE Verifiable Identity Documents. Until now we have always had two SVIDs, the X.509 format and the JWT format, and not so long ago we started adding the Workload Entity Token format, which comes out of the WIMSE working group at the IETF. It's used in the industry, mainly in enterprise environments; I added a couple of links there.
So today, if workloads act as OAuth clients, there is the following situation: they can use SPIFFE to talk to other workloads, most of the time with mutual TLS with X.509 or just bearer use of the JWT. But if they want to start talking to an OAuth authorization server, most of the time they end up using client ID and secret, basically, or private key JWT. And it's a bit of a shame, because this secret needs to be manually provisioned and rotated; it's long-lived, and it's a bearer credential after all. But the workloads already have those automatically provisioned credentials with short rotation, and so far they simply cannot use them.
This is how the configuration often looks: on the left is Java Spring Boot, and on the right is some .NET. Most of the time you would see there is some client secret, or some client secret reference to a vault. So our goal in this draft is to have a profile for OAuth that allows you to use the SPIFFE credential, the SPIFFE SVID you already have and already rotate every hour, against your OAuth authorization server.
So the proposal has three parts, one for each credential format. For the JWT SVID, we basically profile RFC 7521, the OAuth assertion framework; this is the client authentication part of it, not the assertion part. The JWT SVID is a bearer JOSE token that carries subject, audience, and time to live. The X.509 one profiles RFC 8705, the mutual TLS client authentication part, which is already adopted. And since the last IETF and since adoption, we also added the Workload Entity Token profile, which uses the attestation-based client auth draft, currently, I think, already quite far ahead, if not even in working group last call. This Workload Entity Token carries the cnf claim and has proof-of-possession capabilities, so it is a key-bound token, and we use attestation-based client auth to provide the proof.
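[Editor's note: For the JWT-SVID part, profiling RFC 7521/7523 client authentication would presumably yield a token request like the following; whether the SPIFFE ID doubles as the client_id is an assumption of this sketch.]

    POST /token HTTP/1.1
    Host: as.example.com
    Content-Type: application/x-www-form-urlencoded

    grant_type=client_credentials
    &client_id=spiffe%3A%2F%2Fexample.org%2Fbilling-service
    &client_assertion_type=urn%3Aietf%3Aparams%3Aoauth%3Aclient-assertion-type%3Ajwt-bearer
    &client_assertion=<JWT-SVID>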
Since the last IETF and adoption, we went around the other OAuth work and asked where we need alignment. One of them, as I just mentioned, is attestation-based client authentication, which is currently a draft. Thanks to Yaron and Taka, we also added support for the CIMD spec. And we added token_endpoint_auth_methods_supported, which is an authorization server metadata property, and we are proposing three values: spiffe_jwt, spiffe_x509, and spiffe_wet.
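[Editor's note: The metadata addition mentioned here would appear in RFC 8414 authorization server metadata roughly as follows; the three spiffe_* values are the ones named in the presentation, the rest of the document is illustrative.]

    {
      "issuer": "https://as.example.com",
      "token_endpoint": "https://as.example.com/token",
      "token_endpoint_auth_methods_supported":
        ["private_key_jwt", "spiffe_jwt", "spiffe_x509", "spiffe_wet"]
    }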
So this is now our goal since we got adopted: to look around OAuth and see where we need alignment. And that's also our rough timeline. This whole thing got proposed, or started being discussed, in early 2025. Now we are in early 2026 at IETF 125, and our goal is to finish alignment with all the other parts of OAuth and also finalize the draft by the next IETF. So if you have interest in this work, please reach out, please provide a review. We're trying to use the momentum in the space and want to get this through soon. And that is it.
Mike Jones: Thank you. Who has read the draft? Aaron has read the draft. Who is willing to read the draft? Uh, two hands went up. Please put your names in the chat for the note-takers. Others are always welcome.
We are five minutes behind time, so unless there are any other comments, I'm going to end it here, and we will move on to the additional hash algorithms draft, which Aaron is going to present in person.
Aaron Parecki: Thanks. All right. Hi, Aaron Parecki again. I'm actually presenting this on behalf of Filip, who is remote and... oh, on the screen there, fantastic. Hi, Filip. So the core problem here is that there are a lot of places in OAuth that hardcode specific mentions of SHA-256, some of which have extension points defined and some of which do not; however, there's often no mention of other hash algorithms. These are a few places where this happens. This is mostly fine for most things; in places like PKCE that use SHA-256, we don't expect the actual hash method to be broken in any way significant enough to cause real problems. However, there are some places that define requirements on cryptography, in particular CNSA 2.0, that specifically prohibit the use of SHA-256, because in some cases it is considered too weak. It's a blanket ban on the hash algorithm; they require more bits. So essentially what it means is that in the cases where you have to follow those regulations, you are unable to use even PKCE as it's described today with SHA-256, or mTLS, or DPoP.
So this draft, all it does is define SHA-512 alternatives for where SHA-256 is currently defined, so that in places that can't use SHA-256, there is an option and a well-defined mechanism to use SHA-512. It does not deprecate SHA-256, and it does not try to say that you shouldn't use it elsewhere; it's specifically for the use cases where you can't use it because of your regulatory environment. There is even a mention saying that you should not use this unless you absolutely have to. There is negotiation and metadata in most of these places to allow multiple algorithms to coexist, if we ever did need to migrate off of or deprecate SHA-256. So this is a summary of essentially what's in the draft: which mechanisms in OAuth use SHA-256 currently, and what they would be with SHA-512, and where. For discovery and negotiation, there's the AS metadata that calls out which code challenge methods are supported, but there are some places where discovery and negotiation are missing, so those are also defined, in the appropriate registries. There are a few things added to RS metadata, there's a thing added to the DPoP registry, and essentially it says that if these are absent, the default is SHA-256, which is essentially the current state.
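[Editor's note: As one concrete illustration, a PKCE exchange under this draft might look as follows; the method name "S512" is an assumption by analogy to the registered "S256", with the challenge computed as BASE64URL(SHA-512(ASCII(code_verifier))).]

    AS metadata:
      "code_challenge_methods_supported": ["S256", "S512"]

    Authorization request (fragment):
      GET /authorize?response_type=code&client_id=example
          &code_challenge=<BASE64URL(SHA-512(ASCII(code_verifier)))>
          &code_challenge_method=S512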
I see Brian making some faces, which makes me nervous. But yes, essentially the action here is a call for reviews; please give this a read. Again, keep in mind that the scope of this is for the people who are unable to use SHA-256 for regulatory reasons, because there is a broad statement that says you can't use this method. This should provide them a path to use the rest of OAuth, as opposed to, for example, not using DPoP, or using the plain PKCE method, which sounds crazy, but that is essentially the only option otherwise: if you can't use SHA-256, you have to just not use it. So we're trying to give people a better path for when they have to. That's the end of that.
Mike Jones: Now Aaron has asked the key question. I just volunteered in the chat. I think Flemming raised his hand and is going to the microphone.
Speaker 2: Hello, I have a question regarding this draft; I actually read it this time. In fact, it's very specific to SHA beyond 256, and I think the problem is that maybe, with the rise of regulation, you'll need to use yet another additional hash, and then you go back to doing this sort of draft again. I think the very interesting part in this draft is the negotiation of the hash algorithm that you want to use in the authentication, and I think we should have a draft fixing the parts of the different protocol methods that don't have this negotiation capability, and then leave open to someone else the responsibility for designing whichever hash-based method they want to use. Then you can have something that is done in a very small scope, rather than requiring all the machinery of negotiation once again for an additional hash mechanism. Okay, thank you.
Aaron Parecki: So, on additional hash mechanisms: I already got two back-channel messages about adding SHA-3 and adding SHAKE-based options. The point behind keeping the draft limited was to, A, just open up the way for CNSA 2.0 to continue, which was a choice between SHA-384 and SHA-512, because under CNSA 2.0, SHA-3 is only allowed for signing firmware and hardware; it's not allowed as a general hashing algorithm. And the extension points and the discoverability are there, so they can then be reused with any future algorithm.
Speaker 2: And I think I would even narrow it down further and just do the negotiation, specifying the policy and requirements on both ends to have a sort of ladder of hash mechanisms that you want to use, with SHA-256 as the mandatory-to-implement or baseline algorithm for some cases, and then do something specific for the hash auth. Okay, thank you.
Mike Jones: Antoine, you're next in the queue. Brian, you are apparently next in the queue.
Brian Campbell: I was going to maybe go the other way. Sorry, closer to the mic. Sorry. Some of the faces I was making were largely around the negotiation pieces. If I understand the goal here, it's to fit within the constructs of a very tightly regulated world, and I would like to understand the actual rationale behind it if you could explain that to me another time. But I don't think the negotiation pieces are necessary, or even realistic to follow, and I would drop that entirely and just allow the new places you've identified to carry the new hash. If you need it, you just do it, and if you don't, you ignore it. And just for whatever it's worth, those were designed as extension points through the inclusion of exactly what you did: a new claim, a new parameter. That was an intentional decision to facilitate agility and extensibility by providing something with a new name, not necessarily a deficiency around not having a method in the way that PKCE did. I had something else I was going to say, but that's all.
Mike Jones: Justin.
Justin Richer: A big plus one to what Brian just said. The structures that you guys are extending were intended to be extended exactly how you're extending them. So I don't see it as a gap that this lacks negotiation; I see it as a feature that it lacks negotiation. Because the reality is, in a lot of these cases, especially in highly regulated environments, you don't want negotiation; you want a single configuration that's set a certain way and only ever does that. Too-loose algorithm negotiation got us JWTs with "none" being accepted in places where they shouldn't have been, because hey, the algorithm is right there, I can just plug that into my list of supported algorithms and it just goes. And yes, that is an extreme example, it's reductio ad absurdum; however, it is the pattern that was being intentionally avoided here, especially with things like ath in DPoP and whatnot. So yeah, I agree that we don't particularly need to build out negotiation for these pieces, but I do think that adding these hash definitions is fantastic, and I think we should do that.
Aaron Parecki: As we're running out of time, I will quickly speak to why that negotiation is there, and that is me coming from the point of view of having a client where I do not want a specific flag that says "only use SHA-512." So it needs those hints in order to figure out which one to use. But I get the feedback, I get the point, and I would welcome you to put it on the list so that we can discuss further. Okay, thank you.
Mike Jones: Thanks, guys. Our chair reminds us that what we're trying to do in the next set of presentations is to determine reviewers. So if you can pre-select yourself in the chat to review, that would be super, to move us along. Next we have the OAuth Extension for Multi-Agent Collaboration.
Yuan Ni: Hello, everyone. I'm going to present a proposal for an OAuth extension for multi-agent collaboration scenarios. Giving you the control. Okay.
Mike Jones: And if you can do this in under 10 minutes, it would probably let us do another presentation.
Yuan Ni: Okay, I'll try my best. So, here is the background of our research. The core idea is that AI models currently often involve complex tasks that exceed a single agent's capability. To address this, multiple specialized AI agents may collaborate with each other to accomplish one task. In this scenario, there is often a leading agent that coordinates the sub-agents to form a specialized task group. Take an example: task A may be real-time health advice, so it is not a single task; it needs the collaboration of data collection, data prediction, and advice generation agents. This collaboration may introduce some authorization challenges. For example, if each sub-agent in the task group applies for an access token individually, it causes inefficiency because of frequent interaction with the authorization server. Second, for the authorization server, managing the permissions for a potentially dynamic task group is difficult. What's more, it may involve a lack of clear traceability for the actions performed by the task group. The traditional authorization procedure is for an individual client application, or let's say clients with the same permissions. On the right side, however, is a multi-agent collaboration workflow. We can see the leading agent may receive an intent from the user, resolve the intent to generate a task, and then use the task to discover sub-agents and resource servers from the discovery services. This part is outside of authorization, but then the leading agent may assign the task to the sub-agents, and the leading agent and sub-agents form a task group. When executing the task, this task group may need to invoke resource servers to request resources, so authorization happens here.
So our solution is to propose task group authorization. We have two methods. The first is a static task group, where the leading agent selects the sub-agents ahead of task execution. The second method is dynamic task group authorization, where the leading agent chooses the sub-agents one by one and new sub-agents may join the task group. Let's see the static method first. The core idea is that the leading agent acts as the applier of the access token and requests the access token from the authorization server on behalf of the whole task group. In the access token for the static method, we want to add a new claim, maybe called "applier," to indicate the applier of the access token; in our proposal it's the leading agent ID. What's more, there may be multiple subject, audience, and scope tuples, each of which indicates a sub-agent's permission. Now let's take a look at dynamic task group authorization. In this case, the leading agent first obtains an access token, and then it selects a sub-agent dynamically. After that, it generates a task credential and sends the access token with the task credential to the sub-agent. The access token and task credential are cryptographically bound to each other. In the access token in this case, example A, we want to add a new additional claim called "attribute" to indicate the attribute of the sub-agent; in our case, it's a task ID. What's more, the sub claim in the access token may be reused to index the keys that will be used to endorse the attribute.
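[Editor's note: A hypothetical claims sketch of the static task-group access token; "applier" is the claim name floated by the presenter, while the "grants" array grouping the subject/audience/scope tuples is purely an assumption of this sketch.]

    {
      "iss": "https://as.example.com",
      "applier": "leading-agent-01",
      "grants": [
        { "sub": "agent-data-collection", "aud": "https://rs-vitals.example.com", "scope": "vitals.read" },
        { "sub": "agent-prediction", "aud": "https://rs-model.example.com", "scope": "model.invoke" }
      ]
    }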
So what is the task credential we have introduced in this procedure? Here is an example. The task credential is a lightweight credential issued by the leading agent to the sub-agent, and it contains the leading agent's ID, the task ID, the sub-agent ID, and maybe the hash of the related access token. The signature of the task credential is generated by the leading agent using the key included in the access token, which we just added. After the sub-agent receives the access token and task credential, it provides both to the resource server to request a service. On the resource server side, it needs to verify the access token first, as OAuth defines, and then it needs to verify the task credential's validity, or signature. Then it needs to verify the relationship between the access token and the task credential: for example, it may extract the public keys from the access token and use them to verify the signature of the task credential. And since the access token may include a task ID, the resource server should verify that the task ID is the same as the one included in the task credential.
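[Editor's note: A hypothetical sketch of the task credential; all member names are illustrative. Per the presentation, it is signed with the leading agent's key conveyed via the access token, and the resource server checks the signature, the task-ID match, and the access-token hash.]

    {
      "iss": "leading-agent-01",
      "sub": "agent-prediction",
      "task_id": "task-health-advice-42",
      "at_hash": "<hash of the bound access token>",
      "exp": 1773640800
    }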
So you may see that our dynamic task group method is inspired by the DPoP method. Here is a comparison of the two. DPoP cryptographically binds the access token to the client's public keys; in our method, the access token is bound to a client's attribute. In DPoP, the credential is a public key credential; in our proposal, the credential is an attribute credential issued by the leading agent. And this is the conclusion of our two proposed authorization methods, the static and the dynamic one. The static one is very suitable for predefined or predictable tasks with a fixed task group, and the dynamic method is suitable for dynamic tasks. Both of them can improve efficiency. So here are the advantages of our proposal: improved efficiency, flexibility, and simplified management. To conclude, we propose multi-agent collaboration authorization methods, two of them: one static, one dynamic. We also have some questions, like: is there any existing authorization method in OAuth suitable for multi-agent collaboration? And is it feasible to extend the access token with the applier or attribute claim, or can we reuse some existing parameters? That's all, thank you.
Mike Jones: Peter.
Peter: Okay, in your proposal you're trying to use an attribute to describe a task, and to me that requires additional syntax or parsing mechanisms. So can you explain a little bit how you use an attribute to describe a task?
Yuan Ni: Okay. A task may be described with a natural-language sentence like "I want health advice from my agent." It can also be a unique identifier. In our proposal, the attribute credential is issued by the leading agent, because the leading agent can determine the task from the intent and assign the task to the sub-agent. So the leading agent may be the issuer of the attributes.
Mike Jones: Okay, we're at time. Who is willing to review this draft? Peter, Aaron. Thank you. Others are always welcome. With that, thank you.
We will move on now to the Agent-to-Agent Profile for Transaction Tokens.
Yuan Ni: Hello everyone, I'm Yuan Ni. Today my presentation is about the OAuth Transaction Token Profile for Agent-to-Agent Calls. A transaction token is a short-lived, signed JWT designed to maintain and propagate identity and context throughout the service call chain. What we want to do is extend this mechanism to the agent-to-agent scenario to protect the user and agent identity, the authorization context, as well as the agent-to-agent context. Why is this necessary? The OWASP Top 10 for agentic AI applications has illustrated that there are several critical risks in multi-agent workflows, such as goal hijack, context poisoning, and privilege abuse. We believe the transaction token is a powerful way to solve these problems because, firstly, a transaction token can preserve the user identity and authorization context to avoid any deviation from the user's intent. Moreover, it is short-lived and down-scoped, so it can prevent privilege abuse and matches the agentic nature. Last but not least, we want to use the transaction token to protect the agent-to-agent context to prevent context poisoning and rot.
How do we realize these goals? Transaction tokens introduce several changeable and immutable data fields; let's first make a quick overview of them. The first one is scope: it is the immutable purpose of the transaction. The second one is transaction_context: it contains the immutable details throughout the call chain. And the last one is request_context: it corresponds to the changeable environmental context. What we do is simply encapsulate the agent-to-agent message structures in the transaction token claims. For example, we map the task ID and the user input into the immutable data fields, such as the scope and transaction_context, and we map the agent thinking and status updates, such as context_id and next_id, to the request_context.
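[Editor's note: A hypothetical claims sketch of the mapping just described, using the field names as spoken; the claim names in the actual transaction tokens draft may differ.]

    {
      "scope": "task:health-advice",
      "transaction_context": {
        "task_id": "task-42",
        "user_input": "Give me advice based on today's readings"
      },
      "request_context": {
        "context_id": "ctx-7",
        "next_id": "agent-prediction",
        "status": "working"
      }
    }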
So here come the questions we really want feedback on. The first one is: where should the scope come from? The first option is to just put the task ID into the scope; the second option is to let the scope derive from the external access token's scope and put the task ID in the request context. The second question is: should we introduce a new claim to carry the agent-to-agent context, for example agent_context, or just directly use the existing request_context? Finally, I want to mention that we have had a discussion with Ashley, the author of the transaction tokens for agents work. We're both interested in transaction tokens for AI agent applications, so there may be potential for merging in the future. That's all for my presentation.
Mike Jones: All right. I know there are advocates for transaction tokens. Are one or several of you willing to review?
Hank: Yeah, hi, this is Hank. Um, not immediately in the next two or three weeks, but I will do it in the next six weeks.
Mike Jones: We have a volunteer. Thank you. Anyone else? And I'm going to change decks now. If you're going to volunteer, you can do so in the chat as I've been asking people to do. Thank you very much.
We're going to move on to the OAuth Agent Operation Authorization presentation.
Speaker 3: Okay, I'm not sure whether my co-author Suresh is here; I assume not, so I will do the presentation on behalf of our co-authors. So, what is the problem we want to solve? The first one is the authentication framework for the agentic environment. For example, if you imagine an open-ended agent, you cannot know what the agent will actually do, because the agent has capabilities to extend its skills. So we cannot do the authentication based on pre-configured policy; the policy could be dynamic, generated by the agent. That is one thing. Another is that the agent itself, due to the hallucination-prone nature of LLMs, may not accurately reflect the user's intention, so we need some kind of mechanism for the user to consent to what the agent actually wants to do. And another is delegation, which includes which agent is delegated by which user to do what kind of behavior, and also delegating the intention of the agent: whether the user agrees with the agent's reading of the user's intention. That is the delegation.
So our solution here is not to reinvent any new wheels. We understand that the OAuth working group and community have built a lot of very good standards, and we read very carefully the draft that Brian published days ago, and we want to follow that architecture. We want to define minimal extensions around the claims and the token format to enable interoperability in the agent ecosystem. The mechanism is quite simple: we reuse the pushed authorization request standard and put an agent operation proposal claim there. In this proposal, the agent, for example, wants to propose binding the user identity and the agent identity together, and binding them with a proposed behavior, for example "buy something cheap on November 11th," something like that. With this mechanism, the authorization server can first verify whether this is allowed by policy, and secondly can redirect to the user for the user's authentication, consent, and authorization. After that, the AS will generate an evidence claim which, as a witness of the AS, attests that the binding is approved. It basically says that both the authorization server and the user agree that this agent acts on behalf of this user and does this behavior, and this is allowed.
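[Editor's note: A hypothetical sketch of a pushed authorization request carrying the proposal described above; the agent_operation_proposal parameter name and its members are illustrative assumptions. The JSON value is shown decoded for readability.]

    POST /par HTTP/1.1
    Host: as.example.com
    Content-Type: application/x-www-form-urlencoded

    client_id=shopping-agent&agent_operation_proposal=<url-encoded JSON>

    decoded agent_operation_proposal:
    {
      "user": "alice",
      "agent": "shopping-agent",
      "behavior": "buy something cheap on November 11th"
    }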
So for the security model here: first, we think that due to hallucination and prompt injection, we cannot trust what the LLM says; that is the drift from the user intention. But our assumption is that the agent runtime, the agent code, can be trusted.
Mike Jones: Okay, we're going to stop so we can get one more presentation in. Who is willing to review this draft? Peter, Aaron. Thank you. Put your names in the chat. Next, we will move on to the OAuth Rich Authorization Requests (RAR) Lifecycle [Note: Corrected title based on common naming patterns].
Min: There we go. Yeah, thanks. Okay. Mingchen from China Mobile. We have two drafts. The first is that we want to extend RAR with two new members, for binding unambiguous authorization requests and for lifetime bonding. So we added two new members here: the first object is process_context, and the second is lifetime_bonding. Have a quick look: in process_context we add parameters for assurance level, compliance framework, and risk signals, with IP address and device ID; in lifetime_bonding we added type, task ID, notification endpoint URI and method, and also terminal states. This is a very brief introduction because we have another draft.
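[Editor's note: A hypothetical sketch of one RFC 9396 authorization_details object carrying the two proposed members, using the field names as spoken; exact spellings in the draft may differ.]

    {
      "type": "ai-agent-task",
      "process_context": {
        "assurance_level": "high",
        "compliance_framework": "example-framework",
        "risk_signals": { "ip_address": "203.0.113.7", "device_id": "dev-42" }
      },
      "lifetime_bonding": {
        "type": "task-bound",
        "task_id": "task-42",
        "notification_endpoint": { "uri": "https://client.example.com/notify", "method": "POST" },
        "terminal_states": ["completed", "cancelled"]
      }
    }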
Mike Jones: Really, we're at time. What would you like people to know in the last minute?
Min: Yeah, I want people to know that OAuth as it stands is not enough for AI agent authorization. We have to do more for fine-grained authorization access.
Mike Jones: Okay, that's a good statement. Who would like to review Mingchen's draft about that? Aaron again, and I saw Hank's hand move, so let's put Hank down.
Hank: No, no big promises on this one.
Mike Jones: Okay. And Flemming, yes?
Okay, thank you all for moving with us at such a pace; we have a lot going on. There are at least three other drafts that we have not discussed; they are in the Datatracker, and nothing prevents you from reading and reviewing them without presentations having occurred. Thank you all; have a good afternoon.
Min: Let's see. Thank you all for the chance to advertise. We have a side meeting tomorrow at 10:30 in the Hunan side meeting room. The topic is AI agent authentication and authorization. Welcome; join us. Thank you.
Hannes Tschofenig: And a thank you from my side as well. In the next couple of months, we'll schedule some virtual interim meetings to discuss these topics, and of course try to wrap up our working group documents as well. So that will be fine. Thank you all.
Mike Jones: All right.
Speaker 4: Hannes, will you call for more topics on the interims and not just these ones? Because there's more that we didn't cover.
Hannes Tschofenig: Of course, of course.
Speaker 4: Okay, thank you.
Mike Jones: All right, active session. Thank you. Bye-bye.
[End of Audio]