
Session Date/Time: 20 Mar 2026 01:00

Mark Nottingham: Okay, so it's 9:00 now. Shall we get started? This is the HTTP Working Group. If you've wandered into the wrong time, we won't mind if you leave now. Next slide. Tommy, do you want to run the slides or shall I? You're running the slides. Fantastic.

Tommy Pauly: I'm driving the slides.

Mark Nottingham: So, this is the famous IETF Note Well. These are the policies by which we operate here at the IETF. If you're not familiar with this, please do take some time to read it. You can find this using your favorite internet search device by searching for "IETF Note Well". This covers things like our intellectual property policies, which are increasingly important, as well as guidelines for your behavior, especially around professionalism and harassment and similar topics. So please do take a look at that if you're not familiar with it. Also, this session is being recorded, and so there's a privacy statement that talks about that. Next slide.

So, we have a number of topics today. We've rearranged the schedule a little bit to accommodate remote presenters. So we were going to start with the other topics, and then we're going to go into our active drafts, and finally, we have a follow-up of some related work that isn't a suggestion for work here, but folks thought that it might be interesting to talk about because of the overlap with expertise in this group. So do we have any agenda bashing? Okay. Hearing none. Oh, Ben. Benjamin. Go ahead, Ben.

Benjamin Schwartz: It's at the discretion of the chairs, we could switch the Unbound DATA and CONNECT drafts. I think that there's an argument that the logic flows better in the other order. But I'm also happy to leave it this way.

Mark Nottingham: I don't see any reason why not. Thank you. Just remind us if we forget at the time.

Tommy Pauly: Yep. Unless anyone has objections to that, that seems perfectly reasonable.

Mark Nottingham: Okay. Anything else? Let's go ahead and get started then. First up we have Dick Hardt with HTTP Redirect Headers. Go ahead.

Dick Hardt: Hello everybody.

Tommy Pauly: Hey. Do you want to drive the slides yourself?

Dick Hardt: I don't know how.

Tommy Pauly: So you should have a Share Slides or Request Slides button that looks like a little document.

Dick Hardt: Yeah.

Tommy Pauly: And then I can grant that to you. Oh, that one's for sharing your screen. The slides should already be uploaded, so click the other one.

Dick Hardt: Oh, Request Slides. Okay.

Tommy Pauly: Yes. Great. And so you should be able to pick your slide deck and then you can advance it how you want.

Dick Hardt: Okay. Confirm your selection. Got it. Great. Okay, HTTP Redirect Headers. Sam Goto at Google and I are working on this. We're going to go over the problem, talk about the solution at a high level, show how OAuth works before and after, talk about how deployment could happen, go into some more details, and then there's a bunch of open questions at the end that hopefully we'll have a little discussion on.

The problem became quite acute at the last IETF in the OAuth working group, as people talked about and shared some attacks where people could essentially pull the authorization code out of the redirect and reuse it. Sorry, it's like 1:00 in the morning here; it's too early for coffee. That inspired me to start thinking about this problem and some of the related problems, one of them being attacks where either the client or the AS gets redirected to, but not from whom they think the redirect is from. Let's see, that doesn't do what I thought it would, does it? Oh, yeah.

So, solution one is some new headers: Redirect-Query, and then Redirect-Origin and Redirect-Path, which is a modifier for the origin. How it works is that when the server is going to do a redirect, it also sends a header called Redirect-Query that carries the same parameters that are in the redirect. The browser says, "Oh, I see you sent redirect headers, so I'm going to send those to the target, and I'll also add in the origin I got them from so that the other side knows the origin." The receiver now gets the parameters in headers, and potentially not in the URLs, if the URLs are sensitive. JavaScript in the page doesn't have access to these particular headers, so they can only come from one server to another server. You can think of it as setting up a channel between the first server that's doing the redirect and the other server, with the browser mediating the parameters.

So in OAuth today, the client, which is the first server, redirects to the AS with parameters in the URL, and the AS redirects back with a code in the URL. That code is exposed in logs and in the page, and the malicious activity tends to happen from JavaScript of some kind, or an extension, or something like that that can read the URL. With the redirect headers, the client sets Redirect-Query while also passing the query in the query string. The browser detects that and forwards the parameters it got in the header, along with Redirect-Origin. The authorization server says, "Oh, I got the request in a header, so I'm going to send the response only in a header," then the browser sends that to the client in the header, and the OAuth authorization code is no longer in the URL.

So people are like, "Well, how are we going to get this to happen?" But the deployment doesn't require any coordination. Each of the parties can adopt it, and when they see the party downstream supports it, they start to add support. It starts with the clients doing something; then, if the browser detects that the client sent the headers, the browser sends headers. And if the server gets the headers, it knows that both the client and the browser support this, so it knows to send the response only in a header.
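The mediation step being described could be sketched as follows. The Redirect-Query and Redirect-Origin header names are from the proposal, but the exact forwarding rules here, and the `follow_redirect` helper standing in for the browser, are assumptions for illustration only:

```python
from urllib.parse import urlsplit

def follow_redirect(redirecting_url, response_headers):
    """Sketch of the browser-mediation step: if the redirecting server
    included a Redirect-Query response header, forward it to the
    redirect target as a request header, and add a browser-attested
    Redirect-Origin naming where the redirect came from."""
    location = response_headers["Location"]
    request_headers = {}
    if "Redirect-Query" in response_headers:
        request_headers["Redirect-Query"] = response_headers["Redirect-Query"]
        parts = urlsplit(redirecting_url)
        request_headers["Redirect-Origin"] = f"{parts.scheme}://{parts.netloc}"
    return location, request_headers

# Example: an AS redirecting back to the client, code carried in a header.
target, hdrs = follow_redirect(
    "https://as.example/authorize",
    {"Location": "https://client.example/cb",
     "Redirect-Query": "code=abc123&state=xyz"},
)
```

The point of the sketch is only that the browser, not the page, attaches Redirect-Origin, so the receiver can trust where the redirect came from.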

The origin is now a browser-attested value, which helps mitigate a few of the attacks that are out there. And the path is for when there's more than one tenant on the same origin, in different paths: it's a way for the app to say "this is my path," and the browser verifies that the redirect actually came from there, and if so it appends the path to the origin. On privacy, I don't think there's anything that isn't covered otherwise: you now have an origin, and it is revealed, but it's revealed only in a redirect, where the other side is already getting it as a redirect URL, and it can't be stripped out.

So, open questions. Of course, this doesn't work if the browser doesn't do anything, but if the servers don't know about it, then nothing happens either. This is a problem really driven by the people over in OAuth who care about it. When I posted about this to the OAuth list there was a lot of support, but the OAuth working group isn't the right working group for this, so we're over here in HTTPBis. My thinking is we define the spec here and then coordinate with the Fetch people to get the browser behavior. I think that's a reasonable split, and it seems like that's happened in the past for things like this. Should I stop for questions, or should I just go through the next couple of questions?

Mark Nottingham: Probably finish your presentation, and then we'll go to questions, I think.

Dick Hardt: Okay. Another question that came up in discussions on this is: why not form post? You're still exposing the parameters; it just takes a more sophisticated attack because the parameters are in the page, but extensions still have access to the page, since in form post you're pushing down a page that auto-submits the values to the client. And I think an even bigger issue is that most deployments use redirects; their infrastructure is already set up for that, so they're used to getting a GET instead of a POST, and changing that requires a fair amount of infrastructure and routing changes for a lot of things. We did a little test to see what it would take for this to roll out to clients, and really, if you get it into some of the top libraries, when people update their library it just happens for them; the apps don't really need to do much work besides update the libraries. I don't know where that slide went; I think it was in an earlier draft. I reached out to a number of the library developers and they're all excited about this and keen to add it in.

Why not a Structured Fields dictionary? On the other side, you're already parsing a string, the query string in the URL, so parsing the header the same way means you've got the same encoding. It's just the same string in a different place, as opposed to having to parse a string and turn it into Structured Fields.
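The reuse-the-same-parser argument can be seen with Python's standard library; this example is purely illustrative, not from the draft:

```python
from urllib.parse import parse_qsl

# The same percent-encoded string, whether it arrives in the URL query
# or (under the proposal) in a Redirect-Query header, can be fed to
# the exact same parser, so the encoding rules match by construction.
query = "code=abc%20123&state=xyz"

from_url = dict(parse_qsl(query))      # today: parameters in the URL
from_header = dict(parse_qsl(query))   # proposal: same string, in a header

assert from_url == from_header == {"code": "abc 123", "state": "xyz"}
```

A Structured Fields dictionary would instead require a second parser with different escaping rules on both ends, which is the cost Dick is arguing against.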

And with respect to Redirect-Origin versus Origin: they're slightly different in how they work, and Redirect-Origin only happens with Redirect-Query, which only appears on these types of redirects.

And why not the Sec- prefix? In chatting with some of the browser people, they think Sec- should only be used for headers that can't be set by JavaScript, so we went with Redirect-Query. But as I thought about it while putting the slides together, we don't necessarily need to have these matching: we could have Sec- on all the request headers and Redirect- on the response headers.

And then there's a question around header size. Most things that are really large are already using form post instead of redirects, but there's an open question around whether, as we move things from the query to a header, we start to blow up headers or not. So: get feedback from the group, see if we're interested in adoption. Chrome has expressed interest; Sam circulated it with a bunch of people on the team. Better-auth is one of the OAuth libraries that has expressed support. And then get the providers to adopt it. I have five minutes left. Okay. Questions?

Mark Nottingham: Okay, um I don't see anybody in queue yet. There we go. Got Martin.

Martin Thomson: All right, hi. Martin Thomson. There was quite a bit of discussion in the chat from quite a few people, and I'd encourage you to read it, Dick. I think a lot of us aren't convinced by this; it just moves the bits around, and that doesn't really help in a number of ways. Justin's comment is probably good: the best way to protect information in the front channel is don't put it there. And David also points out that browser extensions will be able to read and set all of these values, so that's not giving you anything, whether or not you have Sec-; it doesn't really matter. So I'm not seeing much value to the protocol ecosystem as a whole for this one. But I think there's another problem here, which is that, honestly, this is work that could be done in the OAuth working group if they thought it was necessary. This working group doesn't have a monopoly on the definition of header fields. In this case we'd probably want to be consulted, but generally speaking I'm kind of negative on this one. I can't say much more than that, I think.

Dick Hardt: Thanks Martin. I just realized that I don't have the chat open and I'm trying to figure out where I see the... Oh, there's a button.

Tommy Pauly: There's an icon at the top left. Yeah.

Dick Hardt: Oh, I'll go read it after.

Mark Nottingham: Yeah, you don't need to do it now. So, I'll just emphasize what Martin said: we don't have a monopoly on creating HTTP headers. We might have opinions about them, and we can consult and provide some best practices, and you had some questions there about things like structured fields, implementation support, and the length of headers. But the expertise of this group is mostly around making sure that the HTTP aspects of the protocol are appropriately defined. And Dick, when we talked about this, we also talked about the WHATWG. They do effectively have a monopoly on what gets into the Fetch spec, which I suspect is what you're really interested in. And if they have similar concerns, it's probably not going to get into the Fetch spec.

Dick Hardt: Right. Okay. The people who had done the security analysis and talked about the threats at the last OAuth session, I'd reached out to them about this idea, and people on the OAuth side had been fairly supportive. The OAuth working group is super busy, though, so it gets flooded with stuff.

Mark Nottingham: I suspect, without speaking for them, that when the browser vendor folks look at this through the lens that they have... one of the things that came up in the chat was the ability to use this as a tracking vector. That would be a big issue to tackle, I think, if you wanted to take this forward with them.

Dick Hardt: Okay. Um.

Tommy Pauly: All right. Do we have any other comments, feedback? See no one's in the queue. Okay.

Mark Nottingham: Don't hear any, yeah.

Tommy Pauly: All right. Perfect. So we're switching slides? Yeah one more from Dick.

Dick Hardt: The next presentation is HTTP Signature-Key. I'm going to talk about HTTP Signature-Key: what's missing, how this works, a number of different schemes for doing it, and then, as I was implementing this, I thought of another one that would be super useful, which I'll talk about at the end.

So why HTTP message signatures? The limits of existing things: bearer tokens, I think everyone knows why those are problematic. DPoP is pretty narrow in where it can be applied; it's really set up to work with OAuth. And mTLS, while useful within an enterprise, has lots of problems in the open world, where things get terminated at proxies, and often you really want this information at the application layer, not the transport layer. HTTP message signatures give you proof of possession, you get some message integrity depending on what you've signed, and it works through proxies and CDNs. When this work started, I was highly skeptical whether you could actually get it to work, because canonicalization is really hard, having worked on XMLDSIG way back in the day, but it seems like they have addressed all of those things and it's a great piece of work. But people aren't really sure how to get the key. How you get the key has been one of those open questions, and that led me to look at how we move the key, inspired partly by things I was doing in MCP and by similar work happening in WebAuthn. So I reached out to Thibault, and we are proposing a more general structure that isn't agent-specific. That's Signature-Key: "here's the key to use for verifying the signature, based on the input."

So there's a label, just like there is in Signature and Signature-Input: the label, the type of scheme, and a bunch of parameters. The parameters depend on the scheme. These are the initial schemes, and you select which key to use depending on the label. There are some challenges if you want to do this with more than one signature, because you want the signature key to be one of the things that's signed, which means you kind of need to know ahead of time what all your signature keys are. I'm open to ideas people might have around that.

So, the four initial schemes. HWK, which stands for Header Web Key, gives you a pseudonymous identity, using the thumbprint as the identity; the key is right there inline. It's "Header Web Key" in that you're basically taking existing key definitions that are JSON Web Keys, taking those same URL-safe base64 encodings, and using those same strings in a structured header. JWKS URI is identified by a URL; I'll actually go through details on all of these. JWT is for delegation: it's a JWT that you then resolve to a JWKS. And then X509, because Thibault thought we would need that, since lots of people have X.509 infrastructure.
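The thumbprint-as-identity idea presumably builds on the standard JWK thumbprint (RFC 7638): a SHA-256 hash over the key's required members, serialized as JSON with keys in lexicographic order and no whitespace. Here is a minimal sketch for an OKP (Ed25519) key; other key types have different required-member sets, and whether the draft uses exactly this computation is an assumption:

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk):
    """RFC 7638-style JWK thumbprint for an OKP key: SHA-256 over the
    required members (crv, kty, x) serialized in lexicographic order
    with no whitespace, base64url-encoded without padding."""
    required = {k: jwk[k] for k in ("crv", "kty", "x")}
    canonical = json.dumps(required, separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Hypothetical key value, for illustration only.
key = {"kty": "OKP", "crv": "Ed25519", "x": "example-public-key-b64url"}
tp = jwk_thumbprint(key)
```

Because the thumbprint is deterministic, a server can use it as a stable pseudonymous identifier for rate limiting or TOFU, as Dick describes.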

So the Header Web Key is self-contained: you're not saying "hey, here's my URL," you're just saying "this is me." You get TOFU, which could help in rate limiting: you see that a particular key has been behaving itself for a while, so you let it have more requests, or you don't limit it, or whatever you're doing to figure out who's making the call. With JWKS URI, you now have the ID and the well-known type, so the receiver knows how to construct a URL and pull down the file, and in that JSON there needs to be a jwks_uri that it can then fetch to find the kid that was referenced in the signature.

And, well, Martin, there are different ways of getting keys, right? If you don't want the public key inline, you have to say where to get the key from. The other one is including a JWT directly in the Signature-Key, which is a way of delegating: an entity that has a resolvable URL hands out a key pair to a delegate, and it's the delegate making the call, and the delegate says, "Hey, I'm a delegate of this entity, because here's the JWT; you can go verify the JWT, and in the JWT is the cnf claim that has my key inside of it." And then X509 is like, "Here's where to go get it, and here's the thumbprint of it."

And so you can envision how something might move along. You'd start with just HWK, where it's just a bare key. Then there are cases where you want to know who it is that's calling you, so you can resolve it back to a URL; now you have bound a URL to the key, and you know it's example.com calling you, so you can act on that. And then having a JWT enables more scale, where each workload can have its own ephemeral key, and some management process gets the JWT signed by the JWKS key and includes that in the header.

Mark Nottingham: Dick, I see you have a fair number of slides left and we've got about eight minutes, so just keep that in mind.

Dick Hardt: Okay, thanks. And we want to discuss it as well, of course. Caching... let's see. Privacy: HWK is anonymous. Somebody fetching your URL lets you know that they fetched it, but you called them anyway, so I'm not sure how significant that is. There are IANA registrations.

So, the JWT scheme as a self-issued key delegation. I came across this as I was looking to get a key pair in a mobile app. You can of course use the secure enclave to sign things, but it's slow, so you don't want to use it to sign all of your requests. But if you use the enclave to sign a JWT, and inside the JWT is an ephemeral key that you generated, then you can send that JWT down, and the other side can bind everything to the thumbprint of the key that's in the enclave, enabling you to keep rotating the ephemeral key that you're using to sign all of your requests. So you're signing all the time with a key that's in your software. You get TOFU, but with much better scale and performance than if you were just using the enclave, and you get all the security of the enclave tying it to a device, where the root key isn't exposed.

I think I've described all this in the last slide. It's a straightforward header: you're putting the key of the enclave into the JWK, you've got the key being used for signing in the cnf claim, and the issuer is effectively the thumbprint of the key that was used to sign the JWT. So think of it as a self-signed cert.

Why a separate header versus Signature-Input? That's already a fairly long parameter, and this separates the key from the input. That was it.
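The delegation structure described here might look like the following payload sketch. The cnf/jwk claim names come from RFC 7800; the values, and the use of the enclave-key thumbprint as the issuer, are illustrative assumptions based on the talk, and the enclave's actual signing of this payload is omitted:

```python
# Sketch of the self-issued delegation JWT payload: the slow enclave
# key signs this once, vouching for a fast software (ephemeral) key
# that is then used for per-request HTTP message signatures.

# Hypothetical ephemeral public key, rotated freely by the app.
ephemeral_jwk = {
    "kty": "OKP",
    "crv": "Ed25519",
    "x": "ephemeral-public-key-b64url",  # placeholder value
}

delegation_payload = {
    # Identity = thumbprint of the enclave (root) key, per the talk.
    "iss": "enclave-key-thumbprint",  # placeholder value
    # Confirmation claim (RFC 7800): the key actually signing requests.
    "cnf": {"jwk": ephemeral_jwk},
}
```

The verifier binds everything to the enclave key's thumbprint while only ever checking fast software-key signatures per request, which is where the performance win comes from.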

Mark Nottingham: All right, thank you, Dick. So, reactions, thoughts? Obviously there's a lot of detail in this proposal, but I'd like people to try to focus on the general question of whether this working group should be looking at carrying key information for HTTP message signatures in the message, so we can see if there's interest in this topic. Anybody in queue? I see lots of discussion in the sidebar. Ah, David Schinazi.

David Schinazi: Hi Dick. David Schinazi, Google. One of the big caveats here that scares me: for Concealed Auth, which we published and shipped in this working group last year, for cryptographic reasons we had to send the key in the header, and the first thing people started doing was just checking that the signature matched the key that was sent, instead of actually checking their database, which is a variant on the alg:none problem. That's not a deal breaker, to be clear, but it got me thinking: what bit of information do you actually need here? At least for the use cases I can think of, you just need the key ID, because you're going to have the keys you trust in a database somewhere, and you don't want to try the signature against all of them, so having a pointer into that database makes sense to me. So first off, am I right there, or are there cases where a key ID is insufficient?

Dick Hardt: In most of the use cases I'm working on, the key ID is not sufficient, because the receiver doesn't have the key to begin with.

David Schinazi: Can you describe those, please? Or at least one of them?

Dick Hardt: Sure. The one I was just working on, which is why I thought this would be useful: you have a mobile app that calls home to register itself, and then the user logs in, which binds the user to that particular install of the mobile app. I want the mobile app to make calls securely, using HTTP signatures, to the server, and then be able to bind that to stuff I do over in a web session or something like that, with the server knowing it's this particular install of the mobile app. In doing that, I start off doing an attestation with the platform to prove that I'm a particular app install. But all of the calls I make, including the first call to my server to get the challenge, use the key that I've generated. The way my code works right now, I'm signing with an enclave key, and that takes 10-15 milliseconds every time you want to sign something, so I want a derivative key. The key is not known to the server beforehand; it doesn't know about the device at all. So the key becomes the identifier used in all the future signing, so that I know it's the same thing I talked to before.

David Schinazi: I see, but how do you prove that this new key is signed by your enclave? Like how do you get the server to trust...

Mark Nottingham: We're kind of tight on time here.

David Schinazi: Okay, I'll take it to the list, thanks, but I want to understand that part.

Dick Hardt: Sure. Okay. Are there other people in the queue?

Tommy Pauly: Yeah we got Martin.

Martin Thomson: Oh, yeah, mic's on. Just quickly: there are cases, David. I was looking at this Device Bound Session Credentials thing, and the WebAuthn case is another one, where you essentially have an unknown entity that wants to be able to bind a key so that it can have a sort of continuation of session. I think Dick's example is a little more convoluted than that, but essentially what you're saying is: this is the credential that we're going to fall back on, to create effectively a session.

David Schinazi: I see, so the first use is what creates that. Got it.

Martin Thomson: And then within that session, which gives you the continuity, you do things like logging in or establishing credibility, which is the WebAuthn case, right? So there are use cases for this. I think this design is far too complicated, and generic is probably the wrong thing to do for this case, but yeah.

Mark Nottingham: Thanks. So, there's been quite a discussion in the Zulip chat. I'd encourage folks to continue that on the mailing list and see whether there's convergence here, or whether we can figure out what the next steps are. Tommy, does that make sense to you?

Tommy Pauly: Yeah, I agree with that. And thanks for all the engagement we've had in the chat. I think that's good.

Mark Nottingham: Yeah. It's great when we have that active discussion; it just needs to transfer to activity on the list.

Tommy Pauly: Exactly. Let's move it up.

Mark Nottingham: Okay. If there's nothing else on that one, I think next up is me, with no hats. How do I do this again?

Tommy Pauly: Do you have slides for this, Mark?

Mark Nottingham: Yeah, they're uploaded. I'll just, you know...

Tommy Pauly: Do I need to give you slide permission?

Mark Nottingham: I've never really used these tools. Which button do I press?

Tommy Pauly: Share Slides, at the bottom. I can grant it for you if you need. There you go.

Mark Nottingham: Ah here we go. Okay good. Lovely. Oh, sorry I canceled your timer Tommy.

Tommy Pauly: Okay. I'll I'll get the timer going again. You go ahead.

Mark Nottingham: Okay, awesome. So this is draft-donnelly-httpbis-preliminary-request-denied. There it is. Just a bit of context: browser prefetching is a thing; browsers do it as a performance optimization. This is being driven by a lot of work in the W3C's Web Performance Working Group, and it's triggered by things like hints in responses and response headers. Now we have a spec called Speculation Rules, a little rule set that browsers can use to figure out when they should fire off one of these speculative requests. It's a pretty fundamental performance technique now.

So the problem here is that prefetching is driven by information available to the client at the time of the prefetch request, and those prefetch requests contain a header indicating what they're doing: Sec-Purpose: prefetch. But in some cases the server decides it doesn't want to serve that prefetch, not because of any particular error condition like authorization or server overload, but because it doesn't believe the prefetch is actually going to help performance in this particular situation. It has information, regarding congestion or other circumstances, indicating that the prefetch isn't going to help right now, so it effectively denies the prefetch.

So the question is: what status code should the server use when it does so? This has been deployed for a while now, and there's been a long discussion in the Speculation Rules specification at the WICG about what to do in this case. They ended up specifying that the browser will interpret any non-successful status code (anything above 299) as the speculative fetch having failed. But common practice, and the suggestion that discussion resolved upon, was to use 503.

As we have deployed and developed this, and now I'm speaking with a Cloudflare hat on, with customers using it, we've discovered that lots of 503s in your logs tend to freak people out, to be honest. It's an indication of a server-side problem, and that's not great, especially when your web operations people have monitors and triggers and dashboards saying, look, your 503 rate is rising or changing. That's not a great situation. In fact, there's been a wide-ranging discussion of other status codes one could use in this situation. They all share the problem that they're misleading: people associate other conditions with them, and they're not specific enough to "a prefetch failed" or "a prefetch was thought by the server not to be useful."

And so the proposal is to define a new status code for this purpose, just to disambiguate what's happening, so that operationally, on the server side, folks are comfortable, and if you're looking at a web inspector in the browser, you know what happened. The original proposal was some 4xx status code; we're trying to be good citizens and not just grab a status code, but let one be allocated, with the semantics of "Preliminary Request Denied." In discussion, Lucas suggested an alternative, "Purpose Declined," because we do have that Sec-Purpose header; that's nice in that it's complementary to the header that triggers this. So we have a very short draft that we've put in, and we're asking for adoption. I think you can bikeshed the name and you can bikeshed the status code, but beyond that there's not a lot of decision to be made. Just one more note: it would be great if we could do this relatively quickly, because there's a fair amount of pressure out there to just find some other status code for this, and it'd be great if folks didn't start squatting on one. So, that's all I have.
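The server-side behavior being proposed amounts to something like this sketch. The function name and the numeric status code are placeholders (the actual code point would be allocated by IANA, and the draft doesn't specify one yet); only the Sec-Purpose header is from the deployed mechanism:

```python
# Placeholder code point; the real value would be IANA-allocated.
PURPOSE_DECLINED = 430

def respond(request_headers, prefetch_would_help):
    """If the request is a speculative prefetch (Sec-Purpose: prefetch)
    and the server judges it won't help performance right now, decline
    it with a dedicated status code instead of a misleading 503.
    Normal requests, and prefetches the server is happy to serve, get
    the usual successful response."""
    purpose = request_headers.get("Sec-Purpose", "")
    if "prefetch" in purpose and not prefetch_would_help:
        return PURPOSE_DECLINED
    return 200
```

Because browsers already treat any non-successful status as "the speculative fetch failed," a dedicated code changes nothing for the browser while keeping 503 dashboards clean on the server side.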

Tommy Pauly: All right. Thank you. And we have a queue building up. Okay. Let's talk. Starting with Ted.

Ted Hardie: Ted Hardie. Dispatch question: we should adopt it so we can get on with the bikeshedding. Thanks.

Mark Nottingham: We love the bike sheds.

Tommy Pauly: Here's here's someone who just wants to paint. Okay, Yoav.

Yoav Weiss: I love the proposal; I think we should adopt it. I've run into this problem myself in various projects many times. Prefetch is often triggered on the mouseover event in certain frameworks, and when you mouse over a logout link you don't necessarily want to log out, so there's a bit of logic there. It's a widespread problem, and yes, having a sane error code makes sense.

Mark Nottingham: Nice.

Tommy Pauly: All right. Marius.

Marius Kleidl: I have a clarifying question on this. If we have a status code to show that the request was denied, would browsers or other clients then stop treating other non-successful status codes as a failure of the prefetch, or would they continue treating all of them equally?

Mark Nottingham: We haven't raised that with them specifically. We went to the WICG folks and asked whether they would support this, and the informal response, without determining consensus over there, was yes, that seemed like a good thing. I think that's probably a separable discussion. It might be interesting if you think the browser caching another error response would be a useful case, but I think it's separable. The default is that they'll continue doing what they're doing, which is treating all non-successful status codes as failure.

Tommy Pauly: Nidhi.

Nidhi Jaju: Nidhi Jaju, Google. Just to say we don't have any concerns from the Chromium side. I think this would be useful and we should adopt it.

Tommy Pauly: Perfect. Lucas.

Lucas Pardue: For me, a plus one to adoption. This does cause real pain, and I can't see the pain going away; it'll just keep recurring as new folks enable prefetching or whatever and come back. I think it raises a question of what signals a server would use from a client to decide to decline their purpose. Sec-Purpose is great if you're in browser land, but are there other types of clients? Do we need to make the guidance in this ID generic enough to say that it could be any purpose, with Sec-Purpose as one example, but it could be other things too? But again, that's bikeshedding; let's just adopt and have the discussions. Thanks.

Mark Nottingham: Lucas, you're reminding me of the semantic knife edge that we also have with 401, whether or not it's tied to the specific authorization headers. But yeah, sure, let's bikeshed.

Tommy Pauly: Guoyue go ahead.

Guoyue Zhang: Guoyue Zhang, Apple. I support adoption, and I'd like to see some clarifications on the retry and caching behaviors, but otherwise this is a good starting point.

Tommy Pauly: Okay, and that drains our queue. We're ahead of time, so that's wonderful. I didn't hear any concerns with adoption; I hear lots of potential for bikeshedding, but I think you've got to rip that band-aid off as fast as you can. So in this case we can start an adoption call, and then, assuming that goes well, we'll want to let people's painting efforts work themselves out so that we can move on from this quickly. This seems technically quite straightforward, but something where we just want to decide how we couch it. Gloss or semi-gloss? So many options. Okay, great. Well, thank you for presenting that. All right. Thank you. Next up, I think we have Ben, with the switched ordering, is that correct? So, template-driven HTTP CONNECT.

Benjamin Schwartz: All right. Hi everybody. Quick reminder: I'm still talking about this thing. The slide is exactly the same as last time; nothing on it has changed. We are still talking about Connect-TCP, which is MASQUE-style proxying for TCP. It uses the capsule protocol, and notably it runs over every version of HTTP. Well, it runs over HTTP/1.1, 2, and 3; I guess there might be some other versions. There's essentially one substantive change since the last time I talked about this draft. We discussed it at IETF 123, and then there was a pull request, so now the change has landed. The change is about abrupt closure signaling requirements, and specifically on this very minor point: if you are talking about Reset-after-FIN, where you're trying to represent the case where the half-close succeeded but the full-close failed in TCP, AND you're using HTTP/1.1, then this change slightly adjusts the way that's represented. This is a very minor point. Among other things, the draft already specifically noted that you really shouldn't be using HTTP/1.1 anyway if you care about these details, and this Reset-after-FIN behavior is not even really observable through the POSIX connection APIs, so you'd have to be really down in the weeds of TCP to see this change. It's very nitpicky, and I'm not going to read through the whole change. I'll just say that previously we said you should send a TLS error alert in this particular corner case, and it turns out, when I tried to implement this, that sending a TLS error alert is not a thing TLS libraries necessarily let you do. But they often will let you just shut down the TCP socket without doing the TLS shutdown, which is called a close_notify or, formally, a closure alert. So this is the new text.
It's strictly looser than the old text, in terms of allowing more behaviors, and that turns out to be easier to implement, in my opinion. There's also some longer explanation about why HTTP/1.1 is really not great for this; beware, please use a different HTTP version. That's as much as I want to say about that. It has landed and been published. I'm not going to claim that we have total consensus on it, but I think we have pretty good rough consensus. If anybody wants to continue that debate, let me know.

The new problem, and maybe the last thing with this draft, is that I discovered, due to an unrelated conversation on the mailing list, that there's currently an issue in the text of this draft related to Proxy-Status trailers. First I want to talk about RFC 9209, the Proxy-Status RFC. It says Proxy-Status may be sent as an HTTP trailer field, and it gives some nice examples of why you might want to do that. When I was working on this text I said: I agree, that sounds like a compelling reason to send a Proxy-Status trailer. So the text of Connect-TCP says you may send a Proxy-Status trailer to explain why something failed. It's just meant to be a reminder, and a little bit of a clarifier around the preceding "should". So I thought that was a harmless reminder. And then somebody pointed out to me that trailers are actually not allowed with CONNECT. HTTP/2 and HTTP/3 both have language about this, effectively saying you may only use DATA frames once you've started doing CONNECT. So there are a few things we could do about this. Probably the simplest would be to say "you must not send Proxy-Status trailers", because the HTTP specifications say that you're not supposed to. If we do that, then we could potentially also follow up with some kind of new capsule draft that conveys this information essentially one layer down; there are lots and lots of different ways you can convey information in a capsule, and I'm not going to go through them all. Another option is to interpret Connect-TCP as an extension, because the HTTP/3 spec says that extension frames may be used as specifically permitted by an extension. So we could say, okay, Connect-TCP is an extension, and it does currently explicitly permit the use of this trailer. So maybe it actually is fine.
HTTP/2 doesn't have a sentence like this, but I think it would be fair to say HTTP/2 is extensible in pretty much the same way as HTTP/3, so we can just live with it. But this is a little bit of a head-scratcher: what about Connect-UDP? Can you send Proxy-Status trailers with Connect-UDP? It doesn't specifically address this, and the HTTP/3 sentence says that you have to specifically address it. So another option would be to say Extended CONNECT allows trailers, updating Extended CONNECT, maybe all of it, or maybe just certain Extended CONNECT tokens that opt in. In theory that's a sort of breaking change, but I'm pretty confident nothing would actually break. Anyway, I think we need to do something here. One more thing I will say before we open it up: I think the question we are getting at here is a collision between two different things. The first is how much we actually think trailers are a real part of HTTP that should be supported; the second is how much we think CONNECT, at least after HTTP/1.1, and especially Extended CONNECT, is a normal HTTP method that should have normal HTTP method stuff. All right. We have a long queue here. We can have a little conversation now, but the next topic will also address this, so I would also be happy to defer the conversation.
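For reference, the "one layer down" option discussed here would use the RFC 9297 capsule encoding: a QUIC variable-length-integer type, a varint length, and a value. Here is a minimal sketch; note the capsule type value is hypothetical, since no type for an error/termination capsule has been allocated.

```python
def encode_varint(v: int) -> bytes:
    """Encode v as a QUIC variable-length integer (RFC 9000, Section 16)."""
    if v < 1 << 6:
        return v.to_bytes(1, "big")
    if v < 1 << 14:
        return ((1 << 14) | v).to_bytes(2, "big")   # 2-byte form, prefix 0b01
    if v < 1 << 30:
        return ((2 << 30) | v).to_bytes(4, "big")   # 4-byte form, prefix 0b10
    return ((3 << 62) | v).to_bytes(8, "big")       # 8-byte form, prefix 0b11

# Hypothetical, unallocated capsule type for an "abnormal termination" /
# proxy-error capsule; the value 0x2B is illustrative only.
ERROR_CAPSULE_TYPE = 0x2B

def encode_capsule(capsule_type: int, value: bytes) -> bytes:
    """Capsule = Type (varint) + Length (varint) + Value, per RFC 9297."""
    return encode_varint(capsule_type) + encode_varint(len(value)) + value
```

Because capsules flow on the data channel itself, this sidesteps the question of whether trailers are permitted after CONNECT at all.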

Mark Nottingham: Maybe in the interest of time if you just have one more slide.

Benjamin Schwartz: No more slides.

Mark Nottingham: All right. So then do we want to just go to the queue?

David Schinazi: Yep, David Schinazi, bikeshed enthusiast. First off, thanks Ben. I think you've managed to tee up an amazing bikeshed, because this is about how people feel about HTTP and architecture, so this is going to be amazing; I'm excited. Two minutes, let's go. Oh no, it'll be quicker than that. Just create a new capsule called Trailers, and you encode them in Binary HTTP so it works across HTTP versions.

Yaroslav Rosomakho: Yaroslav, capsule enthusiast, I suppose. I'm looking at the IANA registry for HTTP proxy error types, and actually very few of them really apply as trailers. Most things, such as TLS certificate errors or DNS timeouts, don't really make sense as trailers. So I think that having a dedicated "abnormal termination" capsule, or whatever we decide to call it, with its own structure that is fit for purpose, would make much more sense than attempting to use Proxy-Status here.

Lucas Pardue: Yep. As much as I love this topic, I don't really have the energy for trying to relitigate Extended CONNECT and such. There are already enough layering violations we sometimes need to make, or concerns about parsers at the framing layer, the semantic layer, et cetera. I think it opens up a whole can of worms. So, a capsule: yeah, great. They're easy enough to define and do. Let's go that way. Thanks.

Kazuho Oku: We discussed this last IETF, and I thought we agreed that we would probably use a capsule, so I prefer doing it as capsules.

Mike Bishop: So, no AD hat on, just HTTP enthusiast. CONNECT is kind of a weird one, because after the CONNECT succeeds, the data channel is no longer HTTP. It's not an HTTP response; it's some other protocol. We still have all the framing around it. There's no technical reason you can't send trailers in H2 or H3, other than that, for consistency with existing HTTP, we said you can't do that. There's no reason we couldn't ever have sent "midlers" either, other than that those don't exist in H1 and, for consistency, we didn't expose them. I feel like if we want to change that and leverage some of the capabilities of H2 and H3 to enable this, fine, but I don't think it needs to be in this document. I think this document should just point to the existing rules that say you can't send trailers, and if at some point the midlers and/or Proxy-Status-on-CONNECT crowd is passionate enough, that can be a separate draft that says you can send header blocks at more times in H2 and H3. Don't deal with it here.

Benjamin Schwartz: I want to push back slightly and say I don't think you can do this after the fact. That is, if we are defining an extension that enables a new permitted behavior, then you can't change the definition of the extension later; that's too late. You'd have to define Connect-TCP 2. But I'm not going to argue with the substance of the point.

Eric Kinnear: Eric Kinnear, Apple. Yeah, it kind of seems like settings are a way we negotiate differences there. But my main point was: can we just say must not, no trailers, don't do it? After you do Extended CONNECT, you're having a different conversation, and if you want to define a way to send additional messages in that conversation, that's what you're supposed to do. And if, some number of years from now, we see lots of people have defined the same shape of doing that, then fantastic, that's work for us to do.

Martin Thomson: Just adding my voice to the don't-fix-it-now crowd. What Mike said about reminding people that you can't do trailers is probably sufficient for this draft. I'm going to disagree with Ben: you can always fix these things later, it just becomes harder, and I'm not convinced that it's hard enough here. Mike sketched something out that didn't seem particularly convincing; David sketched something out that seemed somewhat more reasonable. I think we could probably work something out if we really needed to, so I'm for the don't-fix-it-now.

David Schinazi: And very briefly, I'm also for the don't-fix-it-now. The capsule idea was just that, if later we want to fix it, that's how I would propose we do so. For now, what I would recommend for this draft is just to remove the trailers text and be done with it.

Mark Nottingham: All right, option one wins. Thanks, everyone. Okay, thank you Ben. And do you want to continue with your next presentation?

Tommy Pauly: Actually, next it's going to be Yaroslav.

Mark Nottingham: Oh, I'm so sorry. That's right, it was Yaroslav, because that was our agenda bashing from the beginning.

Yaroslav Rosomakho: All right. Hello, everybody. My name is Yaroslav, and today I would like to make a second attempt at presenting Unbound DATA in HTTP/3; this time, Unbound DATA for CONNECT in HTTP/3. The previous proposal from David and myself was to introduce an Unbound-DATA frame for HTTP/3 generically, and based on the feedback we got from the working group, we decided to limit the proposal so that it applies only to CONNECT. To recap: DATA frames in HTTP/3 are kind of meaningless, in the sense that DATA frames are encapsulated in QUIC streams that have their own framing, DATA frames do not have to correlate with QUIC framing, and the internal segments of whatever you are transferring within those DATA frames don't align with DATA frames at all. Really, the only reason they exist is trailers, so that you know where your data ends and trailers begin. There could be future extensions to HTTP/3 that would allow interleaving something between DATA frames, but to my knowledge, no such extensions exist today on bidirectional client-initiated requests. This proposal was born out of a conversation we had on a WebTransport implementers forum. The question was: why do we need DATA frames? Internally at Zscaler we use a similar optimization, proprietary, and I thought that maybe it was worth proposing this as something more generic that could be used as a cross-vendor standard.

So the drawbacks of DATA frames: again, if there are no trailers, they don't really have any meaning in HTTP/3. They add overhead and complexity, and they create unnecessary state: when you transfer bytes, you need counters to figure out where a DATA frame starts and where it ends. Some people buffer everything until a DATA frame ends, and that creates potential security issues. In practice, for those who implement HTTP/3 on top of strongly typed libraries, it means that once CONNECT starts, you cannot just hand the QUIC stream over to whoever the consumer of the library is; you have to give them an HTTP/3 stream, and that can create additional typing challenges. And as I've mentioned, inner segments do not correlate with DATA frames; they still must carry their own length, capsules for example. So the proposal is to introduce a simple zero-length indicator, the Unbound-DATA frame, which means that for the rest of this QUIC stream, everything is just pure data. It's an optional capability negotiated through settings; if you don't want to use it, then don't negotiate it. Unbound DATA can be sent at any point after the headers, so if you feel like it, you can send a few regular DATA frames before beginning Unbound DATA. And in the latest revision of this proposal, it's only allowed in CONNECT; in other requests, where trailers are potentially allowed, don't use it.

So this is what it would look like: we have CONNECT with its own headers, then potentially one or more DATA frames, then this zero-length Unbound-DATA frame, and then the remainder of the stream is just pure data until the stream finishes. How much does it save in terms of throughput? It really depends, on things like how you do zero-copy; sometimes it can actually be quite significant. It certainly significantly simplifies the whole state machine, especially when you do things like MASQUE proxies. And again, with strongly typed ecosystems, the QUIC stream can be released to the consumer of your stack, which can be a significant benefit. As a practical example: if you're doing something like proxying between TCP and HTTP/3 CONNECT, the absence of DATA framing can be the difference between whether or not you can take the original packet collected from your input ring and encapsulate it in HTTP/3 CONNECT without an additional memory copy, which in certain cases matters a lot. We've already discussed trailers in CONNECT, so I don't think I need to go there. So this is the proposal; again, it applies only to HTTP/3 and only to CONNECT. Any thoughts, questions, suggestions, feedback, appetite for adoption maybe? Okay.
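The receiver-side state machine described above can be sketched in a few lines. This is illustrative only: the `UNBOUND_DATA_TYPE` value is a placeholder, since no frame type has been allocated for this proposal, and a real HTTP/3 stack would of course process frames incrementally rather than from a complete byte string.

```python
# Placeholder: no frame type has been allocated for Unbound-DATA.
UNBOUND_DATA_TYPE = 0x21

def decode_varint(buf: bytes, pos: int) -> tuple[int, int]:
    """Decode a QUIC variable-length integer; return (value, new_pos)."""
    first = buf[pos]
    length = 1 << (first >> 6)        # 1, 2, 4, or 8 bytes
    value = first & 0x3F
    for i in range(1, length):
        value = (value << 8) | buf[pos + i]
    return value, pos + length

def extract_payload(stream: bytes) -> bytes:
    """Collect application bytes from a CONNECT stream that may carry
    DATA frames followed by a zero-length Unbound-DATA frame."""
    out = bytearray()
    pos = 0
    unbound = False
    while pos < len(stream):
        if unbound:                   # rest of the QUIC stream is raw data
            out += stream[pos:]
            break
        ftype, pos = decode_varint(stream, pos)
        flen, pos = decode_varint(stream, pos)
        if ftype == 0x00:             # ordinary DATA frame
            out += stream[pos:pos + flen]
            pos += flen
        elif ftype == UNBOUND_DATA_TYPE:
            assert flen == 0, "Unbound-DATA carries no payload length"
            unbound = True            # "flip the boolean", per the discussion
        else:
            raise ValueError(f"unexpected frame type {ftype:#x}")
    return bytes(out)
```

Note how, once the flag flips, no per-frame length accounting remains; that is the state-machine simplification the proposal is after.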

Mark Nottingham: Again there's been some chat. Ah, Ben.

Benjamin Schwartz: Hey, Ben Schwartz again. I was critical of this because of the concern I had before about trailers, but since we have clear consensus not to try to do anything with trailers and CONNECT together, I think it's fine. My question is: do we really need to mix DATA and unbound data on a single stream? Do we need that capability? Because it seems cleaner to just have streams that use normal DATA frames, or streams that are just unbound data. It seems like an unnecessary additional configuration to support.

Yaroslav Rosomakho: I agree; for my use cases I don't see a need for any DATA before unbound data. Any other opinions?

David Schinazi: I like to keep things simple, and it's actually simpler to allow DATA. If we didn't allow it, I would have to write more code to check and disallow it, whereas right now I have my state machine that goes: "Oh, DATA frame there, DATA frame there, unbound data? All right, flip the boolean that says I'm now in unbound mode and shove the bits as they go." That's simpler to me than prohibiting it.

Benjamin Schwartz: Okay. I can also imagine, essentially, a negotiated setting that says all CONNECT requests are only unbound data, but anyway, I don't have any particular objection to this arrangement.

Yaroslav Rosomakho: Yeah, CONNECT is many things, and you could do all sorts of CONNECT for all sorts of purposes. You might have some where you want this for performance reasons, but some might not be performance-related, with different consumers where you don't want it. So I think having flexibility is a good thing.

Tommy Pauly: I got in the queue mainly to echo what David said: I think there's a benefit to the simplicity of not forcing it to be the whole stream. In practice, yes, you would have your CONNECT streams just use unbound data the whole time. But your implementation is always going to have to handle receiving a DATA frame, because the other side may not support this, so you may as well allow it. I have no idea why you would choose to mix them, but it doesn't hurt to allow it. So I think this is good, and this is with no hats on.

Lucas Pardue: Yeah, I think keeping the loose coupling here, allowing both DATA and unbound data, just gives us more flexibility. We can retrofit it to other things that are being done without making them consider whether they're now going to make this an unbound data stream. Off the top of my head: maybe backport this to WebTransport in some way, getting some of the efficiencies there that people might like without having to do even more protocol specification work. Yeah, I like this. I can see why some people argue it probably doesn't save you much, but that seems to me like an implementation matter, and in my experience we tend to give people the tools to optimize their implementations rather than dictating, "oh no, it's easy, don't do it that way." So I like this idea. I would like us to spend more time in this working group polishing it up and making sure it's right for the different things we want to do. Thanks.

Benjamin Schwartz: Ben Schwartz again. One more thing I want to highlight: I'd like to figure out a way to make sure this doesn't get used on non-CONNECT requests, or at least to think carefully about the implications of that, because the original proposal would have allowed this on any method, and there's nothing technically preventing it from being used on other methods. Well, there are many ways in HTTP/3 to send frame types on streams that should not have those frame types, you know, like trailers on CONNECT, so I don't think we're introducing anything dramatically different here. As a particular example, I think, as you probably already have, it needs to be a stream error to receive one of these frames on a stream that is not using the CONNECT method. Right, right. The text needs to be prescriptive about this kind of thing. Yes.

Mark Nottingham: Okay, it seems like we've exhausted the queue, and there is at least some amount of interest in this, so Tommy and I will have a talk and figure out what our next steps are.

Yaroslav Rosomakho: Thank you very much.

Mark Nottingham: All right. Thank you. Next up, I think we have Resumable Uploads.

Guoyue Zhang: All right, hello everyone, I'm Guoyue from Apple. We're talking about resumable uploads again this session. There were quite a few editorial changes since draft 10, but the only main behavior change was that we now allow the server to skip ahead in the case that it already has some of the data, potentially through other mechanisms, so the client doesn't need to repeat the upload from the very beginning. Here are the three main things I want to talk about today. These are things we've had some discussion on through GitHub issues and on the mailing list, but we haven't reached consensus yet, so we'd like the working group's help.

The first is guidance on client retry behavior in the case that a resumable upload fails. The current spec is very simplistic: it just says 4xx is not retriable and 5xx is retriable. But people pointed out that other cases can actually happen in the real world, through different middleware and mechanisms that are or aren't aware of resumable uploads. The current proposal is a set of recommendations for client retry behavior: first, absolutely do not retry if you receive Upload-Complete: true. You can retry if you receive a 409 Conflict with Upload-Complete: false, or a 413 Content Too Large regardless of Upload-Complete being false or missing. And then 429 Too Many Requests, and all the 5xx codes, are still retriable, assuming Upload-Complete is not true. So, is that the set of behaviors people are happy with? Any...

Mark Nottingham: So, just from my perspective, I think this sounds like a reasonable start. My concern would be if we get into respecifying retry behavior for different status codes. If this is really just guidance pointing to other places where it's normatively specified, that's fine, but if it becomes normative, it's probably going too far.

Marius Kleidl: Marius Kleidl. The text currently has just a catch-all phrase that you can retry if the semantics of the status code allow it, but I agree that's probably too unhelpful for people who look for guidance when implementing this. So some more non-normative guidance would be helpful, I guess.

Mark Nottingham: I think the key distinction here is that you can't retry the same request. These are status codes that say: the thing you sent me, I can't deal with, but you could change what you're sending me and try again. Whereas with the 500 series, the thing you sent may have been just fine; if you try again later, I might be able to handle it. But then there are other 400s that you presumably can't do anything about. So I think we need to point to the definitions of the specific status codes and just say that the client can adjust their request as appropriate. Yes, yeah. That makes sense.

Guoyue Zhang: Yeah. So I think we'll open a PR to add these as non-normative guidance and clarify what you need to do: not just a direct retry, but maybe retrying with another chunk, or retrying after another upload lookup. Right. The second topic, which came up during the retry discussions, is the possibility of early responses: the server can reject the ongoing upload even if it's incomplete. Today, the Upload-Complete header on a response is defined as meaning the server received the complete upload, but in practice we're using it to differentiate between a response from the initial resource and a response from a temporary upload resource. This became a problem. I think what we should do is just make an editorial change to clarify what Upload-Complete means on the response, rather than defining a new header or changing the semantics. There was also discussion of bidirectional streaming, where upload and download happen at the same time and maybe both can be resumable, but I think that's out of scope for the current draft. So, any objections to just making this editorial change?

Mark Nottingham: We don't seem to have any input here. Okay.

Guoyue Zhang: Yeah, sounds good. The last thing, we're back to this again; it's been discussed quite a few times previously. I have two concrete proposals on how to retrieve lost responses after completed uploads. Today we say the server should keep the resource available for a reasonable amount of time, but we don't define a way to recover the response, which is somewhat useless. The question was brought up: is this a failure mode we should address in the draft, and how do we actually make it useful, that is, how do we define a concrete way to retrieve the lost response? So I have two proposals. The first is to just say it's out of scope and remove the mention of this entirely. We don't recommend the server keep the resource any longer than it needs to; if it does, it can do whatever it wants, but this can be completely removed from the draft. Proposal two is to reduce the "should" to a "may", so it would be optional, or just make it non-normative, and to recommend, as was brought up before, using a zero-length append to retrieve the lost response, because that's the most natural way to do it. It actually removes one of the edge cases of the protocol: we can completely eliminate the completed-upload problem type and just replay the response. Even though a zero-length patch is kind of weird, this is what a client would naturally do: we ask the server how many bytes it has received, the server gives us the complete length, and now there's nothing left to patch, so a zero-length, complete patch retrieves the lost response. This is what clients already do; the behavior is already there, with no additional effort on the client side. So what would people prefer, option one or two?
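To make the "zero-length append" in proposal two concrete, here is a sketch of the request a client would send after learning the server already holds all the bytes. This assumes the draft's PATCH-based append with the Upload-Offset and Upload-Complete fields; the request is modeled as a plain dict, and details like the media type are omitted.

```python
def zero_length_append(upload_url: str, server_offset: int) -> dict:
    """Build the request that re-fetches a lost response: an append
    starting at the server's reported offset, with an empty body.
    (Illustrative shape only; field names follow the draft.)"""
    return {
        "method": "PATCH",
        "url": upload_url,
        "headers": {
            "Upload-Offset": str(server_offset),  # server already has it all
            "Upload-Complete": "?1",              # structured-field boolean true
            "Content-Length": "0",
        },
        "body": b"",
    }
```

Since there are no bytes left to transfer, a server that replays the final response on such a request gives the client its lost response for free, using machinery the client already has for resuming.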

Marius Kleidl: Marius Kleidl. This has been a point where we got quite a lot of feedback from people trying to implement this: they would like some guidance or idea of how to do it. This was a problem they actually ran into, so I think it would be great if we could offer something.

Mark Nottingham: So, Marius, just to clarify: do they need some way to meet this requirement just because it's in the spec, or do they actually have a use case for this functionality?

Marius Kleidl: They have a use case: they want to ensure that their client application can receive the response even in the presence of errors. So it's not just about transferring the file to the desired server; they also want to get a response back.

Martin Thomson: Yeah, so this is not a problem that's unique to this particular problem space. It's probably exacerbated by the fact that you invest significant resources in uploading something; resumable upload only makes sense when requests are very expensive to make. But as a general rule, HTTP responses can be ephemeral, and if they get lost, it might be the case that there's no way to make the request again without expending whatever resources you expended on making it, while the server has basically done the processing, provided the response, and effectively forgotten everything associated with it. So my suggestion is option one here, and start some separate work if you really care about the problem. Because this also applies to a simple POST request that's only five bytes long that the client missed. Now, in a lot of cases clients will just retry that sort of thing, and it's relatively inexpensive, so keeping in mind the cost of making the request, you might want to think about that a little. But there are small requests that have very expensive responses, and if those are lost, that's also a problem; I'm thinking about a lot of the AI-generated responses that people ask for now. So having a solution that works for all the cases where a lost response exists would be useful outside of this context as well. And this work has gone on for a very long time, so I would encourage you to move towards completing it rather than taking on a whole new difficult and interesting problem.

Tommy Pauly: Thanks for that Martin.

Guoyue Zhang: Yeah, thanks. Okay, I'm out of time; I just have one last thread. We got some feedback from implementers letting us know their implementation experience: currently, URLSession on Apple platforms supports draft version 6, which is not the latest, but hopefully the next version will support the RFC version. Thanks.

Mark Nottingham: Lucas.

Lucas Pardue: Yeah, just a very brief, more high-level view of things. We kind of said we were going to be done last summer, and then we had a load of feedback and a lot of issues created, from a couple of people. At the moment we're at about 25 issues. I've broken those down into the stuff Guoyue presented today, but we have editorial work too. I'd be looking at trying to resolve any technical aspects, do an editorial sweep, and then do a bigger editorial refactoring that I proposed last year: table-of-contents changes and all of that. I don't want to do that until we've resolved all the other things, because it makes talking about sections really hard. But just for the chairs: this work has been going on a while, as Martin Thomson said, and I would personally love for us to be in a position in the summer where we finish it. That probably requires some work from the authors, me, Marius, and Guoyue, to find the cycles to make it happen. I'm committed to making that happen if we can continue to get good feedback, which we have been. Thanks.

Tommy Pauly: All right. Thank you. Yeah, looking forward to getting this shipped.

Mark Nottingham: Definitely. Thank you. Next up I think we have the HTTP Wrap-up Capsule? David. Or a wrap-up wrap-up?

David Schinazi: Yeah, unfortunately not. Man, you stole my one joke for this one. Hey everyone, David Schinazi. So as some of you might have noticed, we adopted wrap-up a while back, I forget when, one or two years ago, and since then your endeared co-authors, Lucas and I, have made very little progress. Part of it was that my main motivation for it was this little feature called IP Protection that I was working on at the time, which has since been murdered and is no longer happening. So I don't have a use case for it in my codebase right now, which is why I haven't been implementing it or pushing it forward. We wanted to discuss this with the group and ask what the group wanted to do. We could leave this on ice as an adopted working group item that just stays there; we could unadopt it and wait for someone else to pick it up; or, if someone wants to stand up now and say "Hey, actually I have a use case for this, I'd love to help," then that would probably be the best option. So yeah, I'd like to ask the room here and the virtual room what they think; please let us know what you think we should do about wrap-up.

Mark Nottingham: We do have a document state called "parked working group document". Unless someone else wants to take the flag here, I think that would be the natural place for this to go.

David Schinazi: Okay. But let's see if anyone wants to carry that torch. Oh, Yaroslav.

Yaroslav Rosomakho: Yeah, I do have a use case for that. It's not a burning, life-or-death use case, it's more of a nice-to-have, so I don't know how much enthusiasm is required to keep carrying this torch. It feels to me like a very straightforward thing: I want to shut down, or migrate, or whatever, so I signal that I'm wrapping up, and that is it, unless I'm missing something. But in general I think this is useful, I have some use cases for it, and I'd like to see it progress towards a successful finish.

Mark Nottingham: So Yaroslav, burning and life-and-death aside, and I'm not sure what the relative levels are here, but how do you feel about implementing, and perhaps editorial work?

Yaroslav Rosomakho: Uh yes to both.

David Schinazi: Okay. Right, yeah. If you have a life-or-death issue, please don't go to HTTP, that's probably... all right, that's one for the DMs. Really, no. Awesome.

Tommy Pauly: Thanks. Yeah, so as an individual I'd be happy to implement and test. I also don't have crazy motivation for it, but it's good; it's one of these things that I don't think is ever going to be anyone's top priority, but it's nice, and I could imagine a world where in the future people are minorly annoyed that it's not there. So I think it's okay to park, and I think it's okay to try to get it done. My question was, looking at GitHub issues, there are no open issues here, so what is left?

David Schinazi: Implementation.

Tommy Pauly: Okay, so it's just implementation. So if we can get enough inertia to say "let's just build the darn thing and test it", are you okay for the authors to just stay on as authors, and then we just ship the document? You do your duty to run the process, but there are no big changes that we foresee here.

David Schinazi: Yeah, absolutely, and I'm happy to do that; and if Yaroslav wants to join us as an author and help out, even better. It's just that I won't have time to implement it, and I don't want to push something forward that no one is implementing. But if you and Yaroslav are implementing, then yeah, I think we can get this done pretty quickly.

Tommy Pauly: Yeah, probably between the two of us we could make it happen. Yeah. Okay. Awesome. Thanks Lucas.

Lucas Pardue: Yeah, so part of the reason I jumped on board earlier, even pre-adoption, is because I could imagine expanding the scope and making the thing even more useful for some use cases I had in mind, particularly around IPC use cases where something like Q-mux could help fill the gap and allow us to make more use of capsule protocols outside of the internet, on intranets and so on. But given where we're at, I think constraining the scope significantly and just focusing on what the spec does now, whether we publish it or put it on ice, is the right thing to do for right now. If there are good ideas later on, you can make a different spec and a different capsule, and I'm more than happy to do that. And if we constrain the scope, I can find the time to stay on as editor and dot the i's and cross the t's, as designated expert in capsule land; David and I need the context anyway to make sure that things make sense. So whether it gets published right away or we put it in hibernation for a little while, I think that's fine, but if we're clear and keep it minimal, I think that's what will help us the most.

David Schinazi: Great. Just responding to that point, and for context in case folks have forgotten: Lucas and I, when we were discussing this some number of IETFs ago, were discussing "oh, maybe we can slide in more information here." But plus one to everything Lucas said: let's punt that to a different capsule later, once someone has a business need for it, and for now we can just get this one out. One added benefit of getting this out quicker is that WebTransport can use it. They have a capsule that's WebTransport-specific that means roughly the same thing. We didn't think in a million years that this might get done before WebTransport, but this is the IETF, we can't predict such things, so I say let's do that, yeah.

Mike Bishop: So that actually leads into what I was going to ask about, because Alan had raised a similar question in the chat, and I think it might bear on deciding what we do here. If WebTransport is not done, and this is usable for the same purpose, asking the WebTransport folks, who already have active implementations they're working on, to just use a different code point and point to this would get us implementation, and then we ship both. But that window's closing.

David Schinazi: That's very true. Amusingly, in that regard we already have implementation; the fact that it's using a different code point is somewhat irrelevant, you know. And WebTransport has an action item that, once working group last call is done, or once we get a bit further in the process, all the code points will be shifted to smaller varints. So at that point they'd renumber anyway? Exactly. So at that point they could switch to this, if this is ready, because let's not delay WebTransport any more than it has been. But yeah, I think that's a good point: we already have some implementation of this, conceptually. I'll do some double checking of whether it's a one-to-one mapping; I think it is, but I'll double-check in WebTransport.

Mike Bishop: So let me ask a different question then. If it is a one-to-one mapping, or close enough that we can make it one, and you say the only thing we need before this is ready is implementation, and we already have implementation in WebTransport, then we're ready, right?

David Schinazi: Yes, with the caveat that I think implementing it also in the context of chained proxies, like what Tommy deploys and what Yaroslav has, would be useful as well, just to make sure that it works for what it's intended for. But I can imagine a world where, if Yaroslav and Tommy are able to implement between now and Vienna, we just do the working group last call leading up to Vienna. Right. Yeah.

Mark Nottingham: I'd also just say, you know, we shouldn't over-rotate on implementation experience. It's not required for Proposed Standard; of course we love implementation experience, it's fantastic, it's wonderful, but yeah.

David Schinazi: Yeah, well I just don't want an RFC with my name on it that doesn't work because no one actually ever implemented it. Too late. Fair enough.

Mark Nottingham: All right. So that was great; it sounds like maybe we can get a little bump of activity on that draft and get it over the line, and it sounds like some people had thoughts about how to collaborate and move it forward, so thank you very much. Next: secondary certificate authentication of HTTP servers. Who was presenting this one? Do we have someone presenting for this? I think we had slides, didn't we? Let's see. No, I don't know, maybe not. This is also one that has been kind of languishing. Let me just check. Oh right, we did have an update last time, didn't we? Mike Bishop is in the virtual room, you could ask him. Mike.

Mike Bishop: My name is on this draft as a matter of legacy; I have not been involved in it for a while, so I do not have a good update on it.

Mark Nottingham: Okay.

Yaroslav Rosomakho: Yeah, I wanted to ask about the status of this work, because there are loads of things I would like to do that would build on it, such as client-side, such as requested origins. So it would be great to know the status: is it going ahead, or is it slowly but surely not going ahead?

Mark Nottingham: I mean, I think perhaps Tommy and I will chase the authors for a status update to the list.

Tommy Pauly: Yeah, and you know, we had some update last time. I think based on that there was a plan; as I said, let's not try to increase the scope too much, let's try to keep things down, but we need the updates. So this may be a case where we want additional authorial help to get it over the line.

Mark Nottingham: Yeah. But I think, you know, we do have implementation experience with at least earlier versions of it, and we have use cases, so I think it'd be great to just finish it rather than dropping it. Lucas.

Lucas Pardue: Yeah, I just checked GitHub, and for the labels that make sense, logically there are no open issues. So if there's authorial help needed, like adding some commentary on an issue or reviewing a PR or something, we can crowdsource that; but if there's nothing, what is there to do? Is it doing an editorial sweep, similar to wrap-up here? Yeah, we'll go back, and I'll review the presentation we had from 124. But yeah, let's just try to get this one a bit more energy and get it done. Yeah, they'll take that. Thank you.

Mark Nottingham: Thank you. Okay, so finally we have work that is not adopted yet, and is not a proposal for adoption; it's work in a related area. Alan Frindell wanted to talk about QPACK compression for MoQ. So we are being invaded by the MoQ crowd, because there's a lot of history around header compression in this working group, and he thought it'd be interesting to have a chat about that. Alan, are you with us? Yep, great.

Alan Frindell: Okay um do I need to request slides or you want to drive them?

Mark Nottingham: Uh why don't you request if you can.

Alan Frindell: I can't wait, let me just see if this works. Oh, you can present from the remote room. We'll see if this actually works. Hey everybody, Alan Frindell from Meta, and now for something completely different. As Mark mentioned, this is work that I think sits at the intersection between what's going on in MoQ and the QPACK experts, who mostly hang out here, as many of them as there are. So I asked for time here to present; appreciate the time. For people who don't follow what's going on in MoQ, and it's hard to follow what's going on there because it moves and changes, here's where things are right now: there's a protocol called MoQ Transport which sits at the same layer as HTTP. It is agnostic to what applications are doing with it, it's just moving bits around, but it's using different stream mappings; you'll see it starts out looking a lot like HTTP/3. So there's a pair of unidirectional streams which exchange setup messages, and individual requests, like a subscribe or publish message, can be sent by either endpoint; those go in bidirectional streams. And these requests can contain key-value parameters, some of which could be long or repeated; examples might be track names or authentication tokens. So this sounds familiar in terms of what we might want to do compression-wise.

There are some differences from HTTP/3, though, that are important to keep in mind. Requests have an explicit request ID field, an integer, that is one-to-one with the stream; this is because we're designed to run over WebTransport, and WebTransport does not expose transport stream IDs to the layer above, so we can't use those in the places where HTTP has. Track names, which you can sort of think of as the path, consist of two distinct parts: a namespace and a name. You can think about a namespace like a directory name, but it's an n-tuple of blobs of bytes; and then there's finally a name, which you can think of sort of like a path name, a single field of bytes. Another thing is that parameters have only integer keys, compared to HTTP, which has string keys; and the parameter values are also bytes, where the key tells the receiver what it's supposed to do with the value.
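For readers following along, the structural differences Alan describes (explicit integer request IDs, a namespace tuple plus a name instead of a single path, and integer-keyed byte-valued parameters) can be sketched roughly as follows. All names and values here are illustrative, not taken from the MoQ specification:

```python
from dataclasses import dataclass, field

@dataclass
class MoqRequest:
    """Toy sketch of the request shape described above.

    Field names are hypothetical; only the shapes matter here.
    """
    request_id: int                       # explicit, one-to-one with the stream
    namespace: tuple[bytes, ...]          # n-tuple of byte blobs ("directory")
    name: bytes                           # single byte field ("path name")
    params: dict[int, bytes] = field(default_factory=dict)
    # Integer keys, byte values: the key tells the receiver how to
    # interpret the value bytes.

# Example instance (all values made up for illustration):
req = MoqRequest(
    request_id=4,
    namespace=(b"example.com", b"live", b"channel1"),
    name=b"video-hi",
    params={8: b"\x41\xf4"},   # e.g. some integer key with an opaque value
)
```

The contrast with HTTP is visible in the types: where HTTP/3 has string-named header fields and an implicit stream ID, this model has an explicit `request_id` and integer parameter keys.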

Okay, so let's compress. When I sat down to do this, I thought I want to reuse as much of QPACK as possible, but maybe try to correct some whiffs in H3. I didn't feel this way when I was the editor of the QPACK draft, but I think making it entirely optional is probably the right call, because QPACK is seen as a barrier by people who just want to write a simple H3 implementation; it's actually the biggest thing you have to get over, everything else is really simple. And I'm going to toss Huffman, because at least where we are now we really don't see the compression benefits justifying the CPU cost of Huffman. So I took those pieces out.

So in this draft, which I should have provided a link to, but you know where to find it, you'll see that a lot of things from QPACK are the same. In the setup frame we send the things that HTTP sends in the SETTINGS frame, like table capacity and the number of blocked streams; we use unidirectional encoder and decoder streams; and the protocol messages carry compressed blocks in the QPACK format. They start with the same sort of required-insert-count and base header that is in QPACK; it reuses the instruction formats inside blocks and on the encoder and decoder streams; and it preserves the "never index" semantic, which is in HPACK and also QPACK.

Okay, what's different? You can read that, it's a little small. There's a reinterpretation of what it means to reference the static table. At least right now, MoQ has no values that are common enough to warrant a true static table. The keys, remember, are integers, and the values just don't have the same set of things where you really want to compress some long string value that you know is going to be common, like, I don't know, cache-control directives or whatever. So the way the draft is written, when you make a static table name reference, that index directly encodes the MoQ parameter key. So if delivery timeout's key is 8, and you say "I'm making a static table name reference" with index 8, you don't look that up in a table; you just say "that's my number". And as such, there's a bunch of instructions that don't make any sense anymore. You'd never do anything that references a dynamic name; there's not really any concept of a dynamic name, every name is a static name, and there are no literal names either. So we took four or five or six of the instructions and said these just don't make sense in this world.
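The static-table reinterpretation can be stated in one line of decoder pseudocode: where QPACK resolves the index against its fixed table of (name, value) entries, the scheme described here treats the index itself as the MoQ parameter key. A hypothetical sketch (the QPACK excerpt below uses real RFC 9204 static-table entries; the MoQ side is as Alan describes it, with key 8 as delivery timeout per his example):

```python
# QPACK-style: the index selects an entry in a fixed static table.
# Two real entries from the RFC 9204 static table, for illustration:
QPACK_STATIC = {
    0: (b":authority", b""),
    17: (b":method", b"GET"),
}

def qpack_static_name(index: int) -> bytes:
    """Look up the field name for a static name reference."""
    return QPACK_STATIC[index][0]

# MoQPACK-style, per the draft as described in the talk: there is no
# table; the index *is* the integer parameter key.
def moqpack_static_name(index: int) -> int:
    return index

# e.g. index 8 directly means "delivery timeout" (key 8 in the talk's
# example), with no table lookup.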

So what is the reduced set that's allowed? On the encoder stream you can insert things into the dynamic table using a static name reference, you can still set the table capacity, and you can duplicate fields. Inside a compressed block you can have a literal with a static name, or an index into the dynamic table, either traditional or post-base. And on the decoder stream, insert count increment, section acknowledgment, and stream cancellation are all the same, with caveats.

Some more differences: for the purposes of computing the size of things in the table, I had to pick a number for how big you treat the names; I said they're all four bytes. And everywhere QPACK would talk about using stream IDs, this uses request IDs, which, as I mentioned, we have to do if it's going to work with WebTransport. There's a little thing I did with pseudo-parameters to communicate the track name; there are three different pseudo-parameters for it, one for an individual namespace element and one for a set of elements, and you can use these to do sort of partial paths, in a way that QPACK can't really compress a partial path today. So if you had a prefix, like here it's "long path to resource one" and "long path to resource two", you can insert just the prefix into your dynamic table as a track namespace set and then compress against that. It's easy to write down; I think it's harder to think about how I would actually write an encoder to do this in some sensible way, but it seems like a nice-to-have.
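The partial-path idea can be illustrated with a toy encoder model: the shared namespace prefix goes into the dynamic table once as a "namespace set" entry, and each track name is then encoded as a table reference plus only the differing tail. This is a sketch of the saving being described, not the MoQPACK wire format; all function names are made up:

```python
# Toy dynamic table holding namespace-prefix entries.
dynamic_table: list[tuple[bytes, ...]] = []

def insert_namespace_set(prefix: tuple[bytes, ...]) -> int:
    """Insert a shared namespace prefix; return its table index."""
    dynamic_table.append(prefix)
    return len(dynamic_table) - 1

def encode_track(namespace: tuple[bytes, ...], name: bytes):
    """Encode a track name against any inserted prefix it starts with."""
    for idx, prefix in enumerate(dynamic_table):
        if namespace[: len(prefix)] == prefix:
            # Reference the prefix by index; carry only the remainder.
            return (idx, namespace[len(prefix):], name)
    return (None, namespace, name)   # no match: send it all literally

idx = insert_namespace_set((b"long", b"path", b"to"))
a = encode_track((b"long", b"path", b"to", b"resource1"), b"seg0")
b = encode_track((b"long", b"path", b"to", b"resource2"), b"seg0")
# Both tracks now carry a table reference plus a one-element tail,
# instead of repeating the whole namespace.
```

As Alan notes, the hard part is the encoder policy, i.e. deciding which prefixes are worth inserting; this sketch just shows the decode-side saving once a prefix is in the table.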

Parameters with integer values. One thing MoQ recently changed is for these parameter blocks to be type-value, not type-length-value; the receiver has to understand the keys in order to parse them. But when I drafted up MoQPACK in this v0, I left the instructions as TLV, because the wire format is implicitly TLV. So as an example, if there's a delivery timeout and its value is 500, you would have the instruction that says it's a static name reference, then the key, which is 8, then a QPACK-encoded length, and then the value, which is actually a MoQ varint. Now, I could have changed this and done it differently, so that MoQPACK was TV-aware, where it knew "okay, key 8 is a varint, so I don't need the length, I'll invoke my varint parser"; but there was a tradeoff to be made here about whether you want to bake knowledge of all these things into MoQPACK, or whether you want things to be more extensible, at the cost of double-encoding the length.

So that's pretty much what's there. I guess I'm presenting here because, again, I want to hear from people who know QPACK a little better: what do people think? Is making it optional and removing Huffman the right direction? Partial path compression? This TV-versus-TLV decision? Is this a good use of QPACK or a bad use of QPACK? It's similar, but it's not the same, so I don't think anybody could take their QPACK directly off the shelf and use it this way; you'd have to make some modifications. But I think when we were developing QPACK we saw that there's some value here, beyond HTTP, in this compression scheme, which is not super HTTP-aware other than the values in the static table. So: is something like this useful? Or do people want to take this opportunity and spend an extra minute or two to share their general QPACK thoughts and feelings? That's all I have, if people have feedback.

Mark Nottingham: Great, and so this is really just a big ad for what's happening in MoQ, to try and get people in there.

Alan Frindell: Well, I mean, we welcome participation in MoQ, so if people are curious what's going on, please come by. If someone wants to chat with me about what the heck is going on over there, get a summary, just grab me, I'm happy to explain it to you. You know, we value the experience that is in this room, and I know I don't see all the super smart people here showing up to our meetings. Anyway. Yeah. Cool. Thanks.

Mark Nottingham: Martin.

Martin Thomson: All right, it's panning. It's panning. All right. So I think this is broadly reasonable. I think you've identified the things that are different, and this is not QPACK; it's nothing like it, aside from the synchronization mechanism. And I think that's the key thing we learned from QPACK: how you can establish state centrally and refer to that state safely from dynamically evolving sequences of streams that could be delivered in a different order, and all those sorts of things. And I think that's the key design component you're reusing here. I would not fixate too much on some of the details of QPACK. You should probably not use the integer encoding that HPACK did; it's probably more expensive on your CPU, in the same way that Huffman is. So I would encourage you to focus on picking out the concepts that are applicable, as you have; but then, you know, you're not using names because you've got an IANA registry, and it's not strictly byte sequences, you've got varints in the values. This is a very, very different thing in terms of the user interface and all these other things; it's just that synchronization component that you're picking up. And hell, we had a lot of trouble getting that to the point that it was good, so please reuse that, but that's the only thing, I think.

Alan Frindell: So I think what I'm hearing you say is that I should reuse it, but spiritually as opposed to literally. And the only reason I'm a little disinclined is that, I mean, it's been a while since I've given 9204, the best RFC, a read, but my recollection, just look at the name at the top, people, my recollection is that there's a lot of description of how that works, and I don't know that I want to just recopy it all; that's maybe the piece I was trying to avoid. But I hear everything you're saying: it's QPACK-inspired, but since you can't really use it directly, it's not QPACK.

Martin Thomson: Yeah, yeah. There's no way you're going to be able to use the same software for this; it's so much more different than the other one. Reference 9204 liberally if you think the best RFC can be a crutch on which your new thing can depend, absolutely. But I think by the time you get that far in, you're not going to be using enough of it to really help people, because you'll be constantly saying "well, we do it this way, except for this and this and this and this", and it'll get more confusing to explain. So ultimately, yeah, copy and paste may be your friend for some of those things. And I'm just noticing David's chat comment there: "varint all the things" is what I was suggesting he do; just use the new MoQ varints rather than the HPACK varints.

Alan Frindell: Ah, HPACK varints are different from QUIC varints, which are now different from MoQ varints. Yeah, yeah. Okay, so normalize a little bit. I suspect that the MoQ varints will be better suited to your use case anyway, CPU-wise. Yeah. Eventually we'll get it right.
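For anyone comparing the integer encodings being discussed: HPACK integers (RFC 7541, section 5.1) use an N-bit prefix plus 7-bit continuation bytes, which means byte-at-a-time loops, while QUIC varints (RFC 9000) use the top two bits of the first byte to select a 1-, 2-, 4-, or 8-byte big-endian encoding, so the total length is known up front. A sketch of both (the MoQ variant mentioned above is not modeled here; the `hpack_int` helper assumes the non-prefix bits of the first byte are zero):

```python
def hpack_int(value: int, prefix_bits: int) -> bytes:
    """RFC 7541 s5.1 integer; first-byte bits outside the prefix are zero."""
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytes([value])            # fits entirely in the prefix
    out = [limit]                        # prefix saturated
    value -= limit
    while value >= 128:
        out.append((value % 128) | 0x80)  # 7 data bits + continuation flag
        value //= 128
    out.append(value)
    return bytes(out)

def quic_varint(value: int) -> bytes:
    """RFC 9000 varint: top two bits of the first byte encode the length."""
    if value < 2**6:
        return value.to_bytes(1, "big")
    if value < 2**14:
        return (0x4000 | value).to_bytes(2, "big")
    if value < 2**30:
        return (0x8000_0000 | value).to_bytes(4, "big")
    return (0xC000_0000_0000_0000 | value).to_bytes(8, "big")

# RFC 7541's worked example: 1337 with a 5-bit prefix -> 1f 9a 0a.
assert hpack_int(1337, 5) == bytes([0x1F, 0x9A, 0x0A])
# RFC 9000's worked example: 15293 fits a 2-byte varint -> 7b bd.
assert quic_varint(15293) == bytes([0x7B, 0xBD])
```

The continuation-byte loop in the HPACK form is the per-byte branching Martin is alluding to; the QUIC form reads one byte and then knows exactly how many more to consume.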

Alan Frindell: Thank you. Lucas.

Lucas Pardue: So I kind of agree with all of Martin's comments, but to flip the question around a bit: what in the proposal is specific to MoQ? There are some aspects, sure, but can we extract the design so that it addresses the problem of synchronization over a transport where streams are not tightly coupled and globally ordered, and all that stuff? Don't boil the ocean on that, just maybe take that question away. My view has always been that HPACK and QPACK are not strictly tied to HTTP; they just use it as a good seed input and target for the compressor, but you could use them for anything if it made sense for you. So that's just another comment. I agree on the Huffman thing; we're not going to have enough data to train a static Huffman table. One of the things about Huffman that always confuses people, in my experience, is that they assume a dynamic Huffman table is being applied, and you have to say no, it's based on some opaque analysis from ten years ago or older; so just get rid of that one too. All of this work seems fine to me, even if I'm not a QPACK expert, and I'd be interested to see the next iteration of the draft you come up with based on the feedback you're getting now.

Alan Frindell: Okay, thank you for the feedback, everybody, and I will probably make an iteration of this. I don't know that the people in MoQ know QPACK very well; I don't know that anybody whose focus is MoQ has read it, and certainly no one's implemented it, since I only wrote it two weeks ago. But yeah, we'll maybe have another update in the coming months.

Mark Nottingham: Cool. Thank you, Alan. That was informative. So I think that takes us to the end of our agenda. I'll make one brief aside for folks who are interested: the HTTP Workshop is being held again this year, and today is the last day that they're taking expressions of interest. So if you're interested in that, look up HTTP Workshop. Tommy, anything else?

Tommy Pauly: I don't think so, no. Um we ended a little bit before time, but yeah, thanks for the good session everyone.

Mark Nottingham: Thank you, and folks who are remote, as applicable, get a good sleep or a coffee. Thanks everybody, we'll see you all, hopefully in Vienna.

Tommy Pauly: Very good. See you.