Session Date/Time: 17 Mar 2026 01:00

Martin Duke: Right here? Does it work?

Alan Frindell: Yes, that's much better. The echo was much...

Martin Duke: Yes, we will have people stand right here if they are going to talk. Uh, that doesn't work locally. There's a lot of echo in this room, which is unfortunate, um, but there we are. Okay, I got 9:00, we're going to start. Good morning, afternoon, evening, everyone, wherever you are in the world. This is Media over QUIC Session 1 at IETF 125. Thank you for being with us today. As always, this session is being recorded.

Martin Duke: And I'm Martin Duke, your friendly MoQ Chair. This is the IETF Note Well. It covers the intellectual property implications of you being here as well as outlining our code of conduct. If you are not familiar with the IETF Note Well, I encourage you to put those words into your favorite search engine and read up on all of those policies.

Martin Duke: Here are some resources for the meeting. We're still relatively early in the meeting week. Some of us are just beginning it today. Uh, you can read the slide as well as I can read it to you.

Martin Duke: Okay, uh, we will be using Meetecho for uh, joining the queue, um, having any sort of show of hands. If you have not—if you are in the room here and have not already joined using the light client by scanning the uh, the QR codes that are on the mic stands out there, please do so, so we are aware that you're here and we get the correct size room um, in future MoQ meetings. So again, if you have not yet joined, please join using the QR code on the mic stands. Those of you who are remote, please remain um, muted uh, until you are about to speak, but be aware there is about a one-second latency in turning on your mic. So, uh, you probably want to be a little proactive about that. Um, also if you are remote in Meetecho, it generally pays to have headphones due to the echo cancellation properties of this tool. And as always, when you enter the queue, please state your name and affiliation.

Martin Duke: This is the agenda for today. Um, due to some conflicts on Thursday with the CFRG, we're loading up the security topics uh, in today's meeting. Hopefully, there are some security enthusiasts here to comment on what we're going to talk about. Would anyone like to bash today's agenda?

Martin Duke: Okay, this is Thursday's agenda. This is uh, obviously mostly non-security related topics. Would anyone like to bash this agenda?

Martin Duke: Oh, interesting. It did not... Okay, I had another slide that um, I had another slide that uh, did not seem to make it into the presentation. So let me just call it up and I'll read it to you. Uh, we have a number of upcoming dates. Uh, one moment. Let me see if I can find it. There it is. Okay, oops.

Martin Duke: Okay, I'm—I'm trying to share screen. There we go. Ah, okay, here we go. Sorry for the delay. Um, these are just upcoming dates in the working group. We've uh, received consensus on a number of virtual interim dates. Those are all Mondays, except for 26 May: due to the US Memorial Day holiday, we moved that to a Tuesday. Uh, all those meetings will be 90 minutes beginning at 16:30 UTC. Furthermore, we've confirmed the location of our next hybrid interim. It will be Cloudflare's offices in Central London. Uh, the first two days—that's a Tuesday and a Wednesday in June—will be Interop, as is our norm, and then the following two days will be issue discussion. So that'll be running through a Friday. Thanks again to Cloudflare for donating their space for this event. Um, all right. Anything else for the—Oh, I also uh, wanted to pitch that Alan Frindell, our illustrious editor, will be presenting at HTTPbis this week about um, compression over MoQ, uh, what we're calling MoQPack. Uh, so if you're interested in that topic, you might want to find your way to HTTPbis this week. Any other comments or questions for the chairs?

Martin Duke: Okay, in that case, we'll hand it over to Alan, um, who is going to talk about updates to the MoQ Transport draft since last time.

Alan Frindell: [Presentation: MOQT Update: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-moqt-update-02]

Alan Frindell: Okay, you can see me. And can you share my slides? I won't be able to drive them, and that's going to be weird because I'm going to have to go look and see what's on that monitor. Okay, there we go. Can you guys hear me okay?

Martin Duke: Yes.

Alan Frindell: Okay, fabulous. Uh, all right. We're going to just cover uh, everything that's changed uh, in the draft since draft-15, um, since last time we met in Montreal. Next slide.

Alan Frindell: Uh, okay. Just so uh, people have the timeline, we published draft-16 uh, in early January, and that is our current Interop target. Uh, we found that uh, kind of doing every other draft is a more reasonable cadence for people to get caught up and go a little bit deeper each time uh, given some of the wire image churn. Uh, and then draft-17 was published at the draft deadline for this meeting. Um, the presentation here kind of covers both together. If you want to separate it out and find out what changed in which, uh, or if you want to go write an interoperable implementation, you should probably check the change log or just pin draft-16. The next Interop target is likely going to be draft-18, approximately 8 weeks from now and 4 weeks prior to the next hybrid interim. Next slide.

Alan Frindell: Uh, so this was a comment in the Slack after we published draft-17. Uh, this almost feels like a completely new protocol. Uh, we haven't changed a lot of the API uh, or the semantics of the protocol, but the wire image is significantly different. So, uh, we have replaced what has existed for—since the beginning of MoQT, which was there—there used to be a single client-initiated bidirectional control stream, and that has now been replaced by a pair of unidirectional streams that are used for setup. Uh, and SETUP is now a single message, not client and server setup. And GOAWAY is also on that unidirectional stream. And then all of the requests that you would make, uh, each go on their own bidirectional stream. Uh, and that led to the removal of seven different control messages that we previously had, which are now replaced by uh, QUIC or WebTransport control messages like FIN and RESET, STOP_SENDING, etc. Uh, the data transfer has not really changed much. Uh, it still uses unidirectional streams or datagrams. And uh, we have our own varints now, uh, which we've wanted for a while, uh, mostly because we wanted a larger one-byte range and we wanted to support uh, the full set of 64-bit numbers. So, uh, that is like the headliner, and then we'll go through like some other changes that have happened. Next slide.
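
One common design with the properties described here (a full 0 to 127 one-byte range and support for all 64-bit values) is an LEB128-style varint. This is an illustrative sketch only, not the encoding specified in the draft:

```python
def encode_varint(n: int) -> bytes:
    """LEB128-style encoding: 7 value bits per byte, high bit set on
    every byte except the last. One-byte range is 0-127 (vs. 0-63 for
    QUIC varints) and any 64-bit value fits."""
    if not 0 <= n < 2**64:
        raise ValueError("value out of range")
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(buf: bytes) -> tuple[int, int]:
    """Return (value, number of bytes consumed)."""
    value, shift = 0, 0
    for i, byte in enumerate(buf):
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, i + 1
        shift += 7
    raise ValueError("truncated varint")
```

Under this scheme, values up to 127 fit in a single byte, and the maximum 64-bit value takes ten bytes.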

Alan Frindell: Uh, paint for the bikeshed. We renamed some things, if you're coming back to the draft and you want to know where it went. So what used to be called SETUP_PARAMETERS are now called SETUP_OPTIONS. Uh, people felt that it was confusing that SETUP and messages both had parameters and they behaved differently. And then what had been inconsistently referred to throughout the draft as EXTENSIONS, OBJECT_EXTENSIONS, and OBJECT_EXTENSION_HEADERS—those are now called properties, uh, and they can be set on a track or on an object. Uh, and there is also a new uh, recommendation for canonically and safely encoding a track name into binary. There's an example from the draft there; um, there are other ways we could have done it, but that's how it's done in the draft. Next slide.

Alan Frindell: Track properties and message parameters. So uh, in the last couple of drafts, we've—we've tried to separate these two concepts out and make them crisp. So uh, publishers can attach key and value metadata to a track, and that goes in PUBLISH, SUBSCRIBE_OK, or FETCH, and it shows how the publisher is sending you a track. These track properties are associated with the track, and those are end-to-end. They go everywhere the track goes. Um, message parameters, on the other hand, are—I say hop-by-hop, I know some people didn't love that term, but the idea is that those are really scoped to the endpoint you're talking to. Of course, if you're a relay, you might look at the parameters you got and then make a similar upstream request if you want to, but they are not guaranteed to traverse hops in that way. Um, message parameters are also now TV-encoded instead of TLV, which makes them mandatory to understand. If you want something new, you have to negotiate it, because if you get a code point that you don't know, you'll have no idea how big it is and you cannot skip over it. Okay, next slide.
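
The TLV-versus-TV difference can be sketched as follows: with TLV, a parser can skip an unknown parameter because its length is explicit, while with TV the length is implied by the type, so an unknown code point is unparseable. The single-byte types and lengths and the code points below are hypothetical; the draft encodes these fields with varints.

```python
# Hypothetical code points: type -> fixed value length (for TV parsing).
KNOWN_SIZES = {0x01: 4, 0x02: 8}

def parse_tlv(buf: bytes) -> list[tuple[int, bytes]]:
    """TLV: each item carries its own length, so unknown types can
    simply be skipped."""
    out, i = [], 0
    while i < len(buf):
        t, length = buf[i], buf[i + 1]
        value = buf[i + 2 : i + 2 + length]
        if t in KNOWN_SIZES:  # unknown types are skipped, not fatal
            out.append((t, value))
        i += 2 + length
    return out

def parse_tv(buf: bytes) -> list[tuple[int, bytes]]:
    """TV: length is implied by the type, so an unknown type is fatal.
    The parser has no idea where the next item starts."""
    out, i = [], 0
    while i < len(buf):
        t = buf[i]
        if t not in KNOWN_SIZES:
            raise ValueError(f"unknown parameter type {t:#x}: cannot skip")
        length = KNOWN_SIZES[t]
        out.append((t, buf[i + 1 : i + 1 + length]))
        i += 1 + length
    return out
```

This is why a new TV-encoded parameter must be negotiated before it is sent.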

Alan Frindell: Uh, some changes in the control plane. Uh, we took what was SUBSCRIBE_UPDATE and we renamed it REQUEST_UPDATE, and you can update all kinds of requests, uh, and one of the main use cases there is being able to refresh your authentication token on an existing uh, request. Um, there was this EXPIRES uh, concept in the draft that was very vaguely defined before, and we've crisped it up to explain what that means and how you can refresh it potentially. Um, we uh, we crisped up some of the language in JOINING_FETCH, and so now, for example, if you have paused a track by setting it to FORWARD_0, and then you later reset it to FORWARD_1—and that happens in the middle of a group and you would like to go back to the beginning of whatever group that happened in—that's now possible uh, by issuing a JOINING_FETCH. So you can do it multiple times in the same track. Um, SUBSCRIBE_NAMESPACE has been completely rewritten, uh, so there's uh, a sort of a discovery mode which tells you about available namespaces, and those come in the bidirectional response stream. Uh, and then it also can solicit PUBLISH messages, which come on their own bidirectional control streams in draft-17. And when you subscribe, you can say you want just the namespace piece or the publish piece or both.

Alan Frindell: Okay, next slide. Uh, data plane. We now allow you to mix and match streams and datagrams in a single track. Before, we said you're either a stream track or you're a datagram track, but now you can have, for example, base layers encoded using streams and then datagram enhancement objects in the same track. That's fine. Uh, when you fetch something, there's now a way for the publisher to say: you asked for 0 to 10, there's a gap in the middle and I have no way of filling it, so I'm just going to skip over it and tell you that I can't, and you'll get what I have—um, which we think is useful, and there are some other issues around that that we'll talk about on Thursday. And uh, there was a lot of weirdness in the object status uh, enumeration that we have fixed up. So an object status is no longer like a true object—you can't set properties on it if it's not a normal object. It's really just a way of communicating where some of the boundaries are. And we removed the OBJECT_STATUS "does not exist" from the wire; that just isn't something you ever need to send. Uh, and we've explained more crisply how END_OF_GROUP and END_OF_TRACK work, in terms of how you would use them to populate where the gaps are in your cache, uh, or for a subscriber to use that.

Alan Frindell: Next slide. GREASE. This has been a to-do for a long time, and now that we have figured out our extensibility story, we know what we need to grease, and so we have greased it. So for everything that we think you're allowed to send unnegotiated values for, there are now GREASE code points, um, which include SETUP_OPTIONS, PROPERTIES, all the error codes, and uh, the type of an authentication token. If you want to send a new value in any of those fields, then you have to negotiate that. Is there a question?
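
As one illustration of how a protocol carves out grease code points: QUIC (RFC 9000) reserves transport parameter IDs of the form 31 × N + 27 for exactly this purpose. The sketch below shows that pattern; whatever pattern MoQT's registries actually use is defined by the draft.

```python
import random

def is_grease(code_point: int) -> bool:
    """True for QUIC-style reserved code points (31 * N + 27).
    Shown only to illustrate the pattern, not MoQT's registry."""
    return code_point % 31 == 27

def pick_grease() -> int:
    """Pick a random reserved code point to send unprompted, so peers
    that fail to ignore unknown values break early instead of ossifying."""
    return 31 * random.randrange(2**16) + 27
```

A sender sprinkles `pick_grease()` values into extensible fields; a correct receiver must silently ignore them.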

Alan Frindell: Oh, did I go back too far? Okay, I'm sorry. Is it better here? Okay. Should I speak more slowly? No, it's fine, just uh—it sounds better here. Okay, we'll stay right there. Wait, there's more mics up here. We're not sure. Maybe this is better? I'll stand right here. David set it up yesterday and he did something. Okay, just—just leave it. I think uh, now all the mics are close. Hopefully, only one was on before. Okay, everyone got it? We GREASEd stuff. People should start—once we—that's in 17, so once 18 rolls around, people should start sending GREASE and breaking people that don't understand. Next slide.

Alan Frindell: I heard you like delta encoding. So we have delta encoded pretty much everything that we can in the protocol. Um, so when you send options, when you send properties, when you send ranges that have starts and ends, everything there is now delta encoded. Uh, the motivation was partly that it's slightly smaller on the wire, but it also makes it very easy to do duplicate detection for things that allow duplicates or don't allow duplicates—they sort of have to be sorted together. Um, looking through the document, I see that in fetch responses there are group IDs and object IDs that are not delta encoded, and I would say that those days look numbered, uh, and then I think we'll be out of things to delta encode, um, so that's probably coming. Next slide.
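
A minimal sketch of the idea: encoding each ID as the gap from its predecessor (minus one, for fields where duplicates are forbidden) both shrinks the wire encoding and makes duplicates unrepresentable by construction. The exact offsets the draft uses may differ.

```python
def delta_encode(ids: list[int]) -> list[int]:
    """First ID verbatim, then each subsequent ID as (gap - 1) from its
    predecessor. Because a delta of zero means 'the very next integer',
    a duplicate ID simply cannot be expressed."""
    out, prev = [], None
    for n in ids:
        if prev is not None and n <= prev:
            raise ValueError("IDs must be strictly increasing")
        out.append(n if prev is None else n - prev - 1)
        prev = n
    return out

def delta_decode(deltas: list[int]) -> list[int]:
    """Inverse of delta_encode."""
    out, prev = [], None
    for d in deltas:
        n = d if prev is None else prev + d + 1
        out.append(n)
        prev = n
    return out
```

For example, `delta_encode([3, 5, 9])` yields `[3, 1, 3]`, and the small deltas are what go on the wire as varints.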

Alan Frindell: Errors. We added a bunch of—so we added a RETRY_AFTER field uh, to request errors that semantically mirrors the similar HTTP header, and a bunch of new error codes that express important failure conditions. And GOAWAY now has a timeout hint uh, that you can use if you want to give the peer an idea of how long they have to get gone. Next slide.

Alan Frindell: Rendezvous. So this is something uh, that has been asked for for a while, and we finally got it in there: a subscriber can express to a relay that it's willing to wait a little while for a publisher if there is not one there. So I subscribe at the relay for a particular track, and the relay's like, I have no publishers for that track—some implementations would just immediately fail that, others would sort of let you hang around. Now the subscriber can say: I would like to wait up to this long for a uh, publisher to materialize. And it honors the French heritage of MoQ. Next slide.

Alan Frindell: Um, okay. That was all the uh, updates in 16 and 17. Uh, so just a uh, heads up. So uh, we—we made a change to how we're managing issues for PRs that are still in development. So there's—there's three fairly big pieces that are currently expressed in either PRs or separate drafts, and they're—they're kind of in a rapid iteration phase, and um, none of these have merged into MoQ Transport yet. We decided that we're going to track the issues with each of those proposals in the author's repo. We found that tracking it on the PR, you would leave threads, the PR would get updated, GitHub would completely erase the thread or hide it, and we were just having a very hard time tracking it and not losing things. And, you know, the editors are trying very hard to drive the MoQ Transport issue count down, and so having issues tracked in our repo for things that don't necessarily have clear consensus to merge in yet uh, didn't feel right either. So here's where the repos are if you're wanting to get engaged with those. You can obviously comment on the PR also, but if you have something more substantive uh, that needs a lot of back-and-forth discussion, we'd prefer to have the discussion over there. Next slide.

Alan Frindell: Okay, it's a little small for me to read here, but I'm going to try. Uh, our current issue status. If you can see in that uh, upper right corner screenshot, we are now down to a double-digit issue count. Um, if I'm correct, we crossed over 100 in the wrong direction about a year ago, and we peaked at 156 or 157. So um, we've been working really hard um, since Montreal. The green numbers uh, here show the diffs since Montreal—and there's a small discrepancy, the total doesn't quite sync, because things were changing over the weekend when I was drafting slides. Um, but we've net closed 26 issues since Montreal. Um, of the roughly 100 that we have left, something like a quarter of them have a PR that will close them right now. So, you know, please go review them. Uh, there is another third of them which are editorial, and I will say, uh, you know, we don't think it's a great idea to just go and make a bunch of editorial changes at once, because it makes the diff harder to read. So we'll probably be strategic about which ones we fix and when, um, but it will not take very long to close those out—particularly because, um, for those who don't know, the editor team is now, at least I am, quite AI-assisted in producing PRs, um, and can crank them out quite fast. So um, our list of things that need discussion is getting kind of small outside of our big rocks, um, which is a good thing. Um, there are some people who have some non-transport issues open. If you opened this GitHub repo right now and looked for the issues that are marked "blocked," you might find one assigned to you um, to close, because if it's not transport, there's probably a better place to track it now. There didn't used to be, two years ago, so... And even the parked issue count, um, if you go and look through what's parked, a lot of it is sort of things where we're like, "Uh, do we need this? Do we not need this?" You know, kind of open questions like that. And things like: we need to re-number things, some to-do's to write some more security considerations, etc. So even that does not look like a scary list to me. So um, I'm still operating under the dream that we will get this issue count to zero post uh, 126, which is in about 5 months. Um, which means we're going to net close about 4 issues a week. Uh, so if you have been dismayed by the slow pace of MoQ in the last 4 years, now is your time to shine. Um, so I think that's it. Next slide—I don't think I have any more slides.

Martin Duke: Just one. You'll have to enter the queue and then get close to this thing. Oh, there you go.

Colin Perkins: Actually, the video is still on the other—Oh, David, do you know how to—Oh, did you not put it there? All right, come on this side for a minute. Just come in front of the thing and then I'll futz with the settings right there. I can speak into this too—that's what I was talking to. So it actually goes back a couple—uh, one more slide behind this one. We were talking um, about putting those in the other repos. I have a slight concern that those repos aren't archived in any way or logged from an IPR point of view, and those are all topics that are likely to have a ton of IPR. So I'd like to be doing those in a way where we have that. I wonder if we could just tag them in a way that makes it clear you did not include them in your bug count.

Alan Frindell: Um, I—okay, so I know less about this topic, about how uh, IPR management and Git repo management interact. Is there someone who can speak authoritatively about it?

Colin Perkins: We don't have to resolve it right now. I just think it's an issue we should dig into. That's the only thing.

Alan Frindell: Okay, happy to take other suggestions. This was just in the last week or two: for filters in particular, I needed some way to track things, a place where we could have long threaded discussions that weren't getting smooshed over each other.

Martin Duke: Yeah, Alan, I—I think if we're going to—well, we are moving down this road and I think if it's going to be an extended period of time before things are brought into the MoQT draft, I think we just need to adopt those drafts with the idea that we might eventually merge them into MoQT.

Alan Frindell: Yeah, that's fine, so that it's brought into the MoQ repo.

Alan Frindell: So, I mean, that would be easier if those features were—now, REWIND is expressed as a draft. Um, and you know, we—I don't know that it's gotten—we talked about it a ton in Boulder, but I don't know that people have reviewed it in detail. Um, we can have an adoption call for it. FILTERS and SWITCH are still expressed as PRs against MoQT. So are you suggesting that we just go ahead and merge them in a rough state, that we maintain them on a branch, or that we ask the authors to convert to a draft form to be adopted and managed separately?

Martin Duke: I'm not suggesting anything related to the existing PRs against MoQT. I mean, there's no IPR issue there—that's in the working group repo, that's not an issue. But the issue that Colin brought up: there are these individual drafts floating around that are at least intended to eventually merge into MoQT. Maybe they'll become extensions—that's something for the group to decide later. But if they run for a long time outside the MoQT draft, then, assuming there's consensus, we should adopt them so that they are brought into the working group repo.

Colin Perkins: Um, Martin, what you just said was not what I understood from Alan. I read the slide as, "Let's take filters, for example: that is not in the IETF repo, it is in Mo Zanaty's repo."

Martin Duke: Oh yes, you're making an excellent—yes, I'm sorry. Pardon me. Um, you know, I'm not even familiar with the Git—for the—for the PRs that are now also repos, I—I have not even honestly looked at the GitHub-foo that's involved there, but this—this is something we need to figure out.

Alan Frindell: Okay, proposal: have the editors take an action item with the chairs to figure this out and ensure that IPR is correctly resolved. That sounds like an important thing that we should do. Um, please note that in the minutes. Do you have—you're at the front... Rowan?

Rowan: Uh, hi. Uh, um, so could you go back to slide 5, please? I can't go to slide 5, but the Meetecho chairs can. Okay. Oh—there you go. Hi. Um, yeah. So um, I wanted to understand—so it sounds like with this change that a relay—um, you could negotiate with your first-hop relay, but then uh, that relay, it could remove things but it really couldn't add things unless it already had some kind of preconfigured knowledge of what the next hop could be.

Alan Frindell: When you say "things," are you referring to properties of the track that the publisher is sending or are you referring to parameters in a message that affect—a message that are going to the next hop?

Rowan: Okay.

Alan Frindell: Yeah. So I mean, relays generally are like aggregation points. So—are you on mic? What's that? Are you on mic? Am I on mic? Yeah, it's sort of picking you up. Oh yeah, go a bit closer to the mic on there. I'll—I'll refresh after this presentation. Okay. Um, the point is that, particularly when we think about SUBSCRIBE, your subscribe may only go one hop, because the relay may already be subscribed upstream. So you can't make any assumption that your parameters are going anywhere further than the first hop. Uh, and so if you want to subscribe with some super cool new thing at a relay, that only applies to the local relay that you're talking to. If you're publishing, you can set properties, and those will go wherever the track goes. And there's a separate PR that's open that talks about possibly creating a mandatory-to-understand type of property: if you're a relay and you get one of those, you'd be able to identify that you don't understand it, and you would just blow up, because for some reason it's not safe to deliver it otherwise. Was that clear? I feel like it was. That helped. That—that helped. Okay.

Martin Duke: I don't—I can't see the queue very well if there's any more people in the queue. No. Okay, the queue is clear. Um, this—I think we're ready to move on to the next topic unless someone raises their hand immediately. Okay. If I may, very briefly, we're going to refresh the thing here so we can change the camera. We'll be back in 3 seconds. Okay. Well, you didn't wait for the reply. Uh, while we're sitting here waiting, if you arrived late, uh, please scan the QR code on the mic stands to join with the light client so that we know that you are here for multiple reasons. Uh, it would be much appreciated if you could do so. Your way of uh, signing the blue sheets is just to join with the light client. And it's back, and it's back. All right, excellent. Next up is Alan, uh, in case you missed Alan for the past two minutes. Uh, he's back to talk about MoQT URLs.

Alan Frindell: [Presentation: moqt://: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-moqt-01]

Alan Frindell: Are you going to bring up the slides? Thank you. Ooh, that's a better camera. Uh, is the audio clear from here?

Martin Duke: Yes.

Alan Frindell: Yes, actually.

Alan Frindell: Okay, that's good. Uh, so uh, I'm talking about the MoQT URL and I've got a slide in here from Will. I don't know if Will wants to be close to a mic in case he wants to talk to his slide um, when we get to there. Uh, okay. Next slide.

Alan Frindell: Okay. So uh, in the Boulder interim, we were talking about URLs and we designed this new thing on the fly, which is in PR 1486. And this is about: how does the client know what transport to connect with? If you say moqt://, that means the client should send an H3 ALPN and all of the MoQT ALPNs that it supports. Now, of course, if it doesn't support WebTransport, it doesn't put H3. If it doesn't support native QUIC, it doesn't put the other ones. But if it supports both, it puts them all in. It can put them in whatever preference order it wants. Uh, and if the server selects one of the MoQT ALPNs, that means you got native QUIC and it tells you immediately what version you're speaking. Uh, if the server selected H3, you're not done yet. You need to try again by sending a WebTransport CONNECT request, and you use wt_available_protocols with the same list of tokens you had in your ALPN but with H3 removed. Uh, let me pause there and see if any—that's like sort of, I don't know, call it the universal client. Like, that's how it's supposed to work. We also thought it would be nice to say, well, what if you want to tell somebody to only try QUIC? So we added this +q and +wt syntax in the scheme. And so if you got moqt+q, that means don't even try H3, and if you got moqt+wt, it means don't try the native QUIC ones. Uh, next slide.
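
The client behavior described here can be sketched roughly as follows. The token spellings ("h3", "moqt-16", ...) are illustrative placeholders, not the exact registered ALPN values:

```python
def alpn_offer(supports_webtransport: bool,
               native_quic_versions: list[str]) -> list[str]:
    """ALPN list a client might send for a generic moqt:// URL:
    h3 if WebTransport is supported, plus every native-QUIC MoQT
    token it speaks, in its preference order."""
    offer = ["h3"] if supports_webtransport else []
    return offer + native_quic_versions

def after_handshake(selected: str, offer: list[str]):
    """Next step once the server has picked an ALPN."""
    if selected == "h3":
        # Not done yet: send a WebTransport CONNECT whose
        # wt_available_protocols is the same token list minus h3.
        return ("webtransport-connect", [t for t in offer if t != "h3"])
    # A native MoQT token selects QUIC *and* pins the version in one step.
    return ("native-quic", selected)
```

Either way the server's choice, not the URL, decides which transport is actually used.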

Alan Frindell: Okay. Then we also got some feedback on the PR—on the scheme, and uh, one piece of feedback was that you really shouldn't be messing around with schemes like that for things that represent the same resource. And moqt://foo.com/bar and moqt+q://foo.com/bar actually are the same resource. Um, and URIs are specified such that when you're trying to determine equivalence, you compare exactly on the scheme, and so we're sort of fighting the system. But then I also took a step back to think about what exactly the point was of trying to put transport selection into the URL in the first place. And so I tried to make this table, which asks: what does the client actually support and what does the server actually support? In almost every box in this table, nothing changes with having those URLs, right? If you were a QUIC-only client and somebody sent you a moqt+wt:// URL, with the scheme we cooked up in that PR you would fail immediately; and if we removed it and you were sent a generic moqt:// URL, you would fail when it got to the server and there was no ALPN match. So it's going to fail either way. And the same for the opposite, where the red X's are, right? If the client and the server just don't speak the same transport protocol, it's not going to work, no matter what you do. And if they both speak only the same protocol, it's also not going to change anything. It only really matters when they both support both, in which case the client expresses its preference in the ALPN list and the server's preference is going to win. And so the question is: is there any point where you would want to be telling clients that support both to restrict the set they're connecting with? And I sort of can't come up with a reason why. So to me, that says: let's keep the good, which was the generic moqt:// URL, and toss the +wt and +q.
So—let me just pause there and take feedback on that approach, because it's different from what we said in the room in Boulder and I don't want to change it without people... I can't—there are so many in the queue, but it's a very tiny...

Martin Duke: Ben Schwartz.

Ben Schwartz: Hi, Ben Schwartz. Uh, talk to me about QMUX.

Alan Frindell: Uh, QMUX is not—you know, we'll see who wins their race here, but that may be their problem, not ours. Um, but in a way I think—I don't know—talk to me about HTTP/2. Uh, like, you know, racing QUIC and TCP is already a problem, and I don't think MoQ's going to try to solve that problem. Or are you suggesting that we should vend URLs that are like, "This MoQT thing can only be accessed via TCP"?

Ben Schwartz: I—so my only—I'm not suggesting anything. I'm just pointing out that, yeah, we have more than two protocols in play here—we arguably, I guess, have four, um, four transport substrates, potentially, below MoQ. Um, so this table gets to be a lot bigger and uh, the situation gets a lot hairier.

Alan Frindell: Yeah, I mean for that matter, even WebTransport is—is hiding WebTransport over H2, which is also a completely viable way to speak MoQT. Um, but we sort of assume that that's your—you already figured that out for HTTP, so unless somebody has really great ideas about how we can improve that system, I say we just use the same one. Uh, next, maybe that's Ted? I can't tell. The picture is very small.

Ted Hardie: Uh, Meetecho speaking. We cannot hear you in this room. They couldn't hear me in the room either, this wasn't good. Okay. Uh, Ted Hardie speaking. Uh, I much prefer the uh, version without the +q and +wt. I think it's—it's cleaner and I think you're persuasive with your um, your table. There are a couple of people in the chat who are talking about uh, yet more um, potential substrates like QMUX or something else, and I think that actually makes it even more important for us to ask the question of, like, what resource are we addressing rather than, you know, what's the set of protocol steps to reach the resource? Because I think that's where, when you're talking to the client, you ultimately want to say, "This—this is the resource you're trying to get to, and the mechanisms by which you—you are going to attempt to reach it are determined by the protocol, not by what you see in the URI string." So I think that from my perspective, the URI string is much cleaner the way you have it now.

Alan Frindell: Yeah, thank you. And—and you can still select an individual transport by running it on a different port or with a different name or IP. Next.

Martin Duke: Yeah. So um, oh yeah, this—this is great fun. I can't see myself. I can't see what's going on. Great. Sure. Yeah, anyway. Um, I think we're rapidly converging on this idea that we'll be um, removing the plus variants. Ben did ask about QMUX. I think it's going to be a problem that QMUX has to solve, not us. Uh, because QMUX will need to be able to say, when you negotiate TLS over TCP and you're going to be doing QMUX, QMUX isn't the thing that you're negotiating in that case. You're negotiating the thing that's sitting above the QMUX, and it'll need to be identifiable somehow. Uh, it probably needs to be distinct from the WebTransport version. Uh, so you've got H3 and then you'll have a MoQT-over-QMUX or something like that, that'll need its own ALPN in that case, distinct from that. Or it could just be MoQT in that case, because that—that works.

Alan Frindell: I imagine you'd do the same thing: H3, you'd do MoQT-16, MoQT-15, and MoQT-16 says, "If you negotiate this, that's QMUX version 00," and MoQT-18 is QMUX version 01, which is how we did it for QUIC. There is an issue open in the QMUX repo for that, and there are people with opinions, because there are MoQT implementations that run over QMUX-00 right now. And we're doing crazy things. So I think we'll talk about it in QMUX, whichever day that is, tomorrow or... Yeah.

Martin Duke: So there's been a long sort of question about... David's doing a great job here. A long sort of question about whether a URI scheme identifies an abstract thing, in the sense that this MoQT thing would be, or HTTP has decided, or whether it identifies the means by which you obtain the resource. And I think in the ecosystem we're generally gravitating toward this abstract notion for the URI and a much more concrete notion for the ALPNs. But again, there's some question there as well, as to whether there's some level of abstraction in the ALPNs: whether they refer to the entire stack. I mean, obviously in your WebTransport example that you had previously, the wt_available_protocols will probably use the ALPN tokens that match what's used over QUIC, so it's not a concrete instantiation of MoQT with the stack underneath it. So we're in this sort of weird liminal space in terms of what those labels mean. But I think ultimately the right thing will happen, and you'll use a combination of the label and the context in which you find that label to determine exactly what you're going to do. But you'll have to be a little careful in these cases when you manage multiple version skews: as QMUX goes through different versions of drafts, as MoQ goes through different versions of drafts, syncing them up is probably the only way to deal with that in the short term and the long term. Yeah, good luck.

Alan Frindell: Yeah, no well, I don't think we're just the first, we're not or maybe we're not even the first, but you know other people have that too. Yeah. Next.

Martin Duke: Yeah, Martin Duke, Google, as an individual. Um, yeah, I—I think getting strong feedback to get rid of the plus, so uh, plus one to that. Um, there's—there's uh, if I understand correctly, like the way this works is as a client then I would—so there's two layers of protocol negotiation because of if you're using WebTransport. So at the QUIC layer, the ALPNs are um, um, uh, MoQT-16 or whatever and...

Alan Frindell: No, no—for WebTransport, the ALPNs are H2 or H3.

Martin Duke: Pardon me, H3, yeah yeah, pardon me. So you have H3 and uh, WebTrans—I'm sorry, H3 and um, MoQT, and then uh, assuming the server picks H3, then you have the further WebTransport negotiation which could then lead to MoQT. I guess the only time you have a problem there is if the server picks H3 not knowing exactly what you're doing and then you kind of end up in a spot where you can't actually negotiate MoQ.

Alan Frindell: You mean like it's a web server but not a WebTransport server, but it was also a MoQT server?

Martin Duke: Yeah. I mean, I'm just thinking this out loud at the mic. I guess if you have a web server that doesn't implement WebTransport but is also a MoQT server... I guess that's the edge case that's bad here, but I think I can live with it personally. Um, okay.

Alan Frindell: I think that's kind of a weird case. Yeah, I agree. Okay. I do see the queue getting longer. There is another thing about URLs I want to talk about, so if the chairs can let me know how we're doing on time. Um, but if everyone's getting to the mic to say don't do plus, then you know, I think...

Martin Duke: All right, I think we still have 7 minutes, so uh, I'm not going to cut the queue quite yet, but um...

Alan Frindell: Okay, but I have another non-transport related topic in this deck, which is about fragments.

Martin Duke: Okay. All right, then let's—let's move on. Colin.

Colin Perkins: Yeah, uh, thank you for providing services, um, but you know, plus one to minusing the plus from the scheme. But the thing that I think is important is that we have a very simple model for how we do this, which is: at any given layer, all we do is say what the next protocol is, okay? And that's how we do it in WebTransport. In WebTransport, we just say H3 is the next protocol, and then inside of the WebTransport protocol, we negotiate what the next protocol is, which is this. And we should do exactly the same thing with QMUX. I think that model's going to work really well for us for all of these variants. So you will say QMUX-0 is the next thing. You don't mention MoQ at all, right? And then inside of QMUX, you say the next protocol is MoQT-18 or whatever it is.
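Colin's "each layer only names the next protocol" model might be sketched as below. This is illustrative only: the labels (`"qmux-0"`, `"moqt-17"`, etc.) are made up for the example and are not registered ALPN tokens, and the selection function is a generic first-match picker, not any specific implementation.

```python
# Hypothetical sketch of per-layer next-protocol negotiation.
# Each layer only names the protocol directly above it.

def negotiate(client_offers, server_supports):
    """Server-style selection: pick the first client offer the server supports."""
    for proto in client_offers:
        if proto in server_supports:
            return proto
    return None

# Layer 1: the TLS/QUIC ALPN names only the next layer (QMUX), never MoQ.
alpn = negotiate(["qmux-0"], {"qmux-0", "h3"})
assert alpn == "qmux-0"

# Layer 2: inside QMUX, a separate negotiation names the application protocol.
app = negotiate(["moqt-18", "moqt-17"], {"moqt-17"})
assert app == "moqt-17"
```

The same shape applies to the WebTransport path: ALPN selects H3, and `wt_available_protocols` then carries the inner negotiation.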

Alan Frindell: Okay, let's talk about QMUX offline, okay? And we can go talk about it in QMUX tomorrow, too. Yeah, that sounds good. Uh, okay. Uh, Christian maybe? I can't see you. Is it Christian next? Good evening—oh, there we go. Christian, audio? Christian, you need to turn on the audio in Meetecho; we can't unmute you from here. Okay, Christian, for some reason we can't unmute you, so maybe Lucas can go and Christian can type in the chat?

Martin Duke: Uh, Lucas, go ahead. Maybe Christian can figure it out.

Lucas Pardue: Hello, can you hear me?

Martin Duke: Yes.

Lucas Pardue: Yay! Um, I'll be brief. I think, yeah, this is a bit of a pain. Um, there's lots of QMUX questions. We've only just adopted the draft. Come to the QUIC working group session. We do have some time to talk around this topic. I suspect we'll need more time. I don't know if maybe it makes sense to—to kind of get some of the QMUX folks to come to the MoQ interim meeting in June and just schedule some time there to—like, do a co-hosted hour or something just to keep nailing down on this. Some of the pain points remind me a bit of some of the issues we saw with like WebSockets over H3 and extended CONNECT and some of those issues that didn't really go away, we just ignored them. So yeah, do the URL thing you're going to do, but I—I do think the ALPN stuff will need to get solved. I do wonder how much of a problem it is at the end. While we're iterating, sure, we'll need to figure out how to do different draft versions or whatever, but I'm more focused on the end goal, like in 2 or 3 years' time, does what we produce make sense? Because even today, things don't make much sense and it's—it's kind of a pain. Thanks.

Martin Duke: Lovely. Thank you, Lucas. We're going to move straight into Suhas's presentation now.

Alan Frindell: [Presentation: Application-Agnostic DPoP Proof for C4M: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-application-agnostic-dpop-proof-for-c4m-01]

Alan Frindell: Okay, thanks, Will. Next slide. Okay, let's talk about fragments. In MSF and CMSF, which run on top of MoQT, those drafts have specified meanings for the fragment that are not related to the connection; they're instructions to the client about what to do after the connection happens. Like, after you establish a connection to this URL, then you want to send a SUBSCRIBE to this track. And there are other things there; go read the drafts. Um, the question was: should we include transport preferences here? I'm inclined to say no, but I have a bigger question about fragments, which is on the next slide. MSF and CMSF are going to define fragments, which is fine. So what, if anything, do we need to say in the MoQT draft about the fragment? Do we just say we're not defining one, and other people can define them and they mean whatever they want? And if whatever is running on top of MoQT gets to define its own fragment, do we need to require that fragments identify what they are, so that a client that supports more than one can correctly interpret the fragment? Or do we just not say anything, make it MSF/CMSF's problem, and let the chips fall? Ted.

Ted Hardie: Uh, Ted Hardie speaking. I'm a little confused about why you would want to use fragments for this rather than a parameter. Um, there are lots of ways in—in a URI to include a parameter, and if you do it with parameters you can name what the parameter is. Um, you know, just like as pointed out in the thing, SIP uses a transport parameter to tell you what the underlying transport is. You can use parameters here to convey instructions to the client and you can have more than one type of parameter if you're not trying to use a fragment. In general, fragments are defined in the URI system as specific to a MIME type. Um, so a fragment in a particular MIME type has a particular meaning. You're not really using MIME types in this. Uh, so although fragments are a valid part of the URI syntax here, I don't think they're actually the one you want. I think you want parameters instead.

Alan Frindell: Can I ask a follow-up question? Because I—and I know this—this discussion's been going on for a while in MSF and I've followed—I've been in and out of it. Um, one of the things that I think is confusing about parameters is some of those parameters might belong to the WebTransport resource, and some of them now you're saying don't, that they're actually instructions to the client and they don't get sent along, and that...

Martin Thompson: That's what fragments are for.

Alan Frindell: That's—Martin Thompson in the room says that's what fragments are for. So um, I think that's why we went for that separation. You can imagine if the URL is signed, for example, like we can't strip out these things that only the client's supposed to see from um, that part of it.

Ted Hardie: So the use of parameters is scheme-specific. If you define a particular set of scheme parameters which are either mandatory or optional and have specific meanings, then you can define which consumer of that URI each one is addressed to. Now, I think Martin said the reason you went to fragments is because those are always interpreted by the final consumer, because they're the consumer of the MIME type. That's true, but because you don't have a MIME type here, you're running into this problem: if you want to have more than one kind of fragment, you don't know why this fragment is different from that one. And you would know that in a URI scheme that referenced MIME types, by them being different fragments for different MIME types. So unless you invent MIME types to go with this scheme, the fragment is going to have an ambiguity that I don't think the parameters would. Now, as I said, syntactically it is valid, so I'm not going to get up here and put on a purist hat and say thou shalt not. If you want to go this way, it is a valid syntax, just a little confusing, but I think it's going to retain that ambiguity where some other mechanism might not.

Martin Duke: I'm going to close the queue very shortly, so uh, if you want to enter it, enter it soon and be brief, please.

Will Law: Yeah, uh, Will Law, Akamai. So I am a supporter of fragments. I think Ted's made it clear that there's not the direct MIME type association, but the fragment carries the very clear understanding that this is not information for the server. And with these URIs, we're munging two things together, as you've mentioned: how the client connects to a resource, and then what it should do at the application layer once it's connected. And we're putting that all in one string. I think constraining information that's purely for the client application to the fragment makes syntactical sense. And I don't think MoQ Transport should define what goes in the fragment. There are going to be hundreds of applications that run on top of MoQ Transport, and they're going to invent very different ways to communicate their application-specific information, and we should give them some freedom in how they choose to do that. We should just define the moqt scheme, which says how you connect to the MoQT resource, and uh, step out of the way.

Alan Frindell: Uh, so Will, I wonder if... I mean this discussion about MIME types, and I admit I'm a little bit weak on it, but I wonder if there was a way that the server identified which MIME type it was, then you would know how to interpret the fragment. Um...

Will Law: I don't think—but the server doesn't need to do that. In this case, it's just opening up a communication channel, right? It's not actually returning a resource. It's establishing a connection.

Alan Frindell: Yeah, no it's weird. Um, okay. We can drain the queue. Mike Bishop.

Mike Bishop: Yeah, no hats on, just... I think the—the difficulty with putting it as a parameter, which we discussed at the interim, is that that winds up becoming the address of the WebTransport resource, and what we're really—ideally we would like to have multiple layers here, but the URI syntax doesn't give us multiple ways to slice it. Um, I think fragments do kind of make sense, but as they're defined, we essentially would paint ourselves into the corner of having to define MIME types for MoQT, which isn't great, but maybe what we wind up doing.

Martin Duke: Harald.

Harald Alvestrand: Harald Alvestrand, Google, speaking with my MediaMan chair hat on. There's an awful lot of existing practice around fragments, and a lot of it ties strictly to interpreting the fragment in the context of what you got back, not how you connect. I think it would be advantageous for the sanity of the internet ecosystem if MoQ kept to that. I mean, the # is just a character; you don't lose anything by doing something different.

Martin Duke: Thank you. Okay, Colin. Um, you can go as long as you want, but it's going to come out of your secure objects time, so be brief.

Colin Perkins: Nothing's safe. Um, so look, I'll be pretty quick on this. I spent hundreds of hours on this topic when I was chair of the DAV working group. It is deep. And I'm going to totally ignore all of that. I don't care. I'm just going to treat this as some bits on the wire. There's a hash mark; there could be things to the right or left of the hash mark. I don't really care how we arrange those bytes to the right or left of the hash mark, because it won't matter: these implementations are not something else, they're MoQ implementations. When we talk about a WebTransport in a relay, it's not a standalone thing; it's a WebTransport sitting under a MoQ relay, and it knows how to interpret these parameters and knows what they're defined as. The thing that's important to me comes back to Will's point, which is that lots of apps are going to need to extend and define these parameters. And because of that, I think we should define in our URI a structured way to put name-value pairs, regardless of whether they're on the right or left side of the hash mark, in a way that's extensible, and that extensible part should be in the base draft. Then defining a bunch of those and saying how they're used and which things use them—the client, the relay; whether they're removed or not removed; whether they're, you know, encoded in EBCDIC—that should be up to the places that use them, but we should have the base structure to do it. This just seems like the most pragmatic thing we can do, and I guess we will have to figure out whether it goes to the right or left of the hash mark, but I don't really care.
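One way to read Colin's suggestion is that the base draft defines an extensible name=value structure in the client-only part of the URI. A minimal sketch, assuming (both are open questions above) that the pairs live in the fragment and use query-style syntax; the parameter names are invented for illustration:

```python
# Sketch: parse client-side instructions out of a moqt:// URL fragment.
# The fragment is never sent to the server, so these stay client-only.
from urllib.parse import urlsplit, parse_qsl

def client_instructions(uri):
    """Return the fragment's contents as an ordered list of (name, value) pairs."""
    frag = urlsplit(uri).fragment
    return parse_qsl(frag, keep_blank_values=True)

# Hypothetical example: instruct the client which track to SUBSCRIBE to.
pairs = client_instructions("moqt://relay.example/session#track=camera1&lang=en")
assert pairs == [("track", "camera1"), ("lang", "en")]
```

Because each pair is named, a client supporting more than one application could recognize which parameters it understands, which speaks to Alan's self-describing requirement.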

Alan Frindell: Yeah, I mean, that is sort of where I was coming to, which is: I feel like in MoQT we should say something about this. Imagining we're using fragments, we should say something like "MoQT applications can use the fragment in this way." But I would also like fragments to be self-describing in some way, so that if I support more than one as a client, I can look at one and know what to do with it. Anyway, that's all I want to say, and now I'm going to give it back to Colin.

Martin Duke: Okay, Colin go, timer's already running.

Colin Perkins: [Presentation: Secure Objects: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-secure-objects-00]

Colin Perkins: Welcome to MoQ. Okay, I will sort out the other thing with my AV support crew. Okay, secure objects. Um, so, this has been moved into a working group draft—thank you very much. We're trying to reflect consensus as best we can. I have slides up here. Perfect. Next slide. For people that have not been involved with this: we have the issue that relays can read all of the data we send them, right? TLS protects us hop-by-hop, but not end-to-end. And for some of these applications, there are a lot of relays in a lot of places in the world with a lot of different access, and for some of the more secure communications, that doesn't meet the bar. You need something where you can also encrypt things end-to-end, in addition to the hop-by-hop protection. Next slide.

Colin Perkins: So what we have done in the previous meetings is split things up into three categories: some stuff that the relays can basically read and modify—add, delete, all of those things; some things that they can read but not change; and some things that are end-to-end. That's roughly the main goal of where the draft is today. Next slide. Bunch of issues closed. Next slide. And we've got some open ones that I'm going to be talking about here, and we'll jump into some of these going forward. So next slide, please.

Colin Perkins: Okay, one of the questions is: we have this field of data where we take a bunch of things and add them to the AAD for the encryption. So this is stuff that will be integrity checked—we will check that it hasn't changed when it arrives at the far end, or else we'll get an error—but it's not sent over the wire; that's key here. And this sort of grew over time and was a little bit hacked together. Some of the stuff, like the key ID, we had in here. You need the key ID in here, and S-Frame also has the key ID in its counter in this same construction. And then we put the key ID also in the immutable properties. Obviously, you don't need the key ID twice; that's ridiculous, right? So there were just some mistakes made like that. The track namespace: I'll talk a little more about the proposal for what to do on that and why. The thing we're protecting against is cut-and-paste of an object from one track to a different track, so we need something that checks for that. Next slide.

Colin Perkins: So the proposed solution for what we're putting in here is the track name—and I'll get to the ordering in a second—then the immutable properties, which include the key ID, the group ID, and the object ID. This pretty much mirrors what is in S-Frame, with the exception of track name, which doesn't quite have the same concept there. The reason track name is here is that it's not part of our key derivation: the keys are bound to the track namespace, and the track namespace is part of the key derivation. The whole reason we previously decided on that design was so that if you were subscribed to a namespace and you started getting publishes on it, you had the keys; they could be scoped to that. You could have different keys for every track, or you could have one key for the namespace. The reason track name was put first in the ordering here was that you might be able to partially compute a hash along the way and reuse that precomputation. I don't know if that works or is relevant. So this is something that needs more discussion and more review, but this is the proposed solution we've been looking at. Let me stop there. Comments on that? I see a couple of people in the queue.
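The proposed layout—track name first, then the immutable properties—might be assembled roughly as below. This is a sketch only; the varint length prefix and exact field order are assumptions for illustration, and the draft is the authority on the real encoding.

```python
# Sketch of the proposed AAD: track name, then key ID, group ID, object ID.
# None of these bytes go on the wire; both ends reconstruct them for the
# AEAD integrity check.

def encode_varint(v):
    """Minimal-length QUIC-style variable-length integer (RFC 9000 §16)."""
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 2**30:
        return (v | 0x8000_0000).to_bytes(4, "big")
    return (v | 0xC000_0000_0000_0000).to_bytes(8, "big")

def build_aad(track_name, key_id, group_id, object_id):
    name = track_name.encode()
    return (encode_varint(len(name)) + name
            + encode_varint(key_id)
            + encode_varint(group_id)
            + encode_varint(object_id))

aad = build_aad("video/hd", key_id=1, group_id=7, object_id=0)
```

Putting the track name first is what would allow the partial-hash precomputation Colin mentions: the hash state after absorbing the name could be reused across all objects on that track.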

Martin Duke: Victor.

Victor: Uh, can you hear me? Okay. I am seriously concerned by this notion that keys can be reused across different track names, because in different parts we rely on group and object ID uniqueness as the guarantee that we do not accidentally reuse nonces. It would make more sense just to rederive the key for every track name, because that is not actually expensive, and subscribing or publishing is already a somewhat expensive operation. So I think that would be better.

Colin Perkins: Okay, so Victor, you're suggesting that the track name should be included—that basically you're suggesting the key ID should be bound to a track, not to a track namespace.

Victor: Yes. And also, if they're bound to a track, then you basically have nonce non-reuse even if you have the same group and object ID between different tracks.

Colin Perkins: Okay.

Martin Duke: Mo.

Mo: Yeah, uh, I had the exact same question. Was there a decision to cut things between namespace and name, or was it just not considered all the way? Was there a good reason why you wanted the namespace to be part of key derivation and not the name, or was it just an arbitrary cut point?

Colin Perkins: It was not to avoid the computation of having more keys, but so that if you received a track in a namespace you'd subscribed to, you could set up keys scoped that way. And look, this was long ago; I'm happy to pull this out. I was just trying to match up with a prior decision.

Mo: I agree with Victor that it—it makes more logical sense and for security properties you—you want whatever the key is to only have one nonce space, and this would guarantee that you have one nonce space with the group ID and object ID.

Colin Perkins: Right. I think the nonce space is actually independent of this—well, never mind, let me retract that statement. Let me just say: okay, let's take that as a note going forward. Unless other people object, we'll make that change in the next draft.
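The change being agreed here—binding the key to the full track name rather than just the namespace—could be sketched with an HKDF-style derivation per RFC 5869. The label string and field separators below are invented for illustration; only the principle (one key, and therefore one nonce space, per track) comes from the discussion.

```python
# Sketch: derive a distinct key per (namespace, track name), so that the
# same group ID / object ID on two tracks can never collide a nonce
# under the same key.
import hmac, hashlib

def hkdf(secret, salt, info, length=16):
    """HKDF-Extract followed by HKDF-Expand (RFC 5869, SHA-256)."""
    prk = hmac.new(salt, secret, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def track_key(base_secret, namespace, track_name):
    # Hypothetical label; the draft would define the real derivation inputs.
    info = b"moq secure objects key|" + namespace + b"|" + track_name
    return hkdf(base_secret, b"", info)

k_audio = track_key(b"base-secret", b"conf/room1", b"audio")
k_video = track_key(b"base-secret", b"conf/room1", b"video")
assert k_audio != k_video  # per-track keys: no cross-track nonce reuse
```

As Victor notes, this derivation runs once per subscribe/publish, so its cost is negligible next to the operation itself.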

Martin Duke: Alan.

Alan Frindell: Um, so my question's about uh, the properties, and I saw a issue in LoC which referenced the properties...

Colin Perkins: We've got slides coming up.

Alan Frindell: Okay, well then maybe I'll wait for that because I think this just we need to talk about how the track properties or object properties or both and how that works. Yeah.

Colin Perkins: That's actually probably the biggest issue in the presentation coming up. Okay. Sweet. So, all right, wrapping this one up. Next slide. Varint encoding; this is bikesheddy. Obviously, we have to have a canonical way to encode these integers. The two proposed ways are: we always use 64-bit integers—we expand them out to 64 bits when we're validating them for the integrity checks—or we always re-encode them as the shortest number of bytes possible. We're generally sending the minimal one over the wire; right now the draft says you "should" do that, not you "must." Does anyone have arguments that might push us heavily toward solution 1 or solution 2, or do people not care? Give me some feedback on what to do here.
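The two canonical forms under discussion can be sketched as follows. Either one turns any wire encoding into a single agreed byte string before it is fed into the integrity check; this is a sketch of the idea, not the draft's wording.

```python
# Solution 1: expand every value to a fixed 64-bit form for the AAD.
def canonical_fixed(v):
    return v.to_bytes(8, "big")

# Solution 2: re-encode every value as the shortest QUIC varint.
def canonical_minimal(v):
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 2**30:
        return (v | 0x8000_0000).to_bytes(4, "big")
    return (v | 0xC000_0000_0000_0000).to_bytes(8, "big")

def decode_varint(buf):
    """Decode a QUIC varint; returns (value, bytes_consumed)."""
    length = 1 << (buf[0] >> 6)
    value = int.from_bytes(buf[:length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, length

# 0x05 and 0x4005 are two wire encodings of the same value 5; after
# canonicalization they validate identically under either solution.
v1, _ = decode_varint(b"\x05")
v2, _ = decode_varint(b"\x40\x05")
assert canonical_minimal(v1) == canonical_minimal(v2) == b"\x05"
assert canonical_fixed(v1) == canonical_fixed(v2)
```

Alan's third possibility, raised below, avoids canonicalization entirely: just feed the integrity check whatever bytes actually appeared on the wire.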

Martin Duke: Martin.

Martin Duke: Martin Duke. Uh, didn't need to run, I guess. So, I don't really care that much, but I would say for number one, we don't actually need to put it on the wire: you could just, as part of the processing, expand it to 64 bits when you do the...

Colin Perkins: I—the slide is a mistake. It will—it'll be exactly the same on the wire. I don't know why I wrote that, my apologies. It is not more bytes on the wire, it's the same. You're right. Thanks.

Alan Frindell: Uh, so I think on Thursday I also have a slide about minimal varint encoding, and Victor had raised a point about it. I don't know if we want to get into it here or not, but the con for requiring it has to do with whether there are error conditions in your varint decoder: if you say everything has to be minimally encoded, then there are now error conditions which are different from "I don't have enough bytes." So that was one. The other one is I'm not sure I quite understand why this is a problem. We added text in the draft that said when immutable properties pass through a relay, you have to use the same encoding the publisher used. So won't it still work if you don't change it?

Colin Perkins: I'm not... There might be some things that aren't sent over the wire directly here that are included in the AAD. I can't think of one off the top of my head.

Alan Frindell: Okay. I guess I'm saying I don't understand that there is a problem that we have to write—say something about here.

Mo: Um, Mo. I think there is a problem. The blob that Victor proposed a long time ago requires everything to be serialized. So that part's not a problem: it's already serialized, and it's probably serialized with minimal encodings. So I would bias toward going with minimal encodings, because you're already going to have that blob and you don't want to unpack it unnecessarily for your AAD functions. But then there are things outside of that that are also part of the AAD, and they're not already serialized blobs, and for those we should be specific. And I would only caution against mandatory minimal because I am aware of some uses of varint that hack things, like using the aliased code points as other things: "not a number," "negative infinity," "positive infinity." So, you know, your four zeros are not actually zeros; they're code points for other things. That's the only problem I would run into with specifying mandatory minimal encodings.

Colin Perkins: Okay, that sounds like a bigger issue for—for Alan.

Alan Frindell: No, okay. So where you're trying to do this is: you've encoded up a message and you're about to effectively sign it and send it, or compute the encryption for it, and you largely always have all the bytes of the message; that's one spot. The more common spot is effectively the opposite: you've received a message, you've parsed it at some level, you have not passed it up to the application yet, and you're trying to check it's a valid message before you let the application see it.

Alan Frindell: Okay. I still think that as long as you keep the same bytes from the beginning to the end, it doesn't matter.

Colin Perkins: Okay, so you're saying a third option that I didn't list here, which is: you use what was sent on the wire.

Alan Frindell: You use what you got. Um, and about minimal varints: I talked to Ian the other day, and he said sometimes hardware people say, "No, no, I really want to be able to use the 5-byte one," because they want to use the other bytes for some other hardware purpose; they want to get some AES operation aligned or something.

Colin Perkins: Yeah, yeah, I mean I'm one of those people. Like, we often are trying to align things so they're in fixed positions in the message and they don't move around, and the easiest way to do that is use the max size message for every integer and now everything stays in the same spot, right?

Alan Frindell: Right. Or sometimes use like a 5-byte number because like you have 3 bytes of other stuff. Yeah. Or weird. Yeah.

Colin Perkins: Um, okay. Look, I want to move on from this one, and I'm not quite sure how to resolve it, but maybe after the discussion about the generalized issue it'll be easier, and I'll add the third possibility and get a bug open on this. Action item for Colin: make sure the issue is expanded to have three proposed solutions. Ah, next slide. Okay. We've got a bunch of stuff where we're still working on the security properties, and that's stuff I'll be working on, but it doesn't change the on-the-wire protocol; it changes our explanation of the security properties. Next slide. And I think we're to the hard one here.

Colin Perkins: Okay. So right now we have track properties, and this draft doesn't really provide any way to deal with integrity or encryption or any end-to-end properties of the track properties. And we're starting to put relevant stuff there. So I've got a range of options here, and I'll walk through some slides for them, but briefly: one is, don't do that—if it's something you want end-to-end integrity for, make it an object property instead of a track property and put it in, say, the first object's headers. The second is to provide only authentication for track properties; you can't check that the track properties were valid until an object arrives—I'll explain how in a second. The third is similar, but with encryption. And the fourth, which I think is sort of Alan's proposal, is: the same way we did this for objects, do a similar thing with different keys, different nonce spaces, etc., to provide end-to-end protection for parts of the track message. So next slide.

Colin Perkins: So I'll go through each of these in a little more detail. Actually, the first one I'm not going to go through any further; it's pretty much "don't do it." Right. Okay, so this is options 2 and 3. I'll talk through option 2, which is just about authenticating the track properties. What you would do is, every time you send an object—not a track message, not a track property—you include all the track properties in the AAD data you're authenticating, every single time, for all of the objects. Later, at the far end, when you're checking that the message hasn't been tampered with, you do the same thing: you include all the track properties there, and if any of them were mucked with in transport, they fail the integrity check. And of course, we'd have a container saying which track properties are in this category and which ones aren't.

Colin Perkins: The weirdness about this solution is that you don't find out your track properties were tampered with when you receive the track properties. You find out when you receive the first data object and it fails, and actually you don't even know whether the object or the track property was tampered with; you just know something was broken. Does that make sense? Not discussing pros and cons of this solution, does it make sense what I'm proposing? Okay, I'm seeing nods in the room. Okay.

Colin Perkins: The next one is people saying, "Yeah, but I want encrypted track properties. How do I get those?" Okay. And basically, we form a blob that we put in the objects. In the first object of the group, we include all of the track properties that need to be encrypted, effectively as an encrypted object parameter, instead of sending them as a track parameter. And you know, I started writing this all down and trying to write it up, and I'm like, this is crazy. I'm right back where I was with option one at this point. I'm at the point of just making the track properties be object properties and putting them in the first group. I don't need any extra mechanism to do this.

Colin Perkins: So those are those two classes of solutions. Now, I want to go, next slide, to the last class of solution here, option four.

Martin Duke: We have—we have people in the queue. Do you want to take them or...

Colin Perkins: Oh, sorry, I missed that. Yeah.

Martin Duke: Richard.

Richard: Yeah, I was having to click five more Meetecho buttons to give all the permissions. Um, just an observation on option two here. Um, in light of the previous discussion about hashing things into the keys, if these track properties are constant over the lifetime of a key, um, we could just hash them into the key and not have to add them to the AAD.
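
Richard's variant can be sketched like this: if the protected track properties are constant for the lifetime of a key, derive the per-track key from them instead of carrying them in the AAD. This uses a minimal HKDF (RFC 5869) over SHA-256; the label string and property names are hypothetical, not anything from the drafts.

```python
import hmac, hashlib

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 extract-and-expand, enough for this sketch.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

def track_key(base_key: bytes, track_props: dict) -> bytes:
    # Mix the protected properties into the key derivation info.
    info = b"moq track props " + b"".join(
        f"{k}={v};".encode() for k, v in sorted(track_props.items()))
    return hkdf(base_key, b"", info)

base = b"b" * 32
k1 = track_key(base, {"codec": "av1"})
k2 = track_key(base, {"codec": "h264"})  # tampered property => different key
assert k1 != k2  # decryption with the wrong key simply fails downstream
```

The effect is the same as option 2 (tampered properties make objects unverifiable) without paying for the properties in every object's AAD.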

Colin Perkins: Oh, that's an excellent point. That's a much better design.

Alan Frindell: Um, on three, I don't see why you couldn't have an encrypted track property also be a track property. Meaning you could validate it.

Colin Perkins: That's—that's option four. That's option four. We'll get to—get to option four. Yeah. Need bigger slides. Okay. Next slide.

Martin Duke: Magnus. Magnus has a question. Magnus Westerlund.

Magnus Westerlund: Magnus Westerlund, Ericsson. Have you analyzed what in the track properties could you actually do if you were a rogue relay? Because I think that's the core of the issue: is this really an issue or not? Or is it something which falls in a reasonable trusted relay and it can always do much worse by not forwarding objects, groups, etc.? So is it actually needed to be solved?

Colin Perkins: Um, I have not, Magnus. And I think we're at an early stage of understanding what track properties would be used for. There were some for video-initialization-type stuff that seemed to me like things you might want integrity on, but I don't know. I think we need to talk about what our requirements are. I mean, that's my punchline on all of this: I don't think we understand what our requirements are here yet, much less what the solutions are. Um, so...

Martin Duke: You have four minutes remaining, Colin.

Colin Perkins: Okay. So uh, skip this slide, that's—I've already talked about this. Next slide.

Colin Perkins: Okay, so this is option four, and this is the one Alan's talking about a little bit, in which we put encrypted or integrity-protected things in the actual track messages themselves, and when we receive them, we have the information to decode them. So this would probably result in needing a different key ID, a different way to set up. One of the things that's a little bit not 100% clear on this is how we would deal with the nonces. I think we would probably just include the nonces completely in the message and send them along with it, because we're not really worried about the bandwidth of control messages. So a bunch of this stuff may be much easier, but I think we'd need to poke around with it a little, and I think it'll need some changes to the MoQT base spec as well. I don't see those as a problem, they'd be minor things, but we'd need to set this up.
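
A rough sketch of what an option 4 container might look like: its own key ID and an explicit nonce carried in the message, since control-channel bandwidth is cheap. The cipher here is a toy SHAKE-256 keystream plus HMAC, NOT a real AEAD, and every field name and layout choice is hypothetical; it only illustrates the "separate key space, explicit nonce" shape Colin describes.

```python
import hmac, hashlib, secrets, struct

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy keystream; a real design would use a proper AEAD.
    stream = hashlib.shake_256(key + nonce).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

def seal_track_props(enc_key: bytes, mac_key: bytes,
                     key_id: int, props: bytes) -> bytes:
    nonce = secrets.token_bytes(12)               # sent explicitly on the wire
    ct = xor_stream(enc_key, nonce, props)
    hdr = struct.pack("!HB", key_id, len(nonce)) + nonce
    tag = hmac.new(mac_key, hdr + ct, hashlib.sha256).digest()
    return hdr + ct + tag

def open_track_props(enc_key: bytes, mac_key: bytes, wire: bytes) -> bytes:
    hdr, tag = wire[:3 + 12], wire[-32:]
    nonce, ct = hdr[3:], wire[3 + 12:-32]
    if not hmac.compare_digest(
            tag, hmac.new(mac_key, hdr + ct, hashlib.sha256).digest()):
        raise ValueError("track properties failed integrity check")
    return xor_stream(enc_key, nonce, ct)

ek, mk = b"e" * 32, b"m" * 32
wire = seal_track_props(ek, mk, key_id=7, props=b"codec=av1;")
assert open_track_props(ek, mk, wire) == b"codec=av1;"
```

Unlike option 2, the receiver here detects tampering at the moment the track message arrives, not at the first object.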

Colin Perkins: So I don't have a concrete design for that, but I think we would basically be designing an independent, you know, track encryption and track integrity protection scheme that went over the control channels instead of the data plane. Um, next slide.

Martin Duke: Once again, Mo, you're in the queue.

Mo: Just finish—just finish everything, Colin, and then I'll...

Colin Perkins: I—let me just check what the last slide is here. I think that is everything I just...

Martin Duke: This is the last slide.

Colin Perkins: Okay. So yeah, perfect. Let's run through...

Mo: Yeah. So I think the last option is better, because I think it's better to actually authenticate what's in the control message and do it once, not try to hack things in. I think the track properties have caused a lot of confusion, because people are using them like, you know, data plane compression: they're taking all the object properties that they thought would always go, all the time, and compressing them up into the track property, and then trying to go the other direction, pushing them back into the object for the crypto work; it doesn't make sense to me. I think this is cleaner, it makes more sense. I don't think you need a nonce. You could just say that you cannot have more than one of these things: this is the only one, the nonce is zero, force the IV to be zero, you can't have multiple of these things.

Martin Duke: I'm going to close the queue soon, but Suhas.

Suhas: Um, I actually prefer option one, just because we have machinery that defines how these things work. And the nice thing about option one is that if the publisher wants to update a track property at the data plane, it can do it on an object or group boundary, which is typical for the case when you're doing video config and you want to change the region of interest or orientation, those kinds of things, which are more data-driven rather than control-message setup. Having to send a control message to update, and at the same time wait for the objects to come, and then have the timing shared between those on the receiver side, kind of breaks the pipeline of "I got an object, I'm going to decode and verify things." My preference would be option one: simple, unless we have a real need for something more complicated. I'd prefer not to add complications here.

Martin Duke: Richard. But please, please be succinct.

Richard: Yeah, I'm not an expert on the application domain here. I would love to have some more detailed requirements for this stuff; maybe I just need to read some more things. But this number four seems like the most complicated thing. It goes beyond the base thing in that it requires a whole new key management scheme. We had this whole notion, at least in the base thing, that you have keys assigned per track, which is a nice clean boundary and aligns well with MoQ concepts. Here is some other thing that we need some other notion of key management for. So I would really hesitate to do this option four. I think option one's great if we can get away with it. Option two, especially in the "hash it into the keys" form, seems totally reasonable and doable, but yeah, let's not go past that.

Rowan: Hi, this option four does also seem pretty complex to me, but my reservation with option two is that debugging this is going to be a nightmare. Like, even if you received something immediately, you know, you got an object directly after you got the track information, it would still be really difficult to figure out what happened that caused your error.

Colin Perkins: Okay. So here's my ask for people. If people who are using track properties could think more about what the requirements are here, and some use cases where you might want integrity or encryption on the track properties, and send them to me, that would be really useful. And, I don't know, maybe I'll write up a proper PR for option four and option one. We can debate, but I don't think we have enough to make a clear decision right now. I mean... right. Option four might look less complicated if it was written down. That is one of the things. So...

Martin Duke: Okay. Thanks, Colin. Next up is Thibault.

Thibault: [Presentation: Privacy Pass Authorization for MoQ: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-privacy-pass-authorization-for-media-over-quic-01]

Thibault: Okay. Sound working? Yes, no, maybe?

Martin Duke: Yes, it works. We hear you.

Thibault: Okay. Good. Well, hey everyone. So I'm presenting a draft that we're doing with Suhas, Colin, and myself, and this is an update on the Privacy Pass authorization for Media over QUIC. Throughout this slide deck it's mentioned as draft 03; it's actually draft 02 that has been cut. That's a mistake that went through in the slides. Next slide.

Thibault: So, a quick reminder about what Privacy Pass is. The core concept provides unlinkable tokens, so you cannot link issuance and redemption. There's blind issuance, which means the issuer signs the tokens without actually seeing the token content. These are single-use tokens that are redeemed by the Media over QUIC relay, which prevents replay while being privacy-preserving. And there's separation of concerns, at least architecturally: the attester, the issuer, the origin, and the client are all different entities, and even though they might be operated by the same organization, there is a separation of concerns between them. With that, next slide.
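
The blind-issuance idea can be illustrated with a toy RSA blind signature: the client blinds the token, the issuer signs without seeing it, and the client unblinds to get a signature the relay can verify. The RSA parameters below are tiny, hardcoded, and cryptographically INSECURE; real Privacy Pass deployments use standardized blind RSA or a VOPRF, and every value here is a placeholder.

```python
# Small RSA key: n = p*q, e public, d private (values are toy placeholders).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # Python 3.8+: modular inverse via pow

def blind(msg: int, r: int) -> int:
    # Client hides the token content before sending it to the issuer.
    return (msg * pow(r, e, n)) % n

def issue(blinded: int) -> int:
    # Issuer signs without seeing the underlying message.
    return pow(blinded, d, n)

def unblind(blinded_sig: int, r: int) -> int:
    # Client removes the blinding factor, leaving an ordinary signature.
    return (blinded_sig * pow(r, -1, n)) % n

def verify(msg: int, sig: int) -> bool:
    # Relay checks the signature at redemption time.
    return pow(sig, e, n) == msg % n

token = 424242           # stand-in for a hashed token payload
r = 987654321            # client's secret blinding factor
sig = unblind(issue(blind(token, r)), r)
assert verify(token, sig)
```

Because the issuer only ever saw the blinded value, it cannot link this signature back to the issuance, which is what makes redemption at the relay unlinkable.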

Thibault: So, why does it matter for Media over QUIC? The core idea, and what we're looking to have here, is anonymous subscription, where users can subscribe to content without necessarily revealing their viewing patterns. There's still a need for fine-grained access control, such that the token encodes the scope of the namespace and the track name. There's a separation between the relay that validates the token and the user identity; that's the main benefit we see with Privacy Pass. And finally, we provide batch tokens, or anonymous credentials, such that we can have longer-lived sessions without having to redo the attestation flow, which might be costly, over and over. So we have privacy: the relay doesn't learn the viewing pattern; it remains private to the user. There's scalability: the fact that the relay can validate the token avoids going back to the issuer every time, no matter how many subscriptions one user makes. And finally, we tried in the draft to integrate with existing token types such as blind RSA and VOPRF, and also have some consideration for new constructions such as ARC, which has recently been adopted in a Privacy Pass draft, even though that's more experimental at the moment. Next slide.

Thibault: Okay, so one of the things that we tried to integrate more of is the reverse flow, which allows the origin, that is, the MoQ relay, to reissue tokens after bootstrapping from the issuer. From a client perspective, the client first performs an attestation, which gets them a token. Once they've obtained the token, they exchange it with the relay through a client setup phase, and with the setup they receive either one more token, a batch of tokens, or a credential. This is part of the reverse flow. Then down the line, through normal operation, to make the sessions unlinkable, the client presents a token and can request new tokens, which the MoQ relay will reissue after each session. The changes since the last draft, specifically on this slide: we've updated the client SETUP and the various code points to the latest MoQ Transport draft. So this has been updated in the draft as well.

Thibault: Next. So the benefits we get from having such a reverse flow: it allows us to more easily bootstrap from an external issuer using publicly verifiable tokens, and then the origin can bootstrap from those tokens to reissue tokens of its own. There's possible integration with rate-limit tokens to provide presentation limits for rate control across namespaces or track names; that's something that would probably have to be fleshed out more. On the privacy side, each token presentation is unlinkable, even for the same credential, so even when the MoQ relay acts as an issuer itself, presentations remain unlinkable. In terms of efficiency, the reverse flow could allow for batch issuance: from bootstrapping with one token provided by the initial issuer, the MoQ relay should be able to issue more tokens that can be used for multiple operations. And finally, these tokens could provide continuous authorization: requesting new tokens in-line with the operations should allow for long sessions. This is something we've tried to illustrate in the slides. If the client presents one token at setup time, they can also request one more token, which will be issued by the server. When the client comes back and says, okay, I want to subscribe, they can present this new token provided by the server at setup time, and get one more token. Then again, later in time, when the client wants to perform a publish operation, they can present that last token they've obtained.
What's important here is that each of these presentations is unlinkable. The relay may correlate operations through other means, but at least from the authorization perspective, they should not be able to do so. Next slide, please.

Thibault: One of the things that we've introduced with this draft is better error handling, which came out of a very first, early implementation of the previous draft. Mostly, we have, for instance, an error code registry, for which we've received some reviews from IANA; we will open issues and try to address them. This error code registry allows a better understanding of what's happening through the flow: is the token missing? Is it invalid? Has it expired? Has it been replayed? There are possibly some extensions to these error codes, but at least this is in place so that we can extend it down the line as we learn from the various iterations and implementations of the draft. Next slide, please.
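
The error conditions listed above could be modeled as a small registry. The numeric code points below are purely illustrative placeholders, not the draft's registered values, which are still under IANA review.

```python
# Hypothetical error-code registry sketch for the Privacy Pass
# authorization flow; code points are placeholders, not draft values.
TOKEN_ERRORS = {
    0x00: "TOKEN_MISSING",
    0x01: "TOKEN_INVALID",
    0x02: "TOKEN_EXPIRED",
    0x03: "TOKEN_REPLAYED",
    # registry is extensible: later drafts can register new code points
}

def describe(code: int) -> str:
    return TOKEN_ERRORS.get(code, f"UNKNOWN(0x{code:02x})")

assert describe(0x02) == "TOKEN_EXPIRED"
```

The point of the registry is exactly the unknown-code fallback: an implementation can surface a precise failure reason when it knows the code, and degrade gracefully when it meets a newly registered one.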

Thibault: Okay, one of the other big open questions from the previous draft is: how do we actually match on action? How do we match on track names? How do we match on namespace? So this is something we've introduced in this draft. The way it works is we extract the token and add a matching flow along with it. So the flow on the right: we extract the token, verify the token, whether the signature matches, and then, as part of the replay protection check, we check which of the possible actions it is. Is it a client setup? Is it a publish, an action for publication of a namespace? The track name, the track status, etc. Once... oh sorry, thank you. So, what I was describing before, a bit more concretely here: the matching type for the track names or track namespace is part of the token that is signed and presented to get access. The scope, as described before, say for a subscribe operation as here, says: I want to do an action, the type of match, and the value. So if I want to watch live sport, the value would be "live sport." This is a prefix, because there might be different live streams happening at the same time, and this would allow for extensibility down the line. There's also a wildcard for an empty value. This is definitely fresh and would benefit from further review from the group, I think. Next slide, please.
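
The scope-matching flow just described can be sketched as a small check: a token scope carries an action, a match type (exact, prefix, or wildcard), and a value, and the relay tests the requested operation against it. The field names and match-type labels here are illustrative, not the draft's wire encoding.

```python
def scope_allows(scope: dict, action: str, track: str) -> bool:
    # Action must match first; then the value is tested per match type.
    if scope["action"] != action:
        return False
    match, value = scope["match"], scope["value"]
    if match == "exact":
        return track == value
    if match == "prefix":
        return track.startswith(value)
    if match == "wildcard":          # empty value matches anything
        return True
    return False

scope = {"action": "subscribe", "match": "prefix", "value": "live/sport/"}
assert scope_allows(scope, "subscribe", "live/sport/game1")
assert not scope_allows(scope, "subscribe", "live/news/daily")
assert not scope_allows(scope, "publish", "live/sport/game1")
```

The prefix case is the "live sport" example from the talk: one signed scope covers every concurrent live stream under that namespace prefix.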

Thibault: Okay, so what are the next steps? We have new IANA registries, which have been reviewed by IANA, so as I mentioned, we'll open issues for those; a couple of ranges and issues were flagged there. In addition, the security considerations section is still empty, so that's something that needs to be filled in. And yeah, open to questions. The chat seems to have been active, but I haven't been able to read it, so if you have a comment to make, please make it at the mic.

Martin Duke: The queue is open. Going once, twice... Ah, Altanay.

Altanay: All right, hi. Altanay, Cisco Meraki. I have two questions. Question one: can you elaborate on how this relates to multiple relays, like a chain? Does everyone have to get a token? Does every hop have to get a token?

Thibault: Yes, so I think it really depends on the type of deployment you're looking for, but I would say yes, for two reasons. One of the reasons is that a lot of the efficiency we gain, at least in the implementation, is due to the type of tokens we're using, which are only scoped when the relay issuing the token is the one verifying it. So we cannot chain relays if they are operated by different entities. The second reason is that even if we had some sort of token that were verifiable by each and every entity, that might allow for potentially more linking. Given that the relays are communicating with each other, that might be an acceptable privacy leak, but ideally, yes, each and every relay would need its own token.

Altanay: Okay, so then, not considering the performance overheads, which are obviously there: you are making it unlinkable from the relays' perspective. This is question two. So the relays have no idea who's asking for what. But the issuer has all the scope it needs. So now the issuer has all the data: what stream was requested, what client requested it, everything. Or maybe I don't know Privacy Pass well enough. How do we de-risk all this leaking happening on the issuer side?

Thibault: Yes, so the issuer still needs to know that you want access to some resource. For instance, the issuer still needs to know that you are allowed, that you have a subscription, so that you can watch live sport. But they won't know which live stream you will watch. That information would be at the relay, and the issuer would have no information about it.

Altanay: Okay.

Alan Frindell: You're muted in there.

Alan Frindell: Okay, so, one: I see people squatting on code points that we don't have IANA registries for yet, and we've already had problems where MoQT comes up with some use case, I mint error code 100, and now all of a sudden I've broken your draft. So let's try to coordinate this; we'll just keep an appendix in MoQ Transport in the short term, until we have created registries. Uh, a wiki entry, says David. So that's one thing. And this is for any authors who have drafts with code points that go in any of the things that are now GREASEd: please send them to us so we don't blow each other up. The other one was about chained relays: parameters are hop-by-hop, and tokens are a parameter. So there's no guarantee that any relay is going to forward any token sent to it. Think also about how subscriptions are aggregated: a subscription may come to the first-hop relay, which already has an upstream subscription, so it's not going to send another one upstream. This has been mentioned in a few different contexts before, but if that's a deal-breaker, we need to know.

Martin Duke: Lovely. Thank you, Thibault. We're going to move straight into Suhas's presentation now.

Suhas: [Presentation: Application-Agnostic DPoP Proof for C4M: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-application-agnostic-dpop-proof-for-c4m-01]

Suhas: So I'll be talking about the application-agnostic DPoP framework. The focus of this draft is enabling sender-constrained tokens for Media over QUIC. We presented this draft at the last IETF in the OAuth working group. I'll bring in some context on what happened there and why we are presenting this here. Next slide, please.

Suhas: To motivate why we need a new draft for DPoP proofs in MoQ: the CAT tokens that we use today can be of two types. They can be bearer tokens, or tokens that are sender-constrained. The nice thing about bearer tokens is that they're simple, but they have security implications: they can be easily replayed, since possession of the token equals authorization, so there's no proof of ownership. Sender-constrained tokens provide stronger security guarantees: one has to prove possession of a bound client key in order to get access. So if I just copy a sender-constrained token, I cannot reuse it, because I need to prove that I possess the private key that's bound to the token. The challenge with current CAT DPoP claims is that they are bound to HTTP request methods and URIs; since MoQ works over QUIC, if we want to support DPoP in CAT for MoQ, we need to enable the DPoP claims to also understand what MoQ is. So the scope of this draft is to define DPoP bindings specifically for MoQ protocol messages. It fills in the gap left by RFC 9449, the DPoP RFC, which takes an HTTP-specific approach, and this draft does not introduce any new security concerns; it builds on already-proven mechanisms. Next slide.

Suhas: If we jump one step deeper into the attack vectors we see with tokens today, and where the DPoP, or sender-constrained, requirement comes into play: think of a compromised relay, where a malicious or breached relay can extract these bearer tokens from MoQ connections. These tokens can then be distributed through a botnet, resulting in millions of unauthorized subscribers subscribing to those streams. This is essentially content piracy at scale, and resource exhaustion. Similarly, anyone who has the token can replay it; the attacker does not even need any private keying material to prove they are bound to the token, so they can just replay it. And there are attacks we can think of in web-browser environments, where malicious cross-site scripts can steal the tokens from your memory or storage and replay them. In all the attacks listed here, the main point is that DPoP would not let an attacker just steal a token and reuse it, because they need to provide proof of possession of the key that's bound to the token. Next slide.

Suhas: So why are we bringing this work here again? We discussed this at IETF 124; we presented it at the OAuth working group. Some of the recommendations we got from the security AD and the OAuth chairs were: there's a long-term vision within the IETF where we might want to establish an OAuth directorate, which would help multiple working groups that need help extending anything with OAuth, or defining security requirements with OAuth; a directorate where people can come, ask, and get answers. But there's nothing in place right now. The near-term strategy felt like: since MoQ is the one with the requirement for sender-constrained CAT tokens, why not do this work in MoQ, with OAuth providing a knowledgeable OAuth expert who can work alongside us to make sure we're not making any crazy assumptions. I think the next set of action items is to circle back with the chairs and ADs from both the MoQ and OAuth working groups to have formal coordination before we adopt this draft in any form, and we also want to look for consensus on the right home for this work to continue. Next slide.

Suhas: I think I'll skip over this one, because it shows the attack vector I was talking about, where an attacker can steal the token and replay it as long as it wants, until the token expires, because it doesn't have to prove that it possesses the key bound to the token. Next slide.

Suhas: This goes one step deeper into how DPoP works. When a client goes to the authorization server, it provides its DPoP proof along with the request, and it gets back a token signed with a binding to the client's public key. Then, whenever the client wants to go to the resource server and ask for a particular resource, it provides a new DPoP proof that proves the binding to the client's public key, and hence you're able to prove that you own, that you're bound to, that token. The only challenge with RFC 9449 is that it's limited to HTTP methods, as we talked about; we need to expand that to support MoQ messages.

Richard: Yeah, hi. On the point about the threat model, I was just going to expand on that and say that in addition to the token-theft point, there are some MoQ-specific considerations here that make it extra important to have these DPoP proofs. If there are multiple relays between a source and a destination, say a publisher and a subscriber, any of those relays can impersonate and claim credit for a client's authorization if these authorizations are bearer tokens. So in addition to token theft by people breaking into an endpoint, there's also token theft by relays observing tokens and then replaying them in a different context.

Suhas: Thanks Richard, valid point. Next slide please.

Suhas: Okay, what are we proposing here? A general claim called a context, an application protocol context, that identifies a couple of things. One is the type, basically identifying what transport protocol you want to support, and a set of parameters, sub-claims within that type. For example, with MoQ as the transport, it defines what actions you want to apply this token to, like subscribe or publish, and it also has some optional parameters, like what track namespace and track name you want it to apply to. The idea here is that we have a generic application context claim that can apply to any protocol, today and in the future. However, the key point is that token acquisition remains unchanged: we are not changing anything about how we get the token; we only modify how the proofs are bound to non-HTTP operations, so existing OAuth infrastructure continues to work. Next slide, please.

Suhas: This goes one step deeper into the application context I talked about. It identifies the type; MoQ is the one defined in this draft, but it provides a registry so that someone in the future wanting to do this for CoAP, for example, can come and define a new draft based on it. Our draft uses MoQ as the protocol identifier and defines an action, which could be one of the MoQ verbs that we use, and it also identifies the context on which the action works, in this case the namespace and name. In terms of validation: once the server verifies the DPoP proof, that this token is bound to the client for whom it claims to be valid, it validates that the action is allowed, it validates the namespace, and it performs replay protection; based on the nonce, it can know whether this token has been replayed or not, and based on that it validates the action as successful or not. Next slide, please.
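
The validation flow just described can be sketched as follows. This is an illustration only: HMAC stands in for the asymmetric signature of RFC 9449, and the claim names ("type", "action", "ns", "name", "nonce") are hypothetical, not the draft's encoding.

```python
import hmac, hashlib, json

def make_proof(key: bytes, action: str, ns: str, name: str, nonce: str) -> dict:
    # Client builds a proof over the application protocol context.
    ctx = {"type": "moq", "action": action, "ns": ns,
           "name": name, "nonce": nonce}
    payload = json.dumps(ctx, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"ctx": ctx, "sig": sig}

seen_nonces = set()

def validate(key: bytes, proof: dict, action: str, ns: str) -> bool:
    # Server checks signature, replay (nonce), action, and namespace.
    ctx = proof["ctx"]
    payload = json.dumps(ctx, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        proof["sig"], hmac.new(key, payload, hashlib.sha256).hexdigest())
    fresh = ctx["nonce"] not in seen_nonces
    if good_sig and fresh and ctx["action"] == action and ctx["ns"] == ns:
        seen_nonces.add(ctx["nonce"])
        return True
    return False

k = b"client-bound-key"
p = make_proof(k, "subscribe", "live/sport", "game1", nonce="n-001")
assert validate(k, p, "subscribe", "live/sport")
assert not validate(k, p, "subscribe", "live/sport")  # replayed nonce rejected
```

Note how a stolen proof is useless for a different action or namespace, and the nonce set stops straight replay, which is the sender-constrained property the draft is after.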

Suhas: And the draft... ah, there are about 5 minutes left, so I'll... yeah, I'll do that one. Thanks, Martin. So we define two formats for the token: JWT and CWT. JWT because that's what the HTTP DPoP draft provides, and CWT because CAT is based on CWT. Next slide, next slide please.

Suhas: I'll skip over this one; it just shows in detail how the token acquisition flow binds the DPoP proof to the client's key, and from there how a MoQ endpoint can use those tokens to talk to the relay, which is the resource server in this case, to perform an action. Next slide.

Suhas: And the security properties: as Richard said, one of the goals is to make sure that a replay attack, or a token-theft kind of attack, cannot happen; not just at one relay, but at multiple hops, anywhere someone can observe the token. What we propose here builds on what RFC 9449 provides, sender-constrained cryptographic proof of possession and nonce-based replay protection, but what we define expands that OAuth work in another dimension. There's cross-protocol security: because you know you're using it for MoQ, someone who wants to use it for HTTP in the future need not have collisions across different protocols. It's bound to an operation and the context it works on, and there's application separation, because it's defined per protocol type. Next slide.

Suhas: So some of the next steps we would like from the working group: we're looking for feedback and input from people who want to use sender-constrained tokens. We're especially interested in deployment scenarios and requirements, to see whether what we're proposing makes sense; security reviews are also welcome. We have an open-source implementation of DPoP in CAT for MoQ; the repository link is there. We are looking for testing and experimentation with it, and for interoperability as well. On working group coordination: we would like to continue the discussion with the chairs and ADs across the two working groups so we can decide the right working group to continue the work. We need formal agreement on where this work should continue, and to gauge the interest of the participants as well. Yeah, that should be all.

Martin Duke: Okay, we have a couple minutes for questions and comments. If you want to enter the queue, please limit your comments to one minute. Alan.

Alan Frindell: Okay, quick question. I know there was an issue opened a long time ago about whether MoQT needs a field to convey DPoP information, and I couldn't... well, my agent scanned your draft and said that it didn't say, so it's probably right. I just want to know: it seems like we can probably carry any of the information that needs to be carried within the authorization token field itself, and we don't need to change MoQT. Is that true, or do you still need changes in MoQT?

Suhas: Yes, this does not require any change in MoQT. The CAT token is opaque to the MoQT library. The CAT library knows there's a claim called DPoP inside CAT, because CAT today defines DPoP claims as one of its claims, but that DPoP claim is bound to HTTP. So if we want to use CAT with DPoP claims in it, it has to understand MoQ. That's what this draft tries to address. So the MoQT core transport does not need any change based on this.

Alan Frindell: Okay. I'm going to assign that issue to you to close. And if you don't close it, I'm going to close it tomorrow.

Suhas: Sounds good. Thanks.

Martin Duke: All right, if there's nothing else, we're going to move straight into SWITCH, which is Will, here in the room. While he's walking up... Yes, Will. "Can I present from here?" Why don't you come up to the mic? Someone should use this; it's been here all morning. While he's walking up: if you are a proponent who presented something, I strongly encourage you to review the Zulip chat. There's been a great deal of technical discussion on every presentation, so there's a lot of good input there for those of you who are proponents.

Will Law: [Presentation: Dynamic Track Switching: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-dynamic-track-switching-01]

Will Law: Thank you very much. Ooh, this is... Test, test. There we go. Right, good morning. My name is Will Law from Akamai, and I'm presenting on dynamic track switching for MoQ relays. You might want to refer to this as server-side ABR. I'm presenting the results of an ad-hoc design team that met at the Boulder interim. I've put the work into a separate draft, which is linked here, but this is to present the general design and also suggest some changes.

Will Law: So firstly, why are we calling it dynamic track switching? This is not a common term of art for media. The reason is that we want to make it more flexible than just ABR, adaptive bit rate. MoQ can deliver things that are not media, so we might want to switch non-media objects, and we might want to switch on properties that are not actually throughput. There might be other server-observable properties that we want to switch on in the future. So we're making it a general term, dynamic track switching, but its primary use case right now is server-side ABR.

Will Law: The general approach here is quite simple: we define two parameters, switching set assignment and DTS activation. I have slides explaining what these mean. The client subscribes to each of the tracks that it might want to switch between, and then it tells the server, "Okay, go ahead and do the switching for me," and it can turn that behavior on or off. It's conceptually quite simple.

Will Law: So let me explain the parameters. The first one is called switching set assignment. This parameter accompanies a subscription request. So the client is subscribing to a video stream, and in it it says, "Hey, I want you to take this stream, add it to set number one; the minimum throughput you need to play this particular track is 5,000 kilobits per second, and I'm willing to let you wait 50 milliseconds before you decide not to send this object to me." I'll have a separate slide on why we might want a time limit involved here. So the client would add this parameter and subscribe to each of these tracks. When the server receives those tracks, it puts them into a forward-equals-zero mode, so it's not actually sending anything to the client. Then the client needs to say, "Okay, I'm ready for you to start switching now," so it appends to its last subscription a second parameter called DTS activation, which, as its name suggests, just says, "For switching set number one, please start switching." It controls the precise time at which the relay or the server starts implementing the switching logic, and it can both turn switching on and turn it off for a set of subscriptions.
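The two parameters, as described, might be modeled like this. Field names are hypothetical; the draft defines the actual wire encoding.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SwitchingSetAssignment:
    """Accompanies a subscription request (illustrative shape only)."""
    switching_set_id: int          # which switching set this subscription joins
    min_throughput_kbps: int       # lowest estimate at which this track is viable
    wait_ms: Optional[int] = None  # how long the relay may wait before failing down

@dataclass
class DtsActivation:
    """Appended to the last subscription to start (or stop) relay-side switching."""
    switching_set_id: int
    active: bool                   # True = start switching the set, False = stop
```

The subscription carrying `DtsActivation(set, True)` marks the moment the relay begins toggling forwarding for the whole set.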

Will Law: Here's an example. It's a lot of text, but imagine a client wants to subscribe to three video tracks: a high, a medium, and a low one. It would subscribe to high and add on the switching set assignment parameter, saying, "I'm putting this in switching set number one; switch to it if you've got more than 4,000 kilobits, and I'm willing to wait 100 milliseconds." The relay receives that. The client then subscribes to the medium one and to the low one, and when it subscribes to the low one, it also appends the activation parameter, which is step number five, and says, "Turn it on." The relay now starts toggling the forwarding status of these active subscriptions, always attempting to forward the track with the highest bit rate threshold that is smaller than or equal to its estimate of the connection throughput at the time it makes the decision. Decisions are only made when the first object in a new group arrives, so this is group-boundary decisioning. We're not switching on every object; we're switching on every group.
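The selection rule Will describes ("the highest bit rate threshold which is smaller than or equal to the throughput estimate") can be sketched as:

```python
def select_track(tracks: dict, estimate_kbps: float) -> str:
    """Pick the track with the highest threshold that fits under the estimate.

    `tracks` maps track name -> minimum throughput in kbps (the per-track
    threshold from the switching set assignment). If the estimate is below
    every threshold, fall back to the lowest-threshold track. Evaluated only
    at group boundaries, per the draft's group-boundary decisioning.
    """
    viable = [(kbps, name) for name, kbps in tracks.items() if kbps <= estimate_kbps]
    if viable:
        return max(viable)[1]          # highest threshold that still fits
    return min((kbps, name) for name, kbps in tracks.items())[1]
```

With thresholds high=4000, med=2000, low=500 and a 3,000 kbps estimate, the relay would forward the medium track.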

Will Law: A number of people ask, "Why do we need the waiting attribute?" It's there to enable the algorithm to fail down if one of your tracks doesn't arrive in time. As soon as we have a notion of "doesn't arrive in time," we need to define a timeout, and that's what this waiting is. I really wish I could animate this, but I can't because it's a PDF. We've got purple, blue, and orange tracks, and we're making our switching decision for the last group to arrive, which is group number 24. The timer starts when the first object of anything in the switching set arrives. In this case, that's the blue track, so the clock starts at blue. Now, if blue is my desired target track, the highest one under the bit rate estimate, I would just forward that object and stop all other decisioning until the next group arrives. That's very simple. But let's say purple is in fact the track that I would like to switch to. We start the clock at the blue one, and we notice that purple has not arrived within its pre-declared tolerance, which was 100 milliseconds. In this case, the relay says, "I can't deliver this. I'm going to fail down and pick the next lowest track," which in this case would be the blue one, and it sends that. So the whole point of the waiting timeout is to enable this fail-down action.
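The fail-down behavior in the purple/blue/orange example might look like this; function and field names are invented for illustration.

```python
def choose_with_timeout(arrivals_ms: dict, desired_order: list,
                        target: str, wait_ms: int) -> str:
    """Group-boundary decision with fail-down (illustrative).

    arrivals_ms: track -> arrival time of the group's first object (None = not yet).
    desired_order: tracks ordered from highest to lowest bit rate.
    target: the track the relay would like to forward.
    The clock starts at the earliest arrival in the set; if the target has not
    arrived within wait_ms of that, fail down to the best track that has.
    """
    start = min(t for t in arrivals_ms.values() if t is not None)
    deadline = start + wait_ms
    arrived = arrivals_ms.get(target)
    if arrived is not None and arrived <= deadline:
        return target
    # Fail down: first lower-bit-rate track whose object arrived in time.
    for name in desired_order[desired_order.index(target) + 1:]:
        a = arrivals_ms.get(name)
        if a is not None and a <= deadline:
            return name
    return desired_order[-1]  # last resort: the lowest track
```

In the slide's scenario, blue arrives first and starts the clock, purple never shows up within 100 ms, and the relay fails down to blue.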

Will Law: So one question for you. We could drop this time limit completely. It would simplify the implementation on the server: no tracking of time. What we would lose is this: you would only want to deliver the purple track because that's the right bit rate, and even if it never arrives, we'd just sit in this mode wanting to deliver it. Actually, this is how many server-side schemes work today, because it's an edge case. It's pretty infrequent that the track won't arrive at all, and if it happened, the client has other mechanisms by which it could detect the failure, cancel this ABR, and set up a new ABR set. If we want to do that, we can simplify the switching set assignment parameter to simply indicate the switching set ID and the throughput threshold, with no need for time. So this is a question for the group. I'm not going to pause here, but we'd appreciate feedback on this.

Will Law: Second question: should we add a new fractional attribute? The argument here is that the server's estimate of the throughput, especially coming from a congestion controller, is for the entire connection. So imagine I'm a client in a conference, subscribing to five people, each of which is producing three bit rates. I want five different ABR schemes happening at the same time, and I want each one of them to consume one-fifth of the total connectivity available. In that case, we can simply add a new number here, the throughput fraction: if you specify N, then you want the set to be evaluated as if it were assigned 1 over N of the total available bandwidth. I think this is a nice improvement. The default, if you don't specify it, is one: you take up the entire channel if that's the only thing you're switching on. But with a relatively simple change it enables a good feature, which is multiple concurrent ABRs happening within your connection.
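The proposed throughput fraction is just a divisor on the connection-wide estimate; a minimal sketch:

```python
def set_budget_kbps(total_estimate_kbps: float, fraction_n: int = 1) -> float:
    """Bandwidth a single switching set is evaluated against.

    fraction_n is the proposed "throughput fraction": N concurrent ABR sets
    each get 1/N of the connection estimate. Default 1 means the set may use
    the whole channel. Name and shape are illustrative, not the draft's.
    """
    if fraction_n < 1:
        raise ValueError("throughput fraction must be a positive integer")
    return total_estimate_kbps / fraction_n
```

So with a 10 Mbps connection estimate and five concurrent sets declaring N=5, each set's track thresholds are compared against a 2,000 kbps budget.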

Martin Duke: I'm going to say there's some sentiment on the Zulip to kill the timer, the timeout part.

Will Law: Okay. That's cool. That's the feedback I'm looking for. Thank you. Can we get rid of DTS activation while we're killing things? The argument here is that all I need to do is tell the relay when it's safe to start switching. Because if I'm in the middle of defining a switching set and objects arrive, it's going to start sending them, and those might not be the ones that I want. So we can get rid of the DTS activation parameter by simply adding an activate-switching flag to our assignment. It's normally zero, which means, "Hey, I'm going to send you more things for this switching set," but the very last time you call it, you set it to one, and that's the signal: "Okay, relay, you can start actively switching." I think this is a much cleaner implementation. I like getting rid of parameters, and I would recommend we get rid of the DTS activation parameter and replace it with a little flag in our switching set assignment parameter.

Will Law: And can we extend this to a publisher-controlled scheme? It's actually very simple to do. In the scheme we've described today, the client is reading a catalog, or it's got out-of-band information, and it's setting up these thresholds itself. But the publisher could do this if I wanted a very simple client. There we can define a track property (I think that's the new name now, track property), and in it the publisher would supply the information shown here: what set ID this track belongs to, what the throughput threshold is, what the fraction is, and one other number, which is the number of tracks in the set. That's because the relay again needs to know when it's safe to start switching, and it's safe once you've got all the tracks in the set; then you know you're ready to start. So this gives us a publisher-controlled scheme that works in a very simple way, a mechanism highly congruent with the client-controlled scheme, with the same behavior on the relay. I really think we should do this as well.
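The relay-side readiness check for the publisher-controlled scheme ("safe to switch once all declared tracks are present") could be sketched as follows; class and method names are invented for illustration.

```python
class SwitchingSetTracker:
    """Relay-side bookkeeping for a publisher-declared switching set.

    The publisher's track property (hypothetical shape) carries, among other
    fields, the number of tracks in the set; the relay may start switching
    once subscriptions exist for all of them.
    """

    def __init__(self, declared_track_count: int):
        self.declared = declared_track_count
        self.seen = set()

    def add_track(self, name: str) -> bool:
        """Register a subscribed track; returns True once the set is complete."""
        self.seen.add(name)
        return len(self.seen) >= self.declared
```

This replaces the client's explicit activation signal: completeness of the set itself tells the relay it can begin.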

Martin Duke: Colin's in the queue. Colin, go ahead.

Colin Perkins: Okay, quick question on this one. I'm more concerned that at startup you're not going to have accurate estimates. Do you have thoughts about how we get those initial estimates, and what happens before the estimates of the available bandwidth are good? Because that seems like it's going to influence the initial experience much more than when you turn these things on or off, or how long it takes to set up these tracks. All these control messages are going to happen almost instantaneously, but I don't think you'll know what to do at that point. So what are the thoughts on the timing of when you start switching, and how you deal with it before you have enough information?

Will Law: That's a very good point, Colin. Startup is always the problem. We have a scarcity of information: we will have some estimate of the throughput from the congestion controller, but it's probably very application-limited. We've done nothing but exchange some setup messages at that point, so we're going to have an estimate that's at the low end of what our connection can truly support. One argument is: okay, go with the low end. You will initially start at your lowest bit rate, which is the safe way to start. It's perhaps not the best quality for the user, but it's safe, and as soon as you start sending data, you're going to have a higher estimate in your congestion controller and you should switch up pretty rapidly, probably at the next group. That's one approach. The second approach, and there's a comment on this in the document, is that the client can provide a default start selection. It could say, "At the beginning I would like you to start at this one-megabit stream: not the lowest one, not the highest one, but somewhere safe in the middle." This is analogous to HLS, which has a similar scheme, and DASH can also do something like that. It's a way for the client to say, "If you don't have confidence in your throughput estimate, then take this default track." We can also go down that route, and it's not exclusive with the first solution. A third option: there's a separate thread coming in MoQ about padding tracks, and the client could ask for a padding track to be sent, to give the rate controller a better estimate of the throughput before it begins its subscription. That would delay startup, but maybe the application can work it in. The client has some control over this, so it might want to do that.
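The first two startup strategies Will outlines (start at the low end, or honor a client-declared default when the estimate isn't trusted) might combine like this; the function and its parameters are illustrative only.

```python
def startup_track(tracks: dict, estimate_kbps: float,
                  estimate_confident: bool, default_track: str = None) -> str:
    """Pick the initial track before the congestion controller has good data.

    tracks: name -> minimum throughput threshold in kbps.
    If the estimate isn't trusted and the client declared a default, use the
    default; otherwise fall back to the safest (lowest-threshold) track.
    Once the estimate is trusted, normal threshold-based selection applies.
    """
    if not estimate_confident:
        if default_track in tracks:
            return default_track          # client-declared safe middle ground
        return min(tracks, key=tracks.get)  # safest: lowest bit rate
    viable = [n for n, k in tracks.items() if k <= estimate_kbps]
    return max(viable, key=tracks.get) if viable else min(tracks, key=tracks.get)
```

The padding-track option is orthogonal: it simply moves `estimate_confident` to True sooner, at the cost of delayed startup.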

Martin Duke: Well, we have three minutes left and three people in the queue, so can you just wrap it up pretty quick?

Will Law: Okay. Sorry, long answer. Next question.

Martin Duke: Or do you want to take the queue or you want to finish your slides?

Will Law: I'll finish my slides. I just have this one left: "Do we want to switch on something other than throughput?" It's possible you might want a more complex algorithm that says, "I want to take path and packet loss and history and other things into account," and maybe relays compete on how good they are at switching. What we can do is create an IANA table where you say, "I have an algorithm for switching and I'm calling it algorithm number one," and you register it. Then in my switching set assignment, I say, "I want you to switch this, and I want you to use algorithm number one," and it could do some magic combination of observable properties and switch for you. So that's a suggestion; maybe it's something we add down the pipe. I'll stop there for slides and take questions.

Martin Duke: I'm going to close the queue soon, so enter it now if you want in.

Tommy Pauly: Hello, this is Tommy from Disney, and I have two questions on your page six... yes, this example. From my understanding, your proposal is to do relay-side track selection, but I would like to know why the subscriber needs to figure out the throughput number.

Will Law: Because the relay doesn't know it. The objects flowing through the relay are opaque binary objects; there's nothing the relay knows about them. It doesn't know if this is 300-kilobit-per-second audio or 10-megabit-per-second video. So the client needs to tell it this information. In the publisher case, we create a property where the publisher provides this information, and then we don't need the client to specify it. But for the normal case, the relay has no idea what the objects are or what size they are, and therefore the client needs to tell it and set the level it wants to switch at.

Tommy Pauly: And in my experience, the relay is responsible for the bandwidth observation and estimation, and the specific observation and estimation methods really differ. So this kind of throughput threshold may really limit the capability of the relay.

Will Law: Okay. So your point is the relay's not going to be able to make a good estimate of the throughput.

Tommy Pauly: Yes. Or it will hurt the relay's bandwidth estimation performance. I'm not suggesting anything, just stating my curiosity.

Martin Duke: Okay, we don't have enough time. We're going to have to take this offline. Gerte.

Gerte: Oh, okay. I'm very happy to be here for the next half hour, so I'm happy to continue the conversation.

Kutish Remable: Hi, Kutish Remable. Could you go to slide number 11 for a second, please? 11, yes. So that's the publisher side. Here you suggest setting a throughput fraction, right? And that's something the publisher is going to say, but it does not know how many remote participants are in a conferencing call that I am viewing as a subscriber. So how is it going to set that?

Will Law: It's true, it doesn't. However, this is usually done in an application context: I'm a web conferencing publisher and I'm publishing for web conferencing clients, so I know there are going to be several of them. But it's a very good point. If my application knows that there will be five publishers, then it can add that throughput fraction. But it could well be that it's not generically available enough, and we might want to remove it from this publisher-controlled scheme.

Kutish Remable: Right. Concretely, in a conferencing call, it also depends on how many views you're actually showing on your screen. If you are on a Mac you might be seeing all 32 participants at a time, whereas if you're on a small-screen device you're only seeing five of them at a time. And that information is usually only available on the subscriber side.

Will Law: And this is a weakness of any publisher-defined scheme. It cannot react to differences in the client. So it's only really useful where you have a very thin client that is highly deterministic that never changes. If you've just got like one app on a mobile phone that does one thing, then publisher-defined is appropriate. But in almost all other situations, client-defined but server-implemented gives you a lot more flexibility and control.

Martin Duke: Okay. Right. Got to go offline. Alan.

Alan Frindell: Okay, I think this is a good general direction, and there's been a lot of good technical chat. I say keep rolling with it. My question is: what's the disposition here? Are we going to adopt this draft? Do we think it's an extension that will go parallel to MoQT? Is that our current plan?

Will Law: I think so. It's so hard to hear here. I got multiple echoes so.

Martin Duke: Are we going to do this as an extension? Is this going to be an extension, or do you want to get it into MoQT?

Will Law: I want to get this into MoQT. I think MoQT should have a simple ABR scheme from day one. Media over QUIC is our primary use case and we are a push-based system, and having some ability for the server to switch on the push is a core behavior that we should look at getting in from day one.

Martin Duke: Okay. I think maybe we need to adopt the draft and work through the issues, and then when it's ready we can merge it in. Because I think there are already a lot of issues with the draft, which I think are resolvable, but it's kind of on a parallel track right now.

Martin Duke: Yeah, just briefly as chair: I think we'll continue to develop the draft, and when we feel it's ready we can make a decision whether it should be an extension or go into MoQT.

Martin Duke: Okay, Suhas, you get the last minute.

Suhas: Thank you. Plus one, this should be core MoQT, and how we get there is a different question. Will, this is a really good start, but going back to Gurtesh's point, it kind of misses out on the case where you have multiple switching sets and how you make the decision based on that. That's something you can easily add to your draft, and I'm happy to help you with that. Thanks for the work.

Martin Duke: All right, thank you everyone for your comments. Once again, if you're a proponent, I strongly encourage you to look at the Zulip records, because they were very productive. And we will see you all on Thursday. Thanks for coming. Thank you. Thanks, Will. Cheers. Bye, everyone.


Session Date/Time: 19 Mar 2026 01:00

This is the verbatim transcript of the Media over QUIC (MoQ) working group session.

Working Group Documents Context:

Session Slides:

  1. Chair Slides Session 1: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-chair-slides-session-1-01
  2. MOQT Update: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-moqt-update-02
  3. moqt://: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-moqt-01
  4. Secure Objects: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-secure-objects-00
  5. Privacy-Pass-MOQT: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-privacy-pass-moqt-00
  6. Application-Agnostic DPoP Proof for C4M: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-application-agnostic-dpop-proof-for-c4m-01
  7. Dynamic Track Switching for relays - v2: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-dynamic-track-switching-for-relays-v2-00
  8. MOQT-over-QMUX: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-moqt-over-qmux-01
  9. MSF & CMSF update - Shenzhen - v1: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-msf-cmsf-update-shenzhen-v1-02
  10. MOQ production at Alibaba: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-moq-production-at-alibaba-00
  11. LOC: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-loc-01
  12. Filters: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-filters-01
  13. MOQT PRs and Issues: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-moqt-prs-and-issues-00
  14. Chair Slides Session 2: https://datatracker.ietf.org/meeting/125/materials/slides-125-moq-chair-slides-session-2-00

Martin Duke: We're going to start. Actually, it's 9:00 a.m. now. Well, Ted, I think you're asking which is Shiyang Yang and which is Lerong Rong? I don't know. Maybe somebody in the room knows. For those of you who are unfamiliar, MoQ has a tradition during its meetings of having a local sports mascot. I had a lot of trouble, on Google Images at least, finding pictures of Shenzhen sports mascots, but this is from the National Games that were held here a few years ago.

It's 9:00 a.m. Welcome to Media over QUIC. My name is Martin Duke. I'm one of the chairs. Magnus had an urgent matter in Stockholm he had to attend to, so I believe he's on an airplane right now, so I will be flying solo today. This meeting is being recorded, as always. This is the IETF Note Well. You've probably seen it this week already several times, but it covers the intellectual property implications of you being here, as well as outlining the code of conduct expectations for participants. Please take a look at it if you have not done so already.

This is today's agenda. We loaded all the security issues into Tuesday's session because CFRG is right now and we didn't think we'd get much security participation today. Here's what we're going to go through. Mostly, we're going to cover some of the other drafts that are out there in the working group, and then we're going to give about 30 minutes to the editors to work on MoQT issues. Would anyone like to bash this agenda? Okay, seeing no one, I'm going to hand it over to Mike English, who will talk about the Hackathon.

Mike English: Apologies, one moment. My lights automatically turned off because I'm 12 hours off. Okay, I'm back. Yeah, so Mike English, Cloudflare. Hackathon report this time will be brief. There was some Hackathon activity. Things are mainly focused right now on kind of getting things into the automated interop test runner. And so we're still kind of working through some issues with making sure that all the Docker images can build and, you know, the right versions are assigned to the right implementations and things like that. Let me very briefly see if I can share my screen. I don't know where I'm able to select what to share here.

Martin Duke: Oh, that's the problem. Okay. I just now understood the interface. Magnus always did that. Go ahead, Mike.

Mike English: Gotcha. Okay. One moment. Let me share this Chrome tab. Okay, so this is a current view of the interop test runner. So Englishm.github.io/moq-interop-runner will bring you to the top-level page where you can see a variety of test runs. And then within those, you can see different relay implementations matched up with different client implementations and the results of those pairings. It's designed to be aware of what versions each implementation supports. And you can see these little superscripts indicating the expected version that would have been negotiated. So you can see we have kind of mixed results here right now. Some of this is not actual failures of the MoQT protocol, so much as, you know, test runner issues that we're still kind of working out the kinks for. Each of these will bring you, you know, if you go to one of these results, you can see detailed logs.

Martin Duke: Can you zoom in a little, Mike? That's completely illegible.

Mike English: Uh oh. How far do you want me to zoom in?

Martin Duke: That's a little better. Not great, but better.

Mike English: Yeah, so this is what the test results look like. You can see we have some defined test cases. The output is TAP formatted, so there is kind of a standard output that a test client supports. And each of these test cases is defined as a specification that lives in the repository. So right now we just have a handful of specifications for setup exchange, making an announcement, expecting to receive a subscribe error if a track does not exist, etc. And we intend to grow this list. So as people come up with additional tests that they would like to perform, we can add more specifications and test clients can implement to those specifications. That way we have a diversity of implementations implementing the same behavior, so that we can kind of have the spec be the guiding principle and implementations of the specification yielding results in interop. Any questions?
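Since the runner's output is TAP-formatted, a minimal consumer of the per-test result lines might look like this. This handles only a simplified subset of TAP, for illustration.

```python
import re

def parse_tap(output: str) -> list:
    """Extract (passed, description) pairs from TAP-style output.

    Recognizes only the basic "ok N - description" / "not ok N - description"
    lines; plans ("1..N"), directives, and YAML blocks are ignored.
    """
    results = []
    for line in output.splitlines():
        m = re.match(r"(not ok|ok)\s+\d+\s*-?\s*(.*)", line)
        if m:
            results.append((m.group(1) == "ok", m.group(2).strip()))
    return results
```

A TAP-shaped contract like this is what lets any test client, in any language, plug into the same interop matrix.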

Martin Duke: I have one. I know that we talked about maybe sitting on draft 16 for a while. I've heard some feedback that maybe we wanted to get all the very significant changes in 17 and 18 implemented sooner rather than later. I don't know how your thinking's evolved on that or if anyone would like to approach the mic to state an opinion about how long we sit on 16, either as an implementer or not.

Mike English: Yeah, I would love to hear feedback on that.

Alan Frindell: Can you hear me? Okay, maybe you can hear me now.

Martin Duke: Yes, hopefully.

Alan Frindell: Ah, okay, now you can see me. There we go. Here I am. Yeah, my thought is we have—so 17 is out. Some people have implemented it. There's a lot there, especially if you're going to try to support both versions in one implementation, because it's quite different. So, I mean, I know I've started poking around at it. In terms of when we roll to like actually ask everyone to move the interop target, my thinking is we'll do that in 18, which I'm targeting in about four weeks before the June interim, so like about two months from now. And I kind of hope that like whatever remaining wire image shuffling that we're planning to do will finish. And so after 18, like, you know, people—if you look at the drafts that are coming in that people want to merge into the core, a lot of them are adding new parameters and adding new relay behaviors but are not changing like sort of the core framing and stuff. So, anyway, that's my thinking. So move to 18 starting eight weeks from now and skip 17 unless people want to.

Martin Duke: Okay. Mike, can you start a conversation on the list about this? But what I'm hearing is maybe we target 18 for London, the London interop. But again, if you can take the action item to get a discussion on the list going.

Mike English: Yep, I will do that. I see in the chat also Victor expressing a preference for 17/18 sooner since supporting 16 and 17 is a lot of pain. We happen to be—so as you can see, we have two implementations here, moq-rs and moq-rs-draft-16. So we actually make a hard break each time. But I know that some implementations support multiple versions at once, and the changes between some of these drafts are significant to kind of carry both at once. I see plus one for 18 in London. I'll take it to the list and we'll get more input.

Martin Duke: Thank you, Mike. Alrighty, moving on. Next up is Will Law, who's going to talk about MSF and CMSF. Yes. Is mic test okay?

Will Law: So, thank you. Good morning. Will Law, Akamai. I'm going to give a very brief and fast update on our streaming formats, MSF and CMSF. If you're one of the people wondering, "How am I going to actually stream video over MoQ transport?" you're not alone. We held a brainstorming meeting in Boulder in the United States of people who are interested in MSF and CMSF, to raise a lot of issues which we bring back to the IETF. It was very successful. There were a lot of people there, some in the room today, but I'd encourage you to join this community of people interested in MSF and CMSF, because the more input we get, the better it is. As a consequence of that meeting, we've had a flurry of PRs, and I want to particularly call out Suhas, who's contributed a lot to MSF in the last two months. Thank you, Suhas.

So let's go through some of the major changes. The first one is that our rename is complete. You may have heard the streaming format referred to as Warp; that has now been changed to MSF, which formally stands for MOQT Streaming Format. The name on the can says what's inside, but we mostly just call it MSF. And what used to be called CARP, the CMAF-compliant MOQT Streaming Format, is now called CMSF. I'll only refer to them by their acronyms going forward. And remember that CMSF is basically everything that's inside MSF with the addition of CMAF support.

One of the things we merged was a take on the URL format. We're still debating this, but at the Boulder interim we decided to go with a fragment-based scheme, which I'm describing here. This is being done in parallel with the moqt:// work Alan presented yesterday, so this is probably going to be our schema. These are not my latest slides, unfortunately, but it doesn't matter. The portion of the URL in blue is transferred over to the server. The fragment is intended for the client, and it communicates the namespace and the name that it's going to connect to. In our case, we mostly want to connect to a catalog, so we can describe the catalog there. Within MSF, we also define a key-value schema for the fragment so that we can pass additional key-value pairs, fragment parameters, to the client. Some of these are predefined; the table in the bottom right shows some of the parameters we've defined so that you can identify a sub-clip in your content. Just yesterday or the day before, Alan proposed that the fragment carry a prefix that identifies the streaming format, so the fragment would be #msf: . I quite like this idea, I support it, and hopefully we can move forward with it, because then we have a pretty universal descriptor: you could look at any URL and know what streaming format it represents.
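Assuming the "#msf:" prefix Alan proposed and an ampersand-separated key-value syntax for fragment parameters (the exact delimiters are still under discussion, so this shape is an assumption), parsing the client-side fragment might look like:

```python
from urllib.parse import urlsplit

def parse_msf_fragment(url: str):
    """Split an MSF-style URL fragment into the track path and its parameters.

    Assumed (not normative) layout:
        moqt://host/app#msf:namespace/name&key1=v1&key2=v2
    The part before '#' is for the server; the fragment is for the client.
    Returns None if the fragment doesn't carry the msf: prefix.
    """
    frag = urlsplit(url).fragment
    if not frag.startswith("msf:"):
        return None
    body = frag[len("msf:"):]
    path, _, params = body.partition("&")
    kv = dict(p.split("=", 1) for p in params.split("&")) if params else {}
    return {"track": path, "params": kv}
```

The streaming-format prefix is what makes the descriptor universal: a client can tell from the fragment alone which format a URL represents.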

We also added support for variable substitution. This is pretty important. We want our catalogs to be cacheable in a relay. If we have a million people watching a stream, we want to give them all the same catalog, but at the same time, we want to give them custom advertising. So how would we do that? One of the ways is to put key-value parameters in our fragment. The URL vended by the content management system will carry unique IDs for every user, and these can then be easily substituted into the catalog. Very simply, we use the percent character as the designator of our variable substitution, and we replace the variables just prior to parsing the catalog. This, I think, gives us the flexibility to support customization while still allowing us to cache the catalogs in our edge relays.
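A minimal sketch of that substitution step, assuming a `%NAME%` placeholder style (the talk only says the percent character is the designator, so the exact delimiter form is an assumption):

```python
import re

def substitute_catalog_vars(catalog_text: str, params: dict) -> str:
    """Replace %NAME% placeholders in a cached catalog with per-user
    values taken from the URL fragment, just before parsing.
    Unknown placeholders are left untouched."""
    def repl(match):
        return params.get(match.group(1), match.group(0))
    return re.sub(r"%([A-Za-z0-9_]+)%", repl, catalog_text)

# The relay caches one catalog; each user's fragment carries their ID.
cached = '{"adTrack": "ads/%USERID%/video"}'
custom = substitute_catalog_vars(cached, {"USERID": "u123"})
```

The cached copy stays identical for every subscriber, so the edge relay serves one object while each client still ends up with a personalized catalog.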

We're also supporting an encryption scheme: MoQ Secure Objects. I reference the draft there. And this is not DRM, right? This is end-to-end encryption of MoQ objects. If you don't want your distribution provider to be able to view or read your content, you want to encrypt it. Very simply, in the catalog on the right-hand side in yellow, we see some new keys that have been added to each of the track descriptors in order to signal Secure Objects and the information the client needs to decrypt them.

We've also defined a template for the media timeline track. As a reminder, the media timeline is a description of the history in a live stream. If you only want to play the live edge of a live stream, you don't care about the media timeline track and you never subscribe to it. But if you're a client at, say, a sports game and you want to scrub back into the past, you need to understand which groups and IDs, or which media time, to go to. So we defined a very simple JSON structure which the original publisher produces as it's encoding the audio and video. But it can get quite verbose; an example is shown on the right. For the use case of fixed-duration groups, which are quite common in many distributions, we can apply a very simple compression scheme and collapse that timeline JSON list into a single template, shown on the left. The template on the left is equivalent to the data on the right, and it takes advantage of the fact that we have fixed-duration GOPs and fixed-duration groups. I think this will be a quite common use case, and now we have an efficient way of describing it.
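The compression idea can be sketched as a template expansion: a compact record with a group count and a fixed duration stands in for the verbose per-group list. The field names below are illustrative, not the draft's schema.

```python
def expand_timeline(template: dict) -> list:
    """Expand a fixed-duration timeline template into the equivalent
    verbose per-group list. Field names are illustrative assumptions."""
    out = []
    start = template["start_time"]
    first = template["first_group"]
    for group in range(first, first + template["group_count"]):
        out.append({
            "group": group,
            "start": start,
            "duration": template["group_duration"],
        })
        start += template["group_duration"]
    return out

# Three 2-second groups collapse into one four-field template.
timeline = expand_timeline({"first_group": 0, "group_count": 3,
                            "start_time": 0, "group_duration": 2000})
```

The template stays the same size no matter how long the stream runs, while the expanded list grows by one entry per group.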

Now I'm getting into some PRs that are still open for discussion; they have not been merged. I don't think we have enough time in this talk to actually debate these topics, but I just want to highlight them so you can go to the issue and add your opinion. The first one is what's called zapping. The bigger picture here is that we want to be able to start a stream very quickly and also switch in and out of a track very quickly, in order to support stable playback at very low latency. One way to do that is to produce two variants of every track, one with more dense keyframes than the other. So A has a keyframe every three objects, and B every six objects. It turns out that with a single number, essentially the number of objects between keyframes, which we call the keyframe spacing parameter, and a simple formula described here, the client can figure out how to jump between the two tracks. This allows it to start quickly and also switch out of trouble if it needs to. The downside of this scheme is that you've got two tracks for every track you would have without it, but I think for streams that are being viewed by lots of people, having two egress streams is a de minimis cost for much faster join and switch times. So please look at this PR, and if you like it, you can comment.
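The kind of formula being described might look like the following sketch: given a track's keyframe spacing, compute the next object ID in that track that carries a keyframe, which is the earliest point a client can switch in. This assumes keyframes sit at object IDs 0, spacing, 2×spacing, and so on; it is an illustrative model, not the PR's exact formula.

```python
import math

def next_switch_point(current_object: int, target_spacing: int) -> int:
    """Earliest object ID at or after `current_object` that carries a
    keyframe in a track whose keyframes are `target_spacing` objects
    apart (assumed to start at object 0)."""
    return math.ceil(current_object / target_spacing) * target_spacing

# Track A has a keyframe every 3 objects, track B every 6.
# A client at object 4 can enter A at object 6, and B also at object 6.
enter_a = next_switch_point(4, 3)
enter_b = next_switch_point(4, 6)
```

Because the dense track offers switch points twice as often, a client joins on the dense track for a fast start and then hops to the sparse one at the next shared keyframe.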

Another one is PR 121, publishing tracks for logs and metrics. The beautiful thing with MoQ is that it's a bidirectional protocol, right? We always think of the players as just subscribing to audio and video content. But the player can also publish logs, and it can publish QoE data, which today we typically beacon out to a third party. Now, in our catalog, we can tell every client, "I want you to publish tracks yourself." We can tell it the namespace and the name it should publish to, we can tell it the format it should produce, such as a MoQ metrics track, and we can even give it a token it can use as permission to publish this into the network. There are examples shown on the right of publishing both MoQ logs and MoQ metrics. I think this would be a nice differentiator for MoQ media compared to segmented adaptive media, where we have to beacon out through a different return path.

Another PR for discussion is 141, adding support for init tracks. We had tried to simplify the communication of the initialization data that the client needs to start decoding the content by encoding it as base64 and sticking it directly into the catalog. And this works, as long as the init data is static for the life of the stream. But use cases were introduced saying, "Hey, in the middle of, say, a long-running stream, we want to change the size. We need to supply different init data." For that purpose, we can use a track. It's very simple: the client subscribes to the init track and gets the init data at the beginning, and then if you want to change or update it at an arbitrary time, you get the new data in that track. There's a mechanism shown on the right: we can declare the init track as one of the tracks and then reference it from another track that will use it for initialization. I see Mo with a clarifying question.

Mo Zanaty: Not clarifying. LOC has a mechanism to do the parameter set updates as well, without catalogs, as a property, either a track property or an object property.

Will Law: Okay, very cool. Thank you. Yeah, we should actually look at whether we want init data supplied as a track or even as an object property, because that would allow us to inject it at certain points. That's a good idea. So maybe you can comment on 141. I'm not saying we shouldn't have an init track; it's just that we might want init tracks, init in a catalog, and properties, and they all work.

Okay, another PR, 133: we want to add standards that are highly common in media, which are SCTE-35 markers for ad insertion availability, embedded captions, 608/708, and external captions, IMSC1 and WebVTT. The first thing we've done here is a proposal for transmission of SCTE-35 data. We have a track definition on the left-hand side in yellow that says, "Hey, I'm a SCTE-35 track." The nice thing is that with MoQ we can decouple the transmission of this data from the audio and video data, and we use the depends attribute to link the SCTE-35 track to the tracks it's referencing. On the right-hand side is a proposed payload: we have an event timeline structure, and this is SCTE-35 carried in that event timeline structure. I think it's a very neat and easy-to-understand scheme, and hopefully we can bring it to fruition. This is part of 133.

The next thing we're looking at is out-of-band captions. How do we transmit WebVTT and IMSC1? The question is, do we still want both, or can we just standardize on IMSC1? On the right again, we have a proposal for how we would map WebVTT data to our event timeline payload construct.

Martin Duke: So, Will, we're under five minutes, I want to leave time for discussion.

Will Law: Yeah, yeah, I'm almost there. The last of this data is embedded captions, 608/708. You see in the track description on the right-hand side, we have an accessibility field. Within that, we have a schema that says this is 608, and then we've got some value attributes for that schema. This is extensible to other types of accessibility as well, because we do need to be compliant with accessibility requirements for broadcast media.

We also have another PR, 118, again thanks to Suhas. This is for how we communicate the requirement for auth info. We have two schemes that we support, Privacy Pass and Common Access Token. How can we tell the player, "Hey, you need to supply one of these tokens for this track"? It's a very simple scheme. We have a key called auth-info, and inside that we have enumerated fields, which can be either CAT or Privacy Pass. And then I'm using the variable substitution to actually define the token the player should use when it subscribes to this track. I think this is a pretty simple scheme, but please comment on 118. And that was the end. I'm actually three minutes early. Are there any questions? I can't see the online queue, but...
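Combining the auth-info key with the variable substitution from earlier might look like the following sketch. The key and scheme names (`scheme`, `token`, `"cat"`) are illustrative assumptions about the shape of the catalog entry, not PR 118's exact schema.

```python
def resolve_auth_token(track_auth: dict, fragment_params: dict) -> dict:
    """Resolve a track's auth-info entry, filling %NAME%-style
    placeholders in the token field from the URL fragment parameters.
    Field names are illustrative, not the PR's normative schema."""
    token = track_auth["token"]
    for name, value in fragment_params.items():
        token = token.replace(f"%{name}%", value)
    return {"scheme": track_auth["scheme"], "token": token}

# The cached catalog holds a placeholder; each user's fragment supplies
# their own Common Access Token.
auth = resolve_auth_token({"scheme": "cat", "token": "%CAT_TOKEN%"},
                          {"CAT_TOKEN": "abc.def"})
```

This keeps the catalog itself cacheable while every subscriber presents an individual token when subscribing to the protected track.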

Martin Duke: Benjamin. You're muted, Benjamin. Although it doesn't look like Meetecho is muting you, so I think it's a different audio issue. Qwen-hao, why don't you go while Benjamin figures out his audio.

Qwen-hao: Yeah, so my question is about the published track for the logs and metrics. I want to clarify: is the purpose of this track just recording the logging information, or is it for server-based ABR, because...

Will Law: It's not for ABR. The idea here was that if I have a player that's having a problem, for example, I can instruct the player to start sending me its logs because I want to debug its performance. I don't envisage most distribution schemes wanting to get logs all the time from players; maybe we have a scheme where the verbosity is adjusted so the player only sends error logs to a central collector, so that you can monitor a fleet of players and know which ones are having errors. That would be an example of the logging. But it's flexible, and maybe we should define different types of logs and the conditions under which we might want to report data from the player. Okay.

Martin Duke: Okay. Benjamin, you want to try again?

Benjamin: Hi. Okay, so let me go to slide 14. Yeah. Whoops. So on slide 14, you were talking about authorization. I find this a little puzzling. I think this is related to a discussion that came up in the last session, but I don't really understand the idea of authorizing someone to receive a track. The track can be encrypted, and if you want to authorize people to receive it, you can just give them the keys to decrypt it, or not, right? I think that what we're authorizing here is not the content of the track, which can't really be controlled, since it can be copied. So I think what we're authorizing here is actually access to the relay, to the relay network.

Will Law: We're authorizing access to the subscription message. The client can connect to a MoQ server, but then what can it do once it's connected? We want to constrain what it can subscribe to. So certain tracks will have a requirement that you must pass a token that gives you permission to even request to subscribe to them. The content's not encrypted; we're just controlling client access to it.

Benjamin: So I still want to understand what it is we're controlling access to, and why. It sounds like maybe the answer is that we're controlling access to the resources required to redistribute that track, and that we need to know, basically, who to bill for that if there were one relay serving multiple customers.

Martin Duke: All right, we're going to have to take this offline. Uh, okay.

Will Law: Yeah, Benjamin, message me. The use case here is basically CDNs and content distribution, where you have a large edge and you don't want uncontrolled access to that CDN edge. You want to hand tokens to people who have permission to come and request the asset from the distribution network.

Benjamin: Right, so conventionally in a modern CDN that would be done by encrypting the content...

Martin Duke: All right guys, take it offline. Take it offline. Chao, briefly please.

Chao: Uh, I saw the video init track added, and it seems tracks are the first-class citizen in MSF. My question is: are there any principles for us to decide when to add new tracks to MSF and CMSF?

Will Law: I just want to understand the question: when do we add new tracks to the catalog, or to what?

Chao: Uh, yeah, to the catalog. Yeah.

Will Law: It's up to the publisher. Normally, it depends on your use case. If you're a live sports event, you will start producing audio and video for one camera, and you may never change the tracks that you produce until the end. However, maybe midway through the sports event an overhead camera becomes available. You can update the catalog to describe the availability of the new tracks. So the catalog only gets updated when track availability or attributes change, and there's no requirement to update it if nothing's changing.

Chao: Um, I think I meant when we decide to add new kinds of tracks, like the video init track. We added a video init track this time, right? There's a PR for that. How can we decide whether to add a new track or to leverage an existing track for transferring init segments?

Will Law: Yeah, yeah. So you would first of all decide how you want to communicate your init segments. We have three proposed ways to do it: in the catalog as base64, as an init track, or as an object property. You decide which one you want to use. Then the next question is: how often is that init data going to change? If it's static for the life of the stream, which is the case for most media, then you would only define it once, whether that's in a track, in a track property, or in the catalog. But if it's going to change midway through the track, you could still put it as base64 in the catalog and just update the catalog. So all three mechanisms support mid-track initialization changes, and you have the freedom to choose whichever is more convenient for you.

Chao: Oh, I see. So we...

Martin Duke: Okay, no, I'm sorry, we're out of time. We have to take this offline. Thank you. Thanks, Will. Okay. There's been a productive thread in the Zulip about privacy, so you should take a look at that. Minghui, go ahead.

Minghui: Martin, can you hear me?

Martin Duke: Yes, sir.

Minghui: Uh, can you share my slides, or...

Martin Duke: You did not submit slides, did you? Oh, yes you did. Sorry. There you go.

Minghui: Yeah. Thanks. Hi everyone, I'm Jiang Minghui from the X-QUIC dev group. Today I will share our MoQ production experience at Alibaba Group and introduce some of our new ideas. Okay, next slide. We have deployed MoQ in some production scenarios: Taobao Voice Search, a cloud rendering game, Taobao Live Digital Human, and the Alipay AI assistant. We are also integrating MoQ into the A-app and Quinn-app and some new scenarios. Next slide.

Okay, why did we choose MoQ instead of WebSocket or WebRTC? In short, QUIC has better performance: QUIC gives us no head-of-line blocking, zero-RTT, and pluggable congestion control, and MoQ is a flexible media-level abstraction on top. Okay, next slide. In the Taobao Voice Search scenario, MoQ versus WebSocket, connection latency was reduced by 75% on Android and 64% on iOS. Next slide. In another scenario, the cloud rendering digital human, MoQ versus WebRTC, the first-frame P90 latency was cut by over 50%. Next slide. And in the Alipay AI assistant, there was an over 50% reduction in both connection latency and average interaction delay compared to the traditional WebRTC pipeline. Next slide.

Yeah, after tuning X-QUIC with GCC, MoQ and WebRTC show almost the same latency, about 107 versus 104 milliseconds. Congestion control at the transport layer works well, but CC alone is not enough for us: the sender still needs feedback from the application layer to adapt media quality. This is the motivation for our new draft. Next slide. Now our MoQ implementation is open source: X-QUIC on GitHub. We have participated in the MoQ interop runner with draft 14; draft 5 is stably running in real production, and draft 16 is also a work in progress. Next slide.

Okay, this slide shows the gap. On the left, transport-layer feedback only sees the packets; on the right, we want to know more information, like delivery status, arrival timing, and buffer level. This is information from the application layer, and it is what we are currently missing in MoQ. Next slide. Okay, why does feedback from the application layer improve performance? If there is no feedback, the sender keeps sending at the max bit rate; when the network gets worse, this leads to serious stalling. With feedback, the receiver tells the sender what is happening in the network, the sender reduces the bit rate to match the network condition, and the stalling goes away. Okay, next slide.

We have done some experiments to validate this idea. We ran controlled experiments with video streaming; you can see the whole experiment pipeline on this slide. Both groups use MoQ, but the only difference is whether they use feedback or not. Thanks to the Ant-RTC team from Ant Group for contributing this experiment. Okay, next slide.

Martin Duke: Uh we have a question from Mo.

Mo Zanaty: Clarifying question, uh when you say feedback, do you mean end-to-end feedback from the original publisher all the way to the end subscriber or do you just mean hop-by-hop feedback on a one single QUIC connection?

Minghui: Uh the from the source to the end, from source to the end.

Mo Zanaty: All the way. Okay.

Minghui: Okay, the next slide, please. Yeah, thanks. Okay, this slide shows our test data under different bandwidth limits. The feedback group always shows higher FPS and much less stalling. You can see the details of our experiment on this slide. Okay, next slide. Okay, this experiment tests our feedback draft in packet loss scenarios. At 50% packet loss, the max frame interval drops from about three seconds to under one second. The worse the network, the more feedback helps improve performance. Okay, next slide.

In another experiment, we try to improve the performance of AI inference video generation. You can see the whole pipeline of this experiment on this slide. In this case, the upstream source is an AI inference service, like the VLM you can see, plus the MoQ server. In this experiment, the MoQ client sends delivery feedback to the MoQ server, which can signal the inference pipeline to switch between different video generation quality levels: high, medium, or low. The inference can then output different video generation quality. Next slide.

Okay, in this experiment we have two key findings. In good network conditions, feedback adds zero overhead; but in bad conditions, 80 milliseconds of delay plus 5% packet loss, feedback gives us 33% higher FPS and 33% fewer stalls. Next slide. So we propose this draft, draft-moq-multimodal-feedback. The key design is to add a new MoQ feedback track to carry real-time feedback reports. It provides more information about per-object delivery status, receive timestamp deltas, and optional metrics like playback buffer headroom. It serves both the application and the CC algorithms. Next slide.
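A feedback report of the kind just described might be sketched as follows. The field names and the use of JSON are illustrative assumptions; the draft would define its own wire format.

```python
import json

def build_feedback_report(object_statuses, buffer_ms):
    """Sketch of one feedback-track object: per-object delivery status,
    receive-timestamp deltas, and an optional playback-buffer metric.
    Field names are illustrative, not the draft's wire format."""
    return json.dumps({
        "objects": [
            {"group": g, "object": o, "delivered": ok, "delta_ms": d}
            for (g, o, ok, d) in object_statuses
        ],
        "buffer_headroom_ms": buffer_ms,
    })

# Receiver reports two delivered objects, one missing, and its buffer.
report = build_feedback_report(
    [(10, 0, True, 3), (10, 1, True, 5), (10, 2, False, None)],
    buffer_ms=120)
```

The sender can feed the delivery flags and deltas to its congestion controller and use the buffer headroom to pick a bit rate or inference quality level.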

Okay, you can see the whole loop of our feedback draft here. It's straightforward: the receiver reports object status and buffer level via a feedback track, and the sender uses it in two directions. Upward, the application layer adjusts bit rate, resolution, or inference parameters like CFG or chunk size. Downward, the CC algorithm gets extra signals like loss rate and buffer level and makes adjustments. This forms a closed loop. Next slide.

Okay, our new draft can work with other existing drafts like QUIC-Receive-Timestamps. We hope this draft can bridge the information gap between the transport layer and the application layer. Next slide. Okay, this is the summary. To summarize, we shared our MoQ production experience and experiment data from Alibaba Group, and we proposed MoQ-Multimodal-Feedback to add application-layer feedback. There are still some open questions listed on this slide. We look forward to your feedback and discussion. You can also contact us via email and Slack. Okay, thanks. Thanks, Martin.

Martin Duke: Yes, thank you, Minghui, for being on time. Good presentation. I do want to say as chair that, with the way we're doing things these days, if you're interested in this feature and think it should be in MoQ, the best thing you can do is start contributing on their GitHub, or commenting on that draft, and most importantly implementing it. If we get interop on things, it's more likely to make it into the standard. With that, I'm going to close the queue quite soon, so enter it if you want to be in it. And if people can be brief, that's great. Mo.

Mo Zanaty: This is interesting. I think a lot of us feel that there will be a need for app-level feedback. The challenge, though, is that MoQ inherently wants to be large fan-out, large distribution. So in your testing, have you tried many different subscribers giving feedback to a single sender, a single publisher? And are you aggregating the feedback in any kind of way?

Minghui: Uh, no, we have only done preliminary tests on this draft, with a single client and a single server, but we can supply more scenarios for this draft. Yeah, thanks.

Martin Duke: Zahed.

Zahed: Um, so I'm kind of asking the same question, perhaps. The thing is, you have a receiver-side bandwidth estimator, and I suspect you have a sender-side bandwidth estimator as well. I have two questions. One is, on the receiver side, how do you do the bandwidth estimation? And on the sender side, how do you put all the estimations from different clients together, kind of like what Mo was asking? But let's focus on the receiver-side estimator. If you go to the previous slide, you have something called receiver-side bandwidth estimation. One more back, perhaps. Yeah. Estimated bandwidth: the receiver reports estimated bandwidth. How are you calculating that?

Minghui: Uh, okay. We have integrated the QUIC-Receive-Timestamps draft to learn some transport-layer information, so we can know the real-time network condition, and then we have a policy to estimate the real-time bit rate and decide the policy. Yeah. So...

Zahed: Okay, I would like to understand a bit more, but maybe I'll talk to you later.

Martin Duke: Thank you Zahed. Victor.

Minghui: Yeah, yeah, you can talk to me later. Yeah.

Victor Vasiliev: Victor Vasiliev, Google. In our implementation, we have a feature which is similar to this. The key difference is that we don't put it on a track; we do it as a hop-by-hop extension. The second difference is the format: we only send positive acknowledgments, and we only send the time delta between the expected receive time and the actual receive time. For a long while I was meaning to write this up; we've never submitted it as a draft. But I do wonder if we can somehow figure out whether what we implemented is compatible with what's in that draft, and whether we can converge.

Minghui: Uh, yeah, we can connect more after the meeting. Yeah.

Martin Duke: Yeah, Victor, I think writing something down would be helpful. Alan.

Alan Frindell: Yeah, thank you so much for presenting this and bringing this work. I think it's great. I'd love to hear at some point, when you say deployed at scale, for all I know this may be the biggest MoQ deployment in the world, so I would love to hear some of the numbers. But I'm curious if you can roll all the way back toward the beginning of your presentation. If you could just talk a little bit more about the application use case scenarios that you had there. I'm particularly curious about the one you mentioned, the human-to-computer AI one, because that's another area where I think MoQ could be a very good fit. Just explain a little bit more about what the application is that you're using.

Minghui: Uh, okay. We observed some new video generation use cases; they need real-time display, so latency is the key, and they need higher FPS and lower end-to-end latency. When the network condition changes, you need to know about that change quickly, so we have to adjust the inference framework parameters quickly, within one single request or between different requests. So it can...

Alan Frindell: Can I stop you there? I think I get that part. I was sort of asking one level higher. Like, what is the application? I talk to my agent and say, "Generate me some video," and it's dynamically generating video on the fly and streaming it back to me? Is that the use case?

Minghui: Uh yeah, yeah, yeah, yeah. Just just like that.

Alan Frindell: Thank you.

Martin Duke: Okay, thank you, Minghui. Once again, nice presentation, and good job writing it up in a draft. That is the right way to propose changes to the protocol at this point. Okay, Suhas, you're up.

Suhas Nandakumar: Hi everyone. Martin, I'm not seeing my deck yet. Maybe there's a lag. Okay, I can see it now. Hi, I'll be talking about why we want to do MoQ over QMUX. QMUX is a new draft, new work that the QUIC working group has taken up. This presentation is basically about how you would do MoQ, or when you would want to run MoQ over TCP, especially when QUIC or UDP is unavailable for whatever reason. As we know, QMUX is basically a multiplexing layer that provides QUIC-like stream abstractions over TCP and TLS. Next slide, please. Oh, I have slide control. Cool, cool. Thanks.

The idea here is that MoQ is designed to work with QUIC, which runs over UDP, but there are certain scenarios where UDP, and therefore QUIC, might not work. UDP is blocked more than TCP in general, in use cases like enterprise firewalls or mobile carrier restrictions: some middleboxes drop UDP packets because they do not want to support it, or there can be rate-limiting scenarios. In all these cases, MoQ over QUIC might simply not work, and without a TCP fallback, MoQ applications simply won't work in these environments. This limits the adoption and reach of MoQ-based applications, especially in the enterprise and the examples I talked about. We need a way to support something that would allow this to happen.

The proposal here is to run MoQ over QMUX. QMUX provides a multiplexing layer that sits between your MoQ application and your TCP/TLS stack. QMUX is essentially a polyfill which provides bidirectional streams for control messages, unidirectional streams (used in MoQ, for example, for data delivery), transport parameters to configure your connection the way you would in QUIC, and flow control through parameters like max data. The key insight here is that applications coded against QUIC APIs work unchanged. This is critical, so that we have one code base that works in UDP-friendly and QUIC-friendly environments as well as in not-so-friendly ones. I'll just skip ahead on this one. One thing I'd like to say on this slide is that even though there are good benefits to running MoQ over QMUX, we also need to understand that there are some tradeoffs, such as head-of-line blocking. The nice thing about QUIC is that you have parallelism across the streams; that will go away. Also, we have been doing a lot of work within the MoQ working group to reduce the number of RTTs we need, but going through QMUX will increase the RTTs; it might add an additional two RTTs compared to setting up over plain QUIC. And we lose some of the good features of QUIC, like connection migration. We need to keep that in mind, but the tradeoff is between not being able to send anything at all versus, with some penalty, having the MoQ application work. That's the goal here.

And if we do MoQ over QMUX, there is an easy mapping. The request streams use bidirectional streams in QMUX, and unidirectional streams are used as data streams. The control stream, which carries the setup messages, is stream number zero. For flow control and related things, we have parameters: a per-connection maximum amount of data you can send, a per-stream maximum stream data parameter, and a max-streams parameter to flow-control the number of streams. So basically, there's quite a bit of direct one-to-one mapping between how we use MoQ over normal QUIC and over QMUX, and an application doesn't need to change much to support QMUX underneath.
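The mapping can be sketched as a small stream classifier. This assumes QMUX mirrors QUIC's stream-ID numbering (stream IDs mod 4: 0/1 bidirectional, 2/3 unidirectional), which is an assumption for illustration rather than something the draft is quoted as specifying.

```python
def classify_stream(stream_id: int) -> str:
    """Classify a QMUX stream the way MoQ-over-QMUX would use it,
    assuming QUIC-style stream-ID numbering (an assumption):
    stream 0 is the MoQ control stream, other bidirectional streams
    carry requests, and unidirectional streams carry data."""
    if stream_id == 0:
        return "control"           # setup messages on stream zero
    if stream_id % 4 in (0, 1):    # bidirectional streams
        return "request"
    return "data"                  # unidirectional streams
```

A demultiplexer built this way needs no MoQ-specific changes when the same stream IDs arrive over native QUIC instead of QMUX, which is the point of the one-to-one mapping.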

Having said how QMUX works and why it might be helpful, the challenge is identification. MoQ can run over QUIC or over WebTransport, and we can have different MoQ versions and different QMUX versions. How do the client and server identify each other? Currently, the draft specifies a straightforward, simple idea: we don't care at the MoQ application layer; we just use the MoQ ALPN, like moq-16 or moq-17, which identifies the draft version. The client sends its ClientHello with this ALPN, and if the server is fine with it and it's a QMUX-based server, they establish the QX transport parameters. That's how you know QUIC-over-QMUX is being set up, and once the QMUX session setup works, the MoQ session is ready. The catch is that you need to establish a TCP/TLS connection just to see whether the other side is QMUX or not. And we need to figure out what kind of happy-eyeballs scheme we would want so as not to increase connection setup latency: for example, should we attempt both UDP and TCP in parallel and, once one gets connected, wait for a while before totally giving up on the other? Those are some of the things we need to think about and provide recommendations on for someone who wants to use MoQ over QMUX. Let me skip this slide.
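The happy-eyeballs idea just described can be sketched as racing two connection attempts with a grace period for the preferred one. The `attempt` coroutine is a stand-in for a real handshake, and the timings are illustrative; a real implementation would open actual QUIC and TCP/TLS connections.

```python
import asyncio

async def attempt(name: str, delay: float) -> str:
    # Stand-in for a real handshake (QUIC over UDP, or QMUX over
    # TCP/TLS); the delay models how long that handshake takes.
    await asyncio.sleep(delay)
    return name

async def happy_eyeballs(grace: float = 0.25) -> str:
    # Start both attempts in parallel. If the preferred QUIC attempt
    # wins, use it; if TCP finishes first, still give QUIC a short
    # grace period before committing to the fallback.
    quic = asyncio.ensure_future(attempt("quic", 0.10))
    tcp = asyncio.ensure_future(attempt("tcp", 0.02))
    done, _ = await asyncio.wait({quic, tcp},
                                 return_when=asyncio.FIRST_COMPLETED)
    if quic in done:
        tcp.cancel()
        return quic.result()
    try:
        return await asyncio.wait_for(asyncio.shield(quic), grace)
    except asyncio.TimeoutError:
        quic.cancel()
        return tcp.result()

winner = asyncio.run(happy_eyeballs())
```

With these illustrative delays, TCP connects first but QUIC completes within the grace period, so the client still ends up on QUIC; only when QUIC stays blocked does it settle for the QMUX fallback.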

There is an alternative proposal here. The problem with the previous ALPN proposal is that it does not let you negotiate anything over QMUX — it assumes the QMUX setup somehow works magically, and if the server and client are on different versions, it might just fail. To alleviate that problem, we have two alternative proposals. These are not in the draft; they came out of discussion with a few folks. The first proposal borrows the way WebTransport upgrades itself from the H3 ALPN to the next level, listing the available protocols it would like to use and saying which MoQ version to use. Why not bring the same design to QMUX negotiation? So what this says is that if you are running MoQ over QMUX, at the TLS layer you say, "I support QMUX version 1," and in the next-available-protocols field of the QX parameters you say, "I support moq-16 and 17." That's how you know you're supporting QMUX-1 and moq-16 or 17. The same works over WebTransport: at the TLS layer you start with QMUX, its next available protocol is H3, and from H3 the next WebTransport available protocols say which MoQ version you're using. This is inspired by the WebTransport design. Rohan, I have one more slide and I'll get back to you, if that's okay. The other way to solve the same ALPN problem is to define a single ALPN at the TLS layer that combines the MoQ version and the QMUX version. This gives you a way for the TLS-layer ALPN to clearly identify which MoQ version and which QMUX version you support. But it can cause a version explosion, especially while we are on draft versions. Maybe once the protocols are standardized it becomes less of a problem, but you still have this version-explosion risk. I can skip this one.
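The version-explosion concern in the combined-ALPN alternative is easy to see with a few lines of Python. The "+" separator and token format here are hypothetical; the point is only the combinatorial growth.

```python
# Sketch of the second alternative: a single TLS ALPN token that encodes both
# the MoQ draft version and the QMUX version. Token format is hypothetical.
moq_versions = ["moq-16", "moq-17"]
qmux_versions = ["qmux-1", "qmux-2"]

alpn_tokens = [f"{m}+{q}" for m in moq_versions for q in qmux_versions]
# Two drafts of each protocol already yield 2 x 2 = 4 ALPN tokens; every new
# draft version multiplies the list — the "version explosion" concern above.
assert len(alpn_tokens) == len(moq_versions) * len(qmux_versions)
```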

Okay, the question to the group is: what do people think about taking on this draft? I'd invite people to help make this happen. We have seen some early deployments, and the usual challenge with early deployments is that private deployments go off and do their own thing, and when the IETF later proposes something, it becomes really hard to change. So we need to do something early, so that we can give recommendations to the implementers who are doing MoQ over QMUX on how to support this application. Okay, open for questions, please.

Martin Duke: All right, I had to lock the queue because Suhas did not leave a lot of time for comment. So most of you are not going to make it, but Rohan, go ahead.

Rohan May: Hi, Rohan May. Apologies that I have not been following this super closely, but I'm an implementer. I want to make sure that some MoQ traffic gets through when it's only going to be able to go over TCP port 443. Why would I want to use QMUX instead of WebTransport with HTTP/2?

Suhas Nandakumar: Oh, WebTransport over HTTP/2 should be totally fine if that's your deployment — I don't think this would replace it. But that also requires you to have an HTTP/2 server on the other side. If QMUX is the deployment you have on the server side, this gives you an actual way to move to that. Got it. Thank you.

Alan Frindell: Thanks, Suhas. A couple of things. One, there's probably going to be more talk about how to do QMUX version negotiation in the QUIC session, so people should probably come there. I'm firmly on team "there's only a MoQ ALPN, and it tells you what QMUX version you're running," rather than having multiple layers. Also, I think most of what's in this draft could probably be consolidated into the MoQT draft as a very short amount of text that just explains that you can use MoQ over QMUX and how you do that. The timing question is that we don't know if QMUX is going to finish before we do, whether we want to reference it normatively, or whether we would need a separate draft saying how to do it if the ordering goes MoQT, then QMUX, then MoQ over QMUX. Anyway, perfect. Sounds good, Alan. Thank you.

Martin Duke: Great, thanks Suhas. Many people were in the queue and didn't get an opportunity. Please use Zulip or the list or contact Suhas directly. Next up is Mo, who's going to talk about LOC.

Mo Zanaty: I'll try to catch us back up. This should be relatively simple. Wait a minute, sorry. Yes. This is the LOC media format, the Low Overhead Container. The changes in version 02: there were some large editorial changes peppered everywhere; extensions were renamed to properties to align with MoQT; and varints were replaced with the new VI64 varint. Oh — this is the old version. This is not the one that you uploaded.

Martin Duke: There's a lag in the Datatracker — I approve slides and they don't show up. Sorry. Okay, I'll try to remember the changes.

Mo Zanaty: All right, so the major substantive change was moving the metadata location. We had an agreement to move most of the metadata into the MoQ payload so that it's opaque to the relays. However, on the next slide we'll discuss in a bit more detail why there's actually a split now between two sets of properties, public and private: one that lives in the payload, one that lives in the MoQ header. We added a timescale property — it was called timebase before, but we renamed it timescale to align with some other APIs — to give the scale of the timestamps. We removed the capture semantics from timestamps, because you can use them for presentation or anything else. We retained start codes, and we added a lot of information about Secure Objects integration and the security considerations.

Okay, so the big change was moving metadata from the MoQ header extensions into the MoQ payload. This is the old version: the MoQ header extensions mapped directly to the LOC header extensions, and the MoQ payload mapped directly to the LOC payload. So it was a nice, clean, simple mapping, and both of the payloads were the exact bytes you get out of the encoded audio/video chunks from WebCodecs. In 02, there are still some LOC public properties that are a subset of the MoQ header properties, but the majority of them are expected to be in the LOC private properties, which now live in the MoQ payload. The LOC payload then follows its private properties. So now the MoQ payload is no longer the exact encoded audio/video chunk; it's both the private properties and the payload. There's some discussion on that if people want to look at the issue — there's a bit more detail there about why that split was necessary and which specific properties we think could be private or public.
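One way to picture the -01 to -02 restructuring is the sketch below. The field names are invented for illustration; the real property split is discussed in the issue Mo references.

```python
# Illustrative before/after sketch of where LOC metadata lives
# (field names are invented, not the wire format).
loc_01_object = {
    "moq_header_extensions": "LOC header extensions (all metadata, relay-visible)",
    "moq_payload": "exact encoded audio/video chunk bytes from WebCodecs",
}

loc_02_object = {
    "moq_header_extensions": "LOC public properties (relay-visible subset)",
    "moq_payload": [
        "LOC private properties (opaque to relays)",
        "encoded audio/video chunk bytes",
    ],
}
```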

The timescale is very straightforward. It's a new property that just provides the denominator of the timestamp — so typically 48,000 for audio and 90,000 for video. If you don't specify it, we default back to the original interpretation, which was microseconds. It can be set on the track, and individual objects can override it.

The major open issue is the one Colin already addressed yesterday: the track properties that can't be authenticated or encrypted. There needs to be a solution to that, probably in Secure Objects, and then LOC will just follow that solution. There are some editorial things about registry collisions; we'll clean those up. Luke brought up a question about LOC outside of MoQ; I thought we had agreed that was out of scope, but if anybody has a different view or opinion, please chime in on that issue. Again, registry editorial things, the long-lingering WebCodecs issue, and finally, from last night, we realized why people have grief when you encode a varint with a non-minimal encoding — that's what we were doing with the frame marking extension, and it caused some confusion. So maybe we'll revert that and just do explicit lengths, and not try to overload the VI64 and keep it minimal length. All right, that's it.

Martin Duke: That's just well done, sir. Anyone in the queue? All right, let's move right into the filters discussion. Like six clicks I have to make — give me a second. Let me make sure this is the updated one, because that will make a difference. It's version 1. You don't have slide control yet; I'll give it back to you. Go to the last slide, let me see what it is, and I'll tell you if that's right or wrong — that's why I gave you slide control. There are 13 slides. Okay, that's the right upload. There's nothing to be done anyway because Meetecho's not syncing. Okay, yeah. This is the right one.

Mo Zanaty: Okay, so, filters. This is a new PR, 1518. The old PR, 1401, was much larger, and the feedback was, "Let's try to reduce the scope, make it more manageable to merge." So we did. The major changes are in bold there: removing the location filters and the group filters. We already have filters in the current MoQT — the subscription filters, which we renamed to subscription location filters, and of course the forward flag is also basically a filter — so we already had some version of filters. This adds the new range filters and the track filter. We also simplified some things by removing some of the parameters: max-tracks-deselected and its equivalent for the setup option and the parameter option. We improved the wire encoding — some delta-encoding enthusiasts got hold of it. And we clarified a lot of the interactions with other parts of the spec, like how to manage subgroup closing. Okay, so for setup options, we removed one of them, and the only thing left now is max-filter-ranges, which is changed to be a total across all ranges and all parameters — it used to be per individual filter type, now it's across all filters. Max-tracks-selected is still the same; we removed deselected.

The major change to the range filters was adding AND-sets. The range filters were reduced to only these four — we removed the group filter and the location filter — and we added the AND-set to all four. The AND-set lets you do AND/OR combinations of all of these filters. Originally the design team didn't want the full complexity of AND/OR sets, but then other issues came up where relays that aggregate these subscriptions need to be able to express AND/OR sets. Even if a client doesn't have to express them, the relays end up having to, to avoid some DoS vectors — to avoid pulling in too many objects because they can't express the filter succinctly. And it's actually a really simple change, not much text at all. You have this AND-set value in every parameter; for the same value of AND-set, you AND all of the filters in that set, and then you OR all of the AND-set groups at the end. So it's a very simple...
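The evaluation rule Mo describes — AND within an AND-set, OR across sets — can be sketched in a few lines. The filter representation here (a list of `(and_set, predicate)` pairs) is invented for illustration, not the wire format.

```python
# Sketch of AND-set evaluation: filters sharing an AND-set value are ANDed
# together, and the resulting groups are ORed. Representation is hypothetical.
from collections import defaultdict

def evaluate_filters(filters, obj) -> bool:
    """filters: list of (and_set_id, predicate) pairs; obj: candidate object."""
    groups = defaultdict(list)
    for and_set, predicate in filters:
        groups[and_set].append(predicate)
    # OR across AND-sets, AND within each set.
    return any(all(p(obj) for p in preds) for preds in groups.values())

# "temperature > 30 OR humidity > 50", using two distinct AND-set values:
filters = [
    (1, lambda o: o["temperature"] > 30),
    (2, lambda o: o["humidity"] > 50),
]
assert evaluate_filters(filters, {"temperature": 35, "humidity": 10}) is True
assert evaluate_filters(filters, {"temperature": 20, "humidity": 40}) is False
```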

Alan Frindell: Alan, you have a question? Sorry, there's an AV lag here. So to be clear, before, these were single-value parameters — there was never a need to express, say, a subgroup filter twice in the same block, because it was just an OR of everything, so having two of them didn't mean anything. But now they're multi-valued, so I have every distinct — maybe you can explain.

Mo Zanaty: The echo here is horrible — did you understand my question?

Martin Duke: All right, well, yeah, the echo is bad up at the front here. It's already gone past the scroll there. Can you repeat it quickly?

Alan Frindell: It was something about a single value versus multiple values, and whether something changed there.

Mo Zanaty: Is each filter parameter now multi-valued? Like, you can have the same subgroup filter parameter twice with different AND-set numbers?

Alan Frindell: Yes, you can have the same parameter. Before, no parameter except the property filter was allowed to repeat. Now they all allow themselves to repeat with different AND-set values. If you have the same AND-set value, it's a protocol violation, but different values are allowed — except for the property filter, where it's the combination of AND-set and property type that makes it unique, and it's a violation if you repeat those two identically.

Mo Zanaty: Okay. Some examples. The motivating use case for most of these filters was keyframe scrubbing — you want object ID zero. There's a simple, easy way to do that; the bold numbers here are what we actually care about. So for object ID zero, this shows how to encode it with an object filter: just put zero. If you want the video base layer, that's subgroup ID zero typically — that's what LOC specifies — so you just put a subgroup filter for zero. If you want two enhancement layers, one and three — two temporal layers — you put a subgroup filter on subgroup one for a count of one, and subgroup three ending at three, and we delta-encode those: the red numbers are struck through, you subtract the previous one, and you get the delta-encoded version in black. For the property-type filter, the key thing to show is that we have to add the property type before the values. That example also shows what happens with no end: you just omit the end. The last example is the OR: right after the length is the AND-set, so you put two different numbers for the AND-set and you end up with an OR of those two things. You may want, say, temperature higher than 30 OR humidity higher than 50, instead of ANDing them together. You can also remove any of these filters just by having a length of zero; that's used in a request-update. And the ranges can express all of these operations: equal to, less than or equal to, exclusively between, inclusively between.
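The delta encoding in these examples is just "subtract the previous value," which keeps varints small. This sketch shows the idea only; the exact field ordering on the wire is defined in the PR, and the values below follow the subgroup 1..1 and 3..3 slide example.

```python
# Sketch of the delta encoding described above: each value is encoded as its
# difference from the previous value. Field ordering here is illustrative.
def delta_encode(values: list[int]) -> list[int]:
    out, prev = [], 0
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(deltas: list[int]) -> list[int]:
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

# Subgroup ranges 1..1 and 3..3 from the slide example:
values = [1, 1, 3, 3]
assert delta_encode(values) == [1, 0, 2, 0]
assert delta_decode([1, 0, 2, 0]) == values
```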

Okay, then the track selection filter. This one generated a lot of comments — thanks a lot to Alan and Ian for raising a lot of really good points that helped us refine it. If you're familiar with SQL, this is like doing SELECT TOP N: it selects the top N tracks over an entire namespace, based on the values of a property that you specify. You specify the property type, and it finds the highest values for that property type across all of the tracks in the namespace. You can combine this track filter with all of the other filters too — all the prior filters, the subscription location filters, and the forward flag, which is the ultimate filter that disables all of these. When you combine all those filters, the key thing is that the track filter always runs first, to help with scalability. Let me explain why. This was a change that was requested to allow the relay to run a single track filter for all subscribers of that namespace, one time, at the ingress of the relay, instead of having a track filter per subscriber customized to their needs. Question?
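The SELECT-TOP-N behavior a relay would run can be sketched as follows. The data structures are hypothetical; a real relay would rank on the watched property (audio level in the talk's example) as objects arrive.

```python
# Sketch of a relay's top-N track selection by a watched property value
# (e.g. audio level), as described above. Structures are hypothetical.
import heapq

def select_top_n(tracks: dict[str, int], n: int) -> set[str]:
    """tracks: track name -> latest value of the watched property."""
    return {name for name, _ in
            heapq.nlargest(n, tracks.items(), key=lambda kv: kv[1])}

audio_levels = {"alice": 62, "bob": 88, "carol": 45, "dan": 71}
assert select_top_n(audio_levels, 2) == {"bob", "dan"}
```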

Chao: Hello, I have a question about the track filter. Is this end-to-end or hop-by-hop?

Mo Zanaty: For the end what?

Chao: Is it end-to-end or hop-by-hop? Like, from the original publisher end to the client, or just the relay with the...

Mo Zanaty: You subscribe to a namespace only at your near relay, your edge relay, but then that edge relay goes through its network — and we actually recommend in the latest PR that relays should aggregate these requests and send them upstream, all of these filters including the track filter, so that they propagate back up to the eventual source, the original publisher. So across the entire relay chain, you end up with those filters propagated through all of the relays.

Chao: Okay, so the original publisher can know all the properties and can do the filtering, right?

Mo Zanaty: Well, typically the original publisher will only know its own properties, so it can self-filter the range filters and the location filter and all those other filters. The track filter is a comparison across other tracks, so the track filter can only be done at the edge relay of its directly connected publishers. For all the original publishers that directly connect to one edge relay, that relay will run the track filter across all of them. They can't run it themselves, because they only have one track — they're always going to be the top one.

Chao: Let me try to understand. So there may be multiple original publishers and one relay, and it passes all the filters to the original publishers, and they return — how does that happen?

Mo Zanaty: The conceptual model is that you're doing this over a namespace, right? So let's say you have 100 publishers in this namespace, and they're all publishing to the same edge relay — they happen to be co-located on one edge relay. That means the edge relay has 100 direct connections to these publishers; it will run the track filter, and it will not propagate the track filter to those original publishers, because they each have only one track. If all of the subscribers that wanted to filter those 100 publishers said, "I want the top one," then that edge relay in front of those 100 publishers would say, "I'm going to look at all of these publications and pick the one that has the highest metric." Let's say the metric was audio level — who's speaking the loudest. So it will look at all of those 100 publishers and pick the one with the highest audio metric. It can't propagate that filter to a publisher, because the publisher doesn't see everybody else.

Chao: Yes, that's true. So what control message carries this?

Mo Zanaty: It's in SUBSCRIBE_NAMESPACE. When you subscribe to the namespace — to say you're interested in all tracks in this entire namespace — you can add the track filter parameter. I'll just go one slide forward — there it is. So the track filter parameter is in SUBSCRIBE_NAMESPACE, or updates to it, and you specify the property that you want to watch — audio level, humidity, temperature, whatever — and how many tracks you want to get back, the top N: I want N equals 2 or 3 or whatever. And there's also a timeout parameter to say: if one of the publishers has not delivered objects within this amount of time, don't consider it in the top N anymore. Those are all the parameters the track filter has.
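The three fields of the track-filter parameter described here can be sketched like this. Field names, the property-type code, and the surrounding message shape are hypothetical — only the three fields (watched property, top N, timeout) come from the talk.

```python
# Hypothetical sketch of the track-filter parameter carried in
# SUBSCRIBE_NAMESPACE. Field names are illustrative, not from the PR.
from dataclasses import dataclass

@dataclass
class TrackFilter:
    property_type: int  # which property to rank on (e.g. audio level)
    top_n: int          # how many tracks to select
    timeout_ms: int     # drop an idle track from consideration after this long

subscribe_namespace = {
    "namespace": ("conference", "room42"),  # made-up example namespace
    "track_filter": TrackFilter(property_type=7, top_n=3, timeout_ms=2000),
}
```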

Martin Duke: All right, I think we need to move on. Let's go back to the presentation. You can save your questions for the end.

Mo Zanaty: We removed max-tracks-deselected out of this parameter, because we now allow relays to purge old state using PUBLISH_DONE anytime they want. Before, there was a rigid algorithm — a table dimensioned max-tracks-deselected deep — that the relays and the subscribers each had to maintain; now the relay is free to PUBLISH_DONE anytime it wants, to purge however much state it wants. There's a balancing act: if you purge too much state, you have to re-publish again, but now it's under relay control, and it simplifies the filter. We also added a new error, so that if relays want to protect themselves by saying, "You know what, this filter's too complex; I can't send this upstream, and I don't want to process this much complexity," they can return a conflicting-filters error to the SUBSCRIBE_NAMESPACE, or when they close it out — basically saying this is too complex a filter.

We added some diagrams to show the state transitions of the track selection filter. You start off in Unknown, where the name and alias of the track are not known at all to the subscriber, so you can't forward any of the track's data. When the track delivers an object that gets it into the top N, it's considered Newly Selected, and it follows the publish path with a forward=1 flag; when it gets promoted, it goes to the Selected state. It's selected in the top N tracks and passing the filter at that point, so all of its objects are flowing. If it later gets evicted or demoted by another track, it drops out of the top N and goes to Deselected. It could also drop out because of a timeout — it didn't deliver objects for too long — so it gets demoted and drops out. In the Deselected state, the objects fail the filter, so they're not forwarded anymore. The red key change we made in this version is that on Deselected, we now do a request-update with a forward=0 flag to notify the subscriber that we're no longer forwarding this track. So instead of seeing silence and inferring that the track was deselected, we send an explicit control-message signal that we're deselecting the track. If the track becomes reselected because it delivers a higher metric again, it goes through the request-update forward=1 flow upon reselection, and it's back in the Selected state. If it sits in the Deselected state for a long time and the relay wants to clean up excess old subscriptions, the relay will PUBLISH_DONE that track, and it goes back up to the Unknown state. And of course no objects can be forwarded when it's Unknown. If that track becomes reselected again — because it delivers a new object that's in the top N — it has to go through the Newly Selected flow and deliver a new publish again.
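The state machine just described can be summarized as a transition table. State names follow the talk; the event names and the table itself are an illustrative reading of the diagram, not text from the PR.

```python
# Sketch of the track-selection state machine described above. State names
# follow the talk; event names are illustrative.
TRANSITIONS = {
    ("Unknown", "object_in_top_n"): "Newly Selected",  # publish, forward=1
    ("Newly Selected", "promoted"): "Selected",
    ("Selected", "evicted_or_timeout"): "Deselected",  # request-update forward=0
    ("Deselected", "reselected"): "Selected",          # request-update forward=1
    ("Deselected", "relay_purges_state"): "Unknown",   # PUBLISH_DONE
}

def next_state(state: str, event: str) -> str:
    return TRANSITIONS[(state, event)]

assert next_state("Selected", "evicted_or_timeout") == "Deselected"
assert next_state("Deselected", "relay_purges_state") == "Unknown"
```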

Some other nuances: you're allowed to subscribe to a name that is also in that namespace. So if you do a SUBSCRIBE_NAMESPACE and then later also subscribe to a name in that namespace, that overlapping subscription is actually allowed — we call it pinning that one track. You're interested in it regardless of whether it's in the top N. This is a very common use case: for example, in video conferencing you see the top four loudest speakers, but then you want to see one particular speaker — say, the room that has 20 people in it — so you pin that room, and whether it's the loudest or not, you always want it pinned, so you directly subscribe to that one track. That's supported; the track filter will never evict that track once you pin it, because it's a permanent subscription. You can also update the parameters of the SUBSCRIBE_NAMESPACE — you can change aspects of the track filter. When you change them, the filter has to be re-evaluated, and you could potentially get new publishes for the newly active top-N tracks and forward=0 request-updates for the ones that got deselected.

There's also a track property filter, and that filters the publish message itself. If you apply a track property filter, you won't even get the publish message, so you won't even be notified of those publishers. For example, if you have something like codec as one of your track properties, you could say, "I'm only interested in senders with an AV1 codec," and then that track property filter will block all of the other codecs from publishing to you.

Some other changes: wire-encoding efficiency — we delta-encoded everything. The property filter was made to look more like the other filters by allowing multiple of them per property type, so it can reuse the same parser all the other range filters use. We clarified a lot of the interactions between subgroup gaps and fetch gaps — what they mean when you have filters in place. We added a lot more detail on track filter selection, diagrams, and how the reselection process works. And we added a section, which I'll mention in a minute, about relay protection to avoid DoS attacks. As you saw in red on the previous slide, we now explicitly signal every time we select, reselect, or deselect a track, by doing a request-update with forward=0 or 1.

Some issues were brought up about asymmetric attacks, because the design of SUBSCRIBE_NAMESPACE — and specifically the track filter, plus some other features like forward=0 — is inherently asymmetric, meaning the subscriber can ask, in one very simple control message, for something that triggers a lot of work for the relay and consumes a lot of bandwidth and compute. So there's a worry that this may be a vector for asymmetric attacks on the relays. We added some guidance about how relays can protect their resources when they see the potential for something like this. It's not going to mitigate everything, because these filters, and things like forward=0 and SUBSCRIBE_NAMESPACE itself, are inherently asymmetric by design. It's not a side effect or an attack vector — by design, they're intended for a subscriber to request something simple and for the relay to do a lot of work to give back that complex thing. So we can't mitigate all of the asymmetry, but we're giving relays guidance about when they can detect that something is overloading their resources and how they can escape out of it. First of all, relays should propagate all of these filters upstream: a relay should aggregate everything it sees from the subscribers to that namespace and propagate a single filter upstream to its relay network. That way it doesn't have to do all the work of pulling all the tracks in, running the logic, and serving it back out — push it all the way back toward the source.

There are also cases where you can't propagate upstream because everybody is directly connected to you. If you have an edge relay with a thousand publishers directly connected to it, all in the same namespace, that relay may impose limits and say, "No — too many publishers on this namespace; the namespace is too large," and error out that subscription. There's also the new conflicting-filters error, so that if all the filters being requested by different subscribers cause too many conflicts and make the relay do too much work to aggregate them and propagate them upstream, it can reject them with a conflicting-filters error. And there's a prefix-overlap-existing error. That was originally intended for a single subscriber whose own SUBSCRIBE_NAMESPACEs overlap — two SUBSCRIBE_NAMESPACEs overlapping each other from a single subscriber, which didn't make any sense, right? But with multiple subscribers and the relay aggregating them upstream, it does make sense: you could have two different subscribers that are each playing by the rules — one wants track namespace ABC, the other wants track namespace AB — and when the relay receives those two requests and tries to propagate them upstream, suddenly it's in violation of the namespace-overlap rule. So it can reject those as a set of prefix overlaps, even though no individual subscriber overlapped its own prefix — in aggregate, the subscribers did. So we're putting in some guidance for relays about how to protect against these cases. It's not bulletproof, and as I mentioned, there's inherent asymmetry in these capabilities by design, so relays will have to exercise caution when offering them.

This is the list of all of the issues, and if you go to PR 1518, there's a link in the bottom comment of the PR to the GitHub repo that's tracking all of these issues — it's actually tracked in the fork where this PR is being authored. We believe we've addressed all of these issues, but we'd like to hear from each of the submitters before we close them out.

Martin Duke: All right, we have seven minutes for questions and comments. Alan.

Alan Frindell: Sorry, there's an AV lag here. Okay, thanks, Mo. I did see a bunch of updates get pushed to the PR this morning, but I have not had a chance to read them, so I will be doing that soon and providing feedback, because I know I filed a bunch of those issues. I did have a few related questions — I don't know if you mentioned them in your presentation. One of them: right now it sort of talks about this Deselected state, and we removed the max number, which I think is good — if you're going to keep a queue, let the relay determine that number rather than letting it be different per subscriber. But I also know some people have implemented this using a timer instead: if you get deselected, I'm going to send you a PUBLISH_DONE, but on a delay, and if you get reselected before the delay expires, I cancel it. So one question is: is that now the recommendation, or is there no text on it? I'll just ask all my questions and then come back. You mentioned the property filters that prevent the publish — maybe that wasn't in the previous version; that's new, I hadn't seen it before. Self-exclusion is something that came up: when I SUBSCRIBE_NAMESPACE, I really probably don't ever want my own track, but I think we can handle that in MoQT, because it's not filter-related. And the last one, on security, was around limiting recursion: do we need the recursive feature of SUBSCRIBE_NAMESPACE in this case? Meaning, when I subscribe top N, can I limit it to just the particular prefix rather than that prefix and everything below it in the tree? And also, will limiting it to a single property per prefix meet all the use cases?

Mo Zanaty: Yeah, on those last points — in the section on all the things a relay can do to protect itself, you could respond with namespace-too-large. If you have the recursion problem and don't want the bigger namespace, you can just say this namespace is too large, you have to subscribe to the leaf namespace. I don't think it's good to force that in the spec, so that nobody could ever do it, but it gives relays the freedom to do it if they think the namespace really is too large.

Alan Frindell: Yeah, but you don't always know how big it is — like, I found you in the tree, but I don't necessarily know how many children you have in the tree, or how many tracks are in each of those places. So the more we can scope this down, the less the relay has to do. If you say you can only subscribe to one namespace, it's not recursive, and there's only one property — that makes the relay implementation that much simpler, because I don't have to track those N different dimensions of this.

Mo Zanaty: Would you impose that same limitation on SUBSCRIBE_NAMESPACE itself regardless of filters?

Alan Frindell: I don't know. The question I'm asking is: does it meet the use cases? For SUBSCRIBE_NAMESPACE itself, I'm not sure we needed the recursion there. Anyway, I don't know. What do we need to do to meet the use cases — but the minimal thing that we need to do.

Mo Zanaty: Yeah, I don't have a strong opinion. I tend to lean towards not restricting things if we don't have to: give the relays the tools to restrict things if they want to, and don't ban applications from doing something that's semantically meaningful. We're not using an application today in our experiments that goes four deep into a namespace and then wildcards the top level, but I don't know if another application may want to do something like that.

Alan Frindell: I would rather go the other way: we scope to the minimum thing, and that way we don't have interoperability problems. It's very clear what you can expect from the relay, and then if somebody later says, "No, I desperately need this; my use case won't work until I have it," we extend it later.

Mo Zanaty: I mean, namespace and name construction is up to the app, and it can work around anything. It could flatten the namespace hierarchy into one big namespace, underscore-separated components all in one tuple, if it wanted to. So apps can work around it even if we put that restriction in, if they really want to.

Alan Frindell: That's true.

Martin Duke: Ian.

Ian: Ian Swett, Google. I mostly just want to say thanks for engaging on the issues. It's actually been helpful to have separate issues, because there is a fair amount to discuss here, and on some of them it feels like we're making meaningful progress. I have not read the most recent version of the PR because it's so recent, but hopefully I will today on the flight back. As you can see from the number of issues, there's still a lot; maybe we don't have to solve all of them before we land something, but probably a good chunk of them need to at least be clarified before we resolve this. Anyway, thanks for making good progress and engaging on each one individually; it's been helpful.

Martin Duke: Suhas.

Suhas Nandakumar: Hi, thanks Mo. To Alan's point: from our implementation experience, we were able to implement the track filter for the active-speaker-switching use case. We previously had an implementation that used an application-driven metadata track, reacting whenever someone becomes an active speaker or not. Compared against the relay doing the active-speaker switching, our initial experiments do show that the end-user experience is way better than in the alternate approach. Having said that, I tend to agree with Alan that if we start without recursion for filters, that would give implementations a way to build this functionality and do the interop. I also totally agree on one of the open issues we had: at an edge relay with multiple publishers, how would you restrict getting all the data with forward=1 for everyone? Having a way for that edge relay to say "conflicting" or "too large" is a good condition for it to handle. I went through all the issues you have, and I see we've provided resolutions; I'd really like to see how we can go further. Thanks for the work on this one.

Martin Duke: Uh, do go ahead Gurtej.

Gurtej: Gurtej from Apple. Actually, I think my questions were asked, but I didn't get the answers. One is: you said when the deselection happens, there's a PUBLISH_DONE. I didn't understand who is doing the PUBLISH_DONE?

Mo Zanaty: Let me clarify: when the deselect happens, you send a request-update with forward=0, so you notify the subscriber that it's not going to get objects anymore. You don't have to PUBLISH_DONE, because you want to keep the track alias alive. If you run out of resources, too much subscription state, then the relay is free to PUBLISH_DONE to clean up that state, but typically it would hang onto it for a little while so that the track may get reselected.

Gurtej: Exactly, because if you do the PUBLISH_DONE and new audio starts coming in with like actual voice levels, how would you know about it?

Mo Zanaty: Well when you PUBLISH_DONE you stop forwarding too. When you deselect, you stop forwarding.

Gurtej: And then it cannot be selected again once you do that?

Mo Zanaty: No, so this is on the egress of the relay. The egress of the relay is what stops forwarding. You still have to ingress everything; you don't turn off your ingress, because you have to receive the objects to inspect them. You can push the filter upstream to avoid inspecting them, but if you are the edge relay, you have to ingress all of those publishers to inspect their objects before you can decide whether or not you're going to egress them toward people.
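
Mo's ingress/egress split might be sketched like this; the class, method, and track names below are hypothetical, purely to illustrate that selection gates only the egress side while ingress stays on:

```python
class EdgeRelay:
    """Sketch: the edge relay ingests every object from every publisher so
    it can inspect them (e.g. audio levels for active-speaker selection),
    but forwards only tracks currently selected. Illustrative names only;
    not MoQT API."""

    def __init__(self, is_selected):
        self._is_selected = is_selected  # callable(track) -> bool
        self._egress = []                # downstream send callables

    def add_subscriber(self, send):
        self._egress.append(send)

    def on_ingress(self, track, obj):
        # Ingress is never turned off: every object must be received and
        # inspected before the egress decision can be made.
        if self._is_selected(track):
            for send in self._egress:
                send(track, obj)
```

Deselected tracks are still received and inspected; they are simply not forwarded, which is why a deselect does not tear down ingress state.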

Gurtej: Okay, and then Suhas's question about why it's invalid for two subscribers to have the same, sorry, overlapping filters, and the relay would error out in that case. That seems like a valid case to me.

Mo Zanaty: It is valid. The problem is when the relay receives those two filters and wants to propagate them upstream: it cannot, because that's a violation of our rules right now. So it has two choices. It could use the broadest filter and then filter locally when it gets the responses, to see which objects go to which subscriber. But if the relay thinks that's going to be too much work, it's free to reject the subscriptions and say these are conflicting filters: app, you're in the same namespace, you'd better be the same app, and you're not playing by the rules I expect.

Gurtej: Okay I'll ask more questions offline. Thank you.

Mo Zanaty: Yeah, and real quick, for Minghui's presentation about the Alibaba experiments: you can go back into the previous presentations on this to understand all the use cases for the track filter. One of them is feedback aggregation: if you had many subscribers, you could aggregate all of the feedback tracks from those subscribers back to a single publisher as a single signal from a large pool of subscriber feedback. So that's...

Martin Duke: Okay, thank you Mo. Nice work. We hand the balance of the time to Alan to talk about open issues with the transport draft. Move to the next slide.

Alan Frindell: Uh I can give you slide control, I would prefer that. Oh, really? I like bossing you around.

Alan Frindell: Okay. Do I have slide control? If you're logged in as you, yes. I'm not logged in as me; I'm in the room. Just pass it to the room. All right, let's take a look here. Does the room exist in the participant file? Ah, yes it does, look at that. You've got slide control. Now I see the buttons. We all see the buttons. Okay. We're trying to start off with a softball; we'll see where we faceplant. So: we now have track properties, and they go everywhere the track goes, but we realized there's nowhere to put them in the response to track status, and they probably should be there. So should we just add a properties field to REQUEST_OK, even though this is the only message that would use it, and all the other uses of REQUEST_OK would just never fill in properties? Does anybody have a... I'm going to do it unless somebody gets up and says we shouldn't do it that way. Suhas?

Suhas Nandakumar: Yeah, let's let's do it.

Alan Frindell: Okay. Do you want to get in the queue or you just want to zapping? Zapping? Okay. All right. I'm going to try to fix your mic. Oh, keep going. Okay. Going once, going twice, we're going to do this.

Alan Frindell: Okay, this one will take longer: invalid varints and non-minimal encodings. Victor noted that the new MoQT varint parser now has a tri-state return. You get some bytes and you try to parse them as a varint. It can succeed; that's good. There is a purely invalid case: we have decided that a varint can be from zero to nine bytes, but it cannot be seven bytes. Some felt that was an extension point we'd be able to use in the future. If we required minimal encodings, then a non-minimal encoding would also be an invalid case, a hard fail. And then there's a third case, which is that you don't have enough data to parse and you need to come back for more. Compare the QUIC varint parser: there were no invalid varints in QUIC; either it succeeds or you need more data. There's some desire that we could just go that way and say every encoding is valid: there's no way to fail, you can only need more data. That is sort of what Ian and I would like to move to. The PR was merged in draft 17 and there's an issue open, but there's been no consensus call, so this is the time to speak up if what's in the draft needs to be addressed. Mo.
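
The tri-state return Alan contrasts with QUIC's two-state parser can be sketched as below. For concreteness the parser implements the RFC 9000 QUIC varint (2-bit length prefix; 1, 2, 4, or 8 bytes), not MoQT's newer encoding, so it can only ever return OK or NEED_MORE; the INVALID arm is exactly what a disallowed code point, such as MoQT's excluded seven-byte length, would add:

```python
from enum import Enum, auto

class ParseStatus(Enum):
    OK = auto()         # parsed a complete value
    NEED_MORE = auto()  # buffer ended mid-varint; caller supplies more bytes
    INVALID = auto()    # reserved for disallowed encodings (unused for QUIC)

def parse_quic_varint(buf: bytes):
    """Parse a QUIC varint (RFC 9000): the top two bits of the first byte
    select a total length of 1, 2, 4, or 8 bytes.
    Returns (status, value, bytes_consumed)."""
    if not buf:
        return ParseStatus.NEED_MORE, None, 0
    length = 1 << (buf[0] >> 6)  # 1, 2, 4, or 8 bytes
    if len(buf) < length:
        return ParseStatus.NEED_MORE, None, 0
    value = buf[0] & 0x3F
    for b in buf[1:length]:
        value = (value << 8) | b
    return ParseStatus.OK, value, length
```

With this encoding, every byte string either parses or needs more data; a parser that additionally bans a length or bans non-minimal encodings gains the third, INVALID outcome, which is the extra error path under discussion.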

Mo Zanaty: I don't have heartburn over excluding seven bytes, but I do have heartburn over trying to exclude non-minimal encodings. Leave it as a SHOULD; I think there are many cases where applications need the flexibility to support aliased non-minimal encodings.

Alan Frindell: So sorry, I think there were too many negatives in your sentence. You're saying you would prefer to allow non-minimal encodings?

Mo Zanaty: Yes, yes. I want aliasing—or or they might not be aliases, they may actually mean different things.

Alan Frindell: Okay. I don't know that I love that design, but your point is noted.

Mo Zanaty: I agree it's ugly, but most apps are ugly.

Alan Frindell: Okay. Victor.

Victor Vasiliev: When we originally discussed the seven-byte varint thing, I agreed that we could ban it if it wasn't going to become a problem. Then I went to implement it, and it was a problem. I asked people whether they were going to implement the check or just ignore it and not error in that case, and I heard some people say they might ignore it and not shut down the connection, which is a strong indication to me that the seven-byte extension point is not going to happen. Originally I thought banning it was fine because there were no compelling reasons against it. Now, the reason might not be 100% compelling, but it's an actual implementation issue, and I do not expect us to ever use the seven-byte extension point in practice.

Alan Frindell: Victor, do you have a perspective on non-minimal encodings?

Victor Vasiliev: Non-minimal encodings: I think that's another error case, and if we're trying to get rid of error cases, I think we should get rid of that one too. One thing that's important to me: I really do not want to be in a situation where I have an implementation that rejects seven-byte varints while there are implementations out there that do not, because people will test their interop against those, ship buggy clients, and then attempt to interop with me, and I will incur friction from that. That is my worst-case scenario, and that's why I feel we should either decide that everyone enforces these checks or we just don't enforce them.

Martin Duke: Hey Alan, can I interrupt? I'm hearing comments in the chat that you are harder to understand than people like Victor, so I don't know if there's any sort of magic you can do there, placement of the mic and so on, to sound a little clearer in Tokyo.

Alan Frindell: I can try to boss him.

Martin Duke: If you can do it though, please do it. Thank you.

Alan Frindell: Okay. I will try to speak more slowly. It also might be possible to mute this mic; I'm not sure. I'd like to get a decision on the non-minimal encoding question if we can at least get that done now. As an individual, I think I'm biased toward not erroring out in those cases, but I want to call out for the working group that there are currently nine different ways to represent zero. Maybe we like that, maybe we don't. It'd be good to get this moved, because there's currently a PR open that was going to change it: I wrote a PR that said you must use minimal encodings and must error otherwise, but the more I thought about it, the more I wasn't sure that was actually a good idea. Okay, let's separate these two things. So far I've only heard people speak for allowing non-minimal encodings. Does anybody want to speak for requiring minimal encodings? Christian?
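
Alan's "nine different ways to represent zero" refers to MoQT's newer varint; the same aliasing already exists in QUIC varints, where zero has four valid encodings, which is easy to demonstrate. The encoder below is a sketch against RFC 9000, not MoQT code:

```python
def encode_quic_varint(value: int, length: int) -> bytes:
    """Encode value as a QUIC varint (RFC 9000) forced to a given total
    length of 1, 2, 4, or 8 bytes. Any length longer than necessary yields
    a non-minimal encoding of the same value."""
    prefix = {1: 0b00, 2: 0b01, 4: 0b10, 8: 0b11}[length]
    if value >= 1 << (8 * length - 2):
        raise ValueError("value does not fit in that length")
    out = value.to_bytes(length, "big")
    return bytes([out[0] | (prefix << 6)]) + out[1:]

# Four distinct wire encodings of zero; under a SHOULD (rather than MUST)
# for minimal encoding, a receiver accepts all of them.
encodings = [encode_quic_varint(0, n) for n in (1, 2, 4, 8)]
```

A MUST plus enforcement would collapse each value to a single canonical wire form; a SHOULD leaves the aliases on the table, which is the trade-off being debated here.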

Christian Huitema: Well, actually, my argument is different. We are using the new varint in places where we used the QUIC varint before. The QUIC varint does not have any requirement of minimal encoding. If we make that requirement, we are going to change a bunch of code flows and get everybody in trouble. So I would rather not do that.

Alan Frindell: Okay. Yeah, that is what I was trying to express on this slide that I think Colin is trying to speak on the other side of. Um but I agree with you.

Colin: I'm willing to fold my original position, just to be quick.

Alan Frindell: Okay. Uh Suhas.

Suhas Nandakumar: Yeah, I don't have a strong opinion on this one, other than that I saw two presentations this week, both LOC and Secure Objects, that showed concerns with not having minimal encoding support. I just want to make sure we settle this and give recommendations to those two drafts on how to address those issues, along with whatever we do with MoQ transport.

Alan Frindell: Okay, but you personally don't have an opinion either way? You just want an answer?

Suhas Nandakumar: I just want an answer for those two issues, which I see as real issues; we need to find an answer somehow. Either way, I'm okay with it.

Martin Duke: Yeah, I'd just like to comment as an individual that these two questions are related, because they come to the same point: do we want tri-partite state in our varint parser or not? With our particular implementation, it would be great if we did not have tri-partite state, so let's not restrict the varint length.

Alan Frindell: Okay. Can I ask you to put your chair hat back on? My job is to reflect the consensus of the working group in the document. Do you have a way of judging the consensus of the working group in this case?

Martin Duke: We could do a show of hands if people want, but I don't think anyone has spoken in favor of restricting encoding length on the non-minimal question. I guess there's one person who wants to preserve the seven-byte restriction. We could do a show of hands if you'd like. Or would anyone else like to speak in favor of the seven-byte restriction? Colin is coming back up to the mic.

Colin: Just to check: I thought we were talking about the minimal encoding, not the seven-byte thing. Okay. It seemed there was some confusion about which one we were discussing. So if we're not going to say that you must encode in the minimal length, which is perfectly fair to do, then I think we need to take the next portion of the conversation to get clear on this: does that mean that when you send numbers on, they must be the same as the publisher sent them to you? Do you have to preserve the encoding? I think that was the case Mo was arguing for. And that means in your caches and everything, when you save all these things, if an object ID was zero coded one way versus zero coded another way, you have to keep that. I'm fine with us going that way, but what I'm saying is, that's the decision we have to get clear on.

Alan Frindell: Okay. We talked about this a little bit in Secure Objects. Inasmuch as that varint appears in a block that's fed into your AAD, it darn well better be the same, right?

Colin: But that means every relay has to preserve it all the way along, including in a cache.

Alan Frindell: When it's in the immutable block, that's true. Now, I don't know if Mo... well, let's talk about group ID.

Zahed: Sorry. Okay, a couple of comments here as an individual. First of all, not all varints are generated by the original publisher, right? You have track aliases, which are very specifically local. So it's neither here nor there to say, "Oh, it's got to be preserved from the original publisher." Regarding Secure Objects, and to muddy the waters further by bringing in that whole subject, I think a very simple solution would be just to expand those to 64 bits for AAD computation purposes.

Colin: No, but the question I'm trying to ask here has nothing to do with Secure Objects; pretend it doesn't exist, okay? Let's just take things that do pass along end to end, like object ID. Right now the draft says you SHOULD encode those minimally; it doesn't say MUST, and it doesn't say anything else. What Mo was saying is no, those get used for other things, by other people; Christian made arguments for other things; that's fine. But then the alternative text to "they MUST be minimally encoded" is "if you receive them from somewhere, you MUST transmit them in the same form you received them." Because that's the alternative, as far as I can tell, to meet the use cases we were just talking about.

Alan Frindell: I don't think that's the only alternative. Okay, fair enough, yeah. I think another alternative is, as Martin was saying, you just can't build applications like Mo's that use the varint length bits as part of your signal; you need an explicit field rather than making assumptions about the length. Or, for something like Secure Objects, you say all varints get expanded to 64 bits before they go into the hash, or whatever. So that's another option. But I don't like the idea of having to preserve the encoding. That sounds nuts.

Colin: I mean, that's where I thought we were going; that's what I thought the consensus call the chairs were about to make. I wanted to be clear about what we're trying to decide to close this, because, like Ian, I think it would be great to close this issue.

Alan Frindell: Okay. There is a queue. We did manage to faceplant on the second slide here, and I do have other stuff I would love to talk about.

Martin Duke: Do you want me to do a show of hands?

Alan Frindell: Do you want to just do a show of hands?

Mo Zanaty: I have a proposal before you do a show of hands. Sure. The classic IETF guidance is: senders be very conservative in what they send, and receivers be very liberal in what they accept. Yep. So if you're the parser, accept anything, including the non-minimal encodings, and try to preserve them onward. If you're the sender, don't expect anybody to preserve your encodings, and use the safest, most conservative minimal encoding. So senders SHOULD always encode minimally but not rely on encodings being preserved; receivers SHOULD allow non-minimal encodings and SHOULD preserve them if they can.

Martin Duke: Okay, so the binary choice I'm going to present here is whether we should change the current minimal-encoding SHOULD...

Alan Frindell: Martin, there was someone else in queue here.

Martin Duke: Okay. So briefly: Mo, sometimes Postel is wrong, and this is one of those times. The third way is that you can have a canonical way to express things for comparison and for signing, and you can use one of those ways. Thanks.

Martin Duke: Okay. Christian? Okay. All right, and so: preserve the encoding as the best practice, by all means.

Martin Duke: All right, if you've not already joined the meeting with the Meetecho client, now would be a great time to do it. So this is a question on whether we should change the SHOULD to a MUST. Sorry for the typos, but this text editor was killing me. Choice one is the status quo in the draft, which is that one SHOULD use minimal encodings. The alternative is to change it to a MUST; that would be a yes vote.

(Voting occurring)

Martin Duke: All right, things are starting to stabilize. Does anyone need more time? Okay, I'm going to call it: four people in favor of making it a MUST, 22 against. Can anyone not live with it being a SHOULD? If so, would you like to approach the mic and say so? Martin?

Martin Duke: This is not a "cannot live with," but why do you even have a SHOULD? People are going to do the small thing because it's— Don't confuse the question, Martin. We copied it from QUIC, dude. Yeah, whatever.

Martin Duke: If you don't like a SHOULD, you can file an issue against it. Alan loves it when people file new issues. Okay. Do you have the signal you need, Alan?

Alan Frindell: I do, except on the seven bytes. But I think what I'm going to do is start a thread on the list, and I'll ask the chairs to follow up.

Martin Duke: Well, we have two minutes. Do we want to... there's no way it's resolving in two minutes, but we can do a show of hands in two minutes.

Alan Frindell: Sure. Stop the timer so we can read the... Oh, I actually typed that correctly for once. Okay, so this question is about the fact that, currently in the draft, seven-byte varints are disallowed. Yes, Mo.

Mo Zanaty: Clarification: is there any side effect of this, like pushing a 64-bit value out to 10 bytes or anything? No, there's no side effect. The seven-byte code point is currently disallowed for extensibility purposes, but it doesn't impact any of the other encodings and won't extend anything.

(Voting occurring)

Martin Duke: Okay, does anyone need more time? Okay, there we go; I'm going to close it. 15 people in favor of yes, 5 for no. We have one minute remaining. Is there anyone who cannot live with either outcome here?

Alan Frindell: Okay. No one's speaking up.

Martin Duke: That one's a little closer; you might want to take it to the list, Alan, but I think that's a useful signal that can inform things.

Alan Frindell: I will put a thread on the list if people want to add additional written comments.

Martin Duke: All right, excellent. Okay, thank you very much Alan. I didn't think we were going to... hey.

Alan Frindell: Uh, I've got one more slide; it will terminate in less than 30 seconds, because it's a 30-second editorial bikeshed. Do people want it to say VI64 or V64? I already picked one; somebody opened a PR. What did you pick? It's currently VI64, for variable integer. Can anyone not live with VI64? Say so now.

Martin Duke: Okay, no one said anything, so go with it. All right, thanks Alan. I didn't think we'd faceplant on varints, but there you go. Our next interim meeting is March 30th at 16:30 UTC. We didn't have time today for a demo of server-side ABR from the folks at Disney; tentatively it is planned for the 30th, so those of you who might be interested, I encourage you to attend on the 30th. I'm going to put out a call on the list for other agenda items that day. And I will see many of you at the virtual interim, many others of you at our hybrid interim in London in June, as I announced in the first session, and the rest of you I'll see in Vienna. Thank you.

Martin Duke: Martin, it originally was just "I," but then somebody said, "Well, if you ever wanted to show a composite of, like, MoQT over HTTP over QUIC, then you need to distinguish that QUIC is raw."

(End of Session)