**Session Date/Time:** 27 Apr 2026 16:30 This is a verbatim transcript of the IETF Media over QUIC (moq) Working Group interim meeting held on April 27, 2026. **Martin Duke:** Magnus, I think it's... I can't hear you. **Magnus Westerlund:** Can't hear you. **Martin Duke:** You can't hear me? Really? Uh... **Mike English:** Magnus, your audio, like, is clipping pretty bad. **Martin Duke:** And Martin's a little quiet. I suggest neither of you talk. **Magnus Westerlund:** Still not. **Martin Duke:** So you can hear me, but not loudly? **Mike English:** Yeah, you're just a little quiet. **Magnus Westerlund:** The voice activity indicates... so maybe I can't hear you. Ah, good. It's probably me. **Martin Duke:** Okay. I guess I'll just speak loudly. It's not much I can do from here about audio. Am I at least audible, Alan? **Alan Frindell:** Yeah, I- I can hear you. It's okay. I don't know, it sounds like you're talking through a piece of fabric or something. I don't know. **Martin Duke:** Yeah, it's a crummy mic. You know what? I got different headphones. Let me try that. **Magnus Westerlund:** Now I can hear you! And I could hear Alan too, so... **Martin Duke:** Yeah. How's that? Is that better? **Mike English:** I do think that's better. **Martin Duke:** Is that better? Yeah, much better, says Mike. Okay, great. Let's go with that. It's... we're already late now, so I'll go ahead and get started. Welcome everyone to our virtual interim. As always, this meeting is being recorded. This is the IETF Note Well. I don't see anyone who's new, but uh, if you are unfamiliar with the intellectual property implications of you being here, or the code of conduct expected of IETF participants, please... scan that QR code or point your search engine at IETF Note Well and read and understand those subjects. As always, we are using the Meetecho client. You've already figured out how to get into the Meetecho client because you're here. 
Uh, you can use the raise hand button at the bottom of your screen to join the mic queue if we're having a discussion that needs queuing, and if we have any shows of hands today. In general, you should have your audio and video off. Be aware there is about a one-second latency when you choose to speak, so uh, give yourself a little extra time before you talk when you turn on your audio. And as you'll note that I'm wearing headphones, a headset is highly recommended due to the echo cancellation properties of this tool, or lack thereof. Here's some upcoming dates. We are in the April 27th interim, of course. We have two more virtual interims prior to our in-person meeting in London. And the only outstanding consensus call that I'm aware of is the uh, uh, the one on the rewind draft. We have a question out on what to do with that work, and the deadline for comment on that is May 1st. If you have questions about rewind and what this question is beyond just what the email says, you can look at the video or the minutes from our last virtual interim on the 13th, and that was entirely dedicated to that subject. Here's today's agenda. Would anyone like to bash it? We're just going to hand all the time to the editors, specifically Alan, unless there are people who want to raise some other question. All right. Uh, since there are no objections, Alan, the time is yours. **Alan Frindell:** Uh, okay. Should I request... did you upload the most recent copy of the slides, or should I go from somewhere else? **Martin Duke:** Well, let's see here. Revision one, I think that's the right one? **Alan Frindell:** The one that came this morning, which has Victor's stuff. **Martin Duke:** I don't understand this system, frankly, but uh, you've got something. Let me give you slide control. **Alan Frindell:** We'll find out. When we get to the Victor stuff, if there's no content, we'll um, move on. Okay. 
Um, the first topic in here [MOQT PRs and Issues](https://datatracker.ietf.org/meeting/interim-2026-moq-14/materials/slides-interim-2026-moq-14-sessa-moqt-prs-and-issues-01) is new. Mo is here, good, because he had some chat about it on the list. This was a recent proposal by Ian, who's... oh, Ian's here too, good. It's designed to solve a problem, which is that right now, when you get a subgroup, you don't always know if you have the beginning of it or not. And that can be a real problem: if you wanted to implement something like rewind, or anything in that dimension, even as an extension, you need to know if you have the beginning of the subgroup or not. So that is one problem it is trying to solve. The other one is we still have this outstanding issue about rationalizing the scheduling between datagrams and subgroups, which we talked about in Boulder, I think, and we didn't really get to the end of it. So this proposal is essentially removing the degree of freedom in the protocol that lets applications choose their own subgroup IDs. I can pick one that's my birthday, and I can pick one that's max int, and they're meaningful in that they currently interact in the scheduling algorithm in tiebreaker scenarios. Instead of doing that, it says that when the original publisher makes a subgroup, the subgroup ID is the ID of the first object in the subgroup. And that way any receiver knows: if they see subgroup one, and the first object that they receive in that subgroup is 11, they know that this did not start at the beginning of the subgroup. But if it's subgroup ID one and the first object ID is one, then you're like, "Oh, I do have the beginning of the subgroup."
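A minimal sketch of the inference rule being described here (function and parameter names are invented for illustration; this is not the MOQT wire format):

```python
# Hypothetical illustration of the rule above: the original publisher sets
# the subgroup ID to the object ID of the first object it puts in that
# subgroup, so a receiver can tell whether a stream it receives starts at
# the true beginning of the subgroup.

def starts_at_beginning(subgroup_id: int, first_received_object_id: int) -> bool:
    """A receiver holds the start of a subgroup exactly when the first
    object ID it sees equals the subgroup ID."""
    return first_received_object_id == subgroup_id

# The two cases from the discussion:
assert starts_at_beginning(1, 11) is False  # joined mid-subgroup
assert starts_at_beginning(1, 1) is True    # true beginning
```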
Um, but that is a potential API change, because subgroup ID's been there for, I don't know, a year and a half, does that sound right? Anyway, there may be applications out there that have planned great uses of this thing, and so maybe yanking it is not cool. So I'll open it up to questions. Ian, did I cover all of your thoughts on this? **Ian Swett:** Uh, yeah, I think you covered the major points. This particular proposal was motivated by a comment that I think Victor made on some PR, about like, "Oh yeah, I don't know how you really know what the beginning of the subgroup is," and, "Well, that seems annoying." But there's also this existing weirdness where datagrams and subgroups are awfully similar except they're subtly different, and this makes them at least a little more similar: the first object in a datagram is the one object in it, and the first object in a subgroup follows the same idea. And so it creates a clear and strict ordering, which is nice. Our object model has an awful lot of degrees of flexibility, and I think there's a reasonable justification for most of it. For example, today there's no guarantee that you start at object zero, but I think there are reasons why you might not want to do that, so I don't think we want to remove that. But this is a way to constrain things to make caching and other things simpler, hopefully. **Alan Frindell:** Yeah. Okay. Queue. Mo. **Mo Zanaty:** Um, yeah, I agree with the problem. I think the problem doesn't even need to be motivated by anything. In every single transport protocol that deals with parts, you always need signals for beginning and end. You need start and end indicators all the time, everywhere, every transport protocol with parts.
So, we're clearly missing that with subgroups. You can't identify when it starts. Uh, we... we can identify when it ends because FIN versus RESET, that tells you whether or not the end is clear or... or not. But QUIC has no way to indicate that the start of a stream is a continuation of another stream or is a brand new semantic stream. Um, so we can't rely on a QUIC indicator, and I think we should rely on a subgroup header type, a new value that indicates that this is the beginning of a subgroup. And basically you just want to know: is this stream clipped at the beginning and/or clipped at the end? And if we had signals for those two, that would give us all that we need. Um, I don't... I don't think removing the semantic meaning of subgroup IDs from the app is a good idea because at least in my mind, the reason we added subgroups was to handle things like layers in video. And in a perfect world, when there's no transport irregularities, they would be one to one. You would have a subgroup is a stream and you're done. But if there's transport irregularities, all of a sudden you have many streams and you still have the same subgroup ID. Um, if we... if we eliminate that and say no, every single stream is a subgroup ID, then you've removed the application semantic because now the transport irregularities destroy the application semantic of what the subgroup ID means. **Alan Frindell:** Let... let me jump in and make sure that we didn't misunderstand. So the subgroup ID is the first object that was ever in the stream at the original publisher. So the subgroup IDs are still persistent in case there's a... like if a stream gets broken and restarted, the subgroup ID does not change. **Mo Zanaty:** So you're talking about only the original publisher would do this? **Alan Frindell:** Well, the original publisher is the one who selects the subgroup ID anyway, right? Now we're just constraining the way in which they select it. 
Today they can pick any number between zero and max int. The proposal is that the original publisher has to pick, for the subgroup ID, the object ID of the first object they want to put in that subgroup. **Mo Zanaty:** Okay, but then what about a transport irregularity on the next hop? I don't see how you're giving any guarantee to the receiver if a transport irregularity on a middle hop forces a new stream. The new stream shares the same subgroup ID, right? **Alan Frindell:** It has to. **Mo Zanaty:** Yes. So how do you know as a receiver whether a received stream starts with the right subgroup object ID or is a continuation of a previous subgroup? You don't really know the first object ID in the subgroup. **Alan Frindell:** I think the answer is: if you look at this PR, nothing is changing in terms of how intermediaries know what subgroup ID they should use. This is only a constraint on the original publisher, that when it picks the subgroup ID it has to pick the object ID of the first object, and any receiver knows by looking at what they received whether it was the beginning or not, because either the first object ID matched the subgroup ID or it didn't. **Mo Zanaty:** So we have a redundant subgroup ID and object ID equal to the same value? **Alan Frindell:** But you can already compress them out when they're the same. So it doesn't have to appear on the wire. **Ian Swett:** Yes. But it would appear on the wire in cases where it's a continuation. **Alan Frindell:** In many ways this actually makes things more stable, Mo, because as things get split up or put back together, it still has the same representation, as you just mentioned. **Ian Swett:** Okay. Are we clear on what the proposal is? Because it seems like there's a lot of confusion, and we should make sure that we have clarity.
**Mo Zanaty:** Well, let me clarify: when we originally talked about PEEPs and made them into subgroups, the core driving use case was video scalability, and in that case, subgroup zero is typically the base layer; subgroup one, the next one over, would be the highest temporal layer, not the next temporal layer; and then subgroup two would be the next layer. So if you look at dyadic temporal scalability, you go, you know, 30, 60, 120: the numbering of the subgroups would not be the object ID numbering. The subgroup ID order would not be the same as the object ID order. **Alan Frindell:** We're only talking about the first object. **Mo Zanaty:** Yes. **Magnus Westerlund:** If I may butt in here a little bit. I think the constraint here is basically that your subgroup hierarchy of identifiers and your transmission order do not align, so if you want a consistent meaning of the subgroup ID, you cannot ensure in all cases that your transmission order is such that you can maintain this. That's part of the problem, at least. Because you're saying that you need to use object IDs in a certain way that aligns with the subgroup IDs you intend to have. **Alan Frindell:** The constraint, if you go read the PR, is on the original publisher. Today it can pick the subgroup ID to be whatever it wants, and the proposed constraint is that when the original publisher opens a subgroup, it does not assign a subgroup ID explicitly; it's just the first object's ID, or if it does assign one, it just has to match. That's the only constraint. Transmission order does not matter, right? **Magnus Westerlund:** I don't know. Can anyone... I can hear myself.
**Mo Zanaty:** I mean, I... I think I have a problem with this if we get too many in the queue. **Alan Frindell:** Okay. There is a queue, so if people understand what the proposal is, we'll process the queue. Okay. Victor, then Cullen, then Magnus. **Victor Vasiliev:** Uh, the idea makes sense. I'm not sure this is correct, though. One, because subgroup IDs have application meanings, and two, subgroup IDs feed into priorities. So that would change that. Like, I do think we should solve this, but I am not sure that this solves it, and it also solves a completely different problem which I'm not sure is even a problem we should be solving. **Alan Frindell:** Wait, what problem are you talking about, Victor? **Victor Vasiliev:** I'm saying there is a problem where we don't know where a subgroup starts, and we should solve that. And then there is a problem where people don't like subgroup IDs, and I kind of understand why, but I also don't think subgroup IDs are a problem. **Alan Frindell:** Okay. I think you and Mo both said the same thing, which is that the first problem, the one we agree is a real problem, should have a solution. And I agree with Mo that you can also solve it by using another bit in the subgroup header to say, "This is the beginning." That would solve most of the problem; it wouldn't fix the prioritization problem, but maybe we don't care. **Victor Vasiliev:** Oh, I don't think we should use a bit. I think we should by default assume that you start at the correct point, and if you don't, you put in, like, object extension 80. That's how I would solve it, because most of the time you do start there. But that also... we can bikeshed that later. Yes. **Alan Frindell:** Okay. Cullen?
**Cullen Jennings:** Um, so ignoring how we spell this, I 100% agree with the idea that we need that: for the same reasons we need to know the end of a group, the end of a subgroup, and the end of a track, we pretty much need to know whether an object is the start of a track, the start of a group, and the start of a subgroup. All three, not just one. And part of where this falls apart is that at the time you send the first object in the stream, if you don't know whether you are ever going to create an earlier object in the subgroup, you can't set this with this algorithm, right? This is why we should just completely separate these two things. What we're trying to do here is combine two things that have nothing to do with each other but can be independently chosen, and just say, "Choose them to be the same thing." And we're going to get ourselves in trouble with that because of this temporal ordering issue. So I definitely don't think we should do that. I think we should go the other direction. I think you'll have that case come up in layered codec cases at some point. **Alan Frindell:** Do you have a case where the original publisher within a subgroup is going to go backwards in object ID? Because that is not, I think, something we intend anyone to do with our object model. **Cullen Jennings:** The order that things come out of layered video codecs is quite surprising. **Alan Frindell:** Okay. If somebody actually wants to do that, they already have to reset the other stream, and, I don't know, it's bad. So I would not recommend it. **Cullen Jennings:** No, no, I mean, you can get it if it's based on something else. So I just think that we should keep these things separate. And the other thing is, it came up for subgroups here, but I think...
I think you could make the same argument for the issue around track and group. So. **Alan Frindell:** Interesting. Okay. **Cullen Jennings:** You want to know whether this object is the start of the group or not. All right. **Alan Frindell:** Okay. All right, that makes the problem bigger than we think. Okay, thank you. Magnus. **Magnus Westerlund:** Yes, continuing as an individual. I also think yes, it's great to fix this, but I think you're unnecessarily constraining things. It is much easier to put in a marker here in some form that says either this is the beginning or it is not, however that looks. And that avoids the whole question: is there a problem with object ID order versus encoding order and transmission order, etc., across these different subgroups? Because scalable codecs behave strangely, and you may be able to do some tricks, and you may be able to do things differently between one group and the next group, depending on how you change the configuration of the codec within your constraints, but you still want the meaning of one particular group ID. I understand that yes, you have some freedoms about group IDs etc. here, but I think it would be easier to just say: if I actually want my object IDs to grow with transmission order, for example following the decoding order of the codec, which would be very natural, you want to be able to do that and not have to go, "Oh, then I need to create a hole or do some other trick here." So. **Alan Frindell:** Uh, okay. Um, with respect to Suhas and Ian, I'm just going to say I'm not hearing support. I know Ian is in favor of this, I'm suspecting that Suhas is not, and everyone else who said anything has said don't do this.
So, I think we need to take this back to the drawing board and move on, unless, Suhas or Ian, you want to spend more time on it. **Suhas Nandakumar:** Uh, no, no, I'm on the same page: we need to have an explicit marker. And one question I had: if these kinds of markers are important for relays to understand, then we should go with the marker. If the relay does not even care, then it should be an end-to-end header. **Alan Frindell:** The relay does need to know. **Suhas Nandakumar:** Yeah, then it should go with the marker. Okay, makes sense. Ian. **Ian Swett:** Um, no, I think that's what I heard as well. I'm a little bit concerned that a number of people believe there are things you could do today that you can't actually do today. **Alan Frindell:** I am also concerned about that; it's super concerning to me. I hear lots of people talk about SVC, and I wonder if anybody has sat down and actually written it, because we might find more issues that we really need to solve. **Ian Swett:** I don't actually think there are more issues. I think there's just a miscommunication problem, to be completely honest. But that's fine. We can revisit this and consider a wider set of options. The goal here really was to not only fix the problem: the prioritization system right now has quite a number of numbers in it, and at some point we said, "We only want to give people a byte for prioritization," because if you have too many numbers people just abuse it, and now we've given people like 140 bits of prioritization or something. But anyway, I digress. Okay. Let's pass. **Alan Frindell:** Mo, is it quick? **Mo Zanaty:** Yeah, can you just clarify, Ian, if there was any other motivation beyond identifying the start? You mentioned the priorities, but...
let's ignore the priority difference between datagrams and subgroups for now. Is there any other motivating factor? I'm trying to make sure that we're not throwing away something that you care about. **Ian Swett:** I mean, it makes datagrams and subgroups in general more similar. So if you have a single-object subgroup and a single-object datagram, this makes them much more similar, both on the wire and conceptually. It also saves a byte or something, but, you know. **Alan Frindell:** The byte's already saveable. But anyway, let's move on; we've already been here 20 minutes, so let's go, because this next one will take much more time. Okay. So, Draft 18 is coming out soon, which means all the Draft 17 decisions are about to have to get implemented by a wide group of people, and some folks have looked at [required request ID](https://datatracker.ietf.org/meeting/interim-2026-moq-14/materials/slides-interim-2026-moq-14-sessa-required-request-id-00) and identified problems. So, for a recap of the way we spelled it: in 17 we split the single control stream into individual bidirectional streams per request, but people had some angst about losing control over ordering. So we introduced this required request ID, and the idea was that you can use it to reassemble the order of the original control stream, and that would at least give you as much control as you used to have. Now, there are some caveats where that doesn't work anymore. For example, unsubscribe is now a QUIC-level message, and so it cannot be ordered with respect to anything. So maybe it didn't accomplish what it wanted. The bigger problem is that the way required request ID is currently spelled, there's unbounded state required on the receiver in order to fulfill it correctly. So I think the first question is: we know we need this for joining fetch, right?
We know joining fetch has an explicit dependency: you'd better not process the fetch before you process the subscribe that it belongs to, or it's going to fail every time. So we need at least a mechanism for that. The second question is: do we have other use cases where we really want this? And I think the answer is yes. Like request updates between two different streams: I want to pause one stream and resume another, and I want that to happen in a particular order. So I think we still want some non-joining-fetch ordering. But recognize that the current 17 and ongoing editors'-copy design is that you cannot order an unsubscribe with anything except the subscribe that it belongs to. So think about that. I'll lay out some options and we can have a discussion. One option is we could remove required request ID completely from the spec and use a totally different solution for joining fetch. Martin has actually done some work on, "What would it look like if we put the joining fetch message on the subscribe stream?", which makes a lot of sense, and a lot of people have, I think, thought for a while that made sense, although that exploration turned up a bunch of other problems that felt uncomfortable to solve: how do you handle independent parameters, can the subscribe and the fetch have different auth, can they have different group order, can they have different priority? And it sort of fell apart there. There were questions about fate sharing: can the fetch fail but the subscribe continue? And if you're trying to update one of them or the other using request update, there became an ambiguity. So we looked at it, but maybe that's a possible solution.
Um, also I want to highlight that the latest switch proposal actually does something completely different with fetch, which is that the publisher can initiate the fetch in response to the switch. So you could still have fetch get its own BIDI stream, but the fetch request somehow goes on, say, the subscribe stream, and then the other half of the bidirectional stream gets opened separately. But removing request ID completely is a non-starter if we still think we need cross-stream request update ordering. Um, Martin, let me just look at my next slide... or do you have a... and then we can... **Martin Duke:** Oh, I just wanted to make a clarifying point about 1604. You're obviously right about capturing some of these trade-offs and thorny things about it. I would say that most things are solvable with spelling. The one thing that really is not: I don't think there's a way you can kill the subscribe and not kill the joining fetch simultaneously, you know, with the fate sharing. Everything else is basically resolvable. That is the one that would be very hard to spell. **Alan Frindell:** Okay. Yeah. Um, so this is an alternate proposal which keeps the required request ID design but solves the unbounded state problem. The reason you have unbounded state is that normally the number of bidirectional streams at the QUIC layer will bound your state; the issue is that a request update consumes a request ID but does not consume a bidirectional stream. So there's no limit to the number that the requester can send to you, and that can cause all kinds of problems. So there's a proposal in this PR for a setup option that limits the number of request updates per stream that can be unacknowledged or outstanding at a time.
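The arithmetic behind this limit can be sketched as follows (a rough illustration; the parameter names are invented, not actual MOQT setup parameters):

```python
# Sketch of the receiver-state bound enabled by the proposed setup option.
# Each bidirectional request stream carries one initial request plus at
# most `max_updates_per_stream` unacknowledged REQUEST_UPDATEs, so a
# receiver implementing required-request-ID ordering never needs to hold
# more than this many outstanding requests.

def request_state_bound(max_concurrent_streams: int,
                        max_updates_per_stream: int) -> int:
    return max_concurrent_streams * (1 + max_updates_per_stream)

# e.g. 100 request streams allowed, 1 outstanding update per stream:
assert request_state_bound(100, 1) == 200
```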
And if we had this, then the receiver can bound the receive state they need to implement required request ID to the maximum number of concurrent streams they're willing to allow, times one plus the maximum number of concurrent request updates they'll allow. The nice thing about this is that it adds some relatively simple backpressure against request update floods, which is something we lost when we went to BIDI streams. So this brings it back. The math is a little bit subtle here; Martin and I had to convince ourselves that it actually works. Um, Ian. **Ian Swett:** I'm confused; how is this relevant to the other things we're discussing? **Alan Frindell:** Okay, if we go back to the problem we're trying to solve: with required request ID, the receiver state is unbounded. This is a solution to that problem. So if we take this, we can leave required request ID as it is and it's probably fine. Okay, so that solves that portion of the problem. If we wanted to remove required request ID entirely, then we need to solve joining fetch, which goes down a different path, and we lose some other features, which I see in the chat people saying they really want. So I don't know that that's an option. Let me just look at... there's one more thing, and then we can... I don't know, maybe we should have taken more questions, but... So Victor had a counterproposal that he filed as an issue a few weeks ago, which is that we replace required request ID entirely with a more generic system: a message you can put on a stream that says "wait for X," where I assume X is some arbitrary integer, and you can send that on lots of different streams, and then you can send an unblock message on any stream that will sort of wake up all the streams that were stuck on the wait-for. I don't know, Victor, if you want to say more about this particular proposal.
**Victor Vasiliev:** Yeah, this actually was an alternative proposal on the original multi-stream PR, and we did not want to wait on choosing between the two before merging. But basically, required request ID right now has a lot of interesting implications. For one, it is not immediately clear whether it blocks on the request being received or processed. And I think we agreed that it's received and not processed, right? **Alan Frindell:** The goal was that... technically, the draft 0 through 16 control stream only had receipt in its constraint. If I sent you a subscribe and then an unsubscribe, the only thing that stream guaranteed is that the one set of bytes got received before the other, but nothing told me that I couldn't put those things in a queue and run them out of order. So anyway, yes, required request ID was only trying to recreate the same thing that the QUIC stream gave you, which was receipt order. **Victor Vasiliev:** Yes. So this is kind of an attempt to move it from processing of individual messages to just a thing you can put on any stream and have your parser virtually order things for you. Now, due to various concerns it is not reliable, but required request ID cannot be reliable by definition, because, okay: what if you issue a subscribe, then you issue a joining fetch, and then you reset your subscribe before the joining fetch, but you don't reset your joining fetch? You don't want the joining fetch to hang indefinitely; you do want it to hang for some time, but eventually it has to time out and say, "You never sent me a subscribe, why did you send me this fetch?" And this is just a very abstract version of that that can be implemented, and what I like about it is I don't have to implement it for every individual message. **Alan Frindell:** Okay.
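A toy, single-threaded sketch of the wait-for/unblock idea Victor describes (the class, method names, and API are invented for illustration; in real MOQT these would be control messages carried on streams):

```python
from collections import defaultdict

class WaitForRegistry:
    """Parsers park themselves on an integer label ("wait for X"); an
    unblock for that label, arriving on any stream, wakes all of them.
    Deadlines keep a parser from hanging forever if the unblock never
    arrives (e.g. the subscribe it waited on was reset)."""

    def __init__(self):
        self.unblocked = set()            # labels already released
        self.waiters = defaultdict(list)  # label -> [(resume, deadline)]

    def wait_for(self, label, resume, deadline):
        """Park a stream's parser on `label` until it is unblocked."""
        if label in self.unblocked:
            resume()                      # already released: continue at once
        else:
            self.waiters[label].append((resume, deadline))

    def unblock(self, label):
        """Release `label`: wake every parser parked on it."""
        self.unblocked.add(label)
        for resume, _deadline in self.waiters.pop(label, []):
            resume()

    def expire(self, now):
        """Drop waiters whose deadline has passed, returning how many timed
        out -- e.g. a joining fetch whose subscribe was reset, so its
        unblock will never arrive."""
        expired = 0
        for label in list(self.waiters):
            kept = [(r, d) for (r, d) in self.waiters[label] if d > now]
            expired += len(self.waiters[label]) - len(kept)
            if kept:
                self.waiters[label] = kept
            else:
                del self.waiters[label]
        return expired
```

One design point this makes visible: because the unblock can arrive before or after the wait-for, the registry remembers released labels, matching Victor's point that the mechanism is generic and does not need per-message logic.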
Um, are there any clarifying... I mean, I think we can just open it up: feel free to weigh in on whether you like required request ID, and whether you want ordering besides joining fetch. I don't think we have to choose between Victor's proposal and limiting request updates; we could theoretically do both. All right. Um, Martin, is it a chair thing? Because Mo's ahead of you in the queue. **Martin Duke:** No. **Mo Zanaty:** Yeah, so on both issues, I'd like to clarify what the real use cases are, because of what I've seen in the chat. On the first one, about having ordering: like Victor said, we're only talking about ordering of the issuance of the requests. All the comments are really about processing and success of the request, which means to me the application actually has to wait for the okay or the response before issuing the next command; it can't just fire them back to back on the control stream and expect that to work. So I don't think those use cases in the chat are solved by required request ID at all. In which case, what is the real use case for required request ID? When does it actually make sense? And on the second one, why do we need multiple request updates on the same subscription? That seems like just begging for glare or problems in the processing of them. What's wrong with an implicit limit of only one outstanding, where the app has to wait for that to succeed before it can issue a new one? **Alan Frindell:** Uh, I could probably live with only one at a time. I could imagine some application where subscriber priority changes more than once per RTT. That's probably the only one.
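The implicit one-outstanding limit Mo suggests could look roughly like this (a sketch under that assumption; the class and method names are invented):

```python
# Hypothetical gate enforcing "one REQUEST_UPDATE in flight per
# subscription": the sender must see the response to the previous update
# before issuing the next, bounding receiver state to one pending update
# per subscription.

class UpdateGate:
    def __init__(self):
        self.outstanding = False

    def try_send_update(self) -> bool:
        """Returns True if the update may be sent now; False means the
        caller must wait for the previous update's response."""
        if self.outstanding:
            return False
        self.outstanding = True
        return True

    def on_response(self):
        """Called when the OK/error for the outstanding update arrives."""
        self.outstanding = False
```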
But I do think, I mean, I know that if not in this group, then when we go to get reviews from outside of this group, someone is going to come and tell us that we need to solve that problem, because of what's happened in HTTP: there's so much research, and so many people are looking for control message floods, and this is absolutely a wide-open problem that MOQT has. So I kind of think that limiting it, either to one, explicitly or implicitly, or allowing the receiver to bound it in some way, is absolutely necessary. **Martin Duke:** Yeah, I got in the queue to say that I don't think all of these signals actually solve the main problem which people are concerned about, which is make-before-break, because unsubscribe has no okay, it has no request ID, there are no bytes on the stream that carry this, you know, whatever Victor's integer X signal is. The stream just goes away. There's no application-level acknowledgment of that, so unless you can get the ack of the QUIC packet that has the RESET_STREAM in it, um, you know, you have nothing. So. **Alan Frindell:** You do have when the other side of the BIDI stream closes. **Martin Duke:** But there could be additional data that could be related to the... It's not perfect, but it's something. **Alan Frindell:** If I send a RESET_STREAM... **Martin Duke:** It only resets your half. The peer has to either FIN or RESET their half also. **Alan Frindell:** Okay. Yeah, I mean, to be sure, I'm not even sure the QUIC API has that distinction. Um, I'm sure you'll get the FIN. Well, you wouldn't get a FIN, you would get a RESET. But okay, so the point is, I think we have a bigger problem here. 
Um, I'm skeptical even that Google's QUIC implementation has this half-duplex reset mechanism, which, you know, is my problem, not your problem. But, you know, we've already been bitten by this thing: we decided to get rid of strict control stream ordering, and it turns out there were things we liked about control stream ordering, and now we're trying to glue it back together with this hacky thing that doesn't really work. Um, I think the solution, I mean, this would be a fair amount of surgery on the protocol, but I would prefer that, to the extent that things are dependent on each other, we actually put them in the same stream if we can do that. And if there are cases where we can't do that, we can build atomic operations that have these sorts of properties. Switch is kind of something in this space, where you're coming off one stream and going to a different stream and it's a single operation. And maybe we can do more of those, and then we have all the dependency properties without hacky things that don't work, because we don't really have good signals for stream resets at the application layer. **Alan Frindell:** Okay. So to summarize, you're saying: remove required request ID, joining fetch for sure moves onto a single stream, and, to the extent possible, where you need other cross-stream coordination, we build atomic things that allow for that? Am I summarizing your point? **Martin Duke:** I think that's the only way to cover the logical problems we've created, other than just reverting the whole thing and going back to a single control stream. Um. 
**Alan Frindell:** The other way to think about it is that we were hiding problems behind the control stream that we didn't know we had, because the control stream only enforces receipt ordering and does not enforce message or processing order. And I think Mo's answer is: if you really wanted these things to be one before the other, you need to send them in two different RTTs, only send the second thing after you know the first one is done. Um, or make atomic messages, as you suggest. So. **Martin Duke:** Yeah. **Alan Frindell:** Okay. Suhas. **Suhas Nandakumar:** Uh, I think we should keep the required request ID. Um, I've not seen any implementation to date that would receive the messages off the stream and decide to process them out of order. Yes, an implementation can do that. Um, but given the very fact that you're sending things one behind the other and they get processed in the same way, I really need to see an implementation that says, "Okay, I got subscribe and joining fetch first, but you know what, I swapped the order and processed the fifth message first and the first message later." I've not seen any implementation. So that argument does not seem strong enough to me to say that we have to remove something. The one thing that required request ID guaranteed is that if you process things in order, you get this guarantee; again, an implementation can always do whatever it wants to do. Um, having said that, we keep going back to this about merging the live and history semantics in one message, and every time we do that we open up a can of worms and we decide, "Oh, the nature of the data from the past is so different than the current live, we will not have an easy merger in a single API." 
And hence we thought over this for a year and decided why fetch has different semantics versus joining fetch versus subscribe. **Alan Frindell:** Let me just jump in fast and explain: Martin's PR does not change the semantics of the object delivery for fetch and subscribe, it only puts the controls into a single stream for ordering purposes. So I don't think it's as extreme as what you think it is, but I think your message is well received. **Suhas Nandakumar:** Yeah, then we have a mechanism to do that today, which is required request ID. Why are we trying to define new mechanisms to do the same thing? I'm not sure that's the way we should go. **Alan Frindell:** Right. The motivation was, I think Martin came from the concern that the way required request ID is spelled has this huge unbounded state problem, and so it's untenable. Um, did I lose audio? Suhas, you there? No, I can hear you. Okay, maybe I can't hear Suhas. No one can hear Suhas. Okay. Suhas, you might have to leave and join back. Ian. **Ian Swett:** All right. Um, no matter what comes out of this discussion, we should at least decide what this feature means today, because I don't think we have actually agreed upon that. I know what the draft says, but, um, there are really very real cases where you would process things in parallel. For example, I receive four subscribes on the control stream in the old world. I'm not going to wait until I get the largest object from the first one before I process the next one. That would be insane for latency. Um, and so, similarly, I think you are just inherently going to want to do some amount of parallel processing. 
Now, I agree that it might be on a case-by-case basis, and obviously subscribe update and joining fetch, you know, the thing I hate, differ: subscribe update's reasonable and joining fetch is annoying. Um, but, you know, I think it's not the typical case, but I really think the current feature is both problematic and doesn't actually accomplish the thing that people think it accomplishes. Um, which is kind of another way of saying Alan's previous point, which is: all the single control stream ever did was guarantee that you got them in the same order. It didn't say anything about processing. Um, and so this really is a latent problem that just is there. Um, so I don't know, I'm not saying I know what the right solution is, but I think the current thing is unworkable, or it doesn't accomplish the things people want it to accomplish. So we should do something. **Alan Frindell:** Colin. **Colin Jennings:** Uh, okay, can you hear me now? Yeah, perfect. Okay. So initially, when we moved to this multi-stream, I was thinking we were going to have to have correlators between them, this request ID, but the more I think about this, and I agree with everything Ian said about how this problem's existed for a long time and we've just been able to ignore it, I like Martin's idea: why don't we allow multiple requests on one stream, where those requests need to happen in lockstep? And when you have things across multiple streams, then they can happen in whatever order they happen. I think that would actually give us the flexibility we need and not be too difficult to implement. I mean, any of this stuff's going to be hard to implement, but not too difficult. So I'm sort of leaning in that direction. 
I mean, I'd be very interested in seeing a PR that sort of sketched that out. And it actually takes us back a little bit closer to what we were before, so it's sort of aligned with some of the implementations too; it won't be too far off. **Alan Frindell:** So I think where that might break down, at least evolving from Draft 17, is that we use signals on the stream. What you're saying is I might put two subscribes in one stream, which today you can't do. And that would have some semantic meaning, because what would happen if you reset that stream? Does it unsubscribe from both? **Colin Jennings:** Yeah, well, I'm thinking even more, you might put a subscribe and an unsubscribe in one, and maybe that just doesn't work, right? Like maybe unsubscribe is not a message anymore... yeah, well, I mean, again, constantly trying to use transport-level signals to indicate application-layer signals means you can't time those application-layer signals. So, um, if you need to time unsubscribe, I guess you'll just update it to pause it and then let it clean up later. It's just going to create garbage, is what you'll have to do. Like, if you care about the timing of stopping a stream coming to you, you'll have to pause it instead of unsubscribing, and then later unsubscribe. **Alan Frindell:** Then you can FIN it, okay, that's one way to do it potentially. Or, yeah, I think it means you won't be able to pipeline certain operations. But I'm not sure exactly. **Colin Jennings:** Okay, anyway, I haven't thought about this enough to have an opinion on it, other than: ordering is important, you don't really love required request ID, you would like some other mechanism that has broader atomicity. 
I mean, look, I think we need to solve the problem somewhere or other, and I was initially going down the required request ID type way, as obviously a way of doing it, but then Martin's comment got me thinking that maybe there's another way to do it. But I guess actually sitting here trying to explain that on the call makes me realize it probably doesn't work. But I do think this could use some thought; I mean, this is going to have so many issues, you know, it's not going to be easy to implement. **Alan Frindell:** All right. Uh, Mo. **Mo Zanaty:** Um, maybe, if the editors know off the top of their head, can you remind me, there was that table in the spec that said which messages go on which streams? I thought it was only SETUP that's on the control stream now, and then everything else just kind of veers off onto its own BIDI? **Alan Frindell:** SETUP and GOAWAY are the only control stream messages in the 17 editors' copy, if I'm remembering correctly. **Mo Zanaty:** So... so we basically have a dead control stream, right? And everything else is on its own BIDI. All the real stuff is on BIDI. Um, and then the real problem that I see, that people are talking about, like this make-before-break stuff and the atomic switch, is not really the race of the control messages. There is still a problem with identifying whether the data plane coming in is associated with the pre- or post-change. And that problem is the real problem, not the guaranteed ordering of the control messages. Um, if you do atomic... **Alan Frindell:** Can you give me an example of a data plane race that you're thinking of? Because I'm having trouble visualizing one. 
**Mo Zanaty:** Well, the whole point of switch is not that, "Oh, I just issued these two commands, and as long as they're executed in sequence, things are good." It's that you want to switch at a data plane boundary, and so that's why they want to introduce something at the relay to do that data plane boundary. Um, but then the objects that you receive coming in, the data streams that you receive coming in, you need to know: are they at the switch boundary or not? Um, and if you're switching between two different tracks, it's easy, because the track alias will tell you that. But if you were doing something different, where you're switching between two different updates of a filter, then it's not clear anymore. Um, so I think the real problem is, when the point of control messages is to control the data plane, what really matters is what's coming in on the object plane. And if you don't know that the objects coming in on this object plane are directly related to this control message, that's where the real problems arise, not the ordering of the control messages. **Alan Frindell:** So we talked about this in Toronto a little bit, about whether the publisher can update its default priority, and we faceplanted and parked that issue because there's no way to know where in the data plane the default priority changed. Um, and I think that's maybe the thing you're talking about. Alias, like you said: if you're switching track A to track B, it's very clear. If you used request update, in your case to change a filter, or, like we talked about before, to change priority, then you have no idea. And I don't think we had a solution for that even when there was a single control stream. 
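Mo's track-alias point can be made concrete with a toy sketch (the function name is hypothetical, and it assumes each incoming object carries its track alias): switching between two tracks is unambiguous because the alias marks each object as pre- or post-switch, whereas a filter or priority update on the same track keeps one alias throughout and leaves nothing equivalent to key on.

```python
def classify_object(object_alias: int, old_alias: int, new_alias: int) -> str:
    """Classify an incoming object relative to a track switch.

    When a switch moves a subscription from old_alias to new_alias,
    every object is self-describing: its alias says which side of the
    switch it belongs to. A filter change on a single track reuses one
    alias, so no such classification is possible in that case.
    """
    if object_alias == old_alias:
        return "pre-switch"
    if object_alias == new_alias:
        return "post-switch"
    return "unknown track"
```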
**Mo Zanaty:** I agree, and I think that's really what the applications need, and here we are talking about the ordering of the control messages when that's not the real problem. Make-before-break is not about "did you get my request in the right order," it's about "did I receive the stuff that I need to be receiving before I lose the stuff that I was happily receiving." **Alan Frindell:** I mean, I think the problem of the data plane not having a marker of when things change in the control plane is a bigger problem that I don't think we should try to solve. I think we just have to live with that, and the subscriber's just gotta know. Otherwise we need to redesign our data plane to include these signals, because there's no other way to do it. Um, the only thing we're talking about here is ordering the control plane, but I think what you're saying is that it doesn't even matter, because the other problem is bigger. **Mo Zanaty:** Yes, I agree, it doesn't matter, because, like everyone has said, this is only about ordering of the request reception, nothing to do with the processing or the acknowledgment or success or failure. And one final issue, on trying to make atomic messages: I disagree with that approach too, because then you have to have atoms for all the different combinations of things that you want to have happen. If the subscribe part fails but the fetch part succeeds, what do you do? If the subscribe part succeeds but the fetch part fails? You'd have to have separate variants for all four of those cases. So you don't want four different atoms to say that; just let the application do things in the order that it needs and make the decision at the time that it needs to. **Alan Frindell:** Okay. Ian. 
**Ian Swett:** Um, I tend to agree with Mo, and I think it would be worth us writing down what use cases we think required request ID, at least in theory, might solve, or people think it might solve. I think make-before-break is certainly on that list, and maybe switch is on that list, and we should actually walk through the details of, "Does it actually do anything useful?" Because I think Mo's right that in the end it's actually not going to solve the problem at all, it's just going to look like it solves the problem, which is even worse. Um, but I could be wrong, and I think I would actually need to write this stuff down to be certain, at least for myself, that it wasn't necessary, or to identify what an alternative solution in the data plane might be. Um. **Alan Frindell:** I mean, the ones I can think of: joining fetch is clear. I think pause one stream, resume another; or I want these updates to be processed in a particular order even though they're coming in different streams; or ordering an update or a pause with a subscribe, or something like that. Those are the cases that I think are there, and I agree that you sort of have to squint or make your eyes fuzzy to believe that it's actually doing something useful; it probably is, but you can't guarantee that it is. Um, but should someone, and I'm not necessarily volunteering myself, make a slide on each of those for, like, two weeks from now, and we can actually run through them individually and talk about them concretely? 
Because I worry right now we're talking about things in a somewhat abstract sense, and at the moment I don't actually know for sure whether any of this is necessary, or none of it's necessary, or if, as Mo said, maybe we need some other mechanism to try to make the data plane and the control plane a little more in sync, which I think we've only briefly thought about. **Alan Frindell:** I think that is, I don't know, something we can at least put on our V2 list if we think it's a pony for now. But I'm not saying we have to do it, I'm just asking: what would be the thing we would want if we wanted to make this perfect? Right now I don't know what the answer is. **Martin Duke:** So, as a chair, I'm smelling a faceplant. Um, I think we are at the point where people recognize a number of different problems in this space, everyone has a different solution, and nobody seems to like anyone else's solutions. I think there are a few things we can do. Uh, this strikes me as maybe a London topic, um, which does not... **Alan Frindell:** My one concern is we're going to cut 18 and people are going to have to implement something. So maybe the important high-order bit to decide, hopefully now and not in two weeks, is: do people want to implement required request ID in 18 and bring their gripes to London, or do we want to swap it out for Victor's thing, and people implement Victor's thing for London and we bring our gripes there? Or we could rip required request ID out, I don't know. **Martin Duke:** Well, I think at a minimum, the minimum thing for the editors' copy needs to be your flow control fix, to avoid 18 having a DoS vector in it, because I think people might actually attempt to deploy 18. 
Um, so I think we could either do your flow control fix, or we can do Victor's thing, or we can just rip out all of this and say that, you know, stuff is asynchronous and you have to sort of deal. Um. **Ian Swett:** We could in theory do both things, like limiting the number of request updates to some finite number, like one or two or something. I'm not saying that we should, I'm just saying that's a special case of this. **Alan Frindell:** Yes. Okay. Okay. Um, I do want to point out, maybe, that Victor's PR... I'm not sure, Victor, how you planned to limit the number of X's that a sender can send. **Victor Vasiliev:** Uh, okay, let me explain. Like I said in chat, I don't think we should flow control those, and if you assume those are reliable, you already lost, because those are not reliable. You cannot make those reliable because, one, you can send a request and then you can reset it. Yes. Two, you can send a request that relies on something that's far in the past, and you don't want to remember all the requests you have ever sent. **Alan Frindell:** Right, but you can bound the size of that map to: I've-seen-everything-lower-than-this, plus bounded state above that number. **Victor Vasiliev:** Yes, but you cannot bound that, because the list can have gaps. **Alan Frindell:** It can't? Anyway, Martin and I worked through it. I don't want to, it'll take too long, but you can limit it as this, um... **Martin Duke:** No, I do not think you can limit it. Yeah, I mean, Victor's point about reset is good. Like, you might never see a request ID, and then what? 
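The bounded-state shape Alan alludes to ("I've seen everything lower than this, plus bounded state above that number") resembles how QUIC tracks acknowledged packet number ranges. A minimal sketch, with invented names; note that the forced floor advance in the last step is exactly the hazard Victor and Martin raise, because IDs inside a dropped gap get treated as received even though they may never have arrived:

```python
class SeenRequestIds:
    """QUIC-ack-style tracker: a floor below which every ID counts as
    seen, plus a bounded set of IDs at or above it. Illustrative only,
    not spec text.
    """

    def __init__(self, max_gapped=64):
        self.floor = 0           # all IDs < floor are treated as seen
        self.above = set()       # IDs >= floor that have been seen
        self.max_gapped = max_gapped

    def add(self, request_id):
        if request_id < self.floor:
            return
        self.above.add(request_id)
        # Advance the floor over any now-contiguous run of IDs.
        while self.floor in self.above:
            self.above.discard(self.floor)
            self.floor += 1
        # Bound state: too many gapped IDs forces the floor forward,
        # silently treating never-received IDs in the gaps as seen.
        if len(self.above) > self.max_gapped:
            self.floor = max(self.above) + 1
            self.above.clear()

    def seen(self, request_id):
        return request_id < self.floor or request_id in self.above
```

Whether that forced advance is acceptable is the crux of the disagreement: it caps memory, but it converts "never received" into "received" for anything caught in a discarded gap.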
**Victor Vasiliev:** Yeah, what I'm saying is, the only way I can imagine this working consistently is: you have required request ID, and then you either receive the request or you time it out; and if you time it out, you just process it, and for joining fetch that means a failure, for everything else it means you don't get the ordering you deserve, but whatever. **Alan Frindell:** I think that's also a fine fallback. I do think I can argue with you later about the state being bounded, because I think it works, because that's how QUIC works, but I don't want to argue with you about it now. **Victor Vasiliev:** Yes, QUIC works because it's complicated; explaining why QUIC works is complicated, and it's interesting. **Alan Frindell:** I agree with you. Okay. Colin. Yeah, go ahead, Colin. Oh sorry, Martin, did you want to be in control? **Martin Duke:** No, no, go ahead, Colin. **Colin Jennings:** Quick comment about DDoS. Uh, look, it does not bother me at all if Draft 18 has a DoS vector. It has many other DoS vectors in it; this is only one of ten thousand. Um, so, you know, if it has a note that this is a problem, and implementers probably want to limit it to some reasonable amount of state and then close the connection, and we're still working on it, that's fine with me. But I don't want to just throw something in there because we feel like we have to have something before we publish 18. We don't need something before we publish 18. Um. **Alan Frindell:** I think Victor's band-aid is fine. **Ian Swett:** The timeout? **Alan Frindell:** I would like this too. **Ian Swett:** So the band-aid is: you can time it out, and if the timeout expires, you process it. **Colin Jennings:** Or reject it. Yeah, we can figure that out. Sure. 
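Victor's band-aid, as restated by Ian ("you can time it out, and if the timeout expires, you process it") with Colin's "or reject it" variant for joining fetch, could look roughly like this. The function and its parameters are hypothetical; the draft defines no such API:

```python
import time


def resolve_gate(required_id, received_ids, deadline,
                 *, is_joining_fetch=False, now=time.monotonic):
    """Decide what to do with a message gated on a required request ID.

    - If the required ID has been received, process the message.
    - Before the deadline, keep waiting.
    - After the deadline: a joining fetch fails (its subscribe never
      arrived), while anything else is processed without the ordering
      guarantee it asked for, per Victor's description.
    """
    if required_id in received_ids:
        return "process"
    if now() < deadline:
        return "wait"
    return "fail" if is_joining_fetch else "process"
```

The `now` parameter is injectable purely so the behavior is easy to exercise; an implementation would arm a timer per gated message instead of polling.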
**Martin Duke:** So the minimum spec that we could have at this point: joining fetch already has an associated subscribe ID with it. Um, we could just rip everything out, and with some performance hit people would not be able to guarantee ordering, and then I guess that means they maybe get more or fewer objects than they want, right? That's the implication of this. It's a performance problem, right? **Alan Frindell:** You're saying we rip out required request ID from 17? **Martin Duke:** Yeah. Uh, now, whatever, if we did that, we would need to at least put one in joining fetch. But joining fetch already has this associated subscribe ID, and if that subscribe doesn't exist, you just toss it. I guess the problem is with the forward restriction. That's the other thing you would need to do, because right now, since joining fetch must be with forward-equals-one subscribes, there's a Schrodinger's cat of: what is the state of forward right now? So you could eliminate that restriction, eliminate required request ID, and then there are very poor synchronization tools, but fundamentally it'll work with just some performance hit. **Alan Frindell:** Okay. All right. So taking off my chair hat, as an implementer, not having these janky mechanisms that we hate is probably better at this stage. Okay. Let's drain the queue, and then can I get a show of hands: leave required request ID in 18, remove required request ID in 18, or remove required request ID in 18 and add Victor's thing. Do people see another option to one of those three? 
**Mo Zanaty:** Um, if I can just put my comment in: I think the conversation is around whether or not these things are needed. Instead of just voting about it, I think it'd be better and more useful if the people that think they need something clearly spell out the use case and why they think they need it, and then we will understand whether or not that's a real use case, or maybe they will understand, "Oh wait, my use case is not actually solved by this thing that I'm asking for." So I think rather than the editors trying to enumerate what they think are the use cases, the people that have the use cases should chime in and say, "These are my use cases, this is what I need ordered," and we should understand whether or not that ordering guarantee actually solves that use case. **Alan Frindell:** Okay. I'm happy to prefix my question with a yes/no: we need to have some wire-level way to coordinate or synchronize messages, or, I have a use case for it. And if nobody has a use case, then it's clear we should rip it out. **Mo Zanaty:** Yes, and the only use case that we... **Suhas Nandakumar:** Sorry, one clarification question: do we think joining fetch does not fall into those use cases, Mo? **Alan Frindell:** Joining fetch needs a solution, there's no question. Like. **Mo Zanaty:** Yes, yes, like I said in chat, joining fetch is probably misspelled as a verb. It should just be a parameter of subscribe: you want to subscribe at this earlier point. It is a parameter of subscribe, and I think it should specifically be a location subscription filter on subscribe. But making it a parameter of subscribe would remove this problem. **Alan Frindell:** Okay, no respelling joining fetch; I call faceplant. 
So joining fetch already has an associated subscribe request ID in it; independently, before we ever came up with required request ID, it was there. It was there when we had a single control stream. Uh, there is just this weird forward thing, which we can deal with. **Martin Duke:** Uh, so Alan, you asked for a tripartite show of hands, but you cannot do a three-way show of hands in this tool. Hold on. Ian's still in the queue, and then the first question is yes or no: do we need anything at all? And that may help us. Go ahead, Ian. **Ian Swett:** Um, no, you should probably proceed with the poll. I was mostly just going to comment as an individual: I would rather not add Victor's thing before we actually know if we need anything, or what we need. Keeping the draft as it is and just saying, "It's optional to implement," would be better than adding a new thing that solves a problem we haven't defined, which seems kind of, I don't know, unfortunate. **Alan Frindell:** So I'm going to say, um, "Do we need a solution for synchronization outside joining fetch in Draft 18?" Agreed? I'm okay with that. Does anybody else want to argue with the spelling of the question? I'd be shocked if the answer was no. Well, it's already out there. So when I say outside joining fetch, I mean going back to Draft 4 or something: whenever we did joining fetch, there's always been an associated subscription ID with it that's completely independent of all these other mechanisms. Um, there is a little bit of a corner case, which, Alan, you're going to fix anyway, this session error business, right, with the... **Martin Duke:** I think it's already landed, or if it hasn't, it will shortly. **Alan Frindell:** Okay. There's this weirdness with forwarding, which is another problem, but I think we can patch that pretty simply to avoid blowing up connections accidentally. 
Hey Martin, I was just reading your question here more carefully. Tell me what you mean by "in Draft 18"? Are you trying to make a decision going forward here, after Draft, like we might... **Martin Duke:** Well, yes. So we have spent an hour on this, and I don't feel we're particularly close to converging on a particular solution to these problems, or even firm agreement that there is a problem. So I think we're going to punt this to London and have a long discussion about it, where everyone can bring their use cases and the whole thing. And so, do we want to put in some sort of mechanism now, or do we want to, if I can editorialize for a minute, have mercy on implementers and not put in something that we aren't very happy about, and then rip it out later when we decide what we want to do? **Colin Jennings:** Okay, I see, I see. You're not saying we don't need to solve this problem, you're just saying we don't need to solve it now. That's basically your question. **Martin Duke:** Uh, I think that's fair to say. **Colin Jennings:** Okay, thanks, I get it. **Martin Duke:** Okay, all the "yeses" have gone away, um, after me saying that, for whatever that's worth. So I think what I'm hearing is: rip out required request ID for now? **Alan Frindell:** Okay, happy to rip out required request ID. I think what I would ask is: for the people who look at the product after that and say, "My thing is now broken," the onus is on you to send a mail to the list or open an issue and say, "The thing I want to do is now broken, and I need a solution that does X." If you can't articulate that, we will build no solutions, and the final version of MOQT will not have one. Does that sound fair? **Suhas Nandakumar:** So I have one clarification question. If we remove required request ID, are we saying the joining fetch will just use the subscribe ID? 
**Alan Frindell:** We'll make sure joining fetch still works. So joining fetch has its own request ID, and in the body of the joining fetch payload it says, "Here's the subscribe that this is for"; that's how you find out the track name to deliver, right, from that subscribe. That has been there since we invented joining fetch, because even with a uni-stream, this is how you correlate these two things. What it does not do, if we rip out required request ID, is allow you to also have a joining fetch tied to a request update. So if you're doing weird stuff, if you change the forward state in a request update to the subscribe, you cannot cause the joining fetch to be dependent on that. That is what we are removing. **Suhas Nandakumar:** Yeah, that makes sense. **Alan Frindell:** Um, okay. Can I ask: if you think you have a problem in this area and are planning to do some homework on how to solve it going forward, can you please join the queue and tell me, "I am planning to do some work on this because I think we need to do something in this area," just so we know that it's coming. **Colin Jennings:** Uh, I mean, I want to rethink the make-before-break type stuff and see. I agree that it may have never been solved, but, you know, there was always a goal to solve it. **Alan Frindell:** Okay, so, Colin, I will probably ping you every week or two to find out where we are on this, if that's okay, unless you want to delegate somebody else to do the hard work on your behalf. **Colin Jennings:** No, I'm good. **Alan Frindell:** Okay. All right. Um, can I just, while we have [1613](https://datatracker.ietf.org/meeting/interim-2026-moq-14/materials/slides-interim-2026-moq-14-sessa-moqt-prs-and-issues-01) up: I didn't hear any strong objections to having it; the one thing I heard is we should just hardcode it to two and not have an option. 
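The correlation Alan describes at the top of this exchange, which predates required request ID, can be sketched like this. Field names such as `joining_subscribe_id` and `track_name` are illustrative, not the draft's wire format:

```python
def correlate_joining_fetch(fetch_msg: dict, active_subscribes: dict):
    """Resolve a joining fetch to its subscribe by request ID.

    The joining fetch payload names the subscribe it joins; that is how
    the track name to deliver is found, independent of any
    required-request-ID mechanism.
    """
    sub = active_subscribes.get(fetch_msg["joining_subscribe_id"])
    if sub is None:
        # No such subscribe (yet, or ever): reject or time out the
        # fetch rather than hanging on it indefinitely.
        return None
    return sub["track_name"]
```

If the referenced subscribe does not exist, the fetch is rejected or timed out, which matches Martin's earlier "if that subscribe doesn't exist, you just toss it."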
Do people want to see this in, out, or have any thoughts? **Ian Swett:** I'm inclined to park it until we fix, like, sort of the other stuff. **Alan Frindell:** This is orthogonal to synchronization, right? This just prevents people from sending you too many request updates. **Ian Swett:** But we now allow coalescing them and merging them together. It doesn't really solve the problem. **Alan Frindell:** Okay. Okay, I don't know. **Mo Zanaty:** I think it's sort of the same kind of thing: if people have a use case for this, let's hear it. Um, because, sending more than one? **Alan Frindell:** Yes, on the same stream. **Mo Zanaty:** Let's hear it, because, you know, that would motivate whether or not we need a solution. **Colin Jennings:** Uh, I mute, and then I unmute, and then I mute again before the first one even got processed. That happens all the time. **Alan Frindell:** Do you think there's a problem with letting the receiver set a limit, Colin? Like, you can do that three times but not ten. **Colin Jennings:** Um, not really, but I think on all of these things there's this idea that we're going to limit the state in the relay, but when you multiply all these things together as this way of managing DoS, you end up with a very large amount of total state that would be possible under an attack scenario. So, is having a limit a problem? No. But let's say we set it to five. Does that actually allow you to solve your DDoS problems? I don't know. I think we're taking a very naive approach to solving DDoS: "Oh, we'll just limit the number of X you can do on all of these things across the board," and I suspect that they all multiply together in a way that may add up to a number that's too large from a DDoS attack point of view.
So I sort of wish we were looking at all of the DDoS stuff together, um, around state management. It may be that limiting the total amount of state is how you want to do it, versus limiting the size of each thing where you can have extensions, right? **Alan Frindell:** This is less about state. I agree with you, this may not be the end of the story, but this is less about state and more about frequency. **Colin Jennings:** But even that... well, fair enough, I mean, yeah. But no, I don't see this being a problem if we do this or whatever; I just don't know if it's a solution. **Mo Zanaty:** Just to clarify, Colin, I was not even thinking about DDoS at all when I was asking that question. I'm not thinking of spamming someone with requests, I'm thinking of legitimate cases of whether or not you think application behavior is correct if it issues 10 requests without getting a single OK or error back on the first request. If you do mute and unmute 10 times within a round trip, do you expect the same behavior, do you expect the same stream of OK, error, OK, error, OK, OK, OK, OK? And then the receiver knows what state it really got in. **Colin Jennings:** Well, I mean, I do think we'll have things where... if those 10 requests happen in a human-relatable amount of time, like 30 seconds, because a proxy just fell over and stopped processing for 30 seconds or something, maybe that's a reasonable number, I don't know. But 10 back-to-back in 100 milliseconds? No, that sounds sort of unreasonable, I agree. Um, but I guess I just think that we need to think about this. We see these hiccups where things just freeze for whatever reason for multiple seconds on a fairly regular basis, and usually the user's response, when what's happening isn't what they want, is to click the button multiple times faster, right?
And that's not necessarily an error; we've got to be able to deal with that, I guess is what I'm saying. **Alan Frindell:** So I entered the queue to say that, now that I think we just killed required request ID for now, I don't think there is a DoS problem here that's not completely solvable with stream flow control. The only thing that should be on a request stream after the actual request is request updates, so you just... **Alan Frindell:** The problem with it is they're small; I can send you a packet with 100 of them that are unacknowledged and cause you to do a bunch of work. If I don't feel like processing them, I just don't, and the request updates are going to pile up, right? I mean, I think that's sort of Colin's and Ian's point: just, "Eh, if you get too many, coalesce them, squish them into one, and you should probably have a rate limiter anyway." So, if people really think that's the answer, we can write it into the security considerations and move on. Um, I want to give Victor time to talk about the delivery timeout thing, so I'm going to move on here. I get sort of a medium signal here; we'll try to sort it out with the editors. Victor, you want to present this? **Victor Vasiliev:** Uh, sure. [Delivery timeout](https://datatracker.ietf.org/meeting/interim-2026-moq-14/materials/slides-interim-2026-moq-14-sessa-moqt-prs-and-issues-01). So we have had, for a long time, an issue where we did not exactly agree on what semantics are useful for delivery timeout. So I'm going to propose that we actually have two timeouts with two different semantics. For motivation, consider the two following relatively simple use cases. One is video with no SVC: you have just one subgroup per group, and your subgroups are relatively long, at least like two seconds long. So if you lose a subgroup, that's a big deal.
And then the second use case is subgroup-per-layer SVC: if your SVC has three layers, every group has three subgroups, and there is one subgroup for the base layer and two subgroups for the enhancement layers. So currently we define delivery timeout as: when you receive an object, either from the upstream subscription or from the application, you start a timer, and if by the time you attempt to send it, it has timed out, you reset the entire stream. And this works relatively well for enhancement subgroups in subgroup-per-layer SVC, because if you're timing out one object on the subgroup, that means you're getting behind, and you can abandon the entire thing and still have the base layer. But it doesn't really work well for the base layer, for regular objects, because if you have a network hiccup at the beginning of the group, what would happen? You would reset that group, and now you just don't have anything to send, because you reset the only group you had. Colin, do you have a clarifying question? **Colin Jennings:** Uh, I guess, yeah. I mean, clarify this in the context where the base layer's delivery timeout is usually set on par with the GOP length, right, in this case? **Victor Vasiliev:** Yes, that is true, but that has the converse problem. Now you set the delivery timeout per object on that group to two seconds. So if you have the tail of the group preempted by a new group, which you would have in delivery order descending or something like that, the tail will stay there for two entire seconds, which is not really something you want. **Colin Jennings:** Oh, I mean, priority has been the solution to that, but yeah, fair enough.
I just want to highlight that this problem may not be as big a problem as it sounds, but I understand what you're trying to get at, yeah. **Victor Vasiliev:** So, next slide. The actual proposal is: we keep the existing timeout and call it delivery timeout object. And then there is a new timeout where, whenever the subgroup FIN is queued, you start a timer that will reset the stream. The logic behind the subgroup timeout is that if you have a subgroup and you've never closed it, that means you intend to send something on that subgroup, so it's useful to keep it around. But if you sent the FIN, that means the subgroup is complete, and now we will wait for a time until it arrives, but then we will reset the entire thing. And in the proposed PR you can do both. So, yes, that's basically the proposal. **Alan Frindell:** Uh, my comment is: my bar is, is it useful to somebody? I hear "yes." And is it easy to implement? I think that's also "yes." So I'm kind of in. **Victor Vasiliev:** It is not one-to-one what we've already implemented in our implementation, but it is pretty close; we do have a timer like that. One of the particular problems with delivery timeout object right now is that once you pass an object to the QUIC stack, you no longer have visibility into what's happening on the network, so you can't time it out unless you have a QUIC implementation with really deep integration. The nice thing about the subgroup timeout is that once you send the FIN, since it's semantically very well defined, you can reset the stream even after you've passed all of your data down to the QUIC stack, and it will do the right thing. **Alan Frindell:** Uh, I guess my only... is there anybody who wants to come in and say, "No, don't do this, this is a terrible idea"? All right. I don't hear anybody.
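[Editor's sketch, not from the draft or any implementation: a minimal model of the two timeouts Victor describes, with hypothetical names. The per-object timer starts when an object is received and resets the stream if the object has expired by the time we try to send it; the subgroup timer starts only when the FIN is queued.]

```python
import time

STREAM_OK, STREAM_RESET = "ok", "reset"

class SubgroupSender:
    """Sketch of the two proposed timers for one subgroup's stream.

    object_timeout: measured from when each object was received; if it
    expires before we attempt to send the object, the stream is reset.
    subgroup_timeout: measured from when the FIN was queued; if it
    expires before delivery completes, the stream is reset.
    """

    def __init__(self, object_timeout=None, subgroup_timeout=None,
                 now=time.monotonic):
        self.object_timeout = object_timeout
        self.subgroup_timeout = subgroup_timeout
        self.now = now
        self.fin_queued_at = None
        self.state = STREAM_OK

    def queue_object(self, received_at):
        # Called when we attempt to send an object that arrived at received_at.
        if self.state == STREAM_OK and self.object_timeout is not None \
                and self.now() - received_at > self.object_timeout:
            self.state = STREAM_RESET   # object expired before it could be sent
        return self.state

    def queue_fin(self):
        # Subgroup is complete; start the FIN-anchored timer.
        self.fin_queued_at = self.now()

    def on_timer(self):
        # Periodic check: reset the stream if the subgroup timer expired.
        if (self.fin_queued_at is not None
                and self.subgroup_timeout is not None
                and self.now() - self.fin_queued_at > self.subgroup_timeout):
            self.state = STREAM_RESET
        return self.state
```

As discussed in the meeting, running both timers concurrently is simpler than making them exclusive: each independently drives the stream to reset, so the effective deadline is just the earlier of the two.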
It sounds like we'll merge it then, unless somebody wants to speak out against it. Mo. **Mo Zanaty:** Not against, but just to clarify, Victor: do you see use cases for both at the same time, or do you want to make them mutually exclusive so receivers don't have to have two timers? **Victor Vasiliev:** Well, one use case would be: you have the object timeout for enhancement layers and the subgroup timeout for the base layer. That requires the other PR that lets you set it per subgroup, but... **Mo Zanaty:** So you do want to see both of them concurrently? **Victor Vasiliev:** Uh, yeah, I don't see any reason why not. In fact, I think it's actually easier to implement if you implement them concurrently, because then you don't have to check one or the other. **Alan Frindell:** Yeah, I had the same question, but I was wondering how these things would interact if the subscriber requested one type and the publisher had a different type. But I guess if it's just the min of all these timeouts that decides when you check the object, then that's... **Victor Vasiliev:** Yes, we usually follow min, so if the publisher specifies one and the subscriber specifies another, both are active. **Alan Frindell:** Yeah, okay, if you're running two timers then I think that all just sort of falls out. Thanks. Okay, we'll move on. We will try to get this into 18. Okay. Um, I sent a mail to the list a week or two ago based on some... so I did some work to try to implement GraphQL subscriptions using MOQT. Since MOQT is pub-sub and GraphQL subscriptions are pub-sub, it seemed like this should be easy, right? And it turned out that it didn't flow well, partly because of the decision we made in 17 to require negotiation for unknown parameters. So, this is how it works.
So in HTTP, when you're doing a GraphQL subscription, you send a POST, and the query body, all the filters and whatever that are going to run server-side and generate your subscription, goes in the HTTP POST body. So the question is: say you want to map this to a MOQT subscription, how do you do it? The first problem is that the only subscriber input that's realistically going to reach the publisher is the full track name, and that's limited to 4K in our spec. If you had a short enough query body, in theory you could make it part of the track name. You could make a track name tuple be some JSON blob or something, like, "that is the track name." But it could be too long, and I don't know that anybody would really do it that way, because your logs would be super gross, and you might even be leaking information that you wouldn't want in the track name. So the next thing is: well, could you put that in a subscribe parameter? Now, MOQT does have the requirement that you cannot change the content of the track based on the parameters, so you would sort of need to either have a unique track name that was generated in some way, or you could do something like hash your query, and that hash is part of the track name, so that nobody else can get deduplicated onto your subscription. But if you want to put it in a parameter today, there's no parameter for you to put it in. So we could add some totally generic blob parameter, like "request metadata" or something, so that you don't have to essentially build an extension in order to implement this relatively straightforward use case.
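[Editor's sketch of the hash idea Alan describes; illustrative only, none of these names are in the spec. The query is hashed into a unique track name, so relays keep deduplicating on track name alone, and the query body itself would travel in a hypothetical generic "request metadata" parameter.]

```python
import hashlib

def track_for_query(namespace, query):
    """Derive a MOQT track name from a GraphQL query (illustrative only).

    Identical queries hash to the same track name, so relays can still
    deduplicate subscriptions keyed on the name; the query body rides in
    a hypothetical "request-metadata" subscribe parameter, never in the
    track name itself (keeping the name short and log-safe).
    """
    digest = hashlib.sha256(query.encode("utf-8")).hexdigest()
    track_name = "gql/" + digest              # well under the 4K name limit
    subscribe_params = {"request-metadata": query}  # hypothetical parameter
    return (namespace, track_name), subscribe_params
```

This preserves the "content varies only by track name" property in the honest case; as Colin points out later, it relies on the hash not colliding, and a relay cannot verify the binding between the hash and the blob.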
Um, the problem with doing that, or the other option if we don't add a standard parameter, is that you would have to have an extension, and every hop along the path needs to negotiate this extension or the thing is just going to fail. Also, I don't know in practice how big these query bodies get, but it did make me start to wonder: if we actually put this in parameters, are we going to hit our 64K control message limit? Or do we still think the 64K control message limit is a good thing? And if you had a super long query, maybe you need a different solution. So that's one problem. The second problem that I thought of is that in HTTP you have all these terminating proxies, like your edge load balancer or your origin load balancer, and these hops are often adding HTTP headers that get forwarded upstream. Some examples: the client's original IP address gets lost through these intermediate hops, but that edge proxy, and I imagine other CDNs work similarly, will grab the IP address and stick it in a header, and that goes through for logging; it's used for spam prevention, it's used for all kinds of stuff on the origin. Or similarly, if the client terminated with a client certificate, we might extract information from the certificate and pass that along as headers. There's a Via header that gets updated. So MOQT has no way, for subscriber-sent requests, to carry these sorts of arbitrary metadata fields in the protocol. So if you're building MOQT proxies, they're limited in this way. Publishers can use track properties, but I don't know that's really the intent. Like, if you were doing a publish, you could stick a Via header into a track property, but I don't know that that is really what we want there either. Um, so I'm running out of time.
Um, this is my strawman. We could revert parameters back to TLVs, with even and odd, which was the system in Draft 16 and earlier. But we could take something like we did for track properties, which is: we can add a range of those which are mandatory to understand. So I can send you one, and if you don't know it but you can see that it's in a range that we define, then you just say, "Wait a minute, you sent me this thing and there's no way I know how to process it, because you said I have to understand X and I don't understand it." The receiver would fail those requests, and we have the logic that if you're going to forward this request on to some other MOQT entity, you include the ones that are unknown, and we retain our "parameters don't change the track content" property: track name is the only thing that controls track content. So, in four and a half minutes or less, attack my strawman. Victor. **Victor Vasiliev:** Yeah, I'm a bit confused. If you're putting the query somewhere other than the track name, how do you maintain this property that the content only varies based on track name? **Alan Frindell:** You generate unique track names for such content, either based on a hash of the query or totally unique. **Victor Vasiliev:** Okay, so you put, like, a hash of the query, and then you use this parameter to pass the actual query. **Alan Frindell:** Yes. **Victor Vasiliev:** Uh, I'll need to think about how I feel about this. **Alan Frindell:** Okay, let me know how you feel about it. Colin. **Colin Jennings:** Uh, look, I don't want to lose your point that there are lots of good reasons we might need unknown parameters to flow through relays, and how we want to deal with that. But on the problem you're addressing right here, I was thinking about this later, after we talked about it last time.
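[Editor's sketch of the strawman's mandatory-to-understand idea; the type values and range are hypothetical, not from any draft. Parameters whose type falls in a designated range must be understood or the request fails; all other unknown parameters are retained and forwarded to the next hop.]

```python
KNOWN_TYPES = {0x01, 0x02}           # parameter types this endpoint implements
MANDATORY_RANGE = range(0x40, 0x80)  # hypothetical mandatory-to-understand range

class UnknownMandatoryParam(Exception):
    """Raised when a sender demanded understanding we don't have."""

def process_params(params, known=KNOWN_TYPES):
    """Return (understood, to_forward), or fail on unknown mandatory params.

    `params` is a list of (type, value) pairs as decoded from TLVs.
    Unknown parameters outside the mandatory range are not dropped: they
    are kept for forwarding upstream, which is what lets intermediaries
    pass metadata through while "track name alone controls content."
    """
    understood, to_forward = [], []
    for ptype, value in params:
        if ptype in known:
            understood.append((ptype, value))
            to_forward.append((ptype, value))
        elif ptype in MANDATORY_RANGE:
            # Receiver must fail the request: "you said I have to
            # understand X and I don't understand it."
            raise UnknownMandatoryParam(hex(ptype))
        else:
            to_forward.append((ptype, value))  # unknown but skippable
    return understood, to_forward
```

The even/odd split mentioned for Draft 16 served a similar role at the encoding level (distinguishing parameter classes by type parity); the range check here is the track-properties-style variant Alan proposes layered on top.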
I was thinking that it does seem like, when you want to send a big, arbitrary-sized blob to a server and then get back some data based on that blob, what you want to do is publish that data to it, and then have it give you back, effectively, a reference to the other track that you subscribe to to get the responses. I mean, it really scares me, this idea of moving toward things that look like they're aggregatable but are not aggregatable if you just have a hash collision, right? I understand hashes don't really collide, but you know what I mean. I don't want to start sliding down that slippery slope of relays being like, "Oh, well, we looked at these headers and decided that these things were not aggregatable or cacheable." Part of how we get the performance of the system is not having all the problems HTTP has when it comes to this, right? **Alan Frindell:** Yeah, I do hear you. I mean, maybe focus more on the other benefit of the strawman: forget about the GraphQL thing and think only about this piece of it, which is, how do we expect intermediaries to proxy MOQT and add their useful stuff as it goes? **Colin Jennings:** Yeah, but part of the problem is we don't expect client IP addresses or essence to... I mean, we expect the stream to be aggregatable, in which case those can't arrive. Like, this... **Alan Frindell:** I mean, live video ingest is a classic one; there are many one-to-one cases for MOQT. So there are... **Colin Jennings:** Right, but there are many one-to-one flows that won't be aggregatable, and I don't think that we want to break one-to-one flows so horribly.
Well, I mean, this highlights to me that you need a mechanism that's much more like: you can send a beacon header along through the relays, and then things that collect certain pieces of information send it back through a beaconing mechanism for every client that connects to that track, not just one, right? Or something like that. These use cases are real, but just passing them back for the first one doesn't actually solve the majority of these use cases' problems, right? **Alan Frindell:** Okay. Victor. **Victor Vasiliev:** Uh, I think I agree with Colin's observation that the key difference between MOQT and HTTP is that MOQT always assumes that you're going to infinitely fan out your subscriptions, which is why we do not allow subscriber-to-publisher flows, basically, of anything. So for ingestion it's not a problem, because if you're doing publish you can just attach track properties; you can put all of that there. And for subscription it's just not a problem if you're doing one-to-many fan-out. The only case where it's really interesting is if you for some reason have a one-to-one MOQT proxy for subscribers, in which case you would want to do that. But I would actually argue that in this case you don't want to proxy MOQT, you want to proxy your WebTransport session to the thing that can actually speak MOQT: on your load balancer you terminate QUIC and HTTP/3, and if it gave you a client cert, you put that on your WebTransport HTTP request header, and then you send it to something that can actually speak MOQT. That's at least my impression of how you'd do that in practice. **Alan Frindell:** Okay. Um, I'll go back and think about it some more, but it's another voice for keeping the restriction that we have. Mo, it'll probably have to be quick. **Mo Zanaty:** Uh, yes.
This brought up very important points to me, not related to big query blobs, but related to our problem with end-to-end extensions, end-to-end parameters, and track-related parameters. We're shoving a bunch of stuff into parameters that really has to do with tracks. For the publish side, you're right, we have track properties to be able to signal things at a track level. Subscribers cannot signal anything up at a track level. Should we consider having a track parameter, not a control message parameter, but track parameters for subscribers to be able to indicate that? Because right now we're lumping things like delivery timeouts, and other subscriber metadata that we want end-to-end about a track, into control message parameters. Should we have a special class for that, just called track parameters, specifically for subscribers to indicate it, and should we have a mechanism for end-to-end delivery of those parameters? Not just hop-by-hop. **Alan Frindell:** Uh, yeah, I don't know. Maybe I'll think about it and make a request metadata proposal or something. And it's just like, if it goes end-to-end, that's end-to-end, and you can hide whatever you want in there. **Martin Duke:** Okay. Alan, do you have a useful signal on this issue? **Alan Frindell:** Um, I think, at least for Draft 18, people seem to like what we did in 17 more than not. I don't know if anybody else wants to jump in or weigh in; reply on the mailing list thread, since we're out of time. The message was from a couple weeks ago. Um, one other super... well, no, it's fine. We don't have time. **Martin Duke:** Go ahead, we're out of time. Thanks, Alan. All right, as always, discussion will continue on GitHub. Almost all of these issues have a GitHub issue or a PR associated with them, so please comment on those if you like, or just take it to the list.
Our next meeting is in exactly two weeks. The headline subject is going to be Mo's filter proposal, which has been floating around for months and months, and we're going to try to drive that one to a solution. So please come ready to discuss that in two weeks, and we will see you then.