Session Date/Time: 30 Mar 2026 16:30
This transcript is of the Media over QUIC (moq) Working Group interim meeting, held on March 30, 2026.
Magnus Westerlund: Hello everyone, welcome to today's interim meeting. I will get going on these kinds of administrative parts while we maybe have some more people joining. So, as usual, the Note Well applies for this meeting. I hope you are all aware of the rules here on what applies; if not, please read up.
So, as usual, join the mic queue if you want to speak, and participate in shows of hands if we have any; I don't know if we will. And please unmute, but note there's a short delay just before you start talking. Headsets are good for the audio quality in Meetecho.
So, we have some dates. We have draft-ietf-moq-transport 17 consensus call that ends the 7th of April. And we have additional virtual interims scheduled on the 13th and 27th of April and 11th and 26th of May. And we have the hybrid interim in London between the 9th and 12th of June.
So, today's agenda is that we first have a server-side ABR demo by Tongyu, and then the editors have the rest of the meeting time. Yes, Alan.
Alan Frindell: Hey, so due to the 30-minute shift here and some lack of awareness on my part, I don't think I will be able to be here for the last 30 minutes, and Ian will not be here for the last 15. So, um, I don't know if there is a way that we can shuffle things around. I recognize it's very late for the friends who are presenting. So, um, maybe nothing can be done, but that is the constraint.
Magnus Westerlund: Yeah, I don't want to delay the server-side ABR presentation further, so worst case we waste 15 minutes at the end. So, let me share the Server Side ABR Demo on MoQ slides. Yes, unmute. Well, I'll stop my video and you can go ahead. Tongyu, I will pass you the slide control.
Tongyu Dai: Yeah, hello everyone, and ah, could we use my screen share because there is a video in the slide?
Magnus Westerlund: Yes, you can do that. Request it and I will approve it.
Tongyu Dai: Yes, yes, yes. Thank you. Give me a moment, it's very quick. Okay, can you see my screen?
Magnus Westerlund: Yes, it shows up. So.
Tongyu Dai: Oh, nice. Hello everyone. I'm very excited to be here to share our work on server-side ABR on MoQ. We are not presenting any proposal today; we want to share our implementation experience with the MoQ standard and collect your feedback and suggestions.
Firstly, I'd like to use this diagram to give you a high-level overview of our demo's architecture. Starting with video ingestion, we use FFmpeg for encoding and output the content in CMAF format. From there, we have two parallel branches: a low-latency HLS pipeline and a MoQ pipeline. This setup serves an internal research project where we want to compare MoQ and low-latency HLS on latency and QoS performance. For this presentation, we will focus on the MoQ pipeline shown at the bottom. From the CMAF content, we implemented repackaging into a MoQ-compatible format (draft-ietf-moq-cmsf), which is then stored in the Publisher. For the Publisher and Relay, we forked a branch from the moq-transport GitHub repo and upgraded it to support version 13, and our server-side ABR is integrated into the Relay. We then updated our production H5 JS player to be fully MoQ transport compatible. Based on the logs from this end-to-end pipeline, we can break down and analyze performance metrics at every stage.
This table summarizes the implementation details for each part of the pipeline. Since I just covered them, I won't go into detail. Next, I will introduce our design of the server-side ABR algorithm, but before that, I'd like to raise a key question. To achieve server-side ABR, the Relay needs to know exactly which tracks are available, as well as their specific bitrates and other metadata. So the challenge we met was: how does the Relay build up this context? To address it, we have a workaround: the Relay directly parses the catalog. I know that this is not strictly within the standard's definition, and it is a temporary implementation. We expect the working group to consider this requirement and provide a formal way to achieve it.
Based on this workaround, this flow chart displays our design for the server-side ABR interaction. While the server is transmitting track A, if a network change is detected, it triggers an ABR decision to switch to track B. At this point, the server sends a PUBLISH message to the client carrying the decision for track B, and after that it ignores all subsequent track switch triggers. Only when the server receives a PUBLISH_OK message from the client does it actually start sending the data of track B. We expect the interval between sending the PUBLISH and receiving the OK to be roughly one RTT, so the impact should be small. But there is another specific scenario to consider. If track A was initiated by a SUBSCRIBE, it must be formally closed by a SUBSCRIBE_DONE. This may create an overlap where the server transmits both track A and track B concurrently. During this period, the ABR must intentionally ignore the switch triggers caused by the legacy track A. Of course, if track A was initiated by a PUBLISH message, this overlap doesn't exist. I think this is possibly another area where the MoQ working group can improve efficiency.
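The switch interaction described here can be sketched as a small state machine. This is a hypothetical illustration in Go; the type and method names are ours, not from any MoQ implementation, and it only models the "ignore triggers while a PUBLISH is in flight" behavior:

```go
package main

import "fmt"

// switchState models the two phases of a server-initiated switch.
type switchState int

const (
	steady     switchState = iota // transmitting the current track
	awaitingOK                    // PUBLISH sent, waiting for PUBLISH_OK
)

type abrSession struct {
	state        switchState
	currentTrack string
	pendingTrack string
}

// onBandwidthChange runs the ABR decision. While a PUBLISH is in
// flight, all further switch triggers are ignored, as in the demo.
// It returns true when the caller should send a PUBLISH for target.
func (s *abrSession) onBandwidthChange(target string) bool {
	if s.state == awaitingOK || target == s.currentTrack {
		return false
	}
	s.pendingTrack = target
	s.state = awaitingOK
	return true
}

// onPublishOK is called when the client accepts the new track; only
// now does the server actually start sending its data.
func (s *abrSession) onPublishOK() {
	s.currentTrack = s.pendingTrack
	s.pendingTrack = ""
	s.state = steady
}

func main() {
	s := &abrSession{currentTrack: "trackA"}
	fmt.Println(s.onBandwidthChange("trackB")) // send PUBLISH for trackB
	fmt.Println(s.onBandwidthChange("trackC")) // ignored while awaiting OK
	s.onPublishOK()
	fmt.Println(s.currentTrack)
}
```

A real relay would also have to model the SUBSCRIBE_DONE overlap Tongyu mentions, i.e. keep accepting data for track A while suppressing the triggers it causes.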
Before we jump into the demo video, I'd like to use this screenshot to help you quickly understand the demo. As you can see, the top half is our player interface. In addition to common player features, we also integrated real-time log information into the video overlay for better visibility. The bottom half contains three time-series charts: they respectively display the player's buffer length, the ABR decisions and actions, and the corresponding player events and control messages. Okay, let's move on to the video.
In the initial stage, we capped the downstream bandwidth at 15 Mbps. Once playback begins, the player starts from the lowest bitrate and maintains a buffer length of about 0.7 seconds; this is a value we hardcoded into the player to avoid frequent rebuffering. In the demo displayed here, we have four bitrate renditions of the track, with average bitrates of 800 kbps, 2.5 Mbps, 4.5 Mbps, and 7.8 Mbps. Because the current bandwidth limit is 15 Mbps, the server quickly switches to the highest bitrate. Yeah, you can see the profile selection has been switched to the highest one.
Next, we drop the bandwidth limit to 5 Mbps. As you can see, the buffer is quickly depleted and the video starts to rebuffer. Immediately, the ABR decides on a downswitch with a new pair of PUBLISH and PUBLISH_OK messages, and several seconds later, the buffer length starts to recover. But this phenomenon is very interesting: we noticed that even after the bitrate downswitch, it takes several seconds before the buffer begins to recover. Based on our investigation, we found that the main reason is that after the server sends the PUBLISH message, the previous high-bitrate content is still being downloaded, meaning the network congestion hasn't been resolved. This significantly extends the rebuffer time. So for server-side ABR, I believe it is necessary to have the capability to cancel ongoing object downloads. This functionality isn't in our demo yet, and I hope the working group may take this case into consideration.
Returning to our demo, once the downswitch takes effect, the buffer quickly builds up to about 5 seconds. This is because the current playback position is behind the live edge, which allows more content to be downloaded. After that, the ABR makes a few minor adjustments and remains relatively stable. That's all for the video.
Going back to our slides, I'd like to go into finer details and share some insights we obtained in practice. Firstly, we look at the challenge of bandwidth estimation for server-side ABR. In our current experiment, we utilize the connection tracer function in quic-go to get CWND and RTT information. Then we estimate the download speed for each object by simply dividing the CWND by the RTT. But we also observed that, compared to traditional client-side bandwidth estimation, this approach may result in more fluctuation, especially when dealing with smaller objects. So a bandwidth smoothing algorithm is critical, and it should be designed and tuned based on the specific configuration of your stack.
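As a concrete illustration, the CWND/RTT estimate with a smoothing pass might look like the following Go sketch. The EWMA and its 0.125 gain are an assumption for illustration; the demo's actual smoothing algorithm was not specified:

```go
package main

import "fmt"

// estimator smooths instantaneous cwnd/RTT throughput samples with
// an exponentially weighted moving average (EWMA).
type estimator struct {
	smoothedBps float64
}

// sample takes a congestion-window size in bytes and an RTT in
// seconds, computes the instantaneous rate in bits per second, and
// folds it into the smoothed estimate.
func (e *estimator) sample(cwndBytes int, rttSec float64) float64 {
	inst := float64(cwndBytes) * 8 / rttSec // bits per second
	if e.smoothedBps == 0 {
		e.smoothedBps = inst // seed with the first sample
	} else {
		const alpha = 0.125 // assumed gain; tune per stack
		e.smoothedBps = (1-alpha)*e.smoothedBps + alpha*inst
	}
	return e.smoothedBps
}

func main() {
	e := &estimator{}
	fmt.Printf("%.0f\n", e.sample(150_000, 0.05)) // about 24 Mbps
	fmt.Printf("%.0f\n", e.sample(75_000, 0.05))  // pulled down toward 12 Mbps
}
```

The fluctuation Tongyu mentions for small objects shows up as noisy instantaneous samples; a lower alpha damps them at the cost of reacting more slowly to real bandwidth changes.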
Another detail is our initial bitrate selection. We currently use the lowest bitrate under the default alt group as the starting point. In fact, for a server-side ABR algorithm, there is a lot of room to optimize here.
Based on this demo, we did some performance evaluation, and in the process we found a significant relationship between transmission rate and object size; it shows a clear linear correlation cluster, as displayed here. So when the object size is very small, it poses a greater challenge to bandwidth estimation. This may to some extent guide our encoding settings; for example, it may not be very suitable to put one frame per object.
That's all for the demo introduction, but as follow-ups, there are two critical questions we want to solve if we continue to polish our demo. The first topic is the case of SEEK. In the current moq-transport spec, if a user seeks back to historical content, we need to use FETCH to request the track data. However, the FETCH command explicitly specifies a single track, so server-side ABR doesn't work in this scenario. I know that we could enable client-side ABR by issuing FETCHes with small ranges at high frequency, but within a single session, making client-side and server-side ABR work in parallel increases the system complexity.
The second topic is about ABR improvement. In our demo, the server-side ABR relies only on bandwidth information to make decisions, which is frankly outdated. So we hope to have a status exchange method between the client and server within MoQ transport. In particular, the buffer length information from the player could significantly improve ABR performance. I know there has been a lot of discussion around the event timeline; maybe that could be an option. I think that's all of our introduction. If you have any requirements, like feature or performance evaluation, we are glad to help based on our demo. Any questions? Oh, I can give the screen back to you.
Magnus Westerlund: So, no questions here? Mike, go ahead.
Mike English: Thank you for this presentation. Um, there’s a lot to digest here. I think uh, one of the interesting points is about the applicability of ABR whether you are using subscribe or fetch. In my mind I had previously thought that existing ABR mechanisms would be more easily adapted to fetch, um because they’re similar, you know, to an HTTP GET request. Uh, but I wondered if you could speak more to that.
Tongyu Dai: Oh, yes. I think, for FETCH, or for the functionality of SEEK, it might be more of a business decision, and in our ecosystem we think that seek behavior is very important and necessary for users, so we have to support this feature. Based on that assumption, when we encounter a seek, we have to use FETCH, and we have to switch from server-side ABR to client-side ABR. I think there could be a gentler solution for this case.
Mike English: Cool, thank you. I’ll probably have more questions to follow up with later, as I digest more of this, but thank you for the presentation.
Tongyu Dai: Thank you.
Magnus Westerlund: Yeah. So, if there are no more questions now, please have the discussion on the mailing list or reach out directly to Tongyu. Thank you very much for this presentation. So, is it Alan next? Alan is, I guess, not back yet. Ian, can you get started on something?
Ian Swett: Yep. Sorry. Um, okay. You will get the slides control.
Alan Frindell: Ah, Alan is back. Okay. Did you already go over the other one?
Ian Swett: No, I did not.
Alan Frindell: Okay, let me, can I switch? Let me start, I do.
Ian Swett: Okay.
Alan Frindell: Sorry. All the screen-share request slots were already taken. Let me see if, oh, there we go, because Ian's... Okay. And I have slide control. Okay. Cool.
Okay. This is the timeline of interims, with some imagination about what's going on for the ones that aren't scheduled, between now and when we would like to issue our first working group last call on MoQ transport. So, 3:30, you are here. We're going to talk about issues that are marked needs-discussion. We have three more interims scheduled before we cut draft 18, which is going to be our next interop target, about a month before the London interim. My thought is we would dedicate each one of those to one of the three big remaining topics: one for rewind, one for filters, one for DTS and switch. The order is TBD; I just picked a random one. I have another slide on what I think we would do there. Martin, you have a question?
Martin Duke: Yes, what do you mean by finalize?
Alan Frindell: It's on the next slide. Okay. Keep, hold, hold on.
Then, for most of the other virtual interims, I just said we could pick up other things that need discussion. I feel like we do periodically need to go over PRs and issues that the editors and authors can't quite resolve, or where we have a small question. So having some time for that is good.
June 11th and 12th, in case people haven't realized, will be our last hybrid interim before our first working group last call. So, if you're thinking that there's lots and lots of time to discuss all your great ideas for draft-ietf-moq-transport V1, there's not that much time. I don't have any preconceived notion about what we will need to be talking about there yet, but that is our last super-high-bandwidth long stretch of time.
Then there are two more hybrid interims coming, one where we assume we will be following up on London resolutions. The draft-ietf-moq-transport 19 deadline will happen two weeks before IETF 126; that is very close to the three-year anniversary of MoQ transport 00. Then we will have IETF 126, and within about two weeks after that, we will possibly need one more of these meetings to resolve anything we have not nailed down completely following up from 126. We expect to have zero open transport issues at that time, except things we're holding, like maybe renumbering and other minor editorial issues. We would expect to cut draft 20 at that time, in approximately mid-August, targeting issuing our first working group last call. Martin.
Martin Duke: Just a fountain of questions. So, according to this roadmap, what is your position on the fall hybrid interim that we would otherwise do?
Alan Frindell: Well, we have seven other adopted drafts, and we will probably have working group last call feedback that we need to address. And what people have said about hybrid interims is you always have one more than you think you need. So I would say we should probably plan for a fall one anyway, but at this time I don't expect it to be a major MoQ-transport-focused interim.
Martin Duke: Okay. Well, it may be true that we have seven different drafts. I am not personally excited about the idea of perpetual hybrid interim meetings just because we have active drafts. MoQ transport is a different kind of lift than this other stuff, and I would hope that we could cover the rest with the normal business of the IETF. I'm happy to pencil in a thing, but the real question is how much effort I should put in, and maybe this is not just, I mean, I don't want to completely hijack your slideshow. But I would welcome feedback from the group: should I make arrangements for an October interim, number one, and number two, should we cinch it to where people are booking travel, or do we just want a penciled-in date? Because I kind of don't want to be doing these forever, personally.
Alan Frindell: I don't want to do them forever either. It's probably prudent to begin scheduling, light scheduling, of something in the fall, but I don't know. I'm sure there are some people quietly smirking that I am overly optimistic about what's going to happen in the next four and a half months. So, I don't know.
Martin Duke: Okay, let me ask the question a different way. When would you feel sure that we needed a new hybrid interim where we could tell people to book travel?
Alan Frindell: Probably by the end of London, we would have a pretty good sense of whether we think we're on track. Does that jibe with other folks? It would be clear if we weren't, yeah.
Martin Duke: Okay. Ian go ahead and then Suhas.
Ian Swett: Yeah, I think we need at least one more. I think we'll know for sure at the end of London, but I have a very difficult time imagining otherwise, because there are going to be some people who, in this time frame, around June-ish, are going to start actually doing production deployments. I know other people already have production deployments, but I suspect this summer will be a big time for all the people who have been waiting for a stable enough draft to ship. And that always brings up stuff. It might not be redesign-the-spec stuff, but there's going to be stuff that needs discussion. I just can't imagine that won't be true, because there are a ton of people who have implementations but very little deployment experience. But I think our goal would be that the one in October or late September, whenever we end up doing it, is the last one, if we're doing a good job.
Suhas Nandakumar: I think I agree, because let's say, with the optimistic plan, that on 13, 27, 5/11 we agree to land a version of all those things. Then between drafts 18 and 20, people would implement it and come back with feedback, and that would need an in-person meeting to resolve some of the edge cases or rough edges. So this year I'm anticipating we'll have another virtual interim for sure. Sorry, hybrid interim for sure. Thanks.
Martin Duke: Okay. I'm hearing a lot of "we'll know by London," which is fine, because London is when I would normally schedule the next one, so that's not an issue. As an individual, I push back a little on "we need more discussion." There are lots of venues for discussion: we could do these virtual interims, and there's even a mailing list we can use to discuss technical issues, or so I hear. So if it's sort of nitpicky implementation feedback, maybe we can do this without a ton of synchronous time. But, uh.
Alan Frindell: I'm with you, Martin, but let's see where we are at the end of London.
Martin Duke: Yeah. Okay. I, I am happy to wait until London to make this call because that's when I usually make the call anyway. So. All right. I'm eager to see the next slide so I'll shut up now.
Alan Frindell: Yes, but pending your next slide.
Okay. What do I mean by finalizing these three proposals? This is what I think ought to happen. The proponents of those proposals should be driving resolution of issues and concerns asynchronously, or using ad-hoc meetings, between now and when they are scheduled to go. The proponents will prepare materials, and any detractors can also prepare materials, and make those available one week before the meeting so that there can be ample asynchronous discussion. Then at the designated interim, there are final presentations of what is going on, and a live discussion. We will take a show of hands, and there will be an on-list consensus call: is this going into the MoQ core? If yes, we'll merge it, modulo whatever we need to do to wrap that up. If there's not consensus to merge it in, I see three options. There's still a little bit of time between now and the current draft, so it's possible we could take a second swing at it using one of the slots that's not full, like London. Or the feature could continue as an extension draft and potentially merge into a MoQ V2, or stay as an extension; that's fine. Or we can change our timeline and delay MoQ for this thing that we think we absolutely need to have but is somehow not ready. So, to your point, Suhas, about how we might need time to follow up on issues: I'm really trying to get ahead of that. Let's get these proposals much more solid, with backing behind them, so we don't pull them in and slow ourselves down. Anyway, what do people think about this? Martin, is this in sync with what you have in your head?
Martin Duke: Um, I, uh, I can live with this. Um, I'm a little surprised that you don't want any implementation.
Alan Frindell: I mean I do.
Martin Duke: Yeah. Well, I mean, like that is what we discussed before, and um, you know, you have me as the proponent of switch going on the 13th.
Alan Frindell: You mean rewind.
Martin Duke: Rewind, sorry. Yes, I don't want anything to do with switch. And the draft will be ready on the 13th, but there's a 0% chance that we'll have two interoperable implementations at that time, even though I have two people, including myself, interested in implementing it. Luke, do you mean you're going to implement rewind? Are you making the same mistake I am? Well, anyway, if some number of people reading it, filing issues, and resolving those issues is good enough for you and for the group, that's good for me. But I think at one point we discussed implementation as a gate for this stuff.
Alan Frindell: Okay. To your point, the other two do have some implementation. And that schedule is just a schedule. I would maybe ask: why don't the three proponents, either in the next minute or after this meeting, just work out what slot you want? I don't really care about the order.
Martin Duke: Let's not do it in the next minute, but I can start a thread about that.
Alan Frindell: Um, okay. AI note-taker, there's an action item for the chairs to decide the order of the virtual interims.
Martin Duke: All right, I can live with that. So, code is not listed here, but that's sort of...
Alan Frindell: I mean, I also wrote the slide yesterday. Let's add code.
Martin Duke: Well, desired but not necessary, or, I don't know. I think this is obvious to everybody, but the concern is that implementation raises issues, right? Even when we've done things like the bidirectional stuff, there have been six or seven issues that I've found personally, just doing that for subscribe namespace. And these are pretty big rocks; the same thing is going to happen if no one's written the code yet. So I guess what I'm saying is maybe this timetable's not realistic if we expect code.
Alan Frindell: Well, people have had at least switch code for a long time. Nobody's got DTS code, I don't think. For filters, there's at least one implementation. Okay, so maybe we shift it by one, and if you want to go in the third slot, rewind's the one. People also have some version of rewind, but not the draft version of rewind; I've heard many people say they have something like it.
Martin Duke: Rewind is considerably smaller than the others, so I don't think it'll take that long. It's just, I'm seeing like two weeks to close out one of these, and... I don't know. I'm going to stop talking. Suhas, go ahead.
Suhas Nandakumar: Just one point on the scheduling: for filters, or whoever takes the slot right after NAB, I was wondering if we could have that week's slot on Wednesday rather than Monday, because people will be coming right out of NAB and might not have much time to prepare, or, per these requirements, to have set up meetings with everyone to asynchronously resolve all the issues. That was one. Second point, on implementation: yes, I agree with Martin. Some are small, some are big, but at the same time, the main thing we need is proof there's a use case, and if there's an implementation, that's a bonus, and that way we can talk about implementation experience. That's what happened with us for filters: when we implemented it, we went back and simplified some things, or clarified them better. So I think having some implementation experience will help, and backing that with the use case it solves in the industry will be helpful too.
Alan Frindell: Okay. I think I've said everything, so I'm going to leave it to the chairs and the proponents to work out who gets which of the next three slots, but I still think that's what we should use the next three interims for. Anybody have any other comments?
Okay. Can't wait to oh Martin.
Martin Duke: Oh yeah, just, okay. I'm personally a little hesitant about those three "finalize" words, for the reasons I've just described. But with the understanding that it might slip to London, maybe that's okay. I'd like to open the floor to anyone who thinks this is nuts, that there's a bunch more work to do and new topics to open, and that this is just not an adequate timeline to resolve the issues.
Okay. No one is approaching the mic. This is not a consensus document or anything, and there are a lot of people not here today, including Will and Colin, who often like to raise issues. But let's tentatively adopt this as a way forward and we'll see how we do.
Alan Frindell: I mean, the editors are going to drive to this schedule. So if you have issues, expect us to be active; our goal is to net-close five issues per week. So pay attention; turn on your GitHub notifications now, people. We want to move, because done is a feature.
Martin Duke: I agree. I mean, assuming this timeline, I'd hate for it to be April 27th, people are still writing code for filters, and it's like, oh, filters are out because it's not ready on April 27th. That seems a little harsh, but maybe that's not what you're saying here.
Alan Frindell: Like I said, as long as there's some implementation that has flushed out an initial set of problems, I think that's probably sufficient. I'm not going to throw DTS under the bus, but it's the newest thing; nobody's written a line of code for it, so that probably needs a little bit more. And if it needs to slip to London, that's fine, but yeah, we've got to set deadlines, right?
Martin Duke: Okay. Understood. Okay. Thank you.
Alan Frindell: Let's move on to our issues, unless... okay, thanks for letting me do that.
Okay. Okay, starting with our not-transport issues. These are the issues that are marked not-transport. Everybody who owns a not-transport issue has been pinged on their issue and was sent an email approximately one week ago to address it. So I will give the three people listed here, although one of them is not in the meeting, 15 seconds to close their issue right now, because you're not asking for a transport change, so please go file your issue somewhere else. I don't know if you want to say anything about your issue quickly, Suhas, or Mike.
Suhas Nandakumar: I was trying to use the 15 seconds to close the issue.
Alan Frindell: Excellent, go!
Okay, super. Chairs, I may ask you to help with 1235. I did leave a note in there like three weeks ago saying I would close it a week ago, and there's been no further communication.
Okay. Now issues. Request cancellation should be able to specify an error code. Since we went to bidirectional streams, you cancel requests by sending STOP_SENDING or RESET_STREAM, so there is a slot there to put in a code. This seems like a no-brainer, and it's actually an improvement: UNSUBSCRIBE, for example, didn't have a code. So this is kind of nice. My only question is: do we want separate code point spaces for resetting request streams, which are bidi, and data streams, which are uni, meaning the same code point means something different depending on whether you received it on a uni or a bidi stream? Or do we have one single space of RESET_STREAM codes, where some of them are control codes and some are data codes, and if you get one in the wrong context you just treat it as a generic error? Ian.
Ian Swett: When I filed this I was only thinking of bidi streams. Um and so I don't think I have a strong opinion on unidirectional streams.
Alan Frindell: We already have RESET_STREAM codes for data streams. So the question is: do I just add control reset codes in that same table, or do I make a separate table and allow the same code point to mean different things based on context?
Ian Swett: I'd probably put it in a different table until we know better, but I don't know. That's mostly just to be less annoying for users, but I don't have a strong preference.
Alan Frindell: Suhas.
Suhas Nandakumar: I am more inclined towards keeping them separate. I agree with Ian. One problem I'm not comfortable with is that receiving, say, code two means: if it's on a bidi stream I run different logic, and on a unidirectional stream I take a different error path, when those are already separated by context. So keeping them separate might be a more natural way to build the software.
Alan Frindell: You're saying you don't want the same code to mean two different things. That means you want a single space.
Suhas Nandakumar: Yes, yes.
Alan Frindell: Okay, I'll write it as a single space then, unless... well, wait. I guess I heard Ian express the opposite view, that you do want two tables, but then Suhas says don't reuse the same code points.
Ian Swett: Uh, no, single space is fine.
Alan Frindell: Okay, I'll put them in a single space. That's fine.
All right. Okay, Luke, do you want to come to the mic and say something? I see you typing in the chat but I can't.
Luke Curley: Sure, I'm in a coffee shop, so I'm trying to be quiet. My only thing is these can't be sent back to the original publisher, so the application needs to be aware that it can't just write error code 400 and expect the publisher to get it, because a relay will have to merge multiple subscriptions together. So if these are transport errors, it's totally fine. But if these are application-defined error codes: we fan out, we don't fan in, in MoQ transport.
Alan Frindell: That makes sense, yes: when you're going through a fan-out, these don't get proxied to any subscriber. They're hop-by-hop, I guess, is a different way to put it. Um, that seems fine.
Okay. Um, redirects: issue 1481, and there's a PR, 1534. So 1534 adds a new error code called Redirect, and when it's present in Request Error, there's another little struct that shows up there. That struct has a connect URI, which can be empty if you want the peer to reuse the same URI they connected on. And then you give a track name, possibly a full track name: for Subscribe Namespace or Publish Namespace you might give only the namespace portion and leave the track name empty, while for Subscribe or Publish there could be a full track name there. Um, that seems pretty useful; I think that makes sense. The PR also adds a Redirect message, which can be used to redirect a request that you've already sent an OK for. It's sort of like a mini GOAWAY, where you can GOAWAY a single subscription. I think this was mostly FYI — or if somebody thinks we really only want the error and we don't want the message, I guess I'm amenable to that. This will also be useful when we rebind to HTTP/4. Um. Okay. Mo.
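To make the shape of that payload concrete, here is a toy encode/decode roundtrip: a connect URI (empty meaning "reuse the current URI") plus a namespace tuple and a track name that may be left empty for namespace-level requests. The wire format below (2-byte length prefixes) is purely illustrative — it is not the encoding from PR 1534.

```python
import struct

def _put(buf: bytearray, s: bytes) -> None:
    # Length-prefixed byte string (illustrative, not the PR's varint encoding).
    buf += struct.pack(">H", len(s)) + s

def _take(buf: bytes, off: int):
    (n,) = struct.unpack_from(">H", buf, off)
    off += 2
    return buf[off:off + n], off + n

def encode_redirect(connect_uri: str, namespace: list, track_name: str) -> bytes:
    buf = bytearray()
    _put(buf, connect_uri.encode())            # empty => reuse the current URI
    buf += struct.pack(">H", len(namespace))   # namespace tuple element count
    for part in namespace:
        _put(buf, part.encode())
    _put(buf, track_name.encode())             # empty for namespace-level redirects
    return bytes(buf)

def decode_redirect(buf: bytes):
    uri, off = _take(buf, 0)
    (count,) = struct.unpack_from(">H", buf, off)
    off += 2
    ns = []
    for _ in range(count):
        part, off = _take(buf, off)
        ns.append(part.decode())
    name, _ = _take(buf, off)
    return uri.decode(), ns, name.decode()
```

The same struct can then ride in a Request Error (before OK) or in the standalone Redirect message (after OK), as discussed below.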
Mo Zanaty: Just to make sure I understand, is is the motivating use case actually to to redirect to a different track or is the motivating use case to redirect to a different server for the same track?
Alan Frindell: I think there are both use cases. Certainly moving to a different server is useful in, say, conferencing, where you're like: hey, everybody else in your conference is actually on this other server, can you please just go over there? Everybody will be happier, rather than me proxying between the two servers. So that's one use case. Redirecting to another track — I mean, HTTP has had 3XX forever; there's lots of use cases like, hey, I gave you an old name, but now there's a new thing and I want you to get the new thing instead.
Mo Zanaty: On the track on the track redirect, are are you requiring that it's after the track is already established or is it possible?
Alan Frindell: When you say track redirect, I think there are both cases, right? There's the case where, as soon as I got the request, I know I don't want to serve you here, so I redirect you. The other is: I accepted it and was serving you here for a while, and then I changed my mind about where I want to serve you.
Mo Zanaty: Let's start with the first one, the immediate track redirect upon subscription. You're saying this has to be after the OK?
Alan Frindell: No, there are both ways; the PR has both. You can either send it in Request Error, or, if you've already sent an OK, there's another message called Redirect which goes on the bidi stream, and which has the same format as that block.
Mo Zanaty: All right. So I'm reading the PR too.
Mo Zanaty: So it's not a message if you do it immediately, and it's a message if you do it later. Okay, kind of ugly, but it works.
Alan Frindell: Okay. Suhas.
Suhas Nandakumar: I do understand the first case, where I connected to, say, Alice's track and you basically say: for Alice's track, go to server B instead of server A. But the other use case, where you're already receiving Alice and in the middle of the track you're asked to move to server B — I'm just trying to understand what kind of use cases that addresses today.
Alan Frindell: Look at cases where you might be serving a number of tracks on a downstream session that are going to different sets of upstreams, right? One of your upstreams tells you it's going away. You don't want to end the entire downstream session, but you may want to say: hey, this particular track, you need to go somewhere else to serve it. It was fine here until now, but maintenance is happening or whatever, and that track needs to move. It also doesn't seem to add that much complexity — but if people want only the first thing and not the second thing, I don't feel like I desperately have to have this.
Suhas Nandakumar: Right, when you explain it that way I do see there can be use cases: I'm getting my Super Bowl from one place and I've been asked to move to a different place, because maybe I changed the provider or whatever it is. But on the subscriber side the expectation is that they clean up everything and go set everything up again; there's no state carried over from the server side, right?
Alan Frindell: Correct. It's effectively exactly like a GOAWAY, except at the track level instead of the session level.
Suhas Nandakumar: Got it, yeah. I can't see a strong use case, but as you said it's not adding much additional complexity over what we do with the standard connect redirect, so that should be okay.
Alan Frindell: Okay. Martin.
Martin Duke: Uh, yeah, I can live with this. Um, as an implementer it's a little gross to have this different Request Error format, and I wonder — I can see some downsides — but could we just make this more composable and just have Request OK followed by an immediate Redirect?
Alan Frindell: Uh, we could, but depending on how your subscriber works, it might be like: Request OK! Great! Set up a bunch of machinery! And then: Redirect! Oh, tear it all down. So.
Martin Duke: Yeah, okay. Um, yeah, I hear you.
Mike.
Mike English: Yeah, um, I'm just reading this PR now, so apologies if this has already been covered. I've heard some people asking for a kind of GOAWAY or redirect behavior that would be not just hop-by-hop. Has that come up yet in this discussion? Basically for use cases similar to content steering: with HTTP traffic you have an opportunity to make steering decisions at each segment, and you can use various mechanisms to redirect traffic. Do we need something like that, and is this at all an attempt to address it?
Alan Frindell: I think I put in language that says you're allowed to continue it on: there may be cases where a relay gets a redirect upstream and wants to re-send that redirect downstream. I don't know if we want to force that to happen, but I think the text allows it.
Mike English: Okay, yeah, that's my question: could we allow that, and does it get really hairy or not? I don't know.
Alan Frindell: Uh, I mean I think we should allow it and if it's hairy it's a relay implementer's problem.
Mike English: Okay, fair enough.
Alan Frindell: Uh, okay. Mo.
Mo Zanaty: Since this is like GOAWAY, and we added a timeout to GOAWAY, do we need a timeout here as well? Is the intent that you can still stay on this track anyway — that the redirect is advisory — or is it: you'd better redirect, because after this amount of time I'm going to stop serving objects from that track?
Alan Frindell: It's a good question. The first case doesn't need a timeout because you already blew the request away. The second case could potentially use a timeout. It's advisory anyway, but it might be useful — and then it breaks my format, or I have to put it in. Anyway, I could live with it. Do you really want it?
Mo Zanaty: I personally don't have a strong use case for either of these. But if the intent is to mirror GOAWAY — and we added a timeout to GOAWAY — it signals to the client: hey, you have this much time before this definitely goes away.
Alan Frindell: Yeah, okay. Um we'll think about it. Suhas.
Suhas Nandakumar: As much as I like the timeout — because it would let me go subscribe elsewhere, get the track data coming in, and then finally unsubscribe from the current one — the one problem is that it would expect the relay to be told about that time by the upstream content provider. With GOAWAY, it's the relay's local decision: oh, I won't be able to serve more tracks or more connections, I'll ask people to go. It doesn't depend on the content provider to make the decision.
Alan Frindell: Yeah, I guess this could be more like ASAP, right? GOAWAY is like draining; this is like GTFO, man — go somewhere else.
Suhas Nandakumar: Right. And currently we don't have protocol machinery where the content provider says: hey, all the relays, in the next five minutes whoever's subscribed to this track on this relay should go to relay B. That kind of thing needs to be set up out-of-band. If information like that is provided to the relay application, then it can use this to enable a make-before-break kind of scenario.
Alan Frindell: Okay. Are you saying yes timeout or no timeout?
Suhas Nandakumar: I'm saying no to timeout for right now.
Alan Frindell: Let's go with that. I want to move to another issue.
Okay. SUBSCRIBE_NAMESPACE overlaps. This is an idea that came out of wanting to narrow or widen an existing subscribe. The current text says you can't have overlapping subscribe namespaces. That was written before bidi streams, when an overlap would always result in completely duplicative information; there was never anything more you could do or change with a nested one. But now they're on independent streams with different options: some can be publish-only, some namespace-only. So if I want to narrow or widen one, or change its properties, I basically have two bad choices. If I send a reset and a new subscribe for, say, the wider namespace, that's racy: if the new one gets there first I'll get an error. Or I can do it slowly: send a reset or FIN, wait for the peer to reset or FIN the bidi stream, and then send a new subscribe namespace that I know won't overlap — but that's slow. And using a Request Update is not a great option here, for two reasons. One is that the namespace is a field, not a parameter, so we'd have to make it a parameter in order to use Request Update to change the namespace prefix. The other is that responses are compressed against the requested prefix, so you'd need a signal in the stream — I guess you could use Request OK, so maybe that's not horrible — to tell you when the prefix changed, because it changes the interpretation. So I have one more slide, which basically says: maybe we should just allow the overlaps. Martin has an interesting quip about a man with two watches, which is that you could have two namespace subscriptions with conflicting information: one says you have a namespace, the other says you had that namespace and then it was done. So do you have it or not? But I think they should eventually converge.
Another issue is that subscribers need to know, because we say that when you close a subscribe namespace it implicitly NAMESPACE_DONEs all of the namespaces it announced. So there's an example where you had two subscriptions, they both gave you the same namespace, and one of them FINned — you need to know that the namespace is actually still open from the other one.
Let me pause there. Should we just allow overlaps? Luke.
Luke Curley: I was just going to ask, is this a problem with Subscribe as well? Like when you reset a subscribe stream, when are you allowed to open a new one for the same track?
Alan Frindell: There's a whole other thing about duplicate tracks later in this deck, so let's not talk about Subscribe; let's stick to Subscribe Namespace.
Luke Curley: Anyway, my opinion here is to allow overlapping, because it's just two copies of the same data. They made a mistake, but allowing it avoids races.
Alan Frindell: I am inclined to agree. Mo.
Mo Zanaty: What does it mean if you subscribe twice? If you subscribe once to the higher-level namespace, don't you get everything in the more specific namespace anyway? So what does it mean to subscribe twice?
Alan Frindell: So one reason you might change is that they have different options, right? One might have forward zero, one might have forward one; one might be publish, one might be namespace. So like.
Mo Zanaty: But if they're different, which one governs?
Alan Frindell: Because they each have their own independent bidirectional response stream now — oh, you're saying for forward zero?
Mo Zanaty: Data streams. I mean, you're going to get objects if you're forwarding. So which one governs? I think it's confusing if we go down the rabbit hole of allowing both, especially if it's only for this transition. I agree with Luke that we have the same transition problem for base SUBSCRIBE, and I think it's a pretty heavyweight solution to allow overlapping just for the transition case. If people have other use cases for allowing overlap — which I think is the case for base SUBSCRIBE — let's talk about that. But if it's just for this transition, I would not do this.
Alan Frindell: So I definitely think there's cases where, for example, I want to subscribe namespace star with namespace only, but I want to subscribe to a more specific namespace for publish only.
Mo Zanaty: Oh my god.
Alan Frindell: Like I want to know about all your namespaces but I only want tracks here.
Mo Zanaty: This brings up my pet peeve of having the Subscribe Namespace message be such a generic thing that's overloaded to mean both name discovery and wildcard subscribe. So I would suggest two separate messages, if your intent is that you want one that's just getting names and one that's actually getting published objects. I would rather see a PR to split that message and remove that one parameter — make it two different messages — than allow overlap just for that one case. Do you have a case of overlap that isn't names versus objects?
Alan Frindell: Um.
Mo Zanaty: It seems like the broader namespace would always just give you everything, so what's the point of the narrower namespace?
Alan Frindell: Okay, so — Martin was thinking about this; he filed some of these issues. I was also thinking about how you might use this in a relay-to-relay setup. You have a downstream subscriber that wants A/B publishes, so you send that into the interior of the relay network: somebody's interested in A/B. Then another subscriber shows up that's interested in publishes for all of A. The right answer is that I need to modify that upstream publish subscription to be wider now. And in order to do that, I have to actually cancel the old one, wait for it to complete, and then issue a new one — otherwise it might blow up. So that's the use case: widening or narrowing an existing one. It's not that I really want them to overlap; overlap doesn't make any sense, as you point out.
Mo Zanaty: I agree that the change makes sense. How to modify a namespace subscription makes sense. I don't think the mechanics should be, um, you know, allow infinite overlap.
Alan Frindell: So, maybe the right answer here is we do use Request Update, like you can change the prefix.
Mo Zanaty: Change the prefix to a parameter instead of a field?
Alan Frindell: Then you just need one, and your narrowing or widening case is handled without overlaps.
Mo Zanaty: That expresses a lot more intent, and it's something that is clear. These other mechanisms seem really fuzzy — you don't really know what's happening as the receiver of these messages. So I think the Request Update makes a lot more sense. It's direct, it's clear, it's the right semantic.
Alan Frindell: Yeah. Ian.
Ian Swett: Uh, yeah. I think allowing Request Update to update the prefix, and just saying everything after the Request OK message is against the new prefix — that wasn't obvious to me, but now that you say it out loud, it's actually kind of nice that those are all in the same stream. That's pretty trivial processing logic.
Alan Frindell: It only occurred to me as I was saying it out loud.
Ian Swett: Yeah. Before, I was like, this seems crazy and impossible, and now I'm like, oh no, this is trivial. Never mind.
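The processing logic Ian is describing can be sketched as a tiny state machine. The message names and tuple representation below are my own shorthand, not the draft's: suffixes on the bidi stream are expanded against the current prefix, and the Request OK that answers a Request Update is the in-stream point where the new prefix takes effect.

```python
class NamespaceStream:
    """Receiver side of one SUBSCRIBE_NAMESPACE bidi stream (illustrative)."""

    def __init__(self, prefix: tuple):
        self.prefix = prefix
        self.pending = None   # prefix sent in a REQUEST_UPDATE, not yet OK'd

    def send_request_update(self, new_prefix: tuple) -> None:
        # Request that the peer narrow or widen the subscription's prefix.
        self.pending = new_prefix

    def on_message(self, kind: str, suffix: tuple = ()):
        if kind == "REQUEST_OK" and self.pending is not None:
            # Switchover point: everything after this uses the new prefix.
            self.prefix, self.pending = self.pending, None
            return None
        if kind == "NAMESPACE":
            # Decompress the announced suffix against the current prefix.
            return self.prefix + suffix
        return None
```

Because the OK arrives in order on the same stream as the announcements, the receiver never has to guess which prefix a given suffix was compressed against.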
Alan Frindell: It's annoying that we don't send a namespace as a parameter anywhere else yet, but whatever, these are all solvable.
Ian Swett: I mean, everything's becoming a parameter, so it's fine. That's just spelling. Um, yeah.
Ian Swett: So I think we probably should just do that. But do we want to split Subscribe Namespace into two? We've kind of discussed it before — is that a good change? Can we get a read from the room?
Alan Frindell: I would love to not change it — in the sense that I'd like to not change things — but if we're going to end up changing it at some point, I'd rather change it now. Um, Mo, I know what you're going to say.
Mo Zanaty: Editorially, I'm with you, it's a change. But logically, they're just two totally different things; they have nothing to do with each other. I'm asking for names of things, versus I want to receive a bunch of objects. It seems weird that we're lumping them in the same message. And there are probably a lot of conflicting parameters that would not make sense if you're doing one versus the other.
Ian Swett: So, in the original incarnation you always got both, and then at some point I added a param that let you say: do you want one, or the other, or both? And you're just saying, why don't you make them different messages. I don't have a strong preference, but.
Alan Frindell: We have a long queue here, and I want to talk about a bunch of other issues. Can people just come in and say split or don't split? Victor.
Victor Vasiliev: Yeah, as far as I can remember they're not quite splittable, because the message is the overflow path for publish flow control, for when you reach the limit of how many subscriptions you have. You still need the bidi stream either way.
Alan Frindell: The way I might spell the split is that we get rid of the both option. It could even be the same message, but the bit is part of the message type, right? The last bit is just: did you want namespaces, or did you want publishes. So the mechanics would otherwise be the same, but you have to pick one and you can't change it.
Okay. I may do that. I'm going to roll on.
Okay. How much time do we have? 25 minutes. Okay. Fill Timeout. We talked about this in Boulder, and there's a PR open for it. As a refresher, this is the subscriber telling a relay how long it's willing to wait for it to fill a gap in a fetch. There's an open question on that PR, which is: what does this timeout govern? Is it the timeout that the relay would pass to any upstream fetch it needs to make in order to satisfy the request — so if for some reason it had to issue two, the same timeout gets passed to both fetches? Or is it more like a budget: you can make as many or as few upstream fetches as you want, but I don't want you to spend more than X milliseconds total doing them. So if the first one takes half your budget and the second one takes the other half, you're out, and the third should be treated as if its timeout were zero. And there's a practical example of how those differ. If you're fetching 0 to 10, the cache has only the even objects, and your fill timeout is 2 seconds: with option A you will get all of the objects, but the total time will be 5 seconds. With option B you would get 0 to 4, 6 and 8; 5, 7 and 9 would be marked unknown, and the total time would be 2 seconds. Some people ask: why don't you just do A and have the subscriber side apply its own 2-second timeout? The difference is that in that case you would not get 6 or 8 for sure, and you might not get 4 either. So I don't know — maybe I'm leaning a little bit towards the budget approach. I know Suhas feels strongly for budget. Does anybody have a preference? Mo.
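The two interpretations in that example can be modeled in a few lines. The 1-second cost per upstream fill and the exact cache contents are my assumptions, chosen to reproduce the numbers above; the function itself is a toy, not relay code.

```python
def fill(cached, wanted, timeout, mode, fetch_time=1.0):
    """Simulate a FETCH with gap filling under two Fill Timeout semantics."""
    got, unknown, elapsed = set(cached), set(), 0.0
    for obj in wanted:
        if obj in got:
            continue                          # cache hit: no upstream fetch needed
        if mode == "per_fetch":               # option A: timeout applies per upstream fetch
            ok = fetch_time <= timeout
        else:                                 # option B: timeout is a shared budget
            ok = elapsed + fetch_time <= timeout
        if ok:
            got.add(obj)
            elapsed += fetch_time
        else:
            unknown.add(obj)                  # give up and mark the object unknown
    return got, unknown, elapsed

cached = {0, 2, 4, 6, 8}                      # cache holds only the even objects
option_a = fill(cached, range(10), 2.0, "per_fetch")   # every object, 5 seconds total
option_b = fill(cached, range(10), 2.0, "budget")      # 5, 7, 9 unknown, 2 seconds total
```

Under A the relay happily makes five sequential 1-second fills; under B the budget is exhausted after filling gaps 1 and 3, so the remaining gaps are reported unknown immediately.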
Mo Zanaty: I agree with budget, B. It seems unreasonable to expect the subscriber to have any idea what A would do, right? A could go on for minutes.
Alan Frindell: Yeah. Okay. Anyone else? Suhas.
Suhas Nandakumar: Yep, budget B.
Alan Frindell: Okay. Anybody else? I'll give 5 seconds for you to get in the queue to say something. All right, ship it.
Thank you. Okay. Send Rate Param — and there's a PR for it. This is something that applies to FETCH. In Subscribe it doesn't make any sense, because Subscribe data is already paced at the encoded rate — it's live — and it's also unsustainable: you could set it much higher than the live rate, in which case it does nothing, or you could set it lower than the live rate, but that's not going to last, because you'll basically intentionally build a queue and eventually get too far behind. So this is only for data that's coming out of your cache. You probably can't use a transport pacer to implement it; that would only work for the whole connection, and this is basically per fetch stream. The way it's written, this is a ceiling and not a floor: data will not come to you any faster than that, but other high-priority stuff could still slow you down. So it's: don't read out of the cache any faster than this. It overlaps with the receiver's use of flow control — in FETCH the receiver can already control the rate at which the sender sends by reading more slowly, which controls the rate of flow-control release. The rewind proposal lacks that, so maybe we need this anyway. So I don't know — do people want this? It's marked as an RFC thing; I think Will asked for it. Victor.
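The "ceiling, not a floor" semantic can be sketched as a per-fetch-stream rate cap. Everything here — class name, API shape, the injectable clock — is illustrative, not from the PR; the point is only that the cap yields a minimum delay, and other machinery (congestion control, higher-priority streams) can always delay the data further.

```python
class SendRateCeiling:
    """Per-stream cap on how fast objects may leave the cache (illustrative)."""

    def __init__(self, bytes_per_sec: float, clock):
        self.rate = bytes_per_sec
        self.clock = clock            # injectable time source, for testing
        self.next_send = clock()      # earliest time the next send is allowed

    def delay_for(self, nbytes: int) -> float:
        """Seconds to wait before nbytes may be sent without exceeding the cap.

        This is only the cap-imposed delay: a ceiling, not a floor, so the
        actual send may happen later if other traffic takes priority.
        """
        now = self.clock()
        wait = max(0.0, self.next_send - now)
        self.next_send = max(now, self.next_send) + nbytes / self.rate
        return wait

# With a frozen clock, three 500-byte objects at 1000 B/s space out by 0.5 s each.
pacer = SendRateCeiling(1000.0, clock=lambda: 0.0)
delays = [pacer.delay_for(500) for _ in range(3)]
```

This also makes the objection concrete: the cap needs a rate, an interval, and a burst policy to be meaningful for variable-bitrate media, which is part of why the room leans toward parking it.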
Victor Vasiliev: One observation I have is that I'm not entirely sure we want this as a mandatory-to-implement feature, because it has a lot of sharp edges. My second observation is that this is basically a worse version of rewind, in the sense that rewind is: you have a fetch, and then you transform that fetch into a subscribe for the last mile. If you do something like that, you would want to set the pacing, and that would give you the semantics you'd get from 1354. So I'm not sure, if we add something like rewind, whether this will be useful at all.
Alan Frindell: Wait, I'm confused. I feel like this is less useful for FETCH but more useful for REWIND. So how are you saying that REWIND would solve this?
Victor Vasiliev: Yeah, that's what I mean — I don't think REWIND even works without this. I've not read the current version of REWIND, but REWIND has to deal with some version of this problem.
Alan Frindell: The other way to do it: you can do it with timing, or you can do it with some kind of flow control messaging — a way to flow control an entire subscription, basically, which is an open issue that we never talk about. Um, okay. Kind of a long queue. Luke.
Luke Curley: Yeah, I'm just going to say this won't really work that well in reality, because you're trying to slow down media, which is variable bitrate in principle. And it's not clear what happens if there's congestion — does this flow control retroactively apply if congestion control was the limit in the past? I would much rather people just fetch kind of like HLS/DASH, where you fetch a group at a time, instead of giving me half a frame in an object and doing pacing in the application like this.
Alan Frindell: So, there is a specific use case — prefetch is a good one — where it's like: somebody might watch this thing, so I want to prefetch it, but I don't want to go as fast as the wire can go, because I'm going to prefetch four things, or I want to reserve some bandwidth in case somebody else wants to do something.
Luke Curley: You're making the assumption you're at the live playhead, right? It only works if you're currently at the live playhead and you know the exact send rate.
Alan Frindell: You do need to know something about the rates for it to be meaningful, but I don't think you need to be at the live playhead — this works entirely in a VOD-based system that's driven by fetch.
Luke Curley: I'm just saying, I think this is something that would need to be implemented to find out — and I don't think it'll actually work.
Alan Frindell: Okay. Ian.
Ian Swett: Yeah, I think we should park this for now. The SCONE approach is connection-level, so this doesn't really do the thing you want for SCONE. And also, from experience, trying to do the Netflix-style pacing stuff — I wouldn't even call it straightforward. It's definitely a very doable thing if you have server-side ABR and you have communication from the client about things like buffer depth. But trying to get the client to do it — I don't even know if I want to try; it seems incredibly difficult. I think the server is actually in a better place to try to implement this algorithm, and it's very unclear to me whether the client can really provide a useful enough rate that it's worthwhile. If someone had an implementation that used this to do the thing we're talking about, and it actually worked in the presence of other subscriptions and so on, maybe I'd think differently. But for now I'd park it — punt it to V2, whatever we want to call it.
Alan Frindell: Well, it sounds like REWIND may need it, so maybe the answer is like make this REWIND'S problem.
Ian Swett: Sure. That's fine. But this also only works for FETCH, right? So like.
Alan Frindell: Well, it would be FETCH and REWIND, right? Or maybe only REWIND.
Alan Frindell: Sure, sure. The one thing I do want to talk about, though, since you mentioned SCONE: yes, the SCONE rate is for the entire connection, but FETCH is a total hazard, right? If the network said, don't go over this rate or I'm going to drop your packets — I could change which tracks I FETCH, but I can't control the rate at which they come. So if I FETCH too much, I may blow out my whole SCONE window and get rate-limited by the network.
Ian Swett: Sure. So if we want that, we should add a feature that targets the SCONE-style use case, which is session-scoped, not track-scoped, because that's what SCONE does. I don't know.
Alan Frindell: Okay. Mo.
Mo Zanaty: Um, I'm not a big fan of this. I'm sorry, I didn't read the PR or the issue, but a rate by itself is not going to work — you need a lot more detail than just a rate: the interval over which the rate applies, the peak and the average, all kinds of ugly things. If you're going to implement a pacer on the relay, then you have to provide the parameters of that pacer in this message. It just seems like a lot of work for the relay to implement a pacer, potentially per FETCH, and the client can already do this, right? And to Luke's point, it makes no sense to do this at a sub-object level. You'd want to do it at an object boundary, and the client can already do that: it can FETCH the objects it wants and time out objects across multiple FETCHes. So I don't really see the advantage of doing one giant FETCH and specifying a rate, versus doing FETCHes atomically for the things I want at the timeframes I want.
Alan Frindell: It's more overhead, but yeah. All right, I'm hearing most people say park this, or make it REWIND's problem. I think Will was the one who really wanted it, but he's not here to defend it, so I'm happy to move that along.
Okay. How much time do we have? Okay. This is an idea that came out of wanting to get some information out of the endpoint you're talking to, not the application that's sitting on top of it. It proposes that we reserve all the tuples that start with .session: applications can never use that, and it is only for the receiving stack. So I can then subscribe for something like .session trackname log — and I actually have an implementation of this. If you get a subscribe for a track name you don't understand, you just return Not Supported and nothing happens. But in my implementation, it will give you a trace log of everything that's happening inside my relay related to your session, and it streams it back to you as objects on that track. The other use case — which is where this came from — was 1507: I might want to get congestion control information periodically out of that stack, and that could be served as a track. So I subscribe to this magic track and it just shows up. As far as MoQ transport is concerned, we are just reserving this as an extensibility point and saying this is a special space; applications do not use it. What people can build on top of it is what comes next.
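The relay-side behavior being described is just an intercept before the application sees the subscribe. The handler shape, the error constant, and the namespace spelling below are illustrative — only the idea (reserved first tuple element, Not Supported for unknown stack tracks) comes from the discussion.

```python
NOT_SUPPORTED = "NOT_SUPPORTED"   # placeholder for the real error code

def handle_subscribe(namespace, track_name, stack_tracks, app_handler):
    """Route a subscribe: reserved .session tracks go to the stack, not the app."""
    if namespace and namespace[0] == ".session":
        producer = stack_tracks.get((tuple(namespace), track_name))
        if producer is None:
            # Reserved space, but a track this stack doesn't serve.
            return ("REQUEST_ERROR", NOT_SUPPORTED)
        # Served by the stack itself, e.g. a live relay trace log.
        return ("REQUEST_OK", producer)
    # Everything else is a normal application track.
    return app_handler(namespace, track_name)

# Hypothetical stack-served track and a trivial application handler.
stack = {((".session", "moxigen"), "log"): "trace-log-producer"}
app = lambda ns, name: ("REQUEST_OK", "app-track")
```

Because unknown reserved tracks fail closed with Not Supported, a client can probe for a stack feature without breaking anything, which is exactly the "just try subscribing" behavior Luke raises next.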
Luke.
Luke Curley: Yeah, I think it's fine, but I just think extensions are nice because you know whether the other side supports them. In your case, for Moxigen live trace logs, I wouldn't know I'm talking to Moxigen, so I'd just have to try subscribing to this trace log endpoint — or send a congestion control request and expect in my code to handle getting an unimplemented message. Versus extensions negotiated as part of the handshake, where you know right away that it's supported.
Alan Frindell: Well, you can do both, right? I could advertise a setup parameter that says these are the .session namespaces that are available. Um. But.
Luke Curley: Sure. But.
Alan Frindell: What it saves you is this: say you have an extension that functions like a track. Now you have to define all the messages and the data streams and everything along with it, when MoQ already defined all that machinery and you can just reuse it.
Luke Curley: Well, you kind of have to do that anyway, based on how you encode the objects. Defining how an object is encoded is very similar to defining how an extension message is encoded. Um, I see your point.
Alan Frindell: I've found it to be much simpler. For the log implementation, I already have all the machinery I need to publish to a track, so I didn't need to create anything in the parser or the framer; I just watch for a subscription and catch it before it goes to the application. So, anyway, and if you don't like it, you don't have to use it.
Suhas.
Suhas Nandakumar: This seems fine, but one question I have: let's say extension A defined .session logs, and another extension also defined .session logs but with a different encoding. Are we saying that's not allowed? Or are we saying that if the relay supports the extension from draft A it does one thing, and if it supports both, somehow there should be a way to negotiate that?
Alan Frindell: I mean, the third bullet here is: if you're going to do something here, you should register it with IANA and say, this is my thing.
Suhas Nandakumar: Right. In that case I would say that just a track name would not work. We need to have something like, say, .session/logs or moxigen/logs, right? Then I need to have hierarchy.
Alan Frindell: Yes, yes, yes. I saw your feedback on the PR that we need a namespace in there, and I added it. So you can.
Suhas Nandakumar: Yeah.
Alan Frindell: Cool.
Mike.
Mike English: Sorry, I can't hear you right now, but I think you can hear me. My question is about registering: was .well-known considered? I realize that's HTTP, but should we try to follow a pattern that's already established for how we avoid conflicts here?
Alan Frindell: If you look at the last question on here, my thought was: should we just reserve everything that starts with dot? Oh, you can't hear me. Can you hear me now? Okay. The question, or maybe the answer, is that we just reserve every namespace whose first tuple element starts with a dot, and then we can figure out what happens later.
Mo.
Mo Zanaty: Seems useful, but if you're going to have a setup option for it anyway, I'm unclear why we need to register anything. The setup option would define something, and that spec would tell you exactly what the name could be, you know, foo. It doesn't matter what the name is. I don't see why IANA would need to get involved; some separate spec would tell you what this option negotiates.
Alan Frindell: You may be right. That's fair. Luke.
Luke Curley: Yeah, and the final thing is, I think for this use case we want to make sure it's hop-by-hop only. So .session really needs to mean: you must return an error if you don't support it. I just really don't want a relay forwarding this to the upstream broadcaster, for example, and returning the wrong congestion control information. So this is more than reserving dot; this needs to be: you must never forward .session upstream.
Alan Frindell: I mean, it's not even really about upstream; it's intended to be handled at a lower level. The application should never see something with .session.
Luke Curley: Yeah, but I think that's different from .well-known in HTTP, because in HTTP you can send that upstream.
Alan Frindell: Right. I think Claude and I took a look at .well-known and decided that wasn't quite the right model, or wasn't the right name; I didn't want to use it.
Victor.
Victor Vasiliev: I think you do want to reserve those, because the goal of reserving here is not about whether it's supported or not; we want to make sure that the dot names do not conflict with existing application names. So by reserving them we say: don't use this unless you write a spec.
Alan Frindell: Fair. Do you want me to change that? I could change this to say anything that starts with dot is reserved, and then we can solve the rest of it in some extension draft or MoQ v2.
Victor Vasiliev: Everything that starts with dot is reserved; everything that starts with .session is reserved and must not be forwarded.
Alan Frindell: Okay.
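The two-tier rule Victor lands on above — dot-prefixed namespaces are reserved for specs, and .session specifically is hop-by-hop and must never be forwarded — can be summarized as a small classification function. This is an editor's sketch of the rule as stated in the discussion; the function name and category strings are invented for illustration.

```python
# Sketch of the reservation rule from the discussion: first tuple
# elements starting with "." are reserved, and ".session" is
# additionally hop-by-hop. Names are illustrative only.

def classify_namespace(namespace_tuple):
    """Return how a relay should treat a subscribe for this namespace."""
    if not namespace_tuple:
        return "application"
    first = namespace_tuple[0]
    if first == ".session" or first.startswith(".session/"):
        return "hop-by-hop"   # handle locally, must never forward upstream
    if first.startswith("."):
        return "reserved"     # don't use without a spec / registration
    return "application"      # normal application namespace
```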
Alan Frindell: Um, so I heard Luke say no, this is not good, but I heard some other folks say yes, it's useful.
Luke Curley: No, no, I think it's fine. I just think I would do an extension anyway. I wouldn't use this, but it's okay to add.
Alan Frindell: Okay, all right. We will make some updates and try to land this.
I'm not going to get through all the slides. Oh, if I skip multiple subscriptions, we can get to the end.
I want to talk briefly about things that we never talk about. One of them is delivery timeout; this issue is very old. I know Victor has different plans about what to do about delivery timeout. I had filed a different PR to make delivery timeout overridable on a per-subgroup basis, but Victor said no, we have to fix delivery timeout first. So I don't know when we want to talk about that. And there are these other two issues: one about how you can't flow control a subscription — do we want to do anything about that? And another about how VOD kind of sucks over MoQ. It's not in our charter, but there are VOD people who swing by, or people who do both live and VOD, and they ask what we're going to do about it. So I don't know how we want to make a plan to resolve these issues.
Suhas Nandakumar: I had one question on 1316. Do we know what sucks? Maybe it's in that issue, or someone can come forward.
Alan Frindell: Yeah, probably. It's a long issue; there's a lot of discussion on it. This could be something nice for the proponents to come and present. I mean, I'll say this: MoQ offers you zero benefit over what you can do with HTTP for VOD, when there's an opportunity to do a lot more.
Suhas Nandakumar: Yeah, that's the thing; I don't know what that "lot more" is.
Alan Frindell: Well, go read the issue. Okay.
Ian.
Ian Swett: Um, I got the reading on 869 that people weren't interested in full-on subscription-level flow control at the moment, but if people wanted to see either a PR or an extension draft, I could write one or the other. So I guess people should tell me what they want me to do. I can close it with no action under the idea that no one cares, or I can do one of the other two things. I have a sketch of what subscription-level flow control would look like if we just wanted to do the straight-up stream byte-counting sort of approach. I don't know.
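The "straight-up byte-counting" approach Ian mentions is usually a credit window, as in QUIC's per-stream flow control: the subscriber grants a byte budget and the publisher must stop when it's exhausted. A rough sketch of what that could look like per subscription, with all names and the credit-grant mechanism purely hypothetical (no such message exists in any adopted draft):

```python
# Hypothetical per-subscription byte-credit flow control, modeled on
# QUIC-style credit windows. Not from any MoQ draft; illustration only.

class SubscriptionFlowControl:
    def __init__(self, initial_credit):
        self.credit = initial_credit  # bytes the publisher may still send
        self.sent = 0                 # total bytes sent on this subscription

    def can_send(self, nbytes):
        return nbytes <= self.credit

    def on_send(self, nbytes):
        if not self.can_send(nbytes):
            raise RuntimeError("flow control violation: subscription blocked")
        self.credit -= nbytes
        self.sent += nbytes

    def grant(self, nbytes):
        # Subscriber extends the window, e.g. via some hypothetical
        # control message carrying additional credit.
        self.credit += nbytes
```

The design question the issue raises is whether this belongs at the subscription layer at all, given QUIC already flow controls the underlying streams.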
Luke.
Luke Curley: I was going to say, for the last one, I would love it if somebody wants to work on a draft or an extension for some way you map HTTP to MoQ — as in, you'd make this GET request, and the headers are track properties that get returned, or stuff like that. Because I totally agree with what Alan said: do we even need MoQ here? HTTP already kind of works. These clients are out there, but I'd love to find a way to reuse existing stacks without forcing everybody to switch over to MoQ for VOD.
Alan Frindell: I mean, maybe what I would like — I think Yei-qui filed that issue; I don't know if he's here. But it would be great if we could get the VOD proponents to say what exactly they are asking for, or propose an extension or a specific thing to do here. I don't know what to do with this issue, but it seems like we should maybe spend more time talking about it.
Luke Curley: Yeah, like, I'm not doing fetch, I'm doing HTTP instead for VOD. I'm in a similar boat.
Alan Frindell: Um, I'm not sure I'm getting a clear answer. I guess what I'm saying is, if you care about any of these things, please grab the wheel and start driving them, because I think they're on a path to close with no action if we don't.
Suhas Nandakumar: Maybe one question to Luke, since he said he's using HTTP instead of fetch: is it because fetch provides exactly the same functionality as HTTP, or does it provide less functionality, and hence you prefer to use HTTP?
Luke Curley: It provides the same. And even that existing issue about send rate — we were saying, if the solution is just to fetch individual groups one at a time, that's what HTTP does. And the idea is, I'm pointing existing HLS players at my relay, and they just speak HTTP and fetch MoQ groups; they don't realize it's MoQ, they just think it's HTTP.
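The gateway pattern Luke describes — legacy players speaking plain HTTP while the server maps each GET onto a MoQ group fetch — hinges on a path-to-track mapping. A toy version of that mapping is sketched below; the URL layout (`/fetch/<namespace>/<track>/<group>`) is entirely made up, since no such HTTP mapping is standardized.

```python
# Illustrative HTTP-path-to-MoQ-fetch mapping for a relay front end.
# The /fetch/... layout is a hypothetical convention, not a standard.

def parse_fetch_path(path):
    """Map /fetch/<namespace>/<track>/<group> to a MoQ fetch request,
    or return None so the request falls through to ordinary HTTP."""
    parts = [p for p in path.split("/") if p]
    if len(parts) != 4 or parts[0] != "fetch":
        return None  # not a MoQ-backed resource
    namespace, track, group = parts[1], parts[2], parts[3]
    if not group.isdigit():
        return None  # group IDs are numeric in this toy scheme
    return {"namespace": namespace, "track": track, "group": int(group)}
```

An HLS-style client requesting `/fetch/example.com/video/42` would, under this scheme, trigger a fetch of group 42 of the video track without ever knowing MoQ is behind it.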
Suhas Nandakumar: Got it. The reason I ask is that at some point we had a prototype to implement what is called MGET, basically an HTTP version of a new GET that would do MoQ. Again, that was an experiment to see how MoQ is different from HTTP rather than anything else, but yeah, that was the idea there.
Luke Curley: Most of my stuff is about backwards compatibility, honestly. It's just impossible to upgrade every client over to MoQ overnight, so having a fallback, yeah.
Alan Frindell: I think this is less about that, Luke, and more about: do we care that MoQ is a nothing burger for the VOD world?
Luke Curley: Yeah, no I I get it. And I think the answer is yeah, it's a nothing burger, but like we want VOD so we can use one protocol in the future.
Alan Frindell: Yeah. Mo.
Mo Zanaty: Yeah, I read through 1316, and I think there's some dependency on 869, because one of the things it's asking for is rate limiting. So if you're going to prioritize things, I'd prioritize 869, because that's one of the mechanisms 1316 is asking for. The rest of 1316, I think, if we land filters, would already give it what it wants, because it's asking for being able to do things with different priorities, with different subgroups, different slices of the media — which is what filters would allow you to do with multiple fetches, not with one fetch. If they want to do it with a single fetch, then no, I don't think there's a good path to having a single fetch ever deliver multiple streams. So if that's...
Alan Frindell: Have you read REWIND?
Mo Zanaty: Well, but I don't think anybody is planning to do REWIND for hours of rewind, right?
Alan Frindell: Maybe. Maybe that's the solution. I don't know.
Um, I see we're at the top of the hour. I did skip over the meatiest issue here, which is not great, since I also just made a plan to not have any time to talk about issues for the next two months. So I would encourage people who care about multiple subscriptions to a track to read through these slides, read through the relevant issues and/or PRs, and express your opinions. We weren't really sure; I don't even remember the proposal we came up with, we had different options. So please take a look, and maybe we'll find a time to fill it in. Mo.
Mo Zanaty: So the latest filters proposal does allow you to aggregate and propagate upstream any kind of filter that you get from your subscribers. So if that was the overriding concern for this, I would say we don't need to do anything else.
Alan Frindell: Well, so if we take what you did there — I haven't read the latest proposal yet, but I need to go through and look at it. If we did the same thing for subscription filters, or if we moved subscription filters into the other filters, I think that would be a solution.
Mo Zanaty: Well, what I heard before was that the subscription filters were not a problem, because you could aggregate those by doing them in sequence. Like, if one person wants just the first three seconds, and somebody else wants two minutes later, then you would first subscribe for the first three seconds and then later subscribe for the two minutes. There was no way you could have conflicting requests, because you could always sequence them and never have to do a full subscription upstream.
Alan Frindell: I don't know if that's really true, but if we could convince people that it was, and we didn't have to do anything, I could live with it. I think that may be challenging. Anyway, I want to be mindful of folks' time, so please take a look. Magnus.
Magnus Westerlund: Yeah, I think it's time to wrap up here. Thank you everyone for contributing today; see you in two weeks' time, and maybe we'll figure out who's actually on the agenda then. Okay, thanks everyone, see you next time.
Suhas Nandakumar: Thank you.