Session Date/Time: 16 Mar 2026 08:30

Speaker 1: All right, do you see my screen?

Speaker 2: Yes, but not in presentation mode. There we go, in presentation mode. Yep.

Speaker 1: All right, awesome.

Speaker 2: Cool, thank you. Glad we tested that.

Speaker 1: Out of curiosity, just in case this happens again, what did you just do? How did we fix this?

Speaker 2: So, if I were to grant you slide control, that's a control over in the participant list side. But when you ask for screen control, you pop up in the queue as if you were going to ask a question, and if I hover over you, I see the option to accept or deny your request. So it's confusing, because the approval is in a different place.

Speaker 1: Gotcha.

Leslie Daigle: So, I think it's about time to get started on the MOPS working group meeting. I'm Leslie Daigle. I'm a co-chair of this working group. My co-chair, Jana Iyengar, is not able to join us this week, but he will be with us in Vienna. And I'm joined on stage by Glenn Deen, who is our technical advisor. So, as will no doubt be news to everybody here, there is the NOTE WELL. These are the conditions for participation in IETF work that you agreed to when you registered for the IETF meeting. If you have any questions, you should read through this slide carefully, and if you have any further questions about the contents, you should refer to the documents that are listed here. Giving you a moment to read through the NOTE WELL, and also to decide whether you are willing to be a note-taker. That offer is open to people in the room, as well as people online. Some meeting tips, if you haven't already seen these six times, and resources. This is our agenda. We've done the NOTE WELL. We need a minute-taker. Anyone? I can ask for volunteers online, but I know that some of them are up at very early hours of the day.

Magnus Westerlund: I can take notes.

Leslie Daigle: Thank you very much, Magnus. Everyone is welcome to help Magnus out in the shared note for note-taking. All right. So, this is the agenda as it was posted and shared with the mailing list. Are there any bashes to the agenda? All right, not hearing any bashes to the agenda. Just a couple of working group updates. I'd like to take this opportunity to say thank you very much to Kyle Rose, who was a co-chair of this working group from its inception until now and has stepped down. So, thank you, Kyle. And as I said, Jana Iyengar is co-chair now and will be joining us for the working group meeting in Vienna. So, welcome, Jana. And we're going to start now with the working group documents, going over the Network Overlay Impacts to Streaming Video draft (draft-ietf-mops-network-overlay-impacts), and Glenn will be playing the role of Sanjay. Just give me a sec. Just doing the clicky-clicky thing.

Glenn Deen: I will warn you, I speak a lot faster than Sanjay does, but he speaks much more clearly than I do, so there's some give and take. So, I'm Glenn Deen from Comcast NBCUniversal. I'm one of the co-authors of this and the current editor of the working group document, along with my co-author and co-editor, Sanjay Mishra of Verizon. It's a little bit early for Sanjay, so I said I would do the session today. This is the working group document for MOPS. We're currently at version 3. We had intended originally to do a version 4 rev before this meeting, but between the last meeting in Montreal and this one, we were soliciting some input, and in particular, we talked to David Schinazi, who knows a lot about things like network overlays and what went into the design of a lot of the specs and standards here at the IETF. And so we got some very good feedback from David, and we thank him for that, by the way. It's very welcome. So, we're at version 3. We're going to go through some of David's comments, but our ultimate plan is to rev to a version 4 ahead of the next meeting. But I'll get to that in a minute. So, David posted these comments to the list. We asked him to do that. They were very helpful. We would like to encourage you to follow suit if you have opinions. And we're going to walk through these. So, if you find yourself thinking, "Well, I sort of agree with part of what David says, but I would like to add some additional context of my own," we really do encourage and welcome that. So, please do it here in the room, but also do it on the list so that we have a broad conversation. So, David did a great review of version 3. He went through it section by section, and he did a really great thing where he said, "Well, here's where it's good, here's where it could use some work." 
And I will say that we didn't agree with everything David suggested, but that's normal for the IETF process. Even where we didn't agree, we think it's good because it opens up a discussion for the group. So, one of the strengths is that it actually talks about a real-world problem we have, where we have privacy-enhancing technologies and the impacts that they have on video. The gap that he identified was, he said, "Well, you raise the problem, but you don't give any solutions or remediation plans." And we said, "Well, that's true." In an earlier version of the document, we sort of started down that path. But with the guidance of the chairs and the group, the idea was that we would do a problem statement document, capture that, and then do follow-on work where we would talk about remediations. And so, it's by design that we don't have the call to action to fix it inside this particular draft. This is the problem statement, with the intention that there will be a follow-on document of some form that will actually address some of these things in the architecture approaches. So, one of the things David really focused on is privacy-related stuff. And obviously, a lot of these overlays as built out are really designed as privacy-enhancing technologies. And we've been very clear from the very beginning, when this was first an individual draft and then when it got adopted by the working group, that the purpose here is not to undermine or replace privacy-enhancing technologies. It's to find a great way to coexist, where they still do their thing, but the video playback and the video access through them works really well. So, one of the things we talked about in the ID and that David highlighted, and it's something the group may want to weigh in on, is this notion that a lot of the privacy-enhancing stuff, as done today by the IETF, has been sort of designed to be hidden from even the application running on the devices. 
And this was because, you know, there are a lot of threat vectors here that it deals with. But one of the basic ones is preventing malicious apps from tricking users into disabling the privacy. So, it sort of hides itself and says, "Well, I'm there and the apps don't know I'm there, so that way the app can't say 'turn me off'." That's an interesting problem vector, because from the perspective of the video playout, and the fact that the video playout wants to understand its pipeline and be able to work with it in the most efficient way possible, having an actor in the stack, or somewhere in the transport between the video player and the video content source over on the network, creates a bit of a problem, right? We want the video player to be aware of things. And that was one of the points that was made in the original ID. But then you've got this other view, where the privacy-enhancing technology should be hidden from the application. So, that's an interesting quandary we have there, and I think it's actually kind of neat; it's one of the original and one of the core problems facing us here. You know, should you be able to hide? And how well should you be able to hide? And what kind of relationship should applications, video applications, have with these underlying privacy-enhancing technologies? The other thing that David called out: in the document we really took a line, we said, "Look, there's this next generation of privacy-enhancing technologies today, MASQUE being the one that's front and center." And we distinguish them from traditional VPNs. And if you read the draft, there are a number of reasons why we distinguish them that way. But one of the chief ones is that these new privacy-enhancing technologies are more hidden, they're more automatic, they're less visible to the application, as we discussed. Whereas with VPNs, the application can typically discover, "Am I running on a VPN?" 
It can be something as simple as asking, "What's the IP address I'm actually accessing through?" Or you can run a traceroute across the network and look at the results you get back. That traditional stuff works pretty well over a traditional VPN. And there are obviously other tricks that people use for VPNs, like doing DNS lookups and seeing what results you get if you're in a split-horizon situation. But we think there's a great deal of difference between a VPN and these privacy-enhancing technologies like MASQUE. And we think that, in particular for this draft and for the later follow-on draft about remediations, those would be very different, because the VPN world is much simpler: it's a much simpler way of doing the hiding from observers, but it doesn't hide itself from the application space, if you get my drift. And so, as we go forward, I'm not sure we agree with treating these as the same. It might be worth ultimately pulling in some consequences of VPNs and talking about them a little bit, because they do change policy aspects in ways that are like privacy-enhancing technologies. But the remediations ultimately might be very different between the two. Again, VPNs don't try to hide themselves from the application. So, if one of the remediations is to make the privacy-enhancing technology more visible, well, the VPN already has that property. Does that make sense? So, David made a couple of recommendations. He said, "Well, streaming operators, maybe you could do your own privacy-enhancing technologies and implement encryption." Because in the case of a lot of these privacy-enhancing things like MASQUE, if you are yourself encrypting the content or the stream, it won't go through the privacy-enhancing stuff, because it sort of says, "Hey, you're already private because you're encrypted." 
So, it's a way to sort of force it, in some cases, not all cases, but for some implementations: if you go encrypted, then you'll exclude yourself from them. That's not universally true. It's true in a few cases, but it's not universally true. And there are some of these things where we've observed through testing that, literally, if you also have a VPN installed on the device, the privacy-enhancing technology itself won't turn on. It'll go, "Oh, there's already a VPN present here," even if it's not active, and it won't even allow itself to be turned on. Although some of them will say they're turned on when they're not actually turned on. That's a different problem. And he also pointed out that we had a typo: if you notice, we flipped the 2 and the 6 in those RFC numbers. So we'll of course get that fixed up. Oopsies. He also called out where we talk about transport middleboxes in the document; "middleboxes are evil," I think, was the short version of what was expressed. And we'll fix up that reference. So, that's what David recommends. So, what are we going to do with David's stuff? Like I said, we agree with some of it, we partially agree with other bits of it, and we somewhat disagree with other bits, like the VPN stuff. So, we're going to do a version 4. We're going to include more diagrams so we can discuss this stuff and talk about it in more detail. We're going to incorporate David's input. But David, really smart guy though he is, is also just one guy. We could use more eyeballs and more opinions on this, because this is a working group document; it's not a Sanjay and Glenn document with a few comments from David. It's a working group document. You're the working group. Please give us some comments. Give them to us on the list. It would be very helpful. Even if you say it stinks, telling us it stinks is a valid comment. We may not like it, but it's a valid comment. 
We would like to get this thing to working group last call, potentially for Vienna. So, give us your comments, give us your feedback, and we will take it into account, just like we're taking David's great comments into account.
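An aside for readers: the VPN-visibility checks Glenn described earlier (comparing the address you appear to come from, or comparing DNS answers across resolvers) can be sketched in a few lines. This is a toy illustration; the function names and address values are invented, and real detection logic is considerably messier.

```python
# Toy sketch of application-level VPN visibility checks. All names and
# addresses here are hypothetical illustrations, not a real detection tool.

def egress_differs(local_addr: str, observed_addr: str) -> bool:
    """True if the address a remote service sees differs from our own
    configured address, suggesting traffic exits through a tunnel."""
    return local_addr != observed_addr

def split_horizon(internal_answer: set, public_answer: set) -> bool:
    """True if an internal resolver and a public resolver disagree about
    the same name -- a classic sign of split-horizon DNS behind a VPN."""
    return internal_answer != public_answer

# Example: host thinks it is 192.0.2.10 but appears as 203.0.113.7,
# and internal DNS returns a private address the public resolver doesn't.
assert egress_differs("192.0.2.10", "203.0.113.7")
assert split_horizon({"10.0.0.5"}, {"198.51.100.5"})
```

As the discussion notes, these tricks work because traditional VPNs don't hide themselves from the application; the newer privacy-enhancing technologies deliberately do.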

Leslie Daigle: And what's on slide 8?

Glenn Deen: Thank you.

Leslie Daigle: Okay. Thank you. And I'll take this opportunity to remind people that please log into the Meetecho client, particularly if you are in the room, since remote people already have. We are running the queue entirely from the Meetecho client. All right. Oh, I think I have to take that back. There we go. So next up, we have Lenny.

Lenny Giuliano: Thank you. Okay. Do you see my slides in the correct mode? All right, great. So, I'm going to be presenting this proposal on behalf of my co-authors. This is a proposal on Dynamic Internet Multicast Tunneling. Now, I'll start by saying that this is probably not the right working group to adopt this work. However, I do think that this might be the working group with the folks most interested in this proposal, so that's why we're here. So, what is the purpose of this draft on dynamic internet multicast tunneling? Let's start with the problem statement, and that is that multicast, as anybody who's worked much with it knows, requires every layer 3 hop between source and receiver to be multicast-enabled. To overcome this hurdle, we sometimes use tunnels and overlays. Static tunnels specifically: GRE has been used over the years, over the decades, as the most common way of tunneling traffic over parts of the network that were not able to do multicast, and it's been used since the previous century. The problem with GRE and static tunnels is that they require manual configuration of both ends, plus you have to run routing through them for RPF to work. Dynamic tunnels, the most common being AMT (Automatic Multicast Tunneling), don't support routing protocol traversal. What that means is, if there's, say, more than one relay, how does a gateway know which one to use and which one to RPF through? The only protocol that runs through an AMT tunnel is IGMP. And the use case is that there are CDNs and content providers who are looking for zero-config tunnels, the dynamic tunnels we're talking about, as middle-mile tunnels to connect multicast islands together. And so a router would need to be the AMT gateway. 
And how does a router become a gateway without knowing what the right relay is to join a particular source? How do they know which relay they can use to join which source? I'll show a picture in a moment that illustrates this problem. So, what this proposal does is use BGP, specifically extended communities: in the BGP route to the source, we're using a BGP extended community to specify the AMT relay. We're essentially encoding the AMT relay in this extended community. The relay that is encoded in the extended community must have multicast connectivity, whether native or tunneled, to the source that is being advertised. And AMT is just one dynamic tunneling mechanism; others can be used, for example, those that operate with PIM Light. All right, so just a quick refresher on how AMT works, so that this proposal makes more sense for those who aren't intimately familiar with it. Imagine you have a multicast-enabled network, a multicast-enabled content provider, and a multicast-enabled local provider, and multicast flows natively the way multicast was intended to flow. But that represents a very, very small amount of the internet. The other 99+ percent of the internet is unicast-only. And if you have an interested receiver, it sends an IGMP report to its last-hop router, which doesn't know what to do with it because it's not running multicast. But if that host runs an AMT gateway, which is a thin client that sits on the host, it can magically discover a relay, dynamically build an AMT tunnel to that relay, and send an IGMP report through it, and the relay will join on its behalf, as if the gateway were directly connected to it. And the relay sends the traffic to the gateways in separate unicast tunnels; the multicast flows over those unicast UDP tunnels to the separate gateways. That's how multicast works; that's how AMT works. 
And if we add SSM, that's where we end up with TreeDN, which is a tree-based CDN architecture. So imagine you have the big-I Internet, which is unicast-only. You have a native multicast-enabled network, which we'll call on-net; this is a TreeDN provider. You have multicast content on that network, you have a multicast native receiver, and traffic flows natively. Now imagine we have a receiver that is off-net, that is, the receiver is on a unicast-only network. We add some AMT relays, we send the traffic natively to those relays, and an AMT tunnel is built to deliver the content to the off-net receiver from its nearest relay. That is RFC 9706. Now, what is this proposal suggesting? This proposal essentially extends the TreeDN architecture to do middle-mile tunneling. What you just saw was last-mile tunneling, where the viewer, the interested receiver, is sitting on a unicast-only network. What if you have multicast islands? In this case, we have three different multicast islands that are not directly connected to one another. They're separated by a unicast-only abyss. On the left side, on Multicast Islands 1 and 2, we have two different sources. On Multicast Island 3, we have a native receiver and a couple of off-net receivers that are nearest its relays. So, let's add some AMT relays here and one gateway. And imagine we have all these receivers, the native as well as the off-net receivers, sending IGMP reports through their native links or AMT tunnels to their nearest relay. Now, some of these receivers want to join source 1, and some want to join source 2. How do the routers know which relay can reach which source? And the answer is... oops. The way to solve this problem is that when the ASBR in Multicast Island 1 advertises reachability, the route to the network with source 1, it adds a BGP extended community that specifies relay 1. 
And likewise, relay 2 will do the same. Those BGP routes will propagate throughout the network, and now these routers all know which direction to send their joins or their AMT discovery messages. So the joins for source 1 get sent to relay 1, and those for source 2 get sent to relay 2. And the way we learned it was through this BGP extended community. Once we have the joins set up, this is the direction of the traffic flow: purple is native multicast flowing, and blue is tunneling, either AMT or PIM Light. So, what TreeDN did was these last-mile tunnels, from the relays to the receivers. What this proposal is adding is the ability to do middle-mile tunneling, from a gateway in one island to a relay in another. So, router-to-router tunnels using AMT. What are the implications of this proposal? Well, this is a really flexible, dynamic architecture that allows core routers to become AMT gateways. There is nothing in the AMT spec that prevented routers from becoming gateways; the issue was how they would know which relay to use. It's not obvious how they would do that in a world where you have more than one AMT relay. Another implication is that routers can be both AMT gateways and relays at the same time. They can be a relay to downstream gateways and a gateway to upstream relays. So they could build tunnels in both directions and send and receive multicast traffic through those tunnels. This essentially extends the TreeDN architecture to support middle-mile tunneling, because previously TreeDN really only covered the last-mile aspect of tunneling, addressing receivers on unicast-only networks, as opposed to the case where the source is on a separate island from the receivers. All right, why would we want to do this? 
There are content providers and CDNs who want to originate and transport multicast content that can be received by multicast islands downstream that are anywhere on the internet and not directly connected. CDNs have said, "Hey, we'd love to do multicast, but we're afraid that networks might not want to run PIM directly with us. So instead, we'll just tunnel it to them via AMT." Now, you could say, "Well, why don't you just use GRE? Router-to-router tunneling, that's what GRE is for." But these CDNs and content providers have said, "Well, we don't want to use GRE. GRE requires us to configure our endpoints, and we don't want to do that. We want a zero-config way: here's our relay, and anybody who wants to access this content can dynamically build a tunnel to us, and we don't have to configure endpoints every time a new downstream island is interested in this content." So, next steps: we're actually presenting this in MBONED. We think that MBONED is probably the right working group to adopt this, because this is an AMT relay discovery mechanism, and MBONED is really where that kind of work has been done and what it is chartered to handle. But I'm seeking feedback from other working groups: I just presented this in PIM, and I'm presenting it here in MOPS. As I mentioned at the beginning, I believe MOPS has the folks in the room who would probably be most interested in this architecture. So, that's why we wanted to share this, and we would welcome any feedback, thoughts, or questions.
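To make the extended-community mechanism from the earlier slides concrete, here is a hedged sketch of packing an AMT relay address into an 8-byte BGP extended community. The type and sub-type codepoints below are placeholders (the real values would come from the draft and IANA); the layout follows the generic IPv4-address-specific extended community shape.

```python
import ipaddress
import struct

# Placeholder codepoints -- the actual values would be assigned via the
# draft and IANA, not these.
EC_TYPE_IPV4_ADDR_SPECIFIC = 0x01
EC_SUBTYPE_AMT_RELAY = 0x00

def amt_relay_community(relay_v4: str) -> bytes:
    """Pack an 8-byte extended community: type, sub-type, 4-byte relay
    address (the 'global administrator' field), 2-byte local part."""
    addr = int(ipaddress.IPv4Address(relay_v4))
    return struct.pack("!BBIH", EC_TYPE_IPV4_ADDR_SPECIFIC,
                       EC_SUBTYPE_AMT_RELAY, addr, 0)

def relay_from_community(ec: bytes) -> str:
    """Recover the relay address a receiving router would RPF toward."""
    _, _, addr, _ = struct.unpack("!BBIH", ec)
    return str(ipaddress.IPv4Address(addr))

ec = amt_relay_community("198.51.100.1")
assert len(ec) == 8
assert relay_from_community(ec) == "198.51.100.1"
```

A router that receives the route to a source carrying this community now knows which relay to send its AMT discovery and joins toward, which is exactly the gap the proposal fills.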

Leslie Daigle: Great, thank you, Lenny. All right. Are there any questions? Anybody with questions, please get in the queue. Questions or observations. Sounds like you've got everybody convinced already, Lenny.

Lenny Giuliano: All right. I'll ask for working group last call then.

Leslie Daigle: There you go. All right. Thanks very much for the presentation, Lenny. And I'll urge people to reach out to Lenny and his co-author directly if you have particular thoughts on the proposal. There is an internet draft quoted in the agenda that details the work. Okay, thank you, Lenny.

Lenny Giuliano: Thank you.

Glenn Deen: All right. For this spot, I will be playing the role of Glenn Deen and not Sanjay Mishra. So, this is the regular update synchronization between the work being done at the SVTA (Streaming Video Technology Alliance) and the MOPS group here at the IETF. And as usual, I also go back and give them an update on what we're doing here at the IETF, so it goes both ways. At the SVTA, we do a lot of things, with a lot of different groups: advertising, quality, low latency, live stuff, DASH-IF, metadata, operations, audio, security. We kind of do the whole shebang for video. The goal of the organization is to make video awesome on the internet. And so, we touch all the different aspects, from caching to players and everything in between. So, organizational updates. One of the big things we've been going through is that we've been merging and growing. In 2024, we merged with DASH-IF; they came in to become part of the SVTA. Prior to that, OATC: if you've ever used a thing called TV Everywhere, OATC is the authentication mechanism that enables TV Everywhere streaming on the internet. So if people ask "What's OATC?", that's what it is. We just merged in the Ultra HD Forum, which came in and joined the SVTA in January. And if you go to the Ultra HD Forum, they still have a website; they do profiles and things like that for the presentation of Ultra HD content, and other things in production and distribution. So, the SVTA does a lot of stuff. They are involved in a lot of work, some of which is relevant to the IETF and some of which isn't. Hopefully, the stuff I'm going to talk about today is relevant to the IETF. The first thing I want to highlight is the Edge work. Edge is focused on building an environment that allows for the hosting of generic cache things. And when I say thing, I mean we don't care what protocol it talks: it could talk MoQ, it could talk SAI, it could talk something you invented last week, or it could just talk good old HTTP. We don't care. 
But the idea is that it's a workflow and a specification for how to host a caching environment at the edge of the network. What distinguishes this from a traditional cache is that when you move stuff to the extreme edge of the network in this vision, the awareness and the integration between the caching infrastructure, the stuff hosting the cache, and the network itself is far greater than in a traditional caching environment, where you might drop a cache in, plug it in, give it an address, and let it go do its thing. In this world, the caches, the hosting environment, and the network itself have a greater level of integration and awareness. So when things happen, like, let's say, a cache fails and the routing wants to send the clients for that cache someplace else, it isn't just a matter of moving to a cache that has an IP address in a similar grouping. It might mean that you need more network topology awareness, because going horizontal may not be the right thing; going deeper in the network might be, or vice versa. So, there's more knowledge here as well. And also, because you're at that extreme edge, you have a greater potential influence over the security state of the network. So when the network itself is doing DDoS protections and other things, you might be more integrated with and aware of that. So, that's what's going on there. And it touches a lot of IETF-related things, like BGP and other important stuff we do here at the IETF. So, there's a new draft, a Foundations document, that the SVTA will be publishing. It's in ratification right now within the group, and I anticipate it will get published in about the next two months, maybe. Low latency streaming: we've had a lot of work going on for a lot of years; this group used to be called Live, and then became Low Latency. 
Along with some other work going on at the SVTA, all of that is trying to really drive the latency of video streaming way down, to highly responsive levels. I'm not going to give a number here, because a lot of people have different definitions of what an ultra-low-latency response time might be, but we want the number as small as possible. So this is starting to touch on things like L4S, which is obviously an IETF specification. But here is where it comes into the work at the SVTA. One of the things we've discovered as we started rolling out streaming and L4S integration is that we do not have a standardized way today of capturing, from the IP layer all the way up to the video statistics layer, a unified measurement mechanism. And this becomes important, because you say, "Well, I deployed across an L4S network. Great. And I deployed an L4S-enabled application. Great. Did I get an improvement?" How do you measure that? There isn't a standard way, across the IETF or anywhere else, to do that. So, one of the things we are looking at is how you define that, and whether this could maybe be some joint work we could do with the IETF on defining a standardized apples-to-apples comparison mechanism. And part of the problem here is that you have different means, approaches, and even measurements at the different layers. Obviously, the IP layer and the TCP layer have their stuff, because those are network parameters. As you move up the stack, you get into things like the actual video player statistics and, if you're in the industry, stuff like Conviva. 
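As a toy sketch of what an apples-to-apples, cross-layer measurement might look like, the snippet below lines up network-layer samples with player-layer samples on a shared timeline. The field names (`rtt_ms`, `bitrate_kbps`) are invented for illustration; as noted above, no standard schema for this exists yet.

```python
# Sketch of cross-layer correlation: attach the most recent network-layer
# sample (e.g. RTT on an L4S path) to each player-layer sample (e.g. the
# bitrate the player chose at that moment). Field names are hypothetical.
from bisect import bisect_right

def correlate(net_samples, player_samples):
    """For each (t, stats) player sample, attach the latest network sample
    at or before t. Both inputs are (t, dict) pairs sorted by t."""
    times = [t for t, _ in net_samples]
    out = []
    for t, stats in player_samples:
        i = bisect_right(times, t) - 1
        net = net_samples[i][1] if i >= 0 else None
        out.append({"t": t, "player": stats, "network": net})
    return out

rows = correlate(
    net_samples=[(0.0, {"rtt_ms": 12}), (1.0, {"rtt_ms": 9})],
    player_samples=[(0.5, {"bitrate_kbps": 4500}), (1.5, {"bitrate_kbps": 8000})],
)
assert rows[0]["network"] == {"rtt_ms": 12}
assert rows[1]["network"] == {"rtt_ms": 9}
```

The hard part the speaker is pointing at isn't the join itself but agreeing on which quantities each layer exports and how they are timestamped, so that two deployments can be compared fairly.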
But you also may be looking at, for instance, if you're doing adaptive bitrate, where you have three or four bitrates to choose from, measuring and collecting what bitrate you're playing back at any given moment and correlating that with your behavior all the way down the stack to the IP layer. That's important if you want to understand whether L4S is making a positive impact on your application delivery. So, finding a way to measure that in an apples-to-apples way is something we're looking at taking on. Will Law's in the audience. Hello, Will. One of your favorite things, MoQ, is a topic that the SVTA keeps looking at. One of the things we do over there, and we've driven a lot of people into the MoQ discussion from the SVTA over a period of years, is to say, "Hey, these guys at the IETF are working on MoQ; you should go look at it if you're interested." And a lot of people who are now active in the MoQ group started over at the SVTA and joined the IETF to do that. And we do it in the other direction too: every time we have one of our big meetings, we have updates on what's going on inside the MoQ working group, to get the SVTA, as an operations group and a deployment operations team, aware of what's going on in MoQ and how it's evolving. So, we're trying to keep the two synchronized. Obviously, there's SVTA Open Caching, which ties directly into the IETF CDNI group. The other big area that's starting to become a topic of conversation on multiple fronts, and if you remember, in Montreal Roger Pantos came to MOPS and gave a presentation on Apple's proposal for doing key rotation in HLS. There's a lot of key rotation work going on at the SVTA. There's work in the DASH-IF workgroup, obviously, and then within the more general security group, we have a lot of work being done about key rotation. 
The issue, of course, is that as you start rotating keys very quickly, you can cause your security services to get overwhelmed, because everybody's hitting them at the same time if you're doing very short key rotation windows. And that causes a problem. So, a lot of people are now proposing different approaches and different optimizations for how you can communicate those keys effectively. And some of them are doing things like pre-calculation of keys and pre-distribution of keys in order to make that work. So, there's joint work going on there in the key rotation space at the SVTA. And finally, there's an area that I think will ultimately have some value to the IETF as well. We're looking at how to use AI tools in making specs, in our drafting process. If we go back and take a look at that Edge spec: from initial conception, when we got a few people together to start writing it, to now, when we're going to publish it, it took about 11 months. Okay, that's actually not bad for some specs; obviously, some stuff here at the IETF takes a lot longer. We think we can get that down to even less time. And the vision here is to find ways of using smarter LLMs, trained around the technical specification topics and given good queries, to do things like initial drafts of technical specs. So that instead of spending, say, three or four months arguing about what should be in the spec, we do a first pass quickly with a knowledgeable LLM, and then you get the experts to sit down and say, "Oh, well, that's true, that's not true, we can make this better, we can enhance this." The idea being: can you knock that 11 months down to 6 months to produce a very valid spec from conception to publication? We think you can. We're going to be doing some work internally to do it. We've already started using AI tooling for a lot of the work. 
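Returning to key rotation for a moment: the pre-calculation and pre-distribution idea mentioned above can be sketched as deriving each rotation epoch's key from a master secret, so a client fetches a whole window of upcoming keys in one round trip instead of hitting the key service at every boundary. This is purely an illustrative HMAC-based construction, not the HLS or SVTA scheme.

```python
# Illustrative key pre-derivation for short rotation windows. The label
# string, key length, and derivation scheme are all invented for this sketch.
import hashlib
import hmac

def content_key(master: bytes, epoch: int) -> bytes:
    """Derive the 16-byte content key for one rotation epoch."""
    return hmac.new(master, b"key-rotation|%d" % epoch,
                    hashlib.sha256).digest()[:16]

def prefetch(master: bytes, start_epoch: int, count: int) -> dict:
    """Pre-distribute a window of upcoming keys in one exchange."""
    return {e: content_key(master, e)
            for e in range(start_epoch, start_epoch + count)}

window = prefetch(b"demo-master-secret", start_epoch=100, count=4)
assert len(window) == 4 and all(len(k) == 16 for k in window.values())
```

The trade-off is the usual one: pre-distributing keys smooths the load spike at each rotation boundary, but widens the window an attacker gains if the master secret or the prefetched batch leaks.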
In fact, for that Edge spec, one of the things we wanted to do was get a survey out to potential adopters. We used AI tooling to generate the survey, and then the experts in the working group went through and honed the questions so they were better. But I'll tell you, we spent maybe two hours on the honing part, and it's an extensive survey: the AI did a tremendous amount of the work up front, and did a good job. Then, when we needed to present it, we threw the survey into an LLM and had it produce a report to educate the membership on what it's going to do. So we're using it already to speed up work; things that might have taken us hours, days, or weeks are now taking a few minutes in many cases. That experience and learning might ultimately be something we can offer back to the IETF: "This is how we found a way to use AI tooling to enhance our technical work." The other thing we've already found useful is cross-referencing, and getting the terminology consistent across the ways different people express it in different documents for different organizations. So it's very powerful and, used in the right hands, can really make our jobs easier. A lot of people in this room probably already work for companies or organizations that are members, and we'd like to have you come out to our meetings as well. If you want to see some of this and you're at NAB in April, we will have a booth at NAB, with a lot of good demonstrations of the technologies we have there. In June, we're doing an interim meeting alongside DASH-IF, which usually has a working group meeting in Berlin in June; this year is no exception, but the difference is we're adding some interims. So for working groups that are very active and want to get together and meet, we're also going to get some space to do that.
And then in Rennes, France, in the fall, in October (a lovely time to be in France, by the way), we're going to have our fall meeting. Ahead of that, we're going to have a day-zero workshop focused on AI technologies for video production, video delivery, and video operations, but also on AI for developing specifications. So if you're interested in any of those topic spaces, you might want to come out for that. And that, I think, is the whole thing. Any questions from anybody? If you want to know more, hunt me down and ask me questions, or drop a note on the list if you want to know how to get engaged. All right, thank you.

Leslie Daigle: Thank you, Glenn. And next up, we have one more industry update.

Young-Gon Choi: Okay. Thank you. Thanks for having me here to give a presentation about MPEG in a short time, Glenn. I think you've all heard about MPEG many times. I work particularly on the systems side of MPEG, which is the formats that get delivered over the kinds of things the IETF works on. So I don't think I need to spend a lot of time on introduction. MPEG Systems is famous for developing very widely used standards like MPEG-2 Systems, the file format, DASH, CMAF, and MMT. To give you a picture of what we do: we don't do any compression; we do the things other than compression that sit between the application and the network. We have container formats, file formats, mainly for putting media data in one package, and we define a lot of metadata so that networks or applications can optimize media consumption. We also do a lot of adaptation of files into delivery formats. DASH is one way of doing that: for DASH, we need to deliver the file over HTTP, so we developed a format for the manifest, and we developed rules to fragment the file format into smaller pieces to be delivered over HTTP. Those are delivery adaptations. A lot of these standards have been widely adopted by industry, and there is a lot of collaboration with other organizations, like the SVTA, going on. We also have a lot of internal items, let's say exploration activities.
One of the purposes of my presentation here is promotion, and to solicit your comments and industry insight. The topics we're currently very interested in: one is AI-related. We can't live without AI these days, even in standards; it's interesting that the SVTA is also looking at AI, and I know there are some AI-related work items in the IETF as well. We're looking at AI from two different angles. One is, obviously, how do we distinguish content created by humans, or originally captured, from something AI-generated or AI-altered? There are wide industry activities considering this, but we want a focus on the ISO base media file format side: how do we actually apply that through delivery and streaming environments? The second is: what if we want to deliver AI itself? What if we want to deliver a model to the client so that something can be done by the client? One of the interesting ideas cooking inside MPEG is that you don't have to use a lot of signal-processing-based algorithms to compress the media; you can use the AI engine itself, so that content can be generated at the client. Then you need to deliver the AI itself, not compressed media bits: you deliver the model to the client, and it can be used as a generation tool, and so on. So we're looking at the best way to deliver AI itself to the client.
Some other topics are the more regular topics we work on, but we've gotten a lot of different input from industry, so we're looking at different problems at this stage. One is ultra-low latency. It's not something new, but people are looking for something other than HTTP-based delivery. A good example is MoQ: what if QUIC is widely deployed and Media over QUIC becomes a mainstream delivery format? Then what is the role of MPEG-based formats, or how are they used there? Another topic is that a lot of streaming formats and streaming infrastructure today are pull-based and client-oriented, but we see a lot of new movement toward server-based intelligence, or push-based needs, so we're looking at how to cope with that. Then there are network-friendly containers. Historically, the MPEG Systems file format was originally developed as a storage format, not a streaming format, but it happens to be widely used for streaming these days. So what if you want a totally different, new format that is more streaming- or network-friendly: can you improve ISO-BMFF, or can you have a new, more network- and delivery-friendly format? And VR, or spatial computing, may have a lot of different needs, because you're not accessing the media data in sequential order; you have a lot of different dimensions, and so on. Those are the new topics we're actively investigating. We've also gotten a lot of requests around energy consumption.
I know there is some IETF activity on energy consumption as well, so those are the new topics. So it's a similar message to the SVTA's: our plan is to have a workshop, bringing in some industry experts to present, join, and discuss. We're looking at two different days, July 10th and July 14th, because the week of July 14th is our regular meeting week, so maybe we can have it before the meeting or during the meeting. The place will be Geneva, Switzerland. So if you're interested, you can contact me or come to the meeting with your advice or input, or, since we'll have an open call for presentations at the end of April, you might suggest good items to present, some insight for us. That's it. Thank you.

Leslie Daigle: Thank you. Any questions? Any comments? Everybody's ready to be done with the day. Oh wait, I have one comment and observation to help you get some people out there. I looked it up. The Montreux Jazz Festival is July 3rd through 18th. So your dates align with Montreux. So if you want to do some jazz and attend an MPEG meeting...

Young-Gon Choi: Oh, okay. Sounds good. Thank you.

Leslie Daigle: All right, thank you very much and are there any other items for the working group? Going once, going twice... All right. Thank you everybody for coming out to play and thanks again to Magnus for being our note-taker today. Thank you.