Session Date/Time: 17 Mar 2026 08:00
Jeffrey Zhang: This deck... you sure you want to close the deck? Confirm. Okay, we are back.
Lenny Giuliano: All right. Hey, Jeffrey. Thank you for joining.
Jeffrey Zhang: Hi, Lenny.
Sanjay Mishra: Hi, Lenny. Sanjay.
Lenny Giuliano: Hello, Sanjay. Hello, Jeffrey. So, folks in the room, or out of the room, we need a note taker. We are soliciting: who would like to be the note taker?
Jeffrey Zhang: AI? [Laughs]
Lenny Giuliano: Okay, I don't think AI is an acceptable answer yet.
Jeffrey Zhang: Sure. Yeah, thank you.
Lenny Giuliano: Did you find a volunteer, Jeffrey?
Jeffrey Zhang: Yes. I thought Jeffrey was volunteering. I will have a volunteer here.
Lenny Giuliano: Okay, cool. All right, we'll wait a couple of minutes; I suspect there are people in the side meeting still trickling out, so we'll give them a chance to join us. In the meantime, for those of you who are here, this is the MBONED meeting. If you don't intend to be in MBONED, you are not in the right room.
All right, I guess we've probably waited long enough, so we can get started. Welcome to the MBONED working group meeting. This is the Note Well, the same Note Well everyone has agreed to as part of joining the IETF meeting. It is just a reminder: please do read and take note of it regarding contributions.
Okay, meeting tips. For those folks in the room, please do join Meetecho. We will be managing the queue for the mic using Meetecho, so even if you are in the room, don't just walk up to the mic without joining the queue in Meetecho.
Okay, here's our agenda. It's a light agenda, so we should likely be done early. Sandy is going to talk about multicast use cases for LLM synchronization. I will be speaking about dynamic internet multicast tunneling. And Sanjay is going to describe Aircast's experience with internet multicast at the Australian Open. Jeffrey, are there any issues to be aware of?
Jeffrey Zhang: No, I was just waving to Sandy. Sorry.
Lenny Giuliano: Oh, right. Okay. And we'd like to thank Jeffrey for standing in. Greg is sick at the moment, and I am remote, so Jeffrey has graciously accepted to be the in-room delegate, sitting in the chairs. He is available to break up any fist fights that might break out. Okay, any bashes to this agenda? Okay.
Let's start with the status of active working group documents. We have a number of documents; we'll start with the non-multicast-to-the-browser documents. The draft-ietf-mboned-amt-yang document passed working group last call and has been submitted to the IESG for publication. The draft-ietf-mboned-multicast-yang-model draft went through working group last call at the end of last year. Unfortunately, there were zero responses from non-authors, which is insufficient to determine that there is consensus to advance the document. However, off-list there was a thread with extensive feedback to the authors, and as a result the document was updated. There were enough changes to warrant a new working group last call, so we are currently in that last call, and we have also requested another YANG Doctor review. That working group last call ends this Friday. So, just a reminder, folks: thus far we haven't heard from any non-authors on this one, and we can't advance documents without feedback from the working group establishing that there is consensus to advance. Please do speak up, whether you support or do not support advancing the document. This is one of the key contributions we ask for; remember, reviewing and commenting is something you can do for others and others can do for you. So that's the multicast YANG model draft.
The next one is the draft-ietf-mboned-redundant-ingress-failover draft. This was another document that went through working group last call last year and didn't get a whole lot of support on list. However, the document subsequently went through directorate reviews, and the feedback from those suggested some fairly extensive updates, which have been made, and we are now in another working group last call for it. That will run for another couple of weeks. It's a short draft, so we encourage folks to take a look and please comment on this one. Again, we can't advance these documents unless there is consensus that there is support to advance.
So that's working group last call draft number two, and there is a third one, which we just started: draft-ietf-mboned-non-source-routed-sr-mcast (Non-source Routed Considerations in SR Networks for Multicast). This is another very short draft, and it just began working group last call. So we have three drafts in working group last call, and again we're requesting, folks: do speak up on whether you do or do not support advancing these documents. Jeffrey, you're one of the co-authors on that one. Do you have anything to add?
Jeffrey Zhang: Yeah. As Lenny mentioned, this covers the considerations for deployment options in SR networks. It's a very useful informational document. We have co-authors from major vendors and operators, and hopefully it represents a consensus on what we think the best available options for SR multicast are, so it will be a guide for deploying multicast in Segment Routing networks. I believe it's important, so please read it, comment on it, and express your view. Maybe you don't agree with what's said in the draft, but it's important to speak out.
Lenny Giuliano: Thank you. And again, it's a short draft, just like the redundant ingress failover one; they're both pretty short. The last draft in this group is the draft-ietf-mboned-multicast-security draft. This was adopted last year. I don't see Kyle or any of the authors, and I'm not aware of any updates to that draft. Do we have anybody involved in that draft? No Kyle, no Max; they're both sleeping, lucky them. Okay, so it looks like there are no current updates to that security draft.
All right. The next set of drafts are the multicast-to-the-browser docs: draft-ietf-mboned-dorms, draft-ietf-mboned-cbacc, and draft-ietf-mboned-ambi. DORMS is still alive. It is awaiting security directorate review; that was requested about six months ago, in October, and we never got the review. We have subsequently requested it three more times, and the authors are still interested in receiving the security directorate review. We basically closed down the original request after it was ignored three times and have opened a new one, so we're hoping we'll hear back from the security directorate.
The other two drafts: for CBACC, the authors believe it is still a relevant draft, but it's not solving a problem people have right now. It's awaiting comments from the CCWG as well as MBONED, and thus far folks have been quiet about it. The authors' belief is that there's just not a lot of interest in this draft right now, and they suggested parking it. The AMBI draft is a similar case that seems to have lessened in relevance and interest; the authors believe it has largely been overtaken by the multicast extensions to QUIC draft. Due to this lack of interest, they requested it be parked for now, so we've done that. And with that, that is the status of all the active working group documents. Okay.
In terms of the agenda, Sandy, if you want to come on up. Did I hear that Sandy's in the room?
Jeffrey Zhang: Yes, she is.
Lenny Giuliano: Great. Come on up. You want to pass the clicker?
Jeffrey Zhang: Yeah.
Lenny Giuliano: All right.
Sandy Zhang: Good afternoon. This is Sandy Zhang from ZTE. This time I'd like to introduce a new multicast use case for LLM synchronization. I present this draft on behalf of my co-authors, Yisong and Junye. [Clicking sound] Is it not working?
Lenny Giuliano: Sandy, you may need to click on the slide clicker in the participation panel... Yeah, now. No?
Sandy Zhang: Okay. Oh, yeah, it works. So let's see the use case in LLM synchronization. We know there are emerging inference cloud services now. These services deliver large-scale real-time inference, fine-tuning, and model optimization on GPU cloud platforms. LLM models are delivered to many GPU clouds to run the inference service, so there is multi-cloud LLM synchronization: the model sits in centralized repositories and is automatically replicated and synced to the distributed GPU clouds. The GPU clouds may be spread across different regions or different carrier networks.
So we can see there are some challenges during LLM synchronization. The first one is high concurrency: a popular large model will be downloaded simultaneously across dozens of GPU clouds, and the size of the model will be 70GB to 1TB, leading to IO bottlenecks at the storage repository and delaying model distribution at scale. For now the synchronization uses only unicast, so it leads to IO bottlenecks. The second challenge is cold start latency: the inference service cannot start until the model is fully downloaded to the GPU cloud, so a slow download significantly increases the cold start latency and delays user access to inference. Although synchronization is separate from the LLM training and inference processes, it directly affects the efficiency and reliability of inference service delivery.
So this is a typical multicast use case: synchronizing large models to multiple GPU clouds. If we use multicast for LLM synchronization, it can reduce the IO bottlenecks from simultaneous downloads, improve transmission efficiency, and minimize cold start latency. Because the GPU clouds may span multiple regions and operators, the multicast technology used must be capable of operating across core and metro networks.
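As a back-of-the-envelope illustration of the IO savings described above, a minimal sketch is below. The model size and cloud count are illustrative assumptions within the ranges mentioned in the talk, not figures from the draft.

```python
# Back-of-the-envelope comparison of repository egress for unicast vs.
# multicast model sync. All numbers are illustrative assumptions.

MODEL_SIZE_GB = 70      # low end of the 70GB-1TB range mentioned above
NUM_GPU_CLOUDS = 40     # "dozens" of simultaneous downloads

def repo_egress_gb(num_receivers: int, multicast: bool) -> int:
    """Total data the central repository must push out for one sync."""
    copies = 1 if multicast else num_receivers
    return MODEL_SIZE_GB * copies

print(repo_egress_gb(NUM_GPU_CLOUDS, multicast=False))  # 2800 GB of unicast egress
print(repo_egress_gb(NUM_GPU_CLOUDS, multicast=True))   # 70 GB: one copy, replicated in-network
```

With multicast the repository emits a single copy and the network handles replication, which is the bottleneck relief the draft is after.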
So here are some candidate multicast technologies. The first one is PIM-SM. We know that it is a traditional multicast technology: it requires a multicast tree to be established in advance, all nodes along the path must maintain state information, and it is slow to respond to network topology changes. So it may be suitable for scenarios where the set of destination GPU clouds is relatively fixed.
The second multicast technology is SR-P2MP. This technology relies on a controller to implement multicast traffic engineering, and there is state in the replication nodes. A multicast tunnel must be established beforehand, too, and it is slow to respond to network topology changes, but candidate paths can be computed beforehand for fast rerouting. So it may be faster on topology changes, but it needs more computation. It can be suitable for scenarios where the set of destination GPU clouds is relatively fixed.
The third multicast technology is BIER. BIER is a new, stateless multicast technology: there is no need to establish a multicast tree in advance, and it responds quickly to network topology changes. So there is no requirement that the set of destination GPU clouds be fixed.
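A toy sketch of why BIER needs no per-tree state: each egress router (BFER) owns one bit in a bitstring carried by the packet itself, so transit nodes replicate based on the bitstring alone. This is a simplification for illustration, not the RFC 8279 wire format.

```python
# Toy illustration of BIER's statelessness: the packet's bitstring
# names the egress routers, so transit nodes keep no per-tree state.
# Simplified; not the RFC 8279 wire format.

def egress_set(bitstring: int, bfr_ids: list[int]) -> list[int]:
    """Return the BFR-ids whose bit is set in the packet's bitstring."""
    return [bfr for bfr in bfr_ids if bitstring & (1 << (bfr - 1))]

# GPU clouds with BFR-ids 1, 3, and 4 want the model; 2 does not.
bs = (1 << 0) | (1 << 2) | (1 << 3)
print(egress_set(bs, [1, 2, 3, 4]))  # -> [1, 3, 4]
```

Changing the receiver set only changes the bitstring on new packets, which is why BIER reacts quickly when the set of destination clouds changes.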
So we present this draft here and solicit working group feedback; we can discuss the requirements or potential gaps in more detail. That's all for today.
Lenny Giuliano: Are there any questions for Sandy?
Sandy Zhang: Yeah.
Lenny Giuliano: Sandy, what are your thoughts for the draft? Will you be seeking adoption at some point, and if so, when and in which working group?
Sandy Zhang: In fact, we are not quite sure where the draft should go; we have presented it in MBONED, BIER, PIM, and RTGWG. So we are not sure where this draft should be put, but any suggestions are welcome.
Lenny Giuliano: Great. Any other comments from folks? Questions, thoughts? Okay. Well, thank you, Sandy.
Sandy Zhang: Thank you.
Lenny Giuliano: Next up is me. Jeffrey, do you see my slide deck, and do you see the proper view of it?
Jeffrey Zhang: Yes.
Lenny Giuliano: All right, cool. Okay, so I'm going to present a new draft that has been submitted on Dynamic Internet Multicast Tunneling. I'm presenting on behalf of the other co-authors of this draft. As we all know, better than anyone perhaps, for multicast to work properly it needs to be running at every layer 3 hop between source and receiver: every hop needs to be multicast-enabled and running a multicast routing protocol like PIM. This can be a significant hurdle, so overlay networking and tunnels are frequently used to overcome it. Since the beginning of the MBONE, static tunnels, specifically GRE, have been popular tools to tunnel over parts of the network that are not multicast-enabled and connect the multicast-enabled parts. The challenge with GRE is that it requires manual config on both endpoints: you have to statically configure the tunnel source and tunnel destination on each side, and then you have to run routing through those tunnels in order for RPF to work properly. Now, there are dynamic tunnels; the most common, top-of-mind one would be AMT (Automatic Multicast Tunneling). AMT is great in that it doesn't require manual configuration on both ends; it's a zero-config tunneling mechanism. The challenge with it is that it doesn't support routing protocol traversal. So if there are, say, multiple relays, the question is: how does a gateway know which one to pick? That's easy to do in the application layer
if the gateway is a host, but what if the gateway is a router? The routers don't have the same flexibility in the application layer that hosts do. And then once you do pick one, the only protocol you can run through AMT is IGMP; there are no routing protocols like BGP that you can run through it.
The use case here is that CDNs and content providers are interested in zero-config tunnels, so they like AMT, and they want to use AMT in the middle mile to connect multicast islands. In this case, routers would be functioning as AMT gateways. How do they do that without routing information? How do they know which relays can reach which sources? I'm going to show a picture in a moment that illustrates this challenge.
The solution we've proposed here is essentially to use a BGP extended community. In the BGP route advertising reachability to the multicast source, we add an extended community within which the AMT relay is encoded. This relay must have multicast connectivity, whether native or tunneled, to the network the route is being advertised from. We talk a lot about AMT, but this could work for any dynamic tunneling option; for example, if you're using PIM Light with some type of dynamic tunneling mechanism, this would work the same. We're just embedding the UMH inside the extended community.
All right, so how does this work? Let's start with a review of how AMT works, because once you see that, it makes more sense what we're trying to do here. The folks in this working group are very familiar with AMT, but I'll go through it very quickly. Imagine you have a multicast-enabled network: a multicast-enabled content owner and a multicast-enabled local last-mile provider. Multicast goes from source to receiver natively, the way God intended. But that's a very small part of the internet; the other 99-point-whatever percent of the internet is unicast-only. So if an interested receiver sends an IGMP report to its last-hop router, nothing is going to happen, because that last-hop router doesn't support multicast. However, if that interested receiver is an AMT gateway, with the thin AMT gateway client built into it, it is able to magically discover the nearest AMT relay and build an AMT tunnel, which is a special UDP-encapsulated tunnel. Once that tunnel is up, the gateway sends an IGMP report to the relay. The relay joins natively, just as if that gateway were a directly connected receiver, and sends the data, the multicast stream, over the unicast AMT tunnel. As you have more gateways, they follow the same procedure.
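The discover/tunnel/join sequence just described can be sketched as a toy walkthrough. Message names follow RFC 7450; wire formats, nonces, and timers are omitted, and the host names are made up for illustration.

```python
# Toy walkthrough of the AMT handshake described above (RFC 7450 message
# names; encodings and timers omitted).

def amt_join(gateway: str, relay_anycast: str, group: str) -> list[str]:
    """Return the sequence of steps a gateway takes to receive a stream."""
    return [
        f"{gateway}: Relay Discovery -> {relay_anycast}",
        f"relay: Relay Advertisement (unicast relay address) -> {gateway}",
        f"{gateway}: Request -> relay",
        f"relay: Membership Query -> {gateway}",
        f"{gateway}: Membership Update (IGMP report for {group}) -> relay",
        f"relay: joins {group} natively, forwards stream over the UDP tunnel",
    ]

for step in amt_join("gw1", "amt-anycast", "(S,G)"):
    print(step)
```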
So that's how AMT works, and you'll notice this use case is mostly in the last mile. TreeDN is a tree-based CDN architecture, RFC 9706. It is essentially SSM + AMT: SSM to simplify the native part of the multicast network, and AMT to deliver traffic off of that network. So imagine you have the Big I Internet, which is mostly unicast-only, and a natively multicast-enabled network at a TreeDN provider. You have a native source and a native receiver, and traffic flows natively across that network. Then if you have an off-net receiver, a receiver on a unicast-only part of the network, we deploy AMT relays, deliver the traffic natively to those relays, and from the relays it is sent over AMT to those receivers.
So that's how TreeDN works: basically just SSM + AMT to deliver a CDN-like service with trees. Now let's add dynamic internet multicast tunneling, the proposal we're describing here, to see how it works. Again, we have the Big I Internet, which is mostly unicast-only, and in this case three multicast islands. These three islands are not directly connected to one another; they are multicast-enabled networks separated by a unicast-only abyss. We start with sources in the islands on the left: Source 1 is in Island 1 and Source 2 is in Island 2, and we have a bunch of interested receivers. There's a native receiver in multicast island 3 and a couple of off-net receivers closest to the relays in multicast island 3. Now let's add some AMT relays and some AMT gateways. The three receivers on the right send IGMP reports either natively to the last-hop router or over AMT tunnels they've built to the nearest AMT relays. And now the question is: what next? How do the AMT gateways in multicast island 3 know which relay reaches which source? How do they know to use relay 1 to get to source 1, or relay 2 to get to source 2? The answer comes from a BGP extended community. An ASBR in multicast island 1, as it advertises a route to that source, or to the network that source is in, adds an extended community that encodes the relay's IP address. Likewise for relay 2, and the BGP routes propagate the way BGP routes propagate through the network.
Now each of the devices in multicast island 3 knows how to RPF toward the correct directions to receive the content, and the multicast traffic flows natively through the islands and over AMT or PIM Light tunnels. The old part of TreeDN that's always been around is the AMT tunnels from the relays to the receivers: the stuff on the right. What's new is the stuff in the middle, on the left, where we have tunnels going from router to router instead of router to host. This is tunneling in the middle mile, and it is what this solution proposes.
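The extended-community mechanics just described, including the downstream relay substitution mentioned later in the talk, can be sketched roughly as follows. This is a hypothetical encoding: the 0x01/0x99 type and sub-type values and the `readvertise` helper are placeholders for illustration, not codepoints or procedures from the draft.

```python
# Sketch of packing an AMT relay's IPv4 address into an 8-byte BGP
# extended community. Type/sub-type values are placeholders.

import ipaddress
import struct

def encode_relay_community(relay_ip: str) -> bytes:
    """Pack a relay's IPv4 address into an 8-byte extended community."""
    addr = int(ipaddress.IPv4Address(relay_ip))
    # IPv4-address-specific layout: type, sub-type, 4-byte address,
    # 2-byte local administrator field (unused in this sketch).
    return struct.pack("!BBIH", 0x01, 0x99, addr, 0)

def decode_relay_community(ec: bytes) -> str:
    """Recover the relay address a gateway should tunnel toward."""
    _, _, addr, _ = struct.unpack("!BBIH", ec)
    return str(ipaddress.IPv4Address(addr))

def readvertise(ec: bytes, local_relay: str) -> bytes:
    """A downstream network swaps in its own, closer relay."""
    return encode_relay_community(local_relay)

ec = encode_relay_community("192.0.2.1")       # set by the ASBR in island 1
print(decode_relay_community(ec))              # -> 192.0.2.1
ec2 = readvertise(ec, "198.51.100.9")          # downstream substitutes its relay
print(decode_relay_community(ec2))             # -> 198.51.100.9
```

A gateway that receives the route decodes the community and builds its zero-config tunnel toward that relay instead of needing static GRE configuration.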
In terms of implications, it's a very flexible architecture. It allows core routers to become AMT gateways, and it works in a world where there's more than one relay. Again, AMT could always support router-to-router tunneling; the challenge was how the routers know which relays to use to get to a given source, and that's what this extended community solves. The proposal allows routers to be both AMT gateways and relays: a router can be a relay for downstream gateways and a gateway toward upstream relays. It essentially extends the TreeDN architecture to support middle-mile tunneling, with router-to-router tunnels connecting multicast islands. Previously, TreeDN really only talked about last-mile tunneling, but we've heard from quite a few folks in the content business, both CDNs and content providers, asking about connecting beyond the last mile. That's the use case we've been hearing: content providers and CDNs saying, hey, we'd like to originate and/or transport multicast traffic that can be received by islands downstream anywhere on the internet. These are islands that are not directly connected; we can't, for whatever reason, just connect to them and run PIM on the interface. They're many hops away, or maybe they don't want to run PIM. The key thing to note is that there's nothing new here in terms of the concept of tunneling; this problem has been solved by GRE tunnels for the better part of 30 years.
But the issue is that those CDNs and content providers don't want to use GRE, because they don't want to have to configure both ends and turn on PIM and BGP across that tunnel. What they say is: we just want to be able to transport this multicast content. We want to have an AMT relay and tell folks, if you want this content, just come to our relay and build zero-config tunnels. That way they can support many different downstream islands that are interested in this multicast content, and it's a zero-config tunnel: they don't have to specify and configure the tunnel source and tunnel destination.
In terms of next steps: we've spoken to MOPS and to PIM, and we think both should have interest in this. IDR is probably another working group that would have interest; we haven't spoken to them at this IETF. But we, the authors (and here I'm taking my chair hat off and deferring to Greg), believe that MBONED is the right working group for this, because it is essentially an AMT relay discovery mechanism. Just like the DRIAD work, RFC 8777, was done in this working group, this is another AMT relay discovery mechanism. That's why we believe MBONED is the right working group to adopt it. We don't believe it's ready for adoption yet; among other things, we've received some really good feedback from experts in IDR about the survivability of extended communities, and how extended communities might not be the best tool if the goal is to see this information propagate across the internet. So we're considering alternative approaches, like a new attribute instead of an extended community. Keep an eye out; we will be updating this document as we think through the best way to handle this, and over time we expect to request adoption in MBONED. In the meantime, we are seeking feedback on the idea. I think MOPS is probably the working group with the most folks who would be interested in this and have these use cases, but we'd also love to hear from PIM and IDR on this proposal. This is different from other relay discovery mechanisms like static configuration or DRIAD; it's a bit more flexible.
It's also more router-friendly. Stig asked a really great question in PIM: this allows the relay to change. As these BGP routes get propagated, downstream networks can add their own relays in that extended community, so the relay can be the one closest to the receiver, not closest to the source, and you get much better efficiency. I'll pause there and be happy to answer any questions anybody might have. Are there any questions?
Jeffrey Zhang: I don't see hands raised.
Lenny Giuliano: Okay. Well, on behalf of my co-authors, please do review the draft. We'd love to hear what you think and to get more feedback on this document. And with that, we'll move on to the final agenda item. Sanjay has bravely been staying awake.
Sanjay Mishra: Good morning, everyone.
Lenny Giuliano: And let's see here. Oops, hold on. I am going to grant you access to the slides. There you go. And Sanjay, yours is the last presentation on the agenda, so you have the rest of the time, as long as you can stay awake. The floor is yours.
Sanjay Mishra: Well, thank you. And I got my cup of coffee, so I think I'm good. First, since we have a little more time than allocated, feel free to interrupt me and ask questions. And by the way, I have my partner Craig here as well, so he can chime in as needed. This is about our experience using AMT in the real world, and I'm glad to share what we did, how it performed, lessons learned, next steps, and what you can expect from Aircast next.
So let me start with a little update on what we are doing at Aircast. What we are trying to build is Aircast Live, an IP infrastructure that is purpose-built for live streaming. I'm sure everybody here is aware that live streaming on the internet today largely uses an architecture that was built for on-demand content. You can go back into the history of how IPTV streaming started; that architecture is optimized for on-demand: there is content, whether movies or TV shows, sitting on a hard drive, and you want to serve it to clients. However, when you start doing live events, the content itself is different, and people expect different things about how that content should be delivered. For that reason, we thought it was time to create an infrastructure that is optimized for live events first. And by the way, we are not suggesting you should use the same network for on-demand purposes; the way we view it, you have a network designed for on-demand and a network optimized for live content, and they coexist.
Some of the other requirements we've been working on: we have to make this content available over all kinds of IP, including Wi-Fi and cellular, and to all kinds of devices, whether it's mobiles, your IPTV set-top box, computers, or browsers. And we very much want to make sure that all the technology we use is built on IETF standards rather than creating our own stuff, even though that may sometimes be tempting for reasons of speed.
And the key aspect, which we'll talk about a little more, is where live content is really different from on-demand content. If you go back before IPTV streaming, when we had antennas on our homes, we were all used to a network where the video from a live event would reach you before the sound reached the fans in the stands. If you were at a football game, by the time the sound reached the people in the stands, the video would be at your home. That's the performance we are used to, and that's what we are aspiring to deliver, even over IP and all the challenges the internet throws at you.
Okay, so let's first understand why live is different from on-demand. Of course we talked about delay, but also, and this is a challenge a lot of IPTV providers have been having: on-demand traffic, the number of simultaneous users and so on, is fairly well understood and relatively smooth. Of course there are peaks during the evening hours, but they're well understood. If you overlay a live event, say the Super Bowl or the cricket final two weeks ago, the demand really skyrockets, and all of this demand is synchronized, too. With on-demand, 8 o'clock on the East Coast is different from 8 o'clock on the West Coast, which lets you spread the load; it's more deterministic and flat. But when you layer on live events, it becomes very peaky, bursty behavior, and these peaks could be four or eight times higher depending on the event. And Lenny, I think you've done a great job of sensitizing everybody: the resource requirements for on-demand content are already substantial, and when you have these live-driven peaks, you end up having to dimension your network for the peaks; otherwise everybody gets frustrated. We've all been through experiences where lots of consumers were frustrated because they couldn't access the content or the quality suffered. So really it calls for a different architecture that can scale to these substantial peaks in resource utilization, and do it without a lot of cost.
So this is a performance issue from a user standpoint: I need low delay, and I need to make sure that if I have three TVs, they are all in sync. And if I am a provider, I want to make sure I'm not having to dimension my network for these peaks, which don't happen very frequently; otherwise I either have a lot of spare capacity sitting idle or a lot of frustrated consumers. So this is really what is motivating us to create a network that is purpose-built for live.
Okay. So the key technologies we're using are... well, we use multicast, right? That's something we've shared previously. And also, everybody's familiar with on-demand streaming, where you use something called chunking: you break up a big video file into lots of smaller files and send the files from A to B. The delay that creates is a tradeoff: if you choose a bigger file size, the network becomes a bit more efficient and the load on the CDNs is reduced, but the delay for the clients increases. That's not really an issue if you're doing on-demand, but that approach doesn't work very well for live streaming, and we've all seen that live streaming with the traditional approach gets delays anywhere from 20 seconds to a minute or even higher. For the Super Bowl, the traditional streaming delays were actually over a minute, right? That's quite an eternity, I would say, in live streaming. So we have a slightly different approach, which I like to call flow-based: instead of waiting to generate a full file every 5, 10, or 20 seconds, we capture video frames, encapsulate them in IP, and send them out rather than waiting to collect a whole bunch of them. That is what really creates more of a flow. And, as I said, we use multicast. Now, we use AMT because, as everybody in the MBONED group is quite familiar, the Big I Internet is unicast-only, and we have to leverage AMT so we can send this multicast traffic over the internet between multicast islands without losing the multicast nature.
In addition to that, we have our own SDK that folks can use on client devices to receive our flow-based streams and present them to the clients.
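[Editor's note] Sanjay's flow-based idea — send each encoded frame as soon as it is produced instead of accumulating multi-second chunk files — can be sketched as follows. This is an illustration only: the group address, port, and framing header are hypothetical, not Aircast's format, and `frames` stands in for whatever the encoder emits.

```python
import socket
import struct
import time

GROUP = "232.1.2.3"   # example SSM group address (hypothetical)
PORT = 5004           # commonly used RTP port, here just for illustration

def stream_frames(frames, group=GROUP, port=PORT):
    """Send each encoded frame immediately as one or more UDP datagrams.

    This is the essence of the flow-based approach: no multi-second
    chunk files, so end-to-end delay is bounded by frame duration plus
    network transit, not by segment length.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)
    seq = 0
    for frame in frames:  # frame: bytes from the encoder
        # Split a frame into sub-MTU payloads (1400 bytes leaves room
        # for IP/UDP and AMT encapsulation overhead).
        for off in range(0, len(frame), 1400):
            header = struct.pack("!IQ", seq, time.monotonic_ns())
            sock.sendto(header + frame[off:off + 1400], (group, port))
            seq += 1
    sock.close()
    return seq
```

A production system would carry the frames in RTP over the multicast group; the point here is only the pacing: each frame leaves as soon as it exists, which is why delay stays sub-second instead of growing with the chunk size.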
Okay. And by the way, just a quick reminder: feel free to raise your hand and ask questions; I like it that way. So, let me share... I think this is an update that Craig and I shared at the last IETF in Montreal, where we had deployed our multicast and flow-based technology at the US Open tennis in New York. But that particular system was only for distributing content to folks in the precinct, so it never made it out onto the open internet; nobody at home could access the stream. It was delivered over Wi-Fi, which has multicast support though not always enabled, to folks in the stadium. And we showed that we can do sub-second latency; it actually worked very well with tens of thousands of people, and we had multiple streams going at the same time.
So, given the success we had at the US Open, we said, look, we're going to try to do something a little bit better: what if we enabled the same concept, but for people globally? And that's what we did at the Australian Open, about five or six weeks ago. But before we get there: this is the architecture we had deployed at the US Open, and I think a chart we had shared briefly as well. We ingest the content, the raw feeds coming in, take them to our own onsite hardware where we digitize them and package them into IP packets, connect to the local guest Wi-Fi network onsite, and distribute the content. What we've done this time is also connect our onsite hardware to the internet outside, and to AMT relays, so that folks can access the content sitting on their couches.
Okay. So this is a little bit of a testimonial, but before we get to that, let me just share what the network we used at the Australian Open looks like today. We have the studio feeds coming in on the left, and our Aircast hardware, which receives them, converts them to digitized video streams, and does the IP packaging. Then we have what we call AMT relays... we call them Aircast traffic servers; they do a few other things, but the key part is the AMT relay, and we have multiple of them in various parts of the world. In this case we had one in India, one sitting in the basement at my home, and we used Juniper MX204 routers as AMT relays. The clients on the internet were using our Aircast SDK to access the content. Now, we have an Aircast application on iOS, but you could also use VLC, by the way: you can use a VLC 4.0+ client and receive the streams as well.
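[Editor's note] For readers less familiar with AMT: the relays above speak the RFC 7450 gateway/relay protocol over UDP port 2268. A minimal sketch of the first exchange, relay discovery, is shown below; this is an illustration only, error handling is omitted, and the relay address passed in is whatever the deployment configures.

```python
import os
import socket
import struct

AMT_PORT = 2268  # AMT's assigned UDP port (RFC 7450)

def relay_discovery_message() -> bytes:
    """Build an AMT Relay Discovery message (RFC 7450, type 1).

    Format: 4-bit version (0) and 4-bit type (1) in the first octet,
    24 reserved bits, then a 32-bit random nonce that the relay echoes
    back in its Relay Advertisement.
    """
    nonce = struct.unpack("!I", os.urandom(4))[0]
    return struct.pack("!BBBBI", 0x01, 0, 0, 0, nonce)

def discover_relay(relay_addr: str, timeout: float = 2.0) -> str:
    """Send a discovery to a relay (anycast or unicast) address and
    return the unicast relay address from the Relay Advertisement."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(relay_discovery_message(), (relay_addr, AMT_PORT))
    data, _ = sock.recvfrom(1500)
    # Advertisement: version/type octet (type 2), 3 reserved octets,
    # the echoed 4-byte nonce, then the relay's IPv4 address.
    assert data[0] & 0x0F == 0x02, "expected Relay Advertisement"
    return socket.inet_ntoa(data[8:12])
```

After discovery, the gateway continues with Request / Membership Query / Membership Update messages to carry its IGMP or MLD reports, and the relay then streams the multicast data back inside the same UDP tunnel.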
Okay. So let me tell you what happened. We were very excited; we thought the architecture on the previous page was going to work very well. Craig and I arrive at the tennis stadium there and, as they say, you have a plan and then you hit reality. The first thing we found out was that we needed direct external addresses on the Big I Internet, and the team there told us we couldn't have them, for multiple reasons. So we're scratching our heads: what are we going to do? Our plan was to use GRE tunnels to send the traffic, like Lenny was talking about previously, by the way, which is why I'm so excited about some of the work that was just presented: it allows us to overcome some of the challenges we have with GRE in getting traffic to the various relays. We could not do that; GRE could not work without external IP addresses, and we ended up having to use WireGuard to tunnel traffic from our equipment onsite at the venue to our AMT relays. I'll have a little picture to show what we ended up having to do. And then there was another little challenge, which we are still in the middle of debugging. With VLC, we found that if we used variable bitrate, which is what you have to do if you want to optimize and reduce the delays, there seems to be a bug somewhere where VBR streams over RTP were not working, while CBR streams were working, by the way. So, just in case folks are interested, we figured out a workaround for it as well and kept our delays low, which I'll share in a second.
So this is what we ended up having to do. Our original plan was to send the traffic from our sources straight to our relays on the Juniper MX204s. That plan had a bit of a challenge working, so we ended up having to use WireGuard on top: use WireGuard to send traffic over to another router, and then send the traffic on to the 204, which could then act as the AMT relay, as intended. A bit of a challenging solution, and not our first choice, but it did work; this is what we ended up having to do onsite.
Let me tell you what the results were. And by the way, some of you, I know, got a chance to experience these streams yourself, so please feel free to share your experiences. We were actually quite surprised by the glass-to-glass delay — I mean, from us ingesting the analog feed to a display on your computer or your mobile phone. The delay was somewhere between 500 milliseconds and 900 milliseconds depending on where you were. So all over the globe you could access the streams in under a second, and by the way, this included us receiving AMT streams on cellular in the venue itself. Imagine what happened to that stream: it went all the way from Melbourne back to Boston, which is where the AMT relay was, and then back to Australia, and the delay was still only about 700 milliseconds or so. Quite impressive performance. We did a little traceroute and found we had over 20 routers in this route, by the way, so a pretty complicated setup, but it was performing quite well — surprisingly, way better than we could have expected, given how challenging the setup was and how we kept having to add more devices to overcome all the little hurdles that came up. And by the way, this performance was on iOS using the Live by Aircast application. If you use VLC on a PC, you probably see a little higher delay, about another second to a second and a half, just because of the client-side buffer that exists in VLC 4.0, which is not configurable, or at least not configurable easily; there's no GUI for it yet.
So we were able to do this, and in case people were wondering, we were using 1080p streams at 30 frames per second; the bitrate was just about 3 megabits per second. And the performance over cellular was also very good, using 4G or 5G at the venue, which was a fairly busy cellular environment: 20, 30, 40,000 people in the precinct at any time, and us doing video streaming at 1080p worked quite well. And let me go back to the quote from our partner at Tennis Australia. I'm not going to read it, you can read it, but they were very, very pleased, by the way. And just so you know, streaming performance over IP varied in different parts of the world. The IP streams in Australia were actually delayed two minutes, so you can imagine we were sometimes even on a different game altogether. The fastest IP streams in the US were about 18 seconds behind the Aircast streams. So we were, as expected, given that we are using a flow-based technique and not having to wait to create these big files, substantially faster than anything comparable.
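[Editor's note] A glass-to-glass figure like 500-900 ms is typically obtained by stamping packets at ingest and subtracting at the display. A hedged sketch of the network-transit part of such a measurement is below; it assumes NTP-synchronized sender and receiver clocks, and the probe format is the editor's, not Aircast's.

```python
import struct
import time

def make_probe(seq: int) -> bytes:
    """Prefix a payload with a sequence number and a wall-clock send time."""
    return struct.pack("!Id", seq, time.time())

def one_way_delay_ms(probe: bytes) -> float:
    """Compute one-way delay from the embedded send timestamp.

    Assumes sender and receiver clocks are NTP-synchronized. Note that a
    glass-to-glass number additionally includes capture, encode, decode
    and render time, which is why it exceeds pure network transit.
    """
    seq, sent = struct.unpack("!Id", probe[:12])
    return (time.time() - sent) * 1000.0
```

In practice the timestamp would ride in the stream itself (e.g. in RTP header extensions or in the picture), but the arithmetic is the same.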
Okay. And by the way, feel free to raise your hand and ask questions, or send them through chat; happy to take them. So, there's a lot of work to be done. This was still an experimental setup, by invitation only, and so on. There is a lot of work we need to do, and for some of it we know for a fact we would much rather follow IETF standards than invent anything ourselves. The first part relates to what I would call security and authentication issues. The way we have the streams set up right now, if we gave you the relay address and the stream addresses, anybody could access the streams; there was really no authentication or admission control happening. So it would probably behoove us to bring some level of sanity there, because otherwise it's so easy to pretend you are somebody else, which I'm sure people would do in a production network. We are looking at ways to have the relays and the gateways authenticate each other, and also maybe not have the relay addresses available in the open the way we have to do it right now. And another requirement we heard from our partners is: we love the performance, but eventually we will have to figure out a way to secure the content, i.e., do encryption. Of course, we could encrypt each AMT tunnel, but I would much rather figure out the right multicast encryption method, do it correctly, and have something that can scale up to hundreds of millions of people. It's a fairly challenging topic, and I know the team here is actively working on that.
So we'll be getting involved in that part of the work as well, making sure we have something that works, and we're happy to implement it on our Aircast Live network. A couple of other features we are working on — and there's a lot going on, by the way; this is by no means an exhaustive list — based on the comments we received last time: we are working to implement adapting to different stream rates. All the losses we've noticed are not happening on the Big I Internet; they're all happening over the last mile or the last hop, which is where the challenges are, whether it's your Wi-Fi or your home internet. So how do we make sure we can adapt the rate of the stream, maybe go from 1080p to 720p or even lower, depending on how good your internet is or what device you are consuming this content on: a phone, a PC, or your 4K TV? That work has already started at our end. Also, if you've noticed, we have a separate Aircast application for in-venue Wi-Fi and a different app if you're accessing content at home over AMT using our SDK. We are in the middle of building a single Aircast app, so no matter where you are, you can consume the content that is available to you based on your location and privileges and all of that. So that's coming. And the last thing I wanted to say is that we are still building the Live by Aircast network. We always have a network running that we are using to test and build at our end.
If you're interested in trying out the service anytime, please send an email to either Craig or me, or to the contact@aircast.tech address, and we'll be happy to add you to our list and send you periodic updates about what new content is coming and which live events we are supporting at Aircast. So stay tuned. I'll take a break now and answer any questions you may have. And thanks, Lenny, for the opportunity to share our experiences and to learn from and leverage everybody's feedback and experience in the work we do.
Mankamana (Cisco): Thanks, Sanjay. It was good to see you. I was hoping you'd be here in person.
Sanjay Mishra: Actually, yeah. I'm in Boston today, Mankamana. Sorry about that.
Mankamana (Cisco): Okay. So the packets which came from Australia to your AMT relay — they were unicast packets? Multicast packets encapsulated in unicast?
Sanjay Mishra: Right, yeah, exactly. They were encapsulated in a GRE tunnel, which was further inside a WireGuard tunnel. So, a little bit of a complicated setup.
Mankamana (Cisco): OK. Thanks.
Sanjay Mishra: Great to hear from you, by the way.
Lenny Giuliano: Any other questions for Sanjay?
Stig Venaas: Uh-huh.
Unidentified speaker: Sanjay, I think AMT is useful to solve the problem that the app cannot join multicast over the last mile. But I don't know why we need AMT in the mid-mile, to connect two multicast domains through an internet that doesn't support multicast. I've met this scenario very rarely. So I want to know why we need this AMT relay between the different providers' multicast networks. Maybe this is a question for Jeffrey.
Sanjay Mishra: Yeah, no, great question. I was going to say, Lenny, this is probably more relevant to the draft that you and Jeffrey are working on, and I'm happy to share my thoughts there too, but probably you should answer it first.
Jeffrey Zhang: Um, yeah. So, as Lenny showed in his slides, imagine that you have a few providers in the internet, and between those providers they don't have multicast enabled, so it's not multicast-enabled end to end. How do you connect those multicast islands together? Traditionally, as we did in the MBONE, we establish GRE tunnels and run PIM over those GRE tunnels. And you not only need to run PIM, you also need to run routing protocols to exchange routes for RPF purposes, because the topology is different from the traditional unicast one. So that way of making those multicast connections is very cumbersome: you have to statically configure those multicast tunnels, and you have to run those protocols. Now we want a more dynamic way to establish those tunnels. One way is to use PIM — by the way, you do not have to use an AMT relay; you can just run PIM — but the tunnels between those PIM routers no longer need to be statically configured GRE tunnels. You can dynamically discover that, oh, I need to establish a PIM adjacency over a tunnel to upstream router one for one source, and to another upstream router for another source. We can do that discovery dynamically. That is one way to do it, using PIM. Another way is to use AMT: once you know you need to reach upstream router one for source one, or upstream router two for source two, instead of using PIM you can also use AMT, because the AMT solution is already there.
You just consider that you yourself, the router, are an AMT gateway — like a cellphone — getting the content from your upstream. It's just that you then turn around and provide the content downstream, either via PIM or via an AMT relay. So an AMT relay in the middle mile is just one way to establish the multicast tunneling.
Unidentified speaker: Uh, yes. Thank you. So AMT solves the problem of dynamically creating the multicast tunnel. But I think MVPN can also do this, right? We can utilize BGP, like MVPN; I think it can create ingress replication to solve the unicast problem, right? So what's the difference?
Jeffrey Zhang: So if you use MVPN, that means you need to configure MVPN on all those routers, um, in...
Unidentified speaker: Between the border routers, right? The border routers can deploy MVPN, but I think maybe it's not very simple, because you have to use BGP and make the ASBR the next hop for the multicast source, right?
Jeffrey Zhang: Right. MVPN in this case is probably even more cumbersome, more complicated, than the traditional GRE tunneling.
Unidentified speaker: Um, right, I get it. Thank you. Another question — I've really met very few scenarios for AMT beyond the last mile. As I just said, the app uses IGMP toward the AMT relay, so the provider opens the multicast service to the app. Which app?
Unidentified speaker: Um, the Aircast app? I think Aircast is an application on the cellphone, right?
Jeffrey Zhang: Right. So AMT provides the underlay, the multicast transport, and Aircast is a solution that makes use of that multicast infrastructure. As multicast engineers we have designed all those multicast solutions, but using multicast for content delivery has not been that widely deployed, and quite often the content providers are reluctant to use multicast. What Aircast did was provide a solution so that any content provider can easily use this technology. That's how I see it. And they are probably the first solution provider to actually deliver real-time content via multicast, first in the stadiums and then over the internet. Obviously the second case, over the internet, was still a small-scale trial; I was one of those trial users during the Australian Open. It worked really, really well. I remember there were people watching the Australian Open on their regular TV, provided by ESPN and its service in the US, and also comparing that with the Aircast AMT service, and we were able to watch the content 18 seconds ahead of the broadcast on TV, and minutes ahead of the web-based streaming. So it was very useful.
Unidentified speaker: Um, OK. Thank you.
Lenny Giuliano: Stig has a question. And just a reminder: anybody who has questions, please use the queue in Meetecho. Thank you. Stig.
Stig Venaas: Yeah, hi. This is Stig. Thanks, Sanjay, it was quite interesting to see; I'm happy to hear it all worked out well. But a question for Jeffrey, I guess: when you say you could use PIM Light, what's the encap you use for that?
Jeffrey Zhang: Say it again?
Stig Venaas: You said you could use PIM Light instead of AMT.
Jeffrey Zhang: Oh, okay. So, traditionally, when you use a GRE tunnel between PIM routers, you rely on those configurations and you exchange PIM hellos, right? Now, with dynamically discovered upstream PIM routers, you can just use PIM Light mode, so you no longer need to pre-establish those hello sessions. I discover that, say, Stig, you are the upstream router I should use to reach the source, so I'll simply send my PIM join to you. You get my join and you simply start sending the traffic to me using any tunnel, for example a UDP tunnel, whatever. That's what I meant by using PIM Light.
Stig Venaas: So, um... but you have to kind of figure out the tunnel endpoints, or agree on what encap to use or whatever, right?
Jeffrey Zhang: Tunnel endpoint. So by using that, uh, BGP extended community, I discovered you are the upstream router I can use. So that’s... that’s the tunnel endpoint for my PIM joins, right?
Stig Venaas: So do you use UDP encap for the PIM join, or do you use GRE or whatever?
Jeffrey Zhang: Whatever... whatever tunnel. As long as I get my join to you, and you can see my IP address in the PIM join message that I send in that tunnel.
Stig Venaas: Are you saying... okay, but you send the PIM join in a tunnel.
Jeffrey Zhang: Yeah.
Stig Venaas: You send the PIM join with some tunnel encap, correct. But, yeah... as part of the extended community, would you learn what kind of encap to use, or...?
Jeffrey Zhang: Um, as of now, we didn't specify that detail, but we could say that I prefer you to use this tunnel type versus another kind. We haven't defined that yet, but it's something we could do. Let's just say I didn't specify the tunnel type; I simply send you a join, and you got it. You can probably use any kind of tunnel to me, because it's really just IP in IP in another tunnel, right? If you have an MPLS tunnel to me, you can send the IP multicast packet encapsulated behind the MPLS label stack. Or if you don't have MPLS but you know my IP address, you can simply put it into a GRE tunnel or an IP-in-IP tunnel, whatever. That's why I always say you can use any kind of tunnel: as long as you know my address, you just send it to me. Yeah.
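[Editor's note] Jeffrey's point — a PIM Join/Prune is just bytes, and any tunnel that delivers them with a visible outer source address will do — can be sketched as below. This is an illustration only: the Join/Prune body (RFC 7761 §4.9) is elided, and the UDP port is hypothetical, since the draft deliberately leaves the encapsulation open.

```python
import socket
import struct

def checksum(data: bytes) -> int:
    """Standard Internet checksum (RFC 1071), as used by PIM."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def pim_join_header() -> bytes:
    """PIM version 2, type 3 (Join/Prune) header with its checksum.

    The Join/Prune body (upstream neighbor, group and source records)
    is elided; the sketch only shows that the message is ordinary bytes
    that any tunnel can carry.
    """
    hdr = struct.pack("!BBH", (2 << 4) | 3, 0, 0)
    return struct.pack("!BBH", (2 << 4) | 3, 0, checksum(hdr))

def send_join_over_udp(upstream_ip: str, udp_port: int = 3232) -> None:
    """Tunnel the PIM join in plain UDP to the dynamically discovered
    upstream router (port number is the editor's placeholder)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(pim_join_header(), (upstream_ip, udp_port))
    sock.close()
```

The upstream sees the joiner's address in the outer header (or in the join itself), which is all it needs to return the data stream over whatever tunnel it prefers.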
Stig Venaas: Yeah, I kind of agree with you, but, um... yeah, depends a bit, you know, yeah, whether what encaps you support or what you can handle, maybe, but...
Jeffrey Zhang: The encapsulation, to me, is actually not really important. I think Lenny also had the same questions. What is important is that, for me to accept a PIM join from a downstream, I need to be prepared to receive tunneled PIM packets. So I need to be preconfigured to create enough forwarding state so that PIM packets arriving on a tunnel can be sent to my PIM module to handle. That kind of plumbing needs to be done ahead of time. But the encapsulation type is not really important to me.
Stig Venaas: Um, yeah, I don't want to spend too much time on this, but I'm thinking you need to do explicit tracking, and keep track of each of the receivers and which encap each of them expects, for the forwarding, right? Anyway.
Jeffrey Zhang: Right. So let's say you send me a PIM join using GRE and Mankamana sends me a PIM join using UDP tunneling, whatever. It's probably good for me to send my data packet to you in the same encapsulation in which I received the PIM join. That may be desired, but I still wonder if it really matters, because, yes, you sent me the PIM join in a GRE tunnel, but suppose I send you the data packet behind an MPLS label stack. You will still get the packet; you pop the MPLS label stack and you see the IP multicast packet. As long as you can associate that incoming packet with some kind of logical interface for your RPF purposes, you should be good. But indeed, when you send me that PIM join message to pull traffic from me, you need to be prepared to associate the data packet with a logical interface so that you can do RPF. That association requires some kind of plumbing, and maybe that does require that I send you the data packet using a certain encapsulation. I guess that's all implementation detail. Those kinds of details, agreed, we should spell out in the draft; we have not touched on them yet.
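[Editor's note] The "plumbing" Jeffrey describes — pre-creating state so that a decapsulated data packet can be mapped to a logical interface for RPF, regardless of which encapsulation carried it — might look like this in outline. All names here are the editor's; the draft has not specified this mechanism.

```python
class RpfPlumbing:
    """Sketch: key RPF state on the upstream router's address, not on
    the encapsulation type. Illustrative only; not from the draft."""

    def __init__(self):
        self.by_upstream = {}  # upstream router IP -> logical interface

    def prepare_for_join(self, upstream_ip: str) -> str:
        """Before sending a tunneled PIM join, create (or reuse) the
        logical interface that data arriving from this upstream's
        address will be associated with."""
        return self.by_upstream.setdefault(upstream_ip,
                                           f"dyn-tun-{upstream_ip}")

    def rpf_interface(self, outer_src_ip: str):
        """Decapsulated data packet: find its logical interface for the
        RPF check. GRE-, UDP- or MPLS-carried alike, only the outer
        source matters; no state means RPF fails and the packet drops."""
        return self.by_upstream.get(outer_src_ip)
```

This is one way to make the encapsulation genuinely irrelevant, matching Jeffrey's intuition: the association is established when the join is sent, so any tunnel type the upstream chooses will pass the RPF check.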
Lenny Giuliano: Any other questions for Sanjay? Sanjay, I have one. In Montreal you talked about your experience at the US Open, where it was just in venue, essentially multicast over a LAN. The big development, the evolution here with the Australian Open, is that you're doing it over the WAN using AMT, and it was kind of a limited deployment. What have you got next, and will it be broader than a small chosen few?
Sanjay Mishra: Yeah, no, thank you, Lenny, for the great question. And by the way, we had the in-venue system at the Australian Open as well. I'm not ready to share what the next live events are, but stay tuned; they are coming. I think the best way to stay up to date with our progress and the events we're doing is to send us an email. We will definitely add you to our distribution list — I promise we will not be spamming anyone — and you'll get an opportunity to try out the system as we support additional events.
Lenny Giuliano: Another question: you used VLC. Did you have to modify it at all, or did you just use off-the-shelf VLC 4.0?
Sanjay Mishra: Yeah, great question. So the key thing is we have to use VLC 4.0; you cannot use VLC 3 or any of its versions. And 4.0 does not have an approved public release yet, so you have to go through a few hoops to install it. But no, we didn't have to make any changes to it, other than making sure you get the right AMT relay address and stream address and so on. You do lose a little bit of performance doing it, because you can't really configure the buffer size on the 4.0 clients just yet, so you get probably closer to a two-second delay as a result.
Lenny Giuliano: Great. Any final questions for Sanjay? Well, we appreciate the industry reports. This was really great, very interesting. It's wonderful to see the technologies we've been working on being used in the real world, and for such interesting and exciting use cases. So, thank you for sharing.
Sanjay Mishra: My pleasure.
Lenny Giuliano: We look forward to hearing what comes next, in Vienna.
Sanjay Mishra: Absolutely. Stay tuned.
Lenny Giuliano: Great. All right, uh, I guess... I guess that's it for MBONED. Uh, thank you everybody for joining and we'll see you in... in Vienna. Bye, everyone.
Jeffrey Zhang: Bye.