This is a lightly edited transcript of the Measurement and Analysis for Protocols (maprg) session held at IETF 120 in Vancouver on July 24, 2024.
Dave Plonka: All right everyone, it's 1400 hours, 2:00 PM, time to start. You are in the Measurement and Analysis for Protocols Research Group in the IRTF. If you're new to the IETF meetings or the IRTF, welcome very much. I'm so glad you're joining us today. You should be signed in, either on the full MeetEcho client linked in the agenda, or scan the codes at the microphones to sign in on your phone. That's how we take attendance and show that you were here, and how we get the right-sized rooms and things. So please do that sometime during the session.
Also, we're on a tight schedule today, so use the chat in MeetEcho; there are two word bubbles at the top, click on those. Especially for the speakers visiting here: if people are talking about your paper, you'll see it in the chat. So if you don't have time to talk to them during this session, you can find their names and contact them afterwards via email or whatever. So feel free to use the chat, especially if I have to apologize and cut you off at the microphone when we're running out of time.
So, um, I'm the co-chair, Dave Plonka. I'm with Akamai. This is—
Mirja Kühlewind: Mirja Kühlewind. Hi.
Dave Plonka: And Mirja and I have the wonderful opportunity to be kind of ambassadors between the IRTF, the IETF, and the academic community. So we have what we think is a pretty nice program set up for today. The front matter here is the IRTF Note Well, privacy, and the code of conduct. Behave yourselves, be nice to people, especially since we have a lot of newcomers here; make sure they see our good side. There are links here if there's any trouble interacting with people: you can contact us, or you can contact the Ombuds team linked here. And let's move on from that.
The goals of the IRTF: as you might have heard if you went to the other IRTF meetings, we are a research organization, not a standards body or standards-creating organization. We use all the formalities of the way the IETF works, but we do not do standards. We're a little looser with the way things work, but focused on research.
Administrivia. If you want to follow up and get the calls for future meetings, make sure you get on our mailing list. It's super low volume; most of what's on there is just the calls. That's one of the things we have the hardest time with: people knowing that we're meeting. So Mirja and I invite some speakers, and other people submit their contributions themselves; those are the two ways you'd get to present in maprg. Meeting tips, I think I said this already: get on the on-site tool, and if you're running it, be sure your microphone and video are off when it's not your turn. But other than that, that's your tool for the meeting today.
So the agenda. It's packed; there's five minutes spare for questions and answers in here, so we're going to zoom through this. I'm going to refer you to the agenda link to get it yourself, but what we've got here is a set of invited talks from IMC 2023. Some speakers from China didn't get to that meeting, so we're having them here instead; these are ones that Mirja and I invited. We've got four other talks, a heads-up about other activities coming up in the IETF, and then two unique opportunities. One of the talks today will be a preview of something that's already been accepted for IMC 2024 later this year. That's partly enabled by IMC switching to two submission deadlines per year, so some papers for IMC late this year have already been accepted, and we'll show you one today, thanks to the speaker. And then lastly, there's a talk about VPNs and IPv6. This is also unpublished work, so this is a place where, if you're an IETF'er and IPv6 and VPNs are your thing, you might be able to help out the author. So be polite, but please do that in the chat or talk to her afterwards.
So up first, we've got Yuxi. So if you could come up, I'll put your slides up here and pass you the clicker, and you'll have five minutes.
[Presentation Title: Heads-up Talk: Measurement of Systemic DNS Resolver Vulnerabilities (Informing Six DNSOP I-Ds)]
Yuxi Chen: Hello everyone, I'm Yuxi Chen from Tsinghua University. I'm honored to present our work on measuring systemic DNS resolver vulnerabilities. This work is a joint effort from Tsinghua University, QI-AN-XIN, and 360, and several contributors from the IETF community, especially the DNSOP working group.
So, how secure are DNS resolver implementations? The short answer is there are widespread exploitable divergences because the DNS specifications leave room for flexibility, and different implementations make different yet reasonable design choices. Unfortunately, attackers can weaponize these subtle gaps. The scale of this issue is massive. Across our studies, we found over 90 CVEs and six novel attacks with amplification factors exceeding 20,000 times, and almost every major DNS vendor is affected.
Why is this happening? It's not bad coding; it's a systematic attack surface. When high-level principles are translated into code without standardized checks, attackers can use these gaps to launch attacks. So, to address these issues, we map the six attacks to six specific drafts, as you can see in this table. They cover three areas: cache delegation, query handling, and packet preprocessing. Instead of going through all six, I will just highlight two examples.
The first is DNSBomb, published at IEEE Security and Privacy. This attack turns beneficial DNS mechanisms like timeout aggregation into a pulsing DoS weapon. You can think of it as a Kamehameha energy blast from Dragon Ball: attackers first accumulate queries, the resolvers then amplify them through aggregation, and finally the blast is concentrated by releasing all responses simultaneously. Our BCP draft proposes response pacing and short timeouts. Take Unbound for example: these mitigations can reduce the amplification factor by 99%.
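[Editor's note: a minimal Python sketch, not from the talk, of the accumulate-then-release arithmetic behind pulsing amplification. The function name and all numbers are hypothetical, chosen only to illustrate why holding queries and releasing responses in one burst multiplies the instantaneous load on the victim.]

```python
# Illustrative model of pulsing DoS via timeout aggregation (hypothetical numbers).
# The attacker trickles queries at a modest rate; the resolver holds them for
# `hold_seconds`, then releases all responses within a short `burst_seconds` window.

def pulse_amplification(queries_per_sec, hold_seconds, query_bytes,
                        response_bytes, burst_seconds):
    accumulated = queries_per_sec * hold_seconds            # queries held at once
    attacker_rate = queries_per_sec * query_bytes           # bytes/s, spread out
    burst_rate = accumulated * response_bytes / burst_seconds  # bytes/s at victim
    return burst_rate / attacker_rate

# e.g. 100 q/s held for 10 s, 60-byte queries, 1200-byte responses,
# all released within a 0.1 s window:
factor = pulse_amplification(100, 10, 60, 1200, 0.1)  # -> 2000.0
```

Under these made-up parameters the burst hits the victim at 2000 times the attacker's sustained sending rate, which is why the draft's response pacing and shorter timeouts (shrinking `accumulated` and stretching `burst_seconds`) collapse the factor.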
The second is NRB-Style, from CCS 2023. Here, a one-sentence ambiguity in the RFC 7871 ECS extension revives a 20-year-old cache poisoning attack. Because queries with different ECS prefixes can bypass query aggregation, attackers can trigger many simultaneous queries and easily collide spoofed responses with them. Our draft updates this RFC to enforce per-zone tracking of no-ECS-support state, allowing resolvers to safely aggregate queries and close the gap.
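[Editor's note: a small Python sketch, not from the talk, of the aggregation-bypass idea just described. The function and addresses are hypothetical; it only shows how keying in-flight query aggregation on the ECS prefix lets one attacker open many simultaneous upstream queries for a single name.]

```python
# If a resolver aggregates in-flight queries by (qname, qtype), repeated queries
# for one name share a single upstream query. If the ECS prefix is part of the
# key, an attacker can vary it to multiply outstanding upstream queries, and
# with them the chances of a spoofed response colliding with a real one.

def outstanding_upstream_queries(requests, key_includes_ecs):
    inflight = set()
    for qname, qtype, ecs in requests:
        key = (qname, qtype, ecs) if key_includes_ecs else (qname, qtype)
        inflight.add(key)  # one upstream query per distinct aggregation key
    return len(inflight)

reqs = [("victim.example", "A", f"198.51.100.{i}/32") for i in range(256)]
outstanding_upstream_queries(reqs, key_includes_ecs=False)  # -> 1 (aggregated)
outstanding_upstream_queries(reqs, key_includes_ecs=True)   # -> 256 (bypassed)
```

The per-zone no-ECS-support tracking the draft proposes effectively moves resolvers back toward the first keying for zones that don't use ECS.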
To wrap up: vendors are actively patching these ambiguities, but without standardized guidance, new implementations will face the same issues. That's why we brought these six drafts here, and we welcome any feedback, on the technical approach, draft structure, or anything else. All six drafts are on the Datatracker, so please don't hesitate to reach out. Thank you.
Dave Plonka: All right, thanks Yuxi. We have a minute for any questions or comments. All right. Thank you, and please reach out to Yuxi if you have further questions.
We've got Deepak up next. I'm going to share his slides here.
Deepak Kumar: Yeah.
Dave Plonka: Okay, Deepak. And then I'm going to start a timer for 10 minutes so we can see where we are. And we can hear you, so I think you'll be ready to go as soon as I pass control to you.
Deepak Kumar: Yeah, I'm trying to get the controls. Can you please allow me the controls to move the slides? Yeah, there we go. Yeah, awesome. Sounds good.
[Presentation Title: Are you RPKI Ready: The Road Left to Full ROA Adoption]
Deepak Kumar: Hi everyone. Thanks a lot for inviting me. I'm Deepak, a Computer Science PhD student at Georgia Tech, and today I'm going to talk about our work, "Are You RPKI Ready?". This work was published at IMC 2023 and was done with Romain Fontugne from IIJ and my advisor, Cecilia.
In this talk, I'm going to discuss the prefixes which are currently not covered by RPKI Route Origin Authorization certificates. We'll characterize these prefixes and try to understand what it will take to get them into RPKI, and I'll also introduce a platform which will help us in this process.
So BGP, the Border Gateway Protocol, lacks built-in security mechanisms to verify routing announcements, and RPKI offers a mechanism to validate certain parts of BGP announcements. The way RPKI works is that the internet registries, basically the RIRs, assign a public and private key pair, basically a certificate, to an organization which holds internet resources: IPv4, IPv6, or ASNs. Organizations can then use their private key to issue certificates called ROAs (Route Origin Authorizations), in which they authorize a certain ASN to originate a prefix in BGP. Now we have a record of which ASN is authorized to originate a certain prefix. Other networks on the internet can take these ROA certificates, use the public key to verify them, and take routing decisions accordingly.
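[Editor's note: a minimal Python sketch, not from the talk, of the route origin validation step that consumes ROAs, in the spirit of RFC 6811. The VRP triples (prefix, maxLength, authorized ASN) and the ASNs below are made up for illustration.]

```python
import ipaddress

# Classify a BGP announcement against validated ROA payloads (VRPs):
# "valid"     - covered by a VRP with matching origin ASN and allowed length
# "invalid"   - covered by at least one VRP, but no VRP matches
# "not-found" - no VRP covers the announced prefix at all

def rov_state(announced_prefix, origin_asn, vrps):
    net = ipaddress.ip_network(announced_prefix)
    covered = False
    for vrp_prefix, max_len, asn in vrps:
        vrp = ipaddress.ip_network(vrp_prefix)
        if net.version == vrp.version and net.subnet_of(vrp):
            covered = True
            if net.prefixlen <= max_len and origin_asn == asn:
                return "valid"
    return "invalid" if covered else "not-found"

vrps = [("192.0.2.0/24", 24, 64500)]
rov_state("192.0.2.0/24", 64500, vrps)     # valid
rov_state("192.0.2.0/25", 64500, vrps)     # invalid: exceeds maxLength
rov_state("198.51.100.0/24", 64500, vrps)  # not-found: no covering ROA
```

An ROV-enabled router drops or deprefers "invalid" routes, which is exactly why issuing ROAs carelessly (the next example in the talk) can knock out legitimate announcements.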
RPKI has been around for a while, more than a decade at this point, and several works have looked into the adoption and benefits of RPKI. Large organizations like Google, Amazon, and the rest of Big Tech have adopted RPKI, so there is actually a consensus that RPKI is beneficial. And when we look into the data, it shows that in the past 6 to 7 years RPKI adoption has grown severalfold: in both IPv4 and IPv6, more than 50% of routed prefixes are covered by RPKI certificates at this point.
So the question is: what about the remaining prefixes? What will it take for the prefixes that are still not covered by RPKI? If you look at the IPv6 trend line, you can notice that in the past year the growth has almost stalled. IPv4 is still growing, but can we expect a similar rate of growth for the next 50%? That's something we'll look into in this presentation.
So this is the question: how can we support RPKI adoption for the remaining 50% of prefixes? I'll skip over this slide, but at this point RPKI is a mature technology, so we would expect adoption to be fairly straightforward, since everyone is adopting RPKI, right? But it turns out the reality is different. Organizations adopting RPKI face a lot of complex issues, and these issues can be technical and also non-technical.
Let me run you through an example. Suppose we have this prefix: a /21 IPv4 prefix with an origin ASN, currently not covered by RPKI. The first question that arises is: who can actually issue the ROA certificates for this specific BGP announcement? The answer is not very straightforward, but if we dig into WHOIS records and RPKI certificates, we can locate which organization can issue them. We detail the methodology in the paper, but it turns out Colocation America has the authority to issue RPKI certificates here. So can we just poke this organization, say "please issue the certificate," and they'll just do it? Again, it's not very straightforward. The reason is: what if this specific /21 announcement has more specific routed sub-prefixes? If you issue one ROA for the /21 alone, it will invalidate those more specific announcements. If you look closely, this prefix has five routed sub-prefixes, and all of them have different origin ASNs. With different origin ASNs, you first need to create five more ROAs, and only then go back and issue the /21 ROA. The complexity just increased.
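[Editor's note: a short Python sketch, not from the talk, of the pitfall just described: a ROA for the covering prefix alone invalidates more-specific routed sub-prefixes with other origins. The prefixes and ASNs are invented (RFC 1918 space) purely for illustration.]

```python
import ipaddress

# Which currently routed announcements would become RPKI-invalid if only
# this single ROA (prefix, maxLength, ASN) were issued?

def invalidated_by_roa(roa_prefix, roa_maxlen, roa_asn, routed):
    roa = ipaddress.ip_network(roa_prefix)
    bad = []
    for prefix, origin in routed:
        net = ipaddress.ip_network(prefix)
        if net.subnet_of(roa) and (net.prefixlen > roa_maxlen or origin != roa_asn):
            bad.append((prefix, origin))
    return bad

# Hypothetical routing table: a /21 plus two more-specific /24s with other origins.
routed = [("10.8.0.0/21", 64500), ("10.8.1.0/24", 64501), ("10.8.2.0/24", 64502)]
invalidated_by_roa("10.8.0.0/21", 21, 64500, routed)
# -> [('10.8.1.0/24', 64501), ('10.8.2.0/24', 64502)]
```

This is why the planning order matters: the ROAs for the sub-prefixes have to exist before the covering /21 ROA is published.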
To go one step further: if you look a bit more closely, this /21 prefix has actually been reallocated by Colocation America to another company, Internet Utilities North America. So Colocation America can't simply issue the certificate by themselves; they have to give a heads-up to Internet Utilities and discuss with them, and this inter-organization communication is again something that doesn't happen in a very straightforward way. So issuing RPKI certificates suddenly gets much more complicated.
So, for an organization trying to plan how to issue these ROAs: what are the factors they should consider? This especially impacts the smaller network operators. What questions should they answer, and in what order should they plan ROAs? To help these people out, we created a framework that aggregates all of the questions and factors an operator must consider while planning ROAs. We put them in order, and an organization can look into this framework to guide their ROA planning process.
But just having this framework is not enough. To answer these questions, operators also need data: do I have the authority to issue a ROA, and if not, what should I do? What are my routing characteristics? A lot of questions might otherwise remain unanswered. So we put together public data and created a platform where an organization can answer these questions and guide their ROA planning process. In this platform, an organization or an operator can search for a prefix or for ASNs, and we also offer API access. To give you a quick look at the data we provide: when you search for a prefix, we tell you which organization has the authority to issue the ROAs, which customer organizations are involved, and whether there are any routed sub-prefixes. If there are routed sub-prefixes, you should be more careful.
We also have a tab called "Recommend ROAs": you search for a given prefix, and it lists all of the ROAs you need to create, in what order, and with what configurations. So a network operator can gather all of that information and follow the framework to guide their ROA planning process.
Now that we have this platform and this data, let's characterize all of the prefixes which are currently not covered by ROAs. This Sankey diagram is for routed IPv4 prefixes. I'll start with all of the prefixes which don't have covering ROAs; that's my 100%. It turns out 21% of these prefixes are not even in RPKI (we'll come to that later), but 79% of the prefixes are in RPKI: the organization has signed up for RPKI, and there's a certificate which can be used to issue ROAs. Now, there's a spectrum of difficulty in issuing ROAs for these prefixes. On the easiest side of the spectrum, we have almost 50% of the prefixes, which are currently not covered by ROAs but have a very low technical barrier to issuing them: for instance, there are no routed sub-prefixes and no customer organizations involved. In fact, a big fraction of these prefixes are managed by organizations who have already issued ROAs in the past, so it's not like they are unaware of RPKI; these prefixes are managed mostly by China Mobile, the China Education and Research Network, and Korea Telecom. In the middle of the spectrum there are some complications, but our platform can help organizations through them. And on the most difficult end of the spectrum, we have 20% of the prefixes, currently in ARIN, managed by organizations such as the Department of Defense and US-AISC. These organizations haven't even signed up for RPKI, so they don't even have a certificate with which they can issue ROAs, and there are certain policy barriers to it. Getting these 20% into RPKI is much more difficult.
To summarize: ROA coverage for the next 50% of prefixes won't be as straightforward as the first 50%, and there are also non-technical aspects that impact ROA adoption. We need continued support, training, and more focused efforts to get these prefixes into RPKI, and our platform, Are You RPKI Ready, will definitely help in this process. Here are the links to the IMC paper and the dashboard; feel free to check them out. We also offer bulk data access on GitHub and API access, and feel free to send me a message or an email. Thanks everyone.
Dave Plonka: Thanks Deepak. We've got time for a question or two; either raise your hand in MeetEcho or come up to the mic if you want.
Okay, I don't see anyone in the queue right now. So thanks so much for fitting into the time, Deepak, and for meeting us at a funny time zone for you.
Deepak Kumar: Haha. Yeah. Thanks a lot for inviting me. Bye-bye.
Dave Plonka: Bye-bye. So next up we have Waitong, here in person. And Waitong, I'm going to bring your slides up.
[Presentation Title: RScope: Unveiling Global ROV Deployments and Dependencies in the Post-ROV Era]
Dave Plonka: All right, you're ready to go and you have 10 minutes.
Waitong: Yeah, thanks for the introduction. I'm Waitong from Virginia Tech, and today I'm presenting our work on how we measured ROV deployment and the dependencies between different ASes. This work has been accepted to IMC 2024 and is a collaboration between Virginia Tech and Cloudflare.
So let's first have a quick review of RPKI. RPKI has two parts. First, the resource owners, who control the AS numbers and prefixes, need to create a cryptographic certificate called a ROA. Then, on the router side, operators install ROV and filter routes based on the ROA certificates.
So there are two questions in understanding how RPKI is used and how it protects today's internet. The first is how network operators use ROAs, which was already answered in the last presentation. The next question is how network operators use RPKI to filter invalid BGP announcements, that is, how they deploy ROV. That's not as straightforward to measure, because ROV policy is not publicly available.
There have been some previous attempts. A very simple example is isbgpsafeyet.com, a project by Cloudflare. What they do is use an RPKI-valid prefix and an RPKI-invalid prefix and test each client's connectivity to these two destinations: if you can visit the valid one but cannot visit the invalid one, you are protected. There have also been some crowdsourcing attempts, collecting blog posts and the like. We also try to measure what we call ROV protection, whether one network can reach ROV-invalid sources, by announcing some real-world invalid prefixes and using some side-channel techniques, and we publish this measurement result every day.
But there's also a challenge with the existing data-plane measurements: they all rely on RPKI-invalid prefixes. With more and more ASes deploying ROV, we are seeing fewer RPKI-invalid prefixes with large global visibility, because one RPKI-invalid prefix will be filtered in all ROV ASes. In short: the more ASes deploy ROV, the harder it becomes to measure ROV deployment. That's the challenge.
So our idea starts here: imagine a world where everyone does ROV; then we cannot measure anymore. Can we instead make our prefix be filtered by only one or a few ROV ASes? That's our goal. Let's take a quick look at the ROV ecosystem. First, we have the publication point, where the ROA objects are stored. Then we have validators, or relying parties (RPs), which fetch the ROAs from the publication point and give them to the routers. So instead of letting the RPs fetch all the ROA objects normally, we first create our own publication point under ARIN, so all relying parties need to connect to our publication point, and each relying party gives what it sees to the ASes that do ROV based on it.
Then, instead of giving the same ROAs to all the RPs, we give different ROAs to different relying parties. We give the specific ROA that invalidates our announcement to only one RP, only one validator. That means only AS1 will see our measurement prefix as RPKI-invalid and filter it; all other ASes in the world see our BGP announcement as valid and will not filter it. By doing that, using a divide-and-conquer methodology and then running data-plane measurements, we can tell who is deploying ROV, and we can tell which ASes rely on which relying parties.
Although the methodology is fairly straightforward, there are still some very tricky things we need to solve. Imagine a case where we have an upstream AS1 and a downstream AS2. There are different possibilities for how these two ASes deploy ROV. The first is that both ASes deploy ROV using different RP servers. Then, if we give our special ROA only to RP2, only AS2 will drop our announcement; AS1 still keeps connectivity. But the two ASes might instead use the same relying party, in which case, once we publish the invalid ROA, AS1 and AS2 will both be disconnected. And there's also inherited filtering: once AS1 receives the ROA that invalidates our announcement, AS2 will be blocked immediately at the same time, because AS1 is its upstream.
So the challenge is how to distinguish a shared relying party from upstream filtering. Here's our methodology. If AS1 and AS2 use the same relying party, they will not fetch the content from that relying party at exactly the same time every cycle, because there's a fetching cycle between the relying party and the routers. So maybe at T0 we see AS1 and AS2 blocked at the same time, but in the next cycle we see AS2 start blocking our pings earlier than AS1. In the upstream-filtering scenario, where AS2 is not fetching directly from a relying party, AS1 and AS2 will block our ping traffic at the exact same second every time.
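[Editor's note: a minimal Python sketch, not from the paper, of the timing heuristic just described. The function name, tolerance, and timestamps are hypothetical; it only captures the core idea that constant block-time offsets across cycles suggest upstream filtering, while varying offsets suggest independent fetches from a shared relying party.]

```python
# Per measurement cycle, record when AS1 (upstream) and AS2 (downstream)
# each start dropping our pings. If AS2 always blocks at the same instant
# as AS1, it inherits the filter from its upstream; if the offset varies
# cycle to cycle, AS2 more likely fetches from the shared RP on its own.

def classify(block_times_as1, block_times_as2, tolerance=1.0):
    offsets = [t2 - t1 for t1, t2 in zip(block_times_as1, block_times_as2)]
    if all(abs(o) <= tolerance for o in offsets):
        return "upstream-filtering"
    return "shared-relying-party"

classify([100.0, 400.0, 700.0], [100.0, 400.0, 700.0])  # -> "upstream-filtering"
classify([100.0, 400.0, 700.0], [130.0, 385.0, 705.0])  # -> "shared-relying-party"
```

A real deployment would of course need many cycles and noise handling; this only fixes the decision logic in code.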
So we ran our experiment, first in IPv6 because we use a lot of IPv6 prefixes. We measured 20K ASes, and we found that among the roughly 3,000 ASes that are protected, only about 1,000 actually deploy an RP themselves; the remaining 2,000 are just protected by others. We also found that more than half of the ASes rely on only one RP server, although the RFC recommends using multiple RPs for redundancy, and only 14% of these ASes deploy multiple RPs across different AS numbers.
We also found some differences across a couple of very popular RP servers, like Cloudflare's and NTT's, and we compared what we saw with the Comcast RP, which is non-public. For Comcast, all the ASes relying on it are just upstream-filtered, so we see them blocking our traffic at the same time, shown as the black bar. But for Cloudflare, the dependent ASes block our traffic at totally different times.
Then we compared who actually deploys ROV against their AS rank, and we found that the higher-ranked ASes mostly deploy ROV themselves, while the lower-ranked ASes mostly benefit from their upstreams. In summary, we presented the RScope measurement framework. We have been running the measurement since last year, mostly in IPv6, but we're also working on IPv4 measurement to cover the whole internet. We'll release our measurement results soon, maybe next week, and our paper will be published at this year's IMC in October.
Dave Plonka: All right, thanks Waitong. We've got a couple minutes for questions or comments, so put yourself in the queue if you'd like. Waitong, are you going to have a preprint of this available? So he's before camera-ready; you have about a month before camera-ready, so if you have feedback on the paper, there's a chance to get it in there. Do you know if you're going to do a preprint?
Waitong: Yeah, we're working on the camera-ready, but the deadline is still ahead; we have some time.
Dave Plonka: Okay.
Mirja Kühlewind: Please—please as soon as you're ready, send a link to the paper to the mailing list.
Dave Plonka: And then we have Deepak in the queue, so I'm going to go to him.
Deepak Kumar: Yeah, a quick question. So the RFC recommends networks use multiple relying parties. Going ahead, if every network out there, or especially the big networks, uses multiple relying parties, they might have different policies for coming to a consensus when they get different ROAs for the same prefix. Would that introduce some sort of noise into your inference?
Waitong: So there are different ways one network can deploy multiple RPs: they might just run them in separate places and combine them at the VRP level, or they might require that a VRP only be used if it's consistent across the different RPs. For the latter kind of deployment, we certainly cannot test every combination of RPs, but we tried some combinations, and we confirmed that at least some 30 to 40 ASes are actually doing this consistency-required kind of deployment. Thank you.
Deepak Kumar: Awesome, sounds good.
Dave Plonka: Thanks Deepak. Thanks a lot, Waitong. I'm going to switch over to Ye-jin Cho's slides next. She is remote; that's our last remote presentation for today. Just give me a minute, Ye-jin.
[Presentation Title: What IPv6 RFCs Don’t Say About VPNs]
Dave Plonka: All right, there are your slides, and let me pass control to you. You should be ready to go; I'll set the timer for 10 minutes and you're all set.
Ye-jin Cho: Hi, um, thanks for inviting us. My name is Ye-jin Cho, and this was a joint work with John Heidemann from USC/ISI.
So our topic is what IPv6 RFCs don't say about VPNs, and how missing guidelines led to IPv6 de-preference in VPNs. Thanks to the multi-decade efforts of many people, IPv6 has been growing steadily. But the problem is that having IPv6 doesn't mean that IPv6 is actually being used. As you can see in the graph, there is a gap between IPv6-capable and IPv6-preferred. We named this phenomenon "IPv6 de-preference": the client has IPv6 support, but IPv6 is de-preferred relative to IPv4, and the client chooses to use IPv4 instead. Normally, due to the Happy Eyeballs algorithm, IPv6 should be preferred over IPv4.
The problem we noticed is that with VPNs, IPv6 is often de-preferred severely. The baseline is non-VPN users, who use IPv6 most of the time, around 82%. The VPNs we call "good VPNs" have similar IPv6 usage, around 73%. However, users of de-preferring VPNs often don't use IPv6 at all, around 22%.
To approach de-preference, we investigated 14 million visitor logs that included visitors from 123 VPNs, and we were able to see that some VPNs de-prefer IPv6 heavily while other VPNs are fine. What we figured out is that the interaction between the address prioritization rules and how the VPNs configure their interfaces can collide and end up in de-preference. When we talk about prioritization rules, it's RFC 6724. These rules prefer IPv6 GUAs over IPv6 ULAs for source/destination pairs, then come IPv4 public and private pairs, and IPv6 ULA pairs are less preferred than the other pairs, since ULAs are not supposed to reach outside the local network.
Devices have multiple interfaces, and we were able to see that these VPN users always select an IPv4 address as the source address. Take Mullvad VPN as an example: they assign an IPv4 private address and an IPv6 ULA address to users on their interface. Because of the prioritization rules, the IPv4 private address is preferred as a source address over the IPv6 ULA, which means IPv4 is selected as the protocol.
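[Editor's note: a highly simplified Python sketch, not from the talk, of the RFC 6724 interaction just described. Only the label-matching rule (Rule 5, "prefer matching label") is modeled, and the addresses and function names are hypothetical; the real algorithm has many more rules, but this one already explains why a ULA-only IPv6 source loses to IPv4.]

```python
import ipaddress

# Simplified RFC 6724 labels: IPv4-mapped space gets label 4,
# ULAs (fc00::/7) get label 13, other IPv6 gets the default label 1.
def label(addr):
    a = ipaddress.ip_address(addr)
    if a.version == 4:
        return 4
    if a in ipaddress.ip_network("fc00::/7"):
        return 13
    return 1

# Prefer the IPv6 pair only if its source/destination labels match
# (or the IPv4 pair's labels mismatch, which normally never happens).
def prefer_ipv6(v6_src, v6_dst, v4_src, v4_dst):
    v6_match = label(v6_src) == label(v6_dst)
    v4_match = label(v4_src) == label(v4_dst)
    return v6_match or not v4_match

# ULA source + global IPv6 destination: labels 13 vs 1 mismatch, so IPv4 wins.
prefer_ipv6("fd00::2", "2001:db8::1", "10.8.0.2", "203.0.113.10")       # -> False
# GUA source + global destination: labels match, so IPv6 is preferred.
prefer_ipv6("2001:db8:f::2", "2001:db8::1", "10.8.0.2", "203.0.113.10")  # -> True
```

This is the collision the talk describes: the VPN hands out a ULA, the label tables in RFC 6724 (or `/etc/gai.conf`) mark it as a poor match for global destinations, and the host quietly falls back to IPv4.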
So why does de-preference happen? By design, IPv6 addressing is supposed to be unambiguous: a ULA source address means this traffic never reaches outside the LAN, because NAT will not happen. However, VPN implementers often carried over IPv4 design assumptions, such as "the IPv6 equivalent of an IPv4 private address is a ULA."
De-preference was somewhat surprising to us, because there was no explicit guideline explaining that this behavior would happen, and no IPv6 RFC specifies which address type should be used as a VPN's inner-tunnel source address. Thus implementers had to infer this behavior, which was not easy.
So we bring up four possible solutions, and we have our recommendations as well. The first is that VPNs assign each user a unique GUA address on their interface. The pro is that the address has actual meaning and can be linked to a specific user. The downside is the complexity of managing address allocations without much purpose, as VPNs tend to handshake in different ways.
The second solution is that VPNs assign a shared static GUA address. This is very easy to implement: they just have to embed a static IP in their codebase. But the problem is that it's quite unclear which address they should use. For instance, we saw that a VPN called VPNly implemented this using the IPv6 documentation prefix, which is not the best use of that prefix.
The third solution is creating a new address class called "Tunnel Local Address" (TLA). This is our suggestion: this address type specifies that the traffic is being tunneled, will be encapsulated later, and thus should not be de-preferred. And luckily, we have the remaining half of the ULA space, which is currently unused. The pro is that the address actually represents the traffic; the downside is the difficulty of adding a new class. If TLA happens, it would be very similar to Teredo: Teredo was IPv6 over UDP over IPv4, and its address class indicated that the packet would be encapsulated later by the Teredo client.
The fourth solution is changing the prioritization rules. The pro is that there's no need to change VPN codebases, but it might break a lot of things, because ULAs are supposed to behave like ULAs. We were able to see that a lot of VPN users, because they want to use IPv6, were changing their /etc/gai.conf file, which basically implements the RFC rules. So some users are already doing this.
So the conclusion is that IPv6 has been de-preferred in VPNs. We have suggested several solutions, and we need input from RFC writers and IPv6 experts on how to solve this issue. Thank you.
Dave Plonka: Thanks Ye-jin. We've got a few minutes here, and we have Lorenzo in the queue. I'm going to join the queue also, not in my chair role. Feel free to join too if you might be able to help with this. Let's go to Lorenzo.
Lorenzo Colitti: Uh, so yeah, so I think, you know, you should think about which of these solutions, if any, preserves the advantages of IPv6 over IPv4, which is basically end-to-end connectivity, right? So, uh, that's one thing. I think basically just number one has that property. Um, and so if you don't do this, then you have a bunch of NAT state that you have to maintain on the server and you don't have end-to-end, which seems bad. Yeah, we'd rather have, like, a v4-only VPN at that point.
The other thing that you might want to consider is that there's an RFC, a best practice, about assigning, uh, IP addresses to general-purpose devices: RFC 7934. And it basically suggests providing a /64 or another mechanism that allows a client to create as many addresses as it wants. I think a /64 is ideal because then the VPN can be used to extend the network to devices behind it. Um, the other thing you could do is, if you control the servers, you could just run DHCPv6. You don't have to communicate this stuff inside the VPN at all. Now, I don't know if that's how VPNs work; I know that IKEv2 has this. But anyway, I would say, I don't know if a /64 is one of these; if it isn't, I would say that's number zero and you should do that, but...
Ye-jin Cho: So I believe the problem is that, um, basically VPN providers want to preserve anonymity by multiplexing the traffic of multiple users onto a single /128 address. So if you give a prefix to a person, then using that prefix, the person can be narrowed down. So, yeah, it just goes back to the end-to-end versus VPN discussion. Um, yeah, but I believe there should be a better solution.
Lorenzo Colitti: Yeah, if you wanted to do a shared /64, that's probably okay as well. You basically mix a bunch of /128s between different users. I mean, technically you still have the tracking problem there. It's better than a per-user /64 because anyone who maintains state will have to maintain state on the whole 128 bits. But I—I—I'm using up time from the rest of the queue.
Dave Plonka: Thanks Lorenzo. Um, I think it'd be great if you guys followed up afterwards. Um, I'm going to kick myself out of the queue, however you do that, and let Suresh go. Uh, Suresh why don't you—
Suresh Krishnan: Yeah, thanks. Um, Lorenzo said a lot of what I wanted to say. But one thing is: take this to 6man, because if you really want to go down this path of creating a new, um, class of addresses, 6man is the right place to go. You cannot just say, hey, we'll take half of the, um, ULA space; I think that other half was previously set aside for centrally assigned ULAs, right? And, um, it needs to go through protocol action. And it also probably requires some changes to, uh, address selection, which is also a 6man matter. We've spent years getting this right, and you should look at how you specify this, um, based on the address selection rules for v6. Thank you.
Ye-jin Cho: Thank you.
Dave Plonka: Thanks so much, Ye-jin. Um, so of course she's remote and you're up really late, so thanks for joining us. Um, uh, get in touch with her directly if you want to talk about it this week. I'm also interested in this, so I'm happy to talk and relay, uh, my understanding of it to Ye-jin to help with her project. Um, so up next we have, uh, we have Shibo, uh, with the first or the last of the three, uh, talks today that are from IMC. So you can come up and I'll share your slides.
[Presentation Title: Understanding and Characterizing Intermediate Paths of Email Delivery]
Dave Plonka: Okay, you should have control now. You're ready to go.
Shibo Cui: Hello everyone, I'm Shibo Cui from Tsinghua University. Uh, today I will introduce our team's measurement study on intermediate paths of email delivery.
So in the original design, people could independently deploy their email services. In such cases, an email travels from the sender's client to the outgoing server and is finally delivered to the incoming server. However, the situation has changed. With the rapid development of cloud services, hosting-based email has been widely adopted. As a result, the traditional end-to-end email delivery model is undergoing change. The figure shows the email delivery process including intermediate services. Email from the sender's client passes through one or more middle nodes before reaching the outgoing server, such as hosting providers, security providers, and email signature providers. Therefore, the email delivery path shows a segment-to-segment mode. We define middle nodes as the relay servers located between the sender's client and the outgoing server.
As members of the email path, vulnerable middle nodes affect the security of the entire email delivery path. Attackers have already exploited the dependency on these nodes to carry out malicious activities. Uh, for example, Echo Spoofing allows attackers to exploit the relaxed source restrictions of email relays in the intermediate path to spoof victim domains. Ultimately, it can lead to the distribution of millions of phishing emails.
Previous works focus on incoming and outgoing servers in the delivery path. Uh, they measure centralization by collecting data from MX and SPF DNS records. However, these studies lack visibility into middle nodes in the path, so the dependencies and potential risks associated with them have been overlooked. Therefore, our work aims at analyzing the landscape of email intermediate paths.
We use the Received headers to understand the email delivery path. A Received header records each node that an email passes through from the sender's client to the incoming server. It has two key components: the "from" part and the "by" part. Uh, the former records the information of the previous node, while the latter, uh, records the current node.
So, this figure shows the process of constructing our email intermediate path data set. First, we obtain Received headers from a large email service provider. Second, we generate a template library to parse the Received headers. And third, we build and filter the intermediate paths.
Our Received header data set comes from a large email service provider in China, Coremail. It offers email services for more than 20,000 organizations. We only extract the minimum data required for our study and did not obtain users' exact email addresses or message bodies. In the nine months of logs, we collected a total of two billion emails.
Uh, to parse the headers, we built a template library with 54 regular expressions, which can match 96% of Received headers in our data set. By using the templates, we can extract the path nodes. Given that email servers may hide or forge their own identity, we use the "from" part of the Received header to indicate the information of the previous node. Through this process, we can construct the delivery path from the email. We further filter the email delivery paths using some criteria, including spam flags, SPF verification, paths without middle nodes, incomplete paths, and more.
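The parsing step described here can be sketched with a single template; the regex and the sample header below are illustrative stand-ins, not entries from the paper's 54-template library:

```python
import re

# One hypothetical template in the spirit of the paper's library:
# capture the "from" part (previous hop) and the "by" part (current hop)
# of a typical Received header.
RECEIVED_RE = re.compile(
    r"from\s+(?P<from_host>\S+)\s+\((?P<from_info>[^)]*)\)\s+by\s+(?P<by_host>\S+)"
)

# Illustrative header, not a real log entry:
header = ("from mail-out.example.org (mail-out.example.org [198.51.100.7]) "
          "by mx.example.com with ESMTPS id k7si123456")

m = RECEIVED_RE.search(header)
print(m.group("from_host"), "->", m.group("by_host"))
# mail-out.example.org -> mx.example.com
```

Chaining the "from" hop of each header in an email, newest to oldest, reconstructs the delivery path the talk describes.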
Finally, our data set involves 105 million emails. One-third of them were transmitted within China. This unique data set allows us to address the following research questions. First, what are the identities and distribution of email middle nodes? Second, what is the dependency structure and regionality of email intermediate paths? Third, what are the centralization degree and cross-country differences of email intermediate paths?
We find that most middle nodes belong to email service providers, with outlook.com accounting for more than half of the emails. Among the top 10 providers, we also identify domains offering email signature and security services.
To better understand the delivery path structure, we define dependency patterns from two perspectives. First is the hosting pattern. It describes the relationship between middle nodes and the sender domain, reflecting the extent to which a domain relies on a third-party provider. It includes self-hosting, third-party hosting, and hybrid hosting.
We also analyze dependency patterns of country domains. We find that the proportion of third-party hosting in email intermediate paths for various countries exceeds 60%, highlighting email delivery's dependency on hosting providers. However, intermediate paths from Russia and Belarus show a self-hosting proportion of about 30%.
The email intermediate path may involve different SLDs, meaning that the dependency may be passed between, uh, various providers. So we analyze the dependency-passing pattern in 9 million multi-dependency intermediate paths. If two email intermediate paths contain the same set of middle-node SLDs, we consider them to belong to the same dependency-passing relationship. Uh, in total, we identify about 30,000 dependency-passing relationships, among which 56% involve two SLDs and 80% involve more than three SLDs. We can see that a significant proportion of these emails rely on Outlook for transmission.
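The grouping rule described here, treating paths with the same set of middle-node SLDs as one dependency-passing relationship, can be sketched as follows (the paths are made-up examples, not data from the study):

```python
from collections import defaultdict

# Each path is the ordered list of middle-node SLDs an email traversed.
# These example SLDs are invented for illustration.
paths = [
    ["outlook.com", "emailsig.example"],
    ["emailsig.example", "outlook.com"],  # same SLD set, different order
    ["outlook.com", "secgw.example"],
]

# Paths sharing the same *set* of SLDs fall into one relationship,
# regardless of hop order.
relationships = defaultdict(list)
for p in paths:
    relationships[frozenset(p)].append(p)

print(len(relationships))  # 2 distinct dependency-passing relationships
```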
By analyzing the top 50 dependency-passing paths, we identify six common types of passing relationships. We find that the most common dependency passing occurs between email service providers and email signature providers.
Moreover, we analyze the regional dependency of email intermediate paths. We focus on analyzing the dependency of domains from different countries or continents on external regions. We suggest that stakeholders should pay closer attention to critical points of dependency along intermediate paths, as they may pose significant risks of service disruption.
This figure shows the regional dependency of email intermediate paths in 60 countries. If email middle nodes belong to the same country as the sender domain, it's marked as "Same". We find that regional dependency patterns vary significantly. In some countries, such as Russia, over 90% of email intermediate paths rely only on domestic infrastructure. However, in some countries, such as Morocco, email intermediate paths almost entirely depend on foreign infrastructure.
Furthermore, we try to analyze the reasons behind this. We find that countries belonging to the Commonwealth of Independent States significantly rely on Russia's email infrastructure. In contrast, no other countries show a similar dependency. Moreover, we observe that email intermediate paths often reflect dependencies between geographically proximate or linguistically similar countries. Uh, for example, 68% of email paths from New Zealand include middle nodes located in Australia. We also analyzed the dependency at the continental level, as shown in the figure. The majority of emails originating from Asia, Europe, and North America have middle nodes located within the same continent. In contrast, South America is highly dependent on North America.
We use the HHI index to evaluate the market concentration of email middle nodes. A higher HHI indicates a more concentrated market structure. Given all the email intermediate paths, we obtain an HHI of 40% for the middle-node market, which indicates a highly concentrated market. Microsoft dominates the overall email middle-node market, participating in about 70% of email intermediate paths.
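The Herfindahl-Hirschman Index is simply the sum of squared market shares. A minimal sketch on a 0 to 1 scale (the provider volumes below are invented for illustration, not the paper's numbers):

```python
def hhi(counts: dict) -> float:
    """Herfindahl-Hirschman Index over {provider: volume}, on a 0-1 scale."""
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

# Invented example: one dominant provider and two small ones.
market = {"outlook.com": 70, "provider-b.example": 15, "provider-c.example": 15}
print(round(hhi(market), 4))  # 0.535, a highly concentrated market
```

A market split evenly across many providers would instead push the index toward zero.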
So this figure presents the HHI of middle-node providers across different countries, with the largest provider marked by a circle. We find that the HHI varies greatly. Outlook dominates the market share in most countries, typically with more than 60%. The exception is that Yandex is the primary provider in Russia and Belarus.
So in conclusion, using a unique and large-scale industrial email data set, we unveil the middle nodes and intermediate paths of email delivery. We analyze hidden dependencies and evaluate the centralization degree of email intermediate paths. Uh, we have already published our code and data set to help future research. Thanks for listening.
Dave Plonka: Uh, thanks. Uh, we have time for questions or comments. Anything? I don't see anyone in the queue, Shibo. Um, thanks for joining us.
Mirja Kühlewind: Yeah, I just want to say I announced your talk on a couple of email-related mailing lists and it spurred already some discussion, so... maybe there's more people reaching out to you after the meeting.
Dave Plonka: All right, so up next we have Mingming, uh, also an IMC talk and happy to have her here in person. Someone who wasn't at IMC in the US last year. Let me get your slides up.
[Presentation Title: Analyzing Compliance and Complications of Integrating Internationalized X.509 Certificates]
Mingming Zhang: Uh, hi everyone. I'm Mingming Zhang from Zhongguancun Laboratory. Today, I will talk about, uh, the multilingual internet. Our recent measurement study published at IMC 2023 sent a wake-up call, uh, that is: integrating internationalized content in X.509 certificates faces compliance gaps and complications that need to be resolved.
The public key infrastructure is the security foundation of the internet. It relies on X.509 certificates to bind identities to cryptographic keys. These certificates, uh, need to follow some strict standards that define the required formats, structures, and encodings for every single field.
The multilingual internet is driven by ICANN and the Universal Acceptance Initiative, which ensures internet applications and systems can properly handle characters beyond the printable ASCII set, like Chinese characters. So, uh, when an X.509 certificate contains any internationalized content, such as an IDN, an IRI, or multilingual text, we call it a "unicert".
So why can unicerts break things? Because the broader Unicode space complicates the issuance, parsing, and validation of certificates, which may introduce potential security or usability issues. Many real-world incidents have shown that improper Unicode handling in certificates can cause cert spoofing, incorrect attribute parsing, or even buffer overflows in client software. However, while the X.509 standards support Unicode, the universal acceptance readiness of PKI's core mechanisms remains largely untested. So this is the gap, uh, we aim to fill.
Our study presents the first large-scale measurement and empirical security study of unicerts. It's guided by three, uh, research questions. The first is: have certificate authorities issued unicerts in compliance with the complex, uh, standards? The second is: do mainstream TLS implementations correctly parse unicerts according to normative constraints? The third is: what are the security and usability impacts of non-compliant issuance and parsing flaws?
So let's start with unicert issuance. The first challenge we met is identifying a definitive standard: how can we establish clear normative requirements for unicert, uh, compliance? The problem is that the relevant specifications, such as X.509, the DNS and IDN spec suites, and the CA/Browser Forum Baseline Requirements, are highly interdependent and evolving. There are many updates, uh, revisions, and cross-references here. In addition, many rules for the format or encoding of certificate fields are scattered across natural-language text, footnotes, or ASN.1 definitions. So compliance violations may arise from any of these aspects.
To tackle this complexity, uh, we employ RFC-GPT, uh, a custom GPT augmented with an RFC database, to navigate the complex requirements and constraints. It helps us identify encoding, structure, and character constraints for cert fields that allow non-ASCII characters. We extracted 95 rules covering 36 cert fields, with 50 rules missing from existing compliance checks.
These rules are the foundation of our, uh, measurement study. We use them to build a certificate-checking linter and run it against over, uh, 34 million unicerts collected from the CT database.
So let's look at the unicert issuance ecosystem. These unicerts, uh, mainly came from about 700 organizations, and the issuance numbers are clearly rising, showing that the adoption of unicerts is steadily growing. From this set, we identified about 250,000 non-compliant or problematic unicerts, and 65% of them were issued by publicly trusted CAs. These issues mainly fall into three categories. The first is improper character checks, uh, which involves basic, uh, flaws such as including non-printable characters in a PrintableString field. The second is a lack of value normalization, such as a UTF-8 string not being normalized to Unicode NFC form. The last is invalid format or structure, including formatting errors, invalid encoding methods, or other structural issues that hinder certificate parsing.
These non-compliant unicerts were issued by over 500 organizations, uh, indicating that the problematic practices are widespread, involving both major global CAs and regional providers. Meanwhile, we also found some requirements may not have been fully covered by the existing linting tools that CAs currently use. These issues are not limited to one or two fields. Uh, actually, we identified 17 different subject or extension fields that didn't follow the relevant standards or baseline requirements.
Here is an interesting, uh, case study. We know that DNS names are critical for identifying peer entities. However, we found 27,000 unicerts with malformed IDNs in the DNS name field. Uh, many Punycode IDNs are syntactically allowed under the current CA/B Baseline Requirements: they have valid A-labels and they are resolvable via wildcard DNS. However, when we decode them back to Unicode, we find many special characters in the domain labels that are disallowed by the relevant standards. So there is a conflict. The CAs appear to comply with the relevant baseline requirements, but the resulting certificates are problematic, maybe, uh, dangerous for user agents that need to parse or use these domain names.
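The conflict described here can be reproduced with the standard library alone: a syntactically valid A-label can decode to code points that IDNA disallows. The label below is a well-known example, and the category check is only a rough approximation of IDNA2008's derived-property rules, not a real validator:

```python
import unicodedata

# "xn--ls8h" is a syntactically valid Punycode A-label.
label = "xn--ls8h"
u = label[len("xn--"):].encode("ascii").decode("punycode")
print(repr(u))  # '💩' (U+1F4A9), a symbol, not a letter or digit

# Rough approximation of "allowed" categories (Letters, Numbers, Marks);
# IDNA2008's actual rules are far richer than this.
disallowed = [ch for ch in u if unicodedata.category(ch)[0] not in ("L", "N", "M")]
print(bool(disallowed))  # True: the decoded label contains a disallowed code point
```

So a linter checking only the ASCII syntax of the A-label, as the Baseline Requirements permit, never sees the disallowed character at all.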
The next question is: do TLS libraries respect declared encodings and enforce strict character checks? We ran gray-box testing on nine mainstream TLS libraries, and we crafted test unicerts, uh, with a variety of Unicode blocks and encoding types, uh, to check whether their parsing APIs can, uh, properly handle these special unicerts.
The results reveal a wide range of decoding and character-handling anomalies in all of our tested libraries, uh, including incompatible or over-tolerant decoding, such as, uh, decoding with a method other than the declared one, or accepting characters beyond the standard range.
We uncovered some parsing flaws that may enable exploitation. For example, encoding and decoding mismatches might allow common-name forgery, and improper replacement of control characters could enable CRL spoofing. We did find some certificates in CT logs that could be misinterpreted like this, but we have no evidence of real exploitation yet.
Finally, we also conducted an empirical study of real-world scenarios and, uh, uncovered some interesting threat surfaces. The first, uh, threat is misleading CT monitoring. We found malicious or compromised CAs can exploit parsing inconsistencies to make CT monitors misparse certificates, which allows the concealment of, uh, deceptive identities.
The second is about user spoofing. We found specially crafted cert fields can manipulate how browser warning pages are displayed. For example, Chrome's warning page can render bidirectional characters from a malformed common-name field, and Firefox's page could show misleading information derived from a malformed SAN field. This will potentially trick users into trusting unverified sites.
So to sum up, achieving a truly internationalized PKI is, uh, currently challenging due to systematic issuance non-compliance and universal parsing flaws. We consider that building an internationalized PKI requires collaborative work among CAs, TLS developers, and the relevant standards bodies.
We have provided, uh, some checking rules, tools, and recommendations to the community to help with better handling of unicerts. Uh, we hope this brings us closer to a more, uh, secure and global PKI. That's the end of my presentation. Thank you.
Dave Plonka: Uh, thanks, Mingming. Um, we've got a couple minutes available if there are questions. Get yourself in the queue.
Mirja Kühlewind: Yeah, maybe I can ask something quickly. I mean, thank you for doing this work and detecting these problems and digging into the security-related aspects of it. Did you reach out to any of the issuers and organizations to point them at these problems?
Mingming Zhang: Uh, we have disclosed some of the issues to Let's Encrypt and some other CAs and talked about the handling of IDNs in the certs. But, uh, some technical managers think that they have followed the current baseline requirements and technically they perform correctly, so this mishandling of IDNs should be, uh, reconsidered by other groups, like the DNS groups and the TLS group, working together to reach a—
Mirja Kühlewind: Agreement.
Mingming Zhang: Agreement, yes.
Mirja Kühlewind: Okay, more work for us. Thanks.
Mingming Zhang: Thank you.
Speaker 1 (ETH Zurich): Uh, could you—could you introduce yourself? Yes. Uh, this is Lipman from ETH Zurich. So I have a question. Could you go back to slide 22? I think you mentioned that, um, this vulnerability can be exploited to, um, cause some damage, uh, in certificate transparency monitoring, right? So did you already find an existing exploit, or is it just an assumption that it could be used to do this kind of damage?
Mingming Zhang: Uh, no, we didn't find any real-world exploitation, uh, so this is just an, uh, experimental assumption. We verified that, uh, some major CT monitors have mishandled some of the special characters and can't search or render the results through their APIs, so, uh, some malformed certificates will be concealed, uh, because of their implementation flaws.
Speaker 1 (ETH Zurich): Yeah, okay. Thank you very much, yeah.
Dave Plonka: All right, thanks much Mingming. Uh, we have Genshin up next, uh, to take us to the end of the meeting.
[Presentation Title: Measuring the Time Source Vulnerabilities in the NTP Ecosystem]
Genshin: Hi, hi everyone. I'm Jinchen Huang, a PhD student at Tsinghua University. It's my great honor to present our work, "Measuring the Time Source Vulnerabilities in the NTP Ecosystem," which was published at IMC 2023.
Okay, first, let's briefly review the background of NTP. Accurate time plays a vital role in internet security, such as in TLS and RPKI. The Network Time Protocol, or NTP for short, is widely used to synchronize time between different computer systems on the internet. While NTP serves as critical internet infrastructure, it was not initially designed with security considerations, making it suffer from various attacks such as time-shifting attacks. In such attacks, adversaries could manipulate timestamps in the NTP response or control an NTP server to send malicious timestamps to clients.
Therefore, as one of the essential mitigation strategies against time-shifting attacks, Network Time Security, or NTS for short, has been proposed to protect the authenticity and integrity of NTP packets. In NTS, a server sends responses using a server-to-client key, and the client uses the key to validate the responses. However, NTS cannot judge whether the NTP server itself is providing inaccurate time. An inaccurate time source could propagate erroneous time information to clients.
To evaluate the degree of timing inaccuracy within the NTP ecosystem, we first scanned open NTP servers on the internet. From April 2023 to August 2024, we conducted 66 rounds of scans at weekly intervals. We found around 7 million open NTP servers, 3,000 NTP Pool servers, and 21,000 REFID servers. A REFID server is the server indicated by the REFID field in an NTP response as the one providing time to the client.
We also scanned open NTS-KE servers on the internet. From December 2023 to August 2024, we conducted nine rounds of scans at monthly intervals. We found 583 NTS-KE servers, eight times more than the NTS-KE servers on the public list.
Based on the measurement data, we first analyze the characteristics of NTP servers. We find 1,115,000 open NTP servers, 7 NTP Pool servers, and 1,000 REFID servers are "bad timekeepers"; that is, their times have an offset greater than 10 seconds from UTC. Moreover, while stratum 0 clocks such as atomic clocks and GPS clocks are traditionally considered to be highly accurate time sources, we find 61.2% of open NTP servers and 10.8% of REFID servers using stratum 0 clocks are bad timekeepers.
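The bad-timekeeper test amounts to decoding the 64-bit transmit timestamp of an NTP response and comparing it with local UTC. A stdlib sketch using the RFC 5905 field layout; the 10-second threshold is the paper's definition, and the crafted packet below is for illustration only:

```python
import struct

NTP_EPOCH_DELTA = 2208988800  # seconds between the NTP (1900) and Unix (1970) epochs

def transmit_time(packet: bytes) -> float:
    """Decode the transmit timestamp (bytes 40-47 of a 48-byte NTP packet)
    into Unix seconds: 32-bit seconds plus a 32-bit binary fraction."""
    secs, frac = struct.unpack("!II", packet[40:48])
    return secs - NTP_EPOCH_DELTA + frac / 2**32

def is_bad_timekeeper(server_time: float, local_time: float) -> bool:
    """The paper's definition: offset from UTC greater than 10 seconds."""
    return abs(server_time - local_time) > 10.0

# Crafted response whose transmit timestamp is 100.5 s past the Unix epoch:
pkt = b"\x00" * 40 + struct.pack("!II", NTP_EPOCH_DELTA + 100, 2**31)
print(transmit_time(pkt))  # 100.5
```

In a real scan the local time would be taken from a trusted clock on the measurement host, bracketing the query to bound network delay.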
We then analyze the configurations of NTS servers. We find that even though the number of NTS servers is on the rise, 374 NTS-KE servers are misconfigured, hindering their ability to provide time services. For example, 24 NTS-KE servers use expired or self-signed TLS certificates; 335 NTS-KE servers are associated with NTP servers that are unable to provide accurate time; and so on.
We then analyze the causes of bad timekeeping. We find 92.1% of open NTP servers suffer from synchronization anomalies such as packet loss, while 6.7% of open NTP servers use faulty time sources; for example, they use inaccurate time servers.
To better understand how NTP server configurations affect time accuracy, we sent a survey to 377 NTP operators and received 40 email replies. We find they also experience interference with their time sources and unexpected packet loss. For example, four operators providing inaccurate time use low-cost GPS receivers. Three operators experienced packet loss due to improper firewall configurations and ISP security policies.
Based on the measurement results, we uncover two security risks stemming from flawed NTP server configurations. The first is the "Single Source Vulnerability". As shown in the figure, an attacker first manipulates the time of an upstream server, uh, by on-path or off-path attacks, such as hijacking the BGP route to the upstream server and manipulating timestamps in the NTP responses to the upstream server. After that, when downstream servers query the upstream server, they will receive and adopt the manipulated time, since they use the upstream server as their only time source. For evaluation, we find 2,024,000 NTP servers are configured with only a single upstream time server. Moreover, the largest upstream server has as many as 41,000 downstream servers.
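An operator can spot the single-source condition directly in a chrony- or ntpd-style configuration file. A minimal sketch; the config text is a made-up example:

```python
def upstream_sources(config: str) -> list:
    """Collect upstream time sources from 'server'/'pool' directives."""
    return [line.split()[1] for line in config.splitlines()
            if line.strip().startswith(("server ", "pool "))]

# Made-up config: a host that depends on exactly one upstream server,
# i.e. the Single Source Vulnerability described above.
config = """\
driftfile /var/lib/ntp/drift
server time.example.net iburst
"""
sources = upstream_sources(config)
print(len(sources) == 1)  # True: single upstream time source
```

The mitigation the authors recommend later, configuring several diverse sources, makes this check return False.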
The second vulnerability is the "Dangling IP Address". As shown in the figure, an attacker first takes control of the IP address of an upstream server by searching for unavailable time sources. For example, IP addresses provided by cloud service providers are susceptible to takeover due to the dynamic IP leasing of clouds. After that, when a downstream server queries the upstream server, uh, the attacker will return a packet with malicious timestamps. If the downstream server accepts the packet, its time will be manipulated. For a conservative evaluation, we find 6,000 NTP servers statically configure the IP addresses of 37 dangling upstream NTP servers which are deployed in clouds. For an aggressive evaluation, we find 35,000 NTP servers are deployed in clouds, and 244 are now dangling.
To mitigate the NTP server vulnerabilities, we have proposed some solutions. Briefly speaking, we recommend NTP server operators use reputable and diverse time sources, ensure network connectivity, and update in time. Besides, using the NTP Pool as a time source is also a mitigation, since it periodically monitors time synchronization services and assigns NTP servers based on country. However, the NTP Pool has some limitations. The first is that it doesn't support NTS, which leaves NTP clients vulnerable to man-in-the-middle attacks. The second is its country-first NTP server assignment: the number of NTP servers varies by country, with some having as few as four. This can lead to NTP server centralization, introducing potential security risks. The third is that fixed weights cause overloaded servers: operator service weights are never adjusted by the NTP Pool, so overweighted servers get overloaded, evicted, recover, and rejoin repeatedly, causing service instability.
To solve the problems of the NTP Pool, we propose NTS-mon, which uses a, um, publisher-subscriber model to support NTS. To be specific, NTS-mon periodically monitors the public NTP and NTS servers, scores them based on time accuracy and availability, and publishes a ranked list. We developed an NTS-mon client plug-in for widely used NTP client software such as ntpd and chrony. It auto-fetches the NTP server list and updates NTP client configurations. Besides, NTS-mon uses adaptive load adjustment, which can reduce the, uh, unresponsiveness of NTP servers. In our simulation with real NTP client query traffic, NTS-mon can deliver more accurate timing services even with fewer time sources than the country-based method, and it reduces service unresponsiveness by 11 times under heavy query loads.
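The score-and-rank step of NTS-mon could look something like this toy sketch; the scoring formula and the server entries are assumptions for illustration, not the authors' implementation:

```python
def score(server: dict) -> float:
    """Rank servers by accuracy (small offset, in seconds) and availability.
    The weighting below is illustrative, not taken from the paper."""
    return server["availability"] / (1.0 + abs(server["offset"]))

# Invented monitoring results for three hypothetical servers:
servers = [
    {"host": "ntp1.example.net", "offset": 0.002, "availability": 0.999},
    {"host": "ntp2.example.net", "offset": 12.0,  "availability": 0.990},
    {"host": "ntp3.example.net", "offset": 0.050, "availability": 0.500},
]

# The "published" ranked list clients would subscribe to:
ranked = sorted(servers, key=score, reverse=True)
print([s["host"] for s in ranked])
```

Note how the highly available but wildly inaccurate server ranks last: accuracy and availability are traded off in one score, which is the property the talk attributes to NTS-mon's list.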
We have open-sourced our data set. You can access our data set at the Case 3 website. To summarize: we built a large-scale time server data set through active measurements; characterized time accuracy in today's NTP ecosystem; revealed that time inaccuracy is mainly caused by synchronization anomalies and faulty time sources; identified two security risks stemming from configurations of time sources, namely the Single Source Vulnerability and the Dangling IP Address Vulnerability; and proposed NTS-mon to provide clients with an accurate and available NTP and NTS server list. For more details, please, uh, read our paper, which was published at IMC 2023. That's all for my presentation. Thanks.
Dave Plonka: Uh, thanks Genshin. Uh, we have time for questions or comments on this work.
We've got Karen in the queue. You're ready to go Karen.
Karen O'Donoghue: Uh, yes. Uh, I—I hope you can hear me.
Dave Plonka: We can.
Karen O'Donoghue: Um, excellent. Uh, so first of all, I'd like to, um, thank you for this work. Uh, second, um, I think this would be an interesting presentation for the NTP working group. I just didn't get it lined up for this meeting, uh, so I'd like to follow up with you on that. And then the third, um, point that you made is: we are actively working on adding support for NTS in pools, and there's an ongoing experiment there. Um, I'd be interested if you—if you had—uh, took a—took a look at that. Um, and that's all I had. So, thank you.
Genshin: Uh, okay, thanks. Thanks, Karen.
Dave Plonka: All right, thanks Genshin. Um, I think we're done then. Um, in Mingming's presentation, one of the follow-up questions that, uh, Mirja had was about what follow-on happened. So if you think of the timeline, that IMC paper was presented in October and we're about five or six months later. Maybe for the Vienna meeting—we'll almost certainly meet in Vienna—I'll think about, when we have works that are published like that, whether we can identify and share some of the ways the authors have interacted either with IETF participants or certificate authorities or whatever, and see if we can develop some of the, uh, after-effects of bringing these works together. Uh, I hadn't thought of it before, but that'd be a nice thing to do when we have that kind of, um, stride or cycle.
Yeah, otherwise we're done. We're doing great on time this time. So thank you everybody for keeping to the time, and please catch up with the speakers here if you have further questions, or consider their input in your own work. And a big thanks to our note takers; that was a shared effort between P-Tomic and Brian. Thanks a lot. So see you at the social tonight and see you in Vienna. Bye-bye.