
Session Date/Time: 14 Mar 2026 03:07


Benno Overeinder: Yeah, we are. Excellent. So, good morning. Good morning, welcome everyone. So, half an hour later than planned, but I think it was good to wait for everyone to join. This is the kickoff of the IETF Hackathon weekend, Saturday today and tomorrow. For the next 15 minutes, I want to introduce you a little to the hackathon: how things work, how to find a project of your interest, and to start thinking about your presentations for tomorrow. And of course, the times for lunch and dinner.

Good. The hackathon. And we always start with this kind of raising hands. So for whom is this the first IETF hackathon? Okay, that's a good crowd. Good to see. It's always a great way to start the weekend, or the week actually: to discuss ideas, drafts, etc. during this weekend. And next question: for whom is this the first IETF event? Okay, over there. Yeah, a handful, two handfuls. Thanks. This is a great way to get introduced to the IETF, to get to know people for the rest of the week. If you want to join a project or have interest in any topics, please reach out to me. My name is Benno. And Barry, are you...? Oh, Barry is not in the room at the moment. Barry is my colleague here, also co-chair, and he can also help you in finding the proper project to work with, or with any other introduction.

Good. Why are we here? So, one of the important things about the hackathon is that... well, standards are relevant if they are used, and if they are to be used there has to be an implementation. So, deployed standards are relevant, and to get this deployment, well, you need implementations. So one of the important things here during the weekend is to have this collaborative spirit of the open source community in the room: discuss ideas, implement ideas, do compatibility or interoperability tests, etc., to make progress on your IETF drafts.

Another thing, of course, is to attract new people into the IETF, and also developers and young professionals and university engagement. And that was pretty reasonable... well, pretty successful through the years. So that's... that's one of the important and I think also one of the successes of the IETF Hackathon.

And one of the kind of running... well, themes here at the IETF is, hey, so we believe in rough consensus, that's in the working groups, and running code, and that's what we're doing here today. So that's why also the Hackathon is that important for the IETF.

Good. IETF, the Hackathon, this is an IETF session. So for IETF sessions, some rules apply. That's the Note Well. So I won't go into the Note Well, but be sure to read the Note Well if you start at the IETF, also this weekend. It's about how IPR is managed, how people engage with each other, rules of engagement. Good. So Note Well applies also here.

For your code it's a little bit different. The code is yours. So the code is not an IETF contribution. But the discussions around it are IETF contributions, and your presentations are IETF contributions. So they will end up on the IETF website as presentations. If there are any things that need clarification, ask me or Barry, and we can explain the rules and how things work. But yeah, the code is not an IETF contribution, while everything you discuss is an IETF contribution. Oh, and the usual IETF copyright and IPR disclosures apply here. That's good to know.

Good. Projects. There are a lot of projects on the wiki. So thank you, everyone that listed their projects in the wiki. Very useful, and it's a good archive of all the things and all the activities during this weekend. Thank you. If you're not sure yet which project you want to contribute to, look at the lost and found. There, people can present their skills, and people that have a project and are looking for skills can find them. So use that. Or ask me or Barry again, or send an email to the Hackathon chairs if you want some help here.

Good. The agenda for today. Well, we're now around 11:00 with the hackathon kickoff. You form teams just after this. Around 12:30 will be lunch, and about 6:30 there will be dinner served. And around 9:00, before 9:00, we will close the room, also because the staff wants to go home. Most of the time I need to clear the room, because it's fun to stay a little bit later and have other discussions than just work and hackathon.

Good. The projects on the hackathon wiki... oh, sorry, this is the next slide. Tomorrow, again, we start at 9:30. Oh, there's one mistake here: lunch is again at 12:30. I will change that and upload the new slide. At 12:30. But important: at 1:30 you stop programming and start preparing a short presentation, about three slides, maybe four, but you only have three minutes. So be very, very precise about what you want to communicate and tell. Then between 2:00 and 4:00, we ask everyone who wants to to submit a presentation, and you get three minutes to present your results. So what's the problem you want to... all right, oh, I will come to that later. So, good.

There will be quite some remote participants. The remote participants will use Gather, or Gathertown. The URL is also in the wiki. I will skip this; the people that have used it already are aware how it works: the hackathon space. But maybe some of your colleagues will be remote while you are here in the room; you can also work with them via Gathertown. It works pretty well. And some teams are completely remote, and they work the whole weekend on Gathertown.

If anyone needs a special network setup, it's mostly already announced. So if you have special network requests, look at Hacknet, or again ask me or Barry for support.

This is important, and I will repeat this part tomorrow: the presentations of results for tomorrow. You only have three minutes, so be very careful... think very carefully about what you want to present. So: what's the problem you want to solve? What have you achieved in the past two days? And some things like conclusions, lessons learned, feedback to the working group, etc., and your team, of course. But don't go into details; you don't have time to present that. Talk about the idea and what you have achieved. Okay. Good.

You can use the Data Tracker to submit your slides. It's like a working group, so you need, of course, your Data Tracker login, and then you can submit your slides. How to do that? Apologies for the screenshot from IETF 124, but it's similar. You go to the Sunday agenda, to the 2:00 slot of the Hackathon, and you find the icon over there, "Upload slides in Data Tracker," and there you follow the menu. You go to the slides tab, upload slides, and then before 2:00 you can upload your slides, as PDF, into the Data Tracker, and they will be ready to be presented in Meetecho. Okay. I will present this again tomorrow before 2:00, because it takes a lot of organization and coordination from our side to have these late submissions in place. Remember, it must be in PDF format, and don't change the name of your presentation. If you have a revision and you use the same name, the revision will replace the original; otherwise we end up with all kinds of duplicates hanging around. Good.

That's it. There's also an IETF Hackathon Git repo. I will update it soon, and you can find a PowerPoint or an HTML template for your presentations there. In the GitHub IETF Hackathon organization you can also create your own repository, for visibility and in case you want to continue the work at the next IETF Hackathon. Good. That's it. I will skip this one.

Remember, what you achieved here you will present on Sunday to the people in the room. On Monday you can present your results to the whole IETF. We call it the Hack Demo Happy Hour. It's from 6:00 to 7:00 on Monday evening. You get a table, you can give a short demonstration, you can use posters, etc., and you can engage with the IETF community that is not here in the room this weekend. It's really fun. There will be a paid bar for drinks, but snacks and soft drinks will be available. It's on the second level foyer, 6:00 to 7:00. And please register your team before 1:00 on Monday so we can organize a sufficient number of tables.

Good. The Code Lounge is on the 14th level. It's a large room to collaborate, to code, think. I haven't been there but I think on the map it looks great. And you can collaborate also on your running code.

Right. I want to thank the sponsors for today, for this weekend: SIDN and ICANN sponsor the Hackathon. They provide the drinks and the food, etc. So I wanted to thank our sponsors. Thank you.

Any questions? You can find me here at the front of the table and I want also to invite an out-of-the-ordinary project here, but they want to have your input. So please go ahead.

Speaker 2: Just a really quick note. There's table number 20 in the back corner, with Greg, who's standing up and waving, and Dhruv, who will be here later. It's a little bit different kind of project: it's one looking at GitHub documentation. So if you're working with GitHub to do documents, to author them or to review them or in any way use them in a working group: we've heard a lot of comments about the lack of good documentation in the IETF, and so we're really looking for suggestions on how to improve it. So come visit us. We'll only take a few minutes of your time.

Benno Overeinder: Thank you very much. Yeah. So please, please visit the table and have a chat. I will definitely do that. Okay. Thank you for your attention, and start on your projects. In a bit more than an hour I will probably announce lunch. So you will see me around. Success!


Session Date/Time: 15 Mar 2026 06:00


Speaker 1: Okay, it's time. We will start with the presentations of the projects, and I will give you another challenge: it's not even two minutes, but 90 seconds, given the time and the switching. We decided to give every speaker 90 seconds per project. Please indicate in your slide deck if you have more than one project to present; you get double the time, of course. All right, I will share the screen and you'll see your name. You can also see who is up next in Meetecho if you want.

Speaker 1: Okay, the first presenter is remote or here in the room? It's Transaction Token Profiles for A2A calls.

Peter: Hi, my name is Peter. Happy to be the first one presenting in the afternoon. So, this time we're presenting Transaction Token Profiles for A2A calls. The background is that the OAuth Working Group has a transaction tokens draft that can securely propagate user identity and authorization context across workloads in a multi-service workflow. Transaction tokens actually define several immutable and mutable data fields, which helps a workflow protect critical security context when propagated through different domains. And sometimes in an agent-to-agent call chain, you have to protect your call context, your security context, as sufficient input for the access control in other domains. So what we do is map the critical A2A message information, such as the Task ID and the original user prompt input, into the immutable 'purp' data field defined in the transaction token profile document. This field is supposed to be immutable, and this can prevent new attacks such as prompt injection, intent manipulation, and context rot, those kinds of attacks. And during the process of an A2A call, the downstream agents would do some additional inference and add to the context. So you have... sorry.

Peter: Okay, so we don't have a timer, so I can't see the time, but eventually we did code here and connected the inputs and outputs. There's no timer. All right. Eventually, I think this reveals somewhat more profound questions, because A2A is envisioning a more collaborative future, but actually poses more security exposure on your internal resources. And the attacks are also becoming more implicit: traditionally we do, you know, contextual, header-based threat detection, but now the attack is inside the payload, inside the prompt. So I think we need to consider all these different questions, and we arranged an AI Agent Security side meeting on Thursday, 11:00 to 12:00, in Hunan, the small room, to discuss. Thanks.
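The mapping Peter describes, pinning the A2A Task ID and original user prompt into an immutable 'purp' field that downstream domains can verify, can be sketched roughly as follows. This is a minimal Python sketch, not the draft's actual token format: the HMAC signing, the shared key, and the field layout beyond 'purp' are illustrative stand-ins.

```python
import hmac
import hashlib
import json

SIGNING_KEY = b"demo-shared-secret"  # illustrative stand-in for the issuer's key

def issue_txn_token(task_id: str, user_prompt: str) -> dict:
    """Issue a simplified transaction token whose 'purp' claim pins the
    A2A Task ID and original user prompt as immutable context."""
    claims = {
        "purp": {"task_id": task_id, "user_prompt": user_prompt},  # immutable
        "ctx": [],  # mutable: downstream agents may append inferences here
    }
    mac = hmac.new(SIGNING_KEY,
                   json.dumps(claims["purp"], sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claims": claims, "purp_mac": mac}

def verify_purp(token: dict) -> bool:
    """A downstream domain recomputes the MAC over 'purp'; any tampering
    (e.g. an injected prompt) breaks verification."""
    expected = hmac.new(SIGNING_KEY,
                        json.dumps(token["claims"]["purp"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["purp_mac"])

token = issue_txn_token("task-42", "summarize the incident report")
token["claims"]["ctx"].append("agent-B: retrieved 3 documents")   # allowed
ok_before = verify_purp(token)
token["claims"]["purp"]["user_prompt"] = "exfiltrate credentials"  # injection
ok_after = verify_purp(token)
```

The point of the sketch is the split: mutable context can grow along the call chain, while any change to the pinned 'purp' content fails verification downstream.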

Speaker 1: Thank you, Peter. Yeah, so sorry. I don't have time to set the timer; that also takes time, so I will show the time with my telephone. Next up is... let's see, next deck, Packet Scope. Are they in the room? Or remote? Anyone remote? Joined? No, then I go to the next presentation. The generic... let's see. Oh, Thomas.

Thomas: Hi, Benno. Sorry, my voice is terrible.

Speaker 1: Oh, that's... sorry, Thomas. Yours was the previous, the Packet Scope?

Thomas: No, no, no. It's the Co-Serv, the one that you are showing now.

Speaker 1: All right.

Thomas: Can I go?

Speaker 1: I can hear you. Yeah.

Thomas: Okay. I just want to make a suggestion for everybody to help Benno out. He works very hard organizing this. If you have a presentation, go to the IETF agenda page, go to the hackathon results, click on the meeting materials. You can see the list of slides in order. So look at the list. If you're coming up next, come and stand by the stage because if we have to wait two minutes for people to switch, we're going to be here till midnight. So let's help Benno and move it along quickly.

Speaker 1: Thanks. Thomas, should I give you... you say next slide? Or you want to run them yourself?

Thomas: No, please. Next.

Speaker 1: Yep.

Thomas: Okay, so the topic here is Co-Serv. Co-Serv is a minimalist query language for pulling endorsements, including trust anchors and reference values, from the supply chain in the RATS architecture flow. It's a CBOR-based protocol that reuses much of the CoRIM data model. We did that as an explicit choice, of course. CoRIM is another protocol that we are developing in RATS, so we're really eating our own dog food. The work we present here basically brings together the server and client sides, an HTTP-based API, and a CoRIM verifier, to ensure that all the pieces fit together with the RATS architecture. The next slide is exactly that picture, but since we have very little time, I'm going to go through it very quickly. And the answer that we got from the hackathon experiment was: yes, they do, they all fit together. This is something that we're going to bring back to the working group on Wednesday. And here in these slides, you can see a few standards that are under development that we have used and implemented in the demo. I think I forgot to add an RFC, RFC 9111, which is HTTP caching, and which was one of the building blocks for this. In fact, what we focused on here was adding HTTP caching into the flow. And that was very easy, I would say. It required adding a crate on top of the reqwest crate that we already used on the client side, and, on the server side, sending the right cache control headers to fit the client logic. And next, please. And we did it, basically. This is the architecture. You can take a look later at what got done; this is the list of things that we've done. We have a complete and fully functional end-to-end appraisal flow. And what did we learn? A few things. The most important one to me was that CoRIM and Co-Serv are highly compatible. And the second thing is that HTTP caching works out of the box and is a fantastic tool if you need to add caching to your transactions.
Next, please. And yes, the team members were Shefali, Paul, myself, and Hank. Shefali was a first-timer here, and there's the link there if you want to try out the demo. Thanks very much.
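The HTTP caching Thomas mentions (RFC 9111) boils down to the client honoring a freshness lifetime the server advertises via Cache-Control: max-age. A minimal Python sketch of that client-side logic, with a fake endorsement server standing in for the real Co-Serv endpoint (the URL and response shape are illustrative):

```python
import time

class CachingClient:
    """Minimal client-side cache honoring Cache-Control: max-age, in the
    spirit of RFC 9111. fetch is a stand-in for the real HTTP client and
    returns (body, max_age_seconds) as if parsed from the response headers."""
    def __init__(self, fetch):
        self._fetch = fetch
        self._cache = {}  # url -> (body, expires_at)

    def get(self, url):
        entry = self._cache.get(url)
        if entry and time.monotonic() < entry[1]:
            return entry[0], "cache"   # still fresh: served locally, no request
        body, max_age = self._fetch(url)
        self._cache[url] = (body, time.monotonic() + max_age)
        return body, "origin"

calls = []
def fake_endorsement_server(url):
    # Pretend origin server; advertises Cache-Control: max-age=60.
    calls.append(url)
    return {"reference-values": ["rv1"]}, 60

client = CachingClient(fake_endorsement_server)
body1, src1 = client.get("/coserv/query")   # first fetch hits the origin
body2, src2 = client.get("/coserv/query")   # second is answered from cache
```

The design point matches the talk: caching sits entirely in a wrapper layer, so neither the query language nor the verifier logic has to change.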

Speaker 1: Thank you, Thomas. Bye bye, everyone. Next, Applying YANG.

Speaker 2: So, following the past two hackathons, we are bringing an enhanced demo of our draft, Applying YANG Provenance Signatures, following the serialization of the data format into CBOR objects. What we are investigating is how to provide provenance and data integrity for data sets that are modeled in YANG. Some use cases can be device configurations or telemetry data. What we added, compared to the other hackathons, is an end-to-end workflow using a Kafka message broker, with data ingestion, serialization, and formatting, and several microservices implementing the signer and verifier modules of these signatures that we are applying with COSE. Apart from that, the data should be validated: there's a YANG schema against which this data is processed. And this new workflow operates with byte serialization and CBOR object management, using only binary CBOR data. Here are the integration details of the workflow. This is the whole workflow, serialized in bytes and encoded as CBOR objects, in addition to JSON and XML, which are the previous versions of this workflow. What we achieved here is that we enhanced the reference implementation. It has evolved and is aligned with the latest draft updates in the OPSAWG working group. And by adopting these CBOR data formats and the binary encoding, we reduce the message size, get faster processing of the data, and improve extensibility. This is also work in progress with YANG Push and the Kafka schema registry that we are using along with our Kafka workflow. And for the next hackathon, we'll try to finish a multi-signing implementation in this workflow. Thank you.

Speaker 1: Thank you. All right. Next slide deck is from Model for Distributed Authorization.

Lucía Cabanillas: Hello, I'm Lucía Cabanillas. I'm working on a model for distributed authorization policy sharing. The goal of this framework is to define a canonical way to manage and share authorization policies. We use YANG to define these policies in a machine-readable way. The policy logic itself is written using policy-as-code languages, while YANG carries this logic together with its metadata, enabling it to be validated, versioned, and distributed. This approach is designed to support interoperability across domains, where different kinds of policy engines, typically PDPs, can consume and enforce the policies. So in the draft, each policy artifact modeled in YANG includes a description, the policy language used (the policy-as-code language), the policy-as-code logic, the owner of the policy, and the version, and additionally we included an optional provenance leaf. To demonstrate the policy creation, distribution, and enforcement using this YANG-based model, a YANG policy artifact is submitted to a policy administration point, which is in charge of validating it and distributing it to the corresponding policy decision point. What we have used is OPA, Open Policy Agent. The policy logic is written in Rego, which is a policy-as-code language, and we use this policy to protect some data stored in a graph DB. Specifically, the policy denies queries requesting the concept vendor name. To validate this, a user authenticates with Keycloak, which is acting as our identity provider, and sends several requests to APISIX, which is our policy enforcement point. APISIX queries OPA to decide whether the request should be authorized or not. Normal queries return the data; queries requesting the concept vendor name are denied. So with this, we implemented YANG-based policy sharing, we demonstrated policy distribution, and we validated that this approach can support interoperability and automated policy management.
As next steps, we would like to add some provenance mechanisms, further explore cross-domain policy sharing scenarios, and we are looking forward to gathering feedback. Thank you.
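The artifact and decision flow Lucía describes can be sketched as follows. This is an illustrative Python sketch: the field names mirror the leaves named in the talk but are not the draft's exact YANG identifiers, and the toy decision function stands in for OPA evaluating the Rego logic.

```python
# A policy artifact shaped after the leaves described in the talk:
# description, policy language, policy-as-code logic, owner, version,
# and an optional provenance leaf (names here are illustrative).
policy_artifact = {
    "description": "Deny queries for the concept vendor name",
    "language": "rego",
    "logic": 'deny { input.concepts[_] == "vendor_name" }',
    "owner": "example-operator",
    "version": "1.0.0",
    "provenance": None,  # optional leaf
}

def validate_artifact(artifact: dict) -> bool:
    """Policy administration point: check the artifact carries the
    mandatory metadata before distributing it to a PDP."""
    required = {"description", "language", "logic", "owner", "version"}
    return required.issubset(artifact)

def policy_decision(query: dict) -> bool:
    """Toy policy decision point standing in for OPA evaluating the Rego
    above: deny any query that requests the concept vendor_name."""
    return "vendor_name" not in query.get("concepts", [])

distributed = validate_artifact(policy_artifact)
allowed = policy_decision({"concepts": ["model", "firmware_version"]})
denied = policy_decision({"concepts": ["vendor_name", "model"]})
```

In the demo itself this decision sits behind APISIX as the enforcement point; the sketch only shows the artifact metadata and the allow/deny logic it carries.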

Speaker 1: Thank you. Next is AnSAn.

Speaker 3: Hello everyone. I'm Guo Zhen from China Telecom. My presentation is AnSAn YANG to API. First, AnSAn, short for Autonomous Network and Service Abstractions, is a working-group-forming BOF. Its previous name was ANNS. The AnSAn working group aims at the implementation and use of abstractions, improving automation, operational efficiency, and interoperability. YANG to API is a technical methodology for the automatic generation of API interfaces based on the YANG data model. Its primary function has three parts: model parsing, model mapping, and standardized service interface generation. Its implementation has three steps: step one, YANG model parsing and structured representation; step two, automatic conversion from the model to JSON Schema; step three, generation of standardized RESTful API interfaces. Finally, welcome to our AnSAn BOF in Grand Ballroom 1 from 9:00 to 11:00 next Tuesday morning. Thank you.
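Steps two and three of the methodology can be sketched as below. This is an illustrative Python sketch under simplifying assumptions: the parsed-module structure, the type mapping, and the RESTCONF-style paths are all stand-ins, not the project's actual output.

```python
def yang_to_json_schema(module: dict) -> dict:
    """Step two: convert an already-parsed YANG container into a JSON
    Schema. The type mapping is illustrative, not exhaustive."""
    type_map = {"string": "string", "uint32": "integer", "boolean": "boolean"}
    props = {leaf["name"]: {"type": type_map[leaf["type"]]}
             for leaf in module["leaves"]}
    return {"type": "object", "properties": props}

def schema_to_api(module_name: str) -> list:
    """Step three: derive standardized RESTful endpoints for the module
    (RESTCONF-style paths, shown here only as illustration)."""
    base = f"/restconf/data/{module_name}"
    return [("GET", base), ("PUT", base), ("DELETE", base)]

# Step one (parsing) is assumed done; this is a hypothetical parsed module.
parsed = {"name": "slice",
          "leaves": [{"name": "slice-id", "type": "uint32"},
                     {"name": "description", "type": "string"}]}
schema = yang_to_json_schema(parsed)
api = schema_to_api(parsed["name"])
```

The value of the pipeline is that every YANG module yields a machine-checkable schema plus a uniform set of endpoints, without hand-writing API definitions per model.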

Speaker 1: Excellent, thanks. Next presenter... let's see. That's correct, yours? Yep. Okay, excellent.

Speaker 4: Okay, good afternoon. I'm Nan Gong from Huawei Technology. I will share the results of two hackathon projects. The first one is selectively syncing RPKI data to routers. The problem is that the current RTR protocol will sync all types of RPKI data to routers, but in some cases the router may only need some types of RPKI data, and the other RPKI data will never be used by the router. So our main idea is: why not sync only the types of RPKI data that are needed by the router? What we got done: we made two key extensions to existing RTR software. The first is to extend the SLURM mechanism. This enables the filtering of specific types of RPKI data in the local cache. The second key extension is that we extend the RTR protocol. It allows the router to send a newly defined PDU to subscribe to the RPKI data, so only the RPKI data that is really needed by the router will be synced to it. And here are the team members, the GitHub link, and related documents.

Speaker 4: This is the second project: RPKI-based validation with prioritized resource data. You know, the RPKI data is not complete, especially for newly defined RPKI objects like ASPA. Some operators will complement the RPKI data by using IRR data or AI inference. However, as you know, IRR data and AI-inferred data are not always correct. So the router will take a conservative action on the invalid routes. Our main idea is to mark the RPKI data and validation results with a priority or credibility, so that the router can directly discard the invalid routes with high credibility. We have designed and implemented the algorithm, and next we will finish the design. Here are the team members and related documents. We welcome comments, and we welcome everyone to contact us to work together on this. Thank you.
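The selective-sync idea from the first project can be sketched as a server-side filter keyed by the types a router subscribes to. This is an illustrative Python sketch: the subscribe PDU is the talk's newly defined (not yet standardized) extension, and the cache records are toy data.

```python
# Toy local cache holding several RPKI data types (contents illustrative).
CACHE = [
    {"type": "roa", "data": "203.0.113.0/24 maxlen 24 AS64500"},
    {"type": "aspa", "data": "AS64500 provider AS64501"},
    {"type": "router-key", "data": "AS64500 BGPsec key"},
]

def handle_subscribe_pdu(subscribed_types: set) -> list:
    """Sketch of the extended RTR server: a hypothetical subscribe PDU
    names the RPKI data types the router wants, and only those records
    are synced, instead of the full cache."""
    return [rec for rec in CACHE if rec["type"] in subscribed_types]

# A router that only does route origin validation subscribes to ROAs only.
synced = handle_subscribe_pdu({"roa"})
full_sync = handle_subscribe_pdu({"roa", "aspa", "router-key"})
```

A SLURM-style filter on the cache side achieves the same effect statically; the subscribe PDU moves the choice to the router at session time.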

Speaker 1: Thank you. Next up is AI Gateway. Yep. Excellent, thanks. Thanks for the tip, Stewart; works very well.

Speaker 5: Good afternoon. I'm Wei Wang from China Telecom, and today our topic is an agent gateway for dynamic multi-agent secured collaboration. The AI agent gateway is a device serving as a unified entry point and control plane for agent traffic. It should support capabilities such as discovery and registration, routing and addressing, gateway interconnection, and secure communication. We also developed a demo for the multi-agent collaboration. As you can see in the figure, we have two domains, and each domain has a gateway and agents. The agents have different capabilities, such as video recording, video analysis, alarming, which can call the fire department, and an agent to contact the homeowners. They can collaborate with each other and complete the fire detection scenario. We have proposed two related documents. If you are interested in our topics, you are very welcome to attend our side meeting, which will be held on Wednesday and Thursday. You can also post your ideas and comments on the DMSC mailing list. Any comments are very welcome. Okay, thank you. Thank you for your attention.

Speaker 1: Thank you. Next up, OAuth. Thank you.

Speaker 6: Good afternoon, everyone. Today, my presentation is on OAuth 2.0 scope aggregation for multi-step AI agent workflows. I'm presenting on behalf of the project champions, me and Shuping Peng; we are from Huawei. The background is that advances in LLMs enable AI agents to plan and execute multi-step workflows, which may involve calling third-party tools (for example, via MCP) or delegating subtasks to other AI agents (via A2A). Those interoperations typically require authorization via OAuth 2.0. Consider a scenario where an AI agent connects to a notebook MCP server. The user asks the AI agent to summarize this week's daily reports into a weekly report note. The AI agent reasons: I will first need to call list_notes, then call add_note. The problem is that existing agent protocols (for example, MCP) adopt OAuth in a reactive, challenge-triggered way, so the user needs to respond to multiple consent prompts with distinct scopes (here, list_notes and add_note require distinct scopes). Our proposed solution is: first, the server exposes the security requirements in the resource metadata; here, add_note has type OAuth2 and scope 'write'. Then the AI agent initiates a single OAuth flow for the aggregated scopes; here, 'read' and 'write' are aggregated. And here's a demo: we have built a notebook MCP server with the UI shown here. You can see from the logs that we aggregate scope 'read' and scope 'write' to form a single authorization. We will show this demo at Monday's Hack Demo Happy Hour. Feedback and suggestions are very welcome. Thank you very much. Thank you for your attention.
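The aggregation step described above can be sketched as follows. This is an illustrative Python sketch: the resource-metadata shape, the client_id, and the authorization endpoint URL are assumptions for the example; only the general OAuth 2.0 authorization-request parameters (response_type, client_id, scope) are standard.

```python
from urllib.parse import urlencode

# Security requirements the MCP server exposes in its resource metadata
# (tool and scope names taken from the talk; the structure is illustrative).
RESOURCE_METADATA = {
    "list_notes": {"type": "oauth2", "scopes": ["read"]},
    "add_note":   {"type": "oauth2", "scopes": ["write"]},
}

def aggregate_scopes(planned_tools: list) -> list:
    """Collect the scopes of every tool the agent plans to call, so one
    OAuth flow (one consent prompt) covers the whole workflow."""
    scopes = []
    for tool in planned_tools:
        for s in RESOURCE_METADATA[tool]["scopes"]:
            if s not in scopes:       # keep order, drop duplicates
                scopes.append(s)
    return scopes

def authorization_url(scopes: list) -> str:
    # One OAuth 2.0 authorization request carrying the aggregated scope.
    params = {"response_type": "code",
              "client_id": "agent-demo",          # hypothetical client
              "scope": " ".join(scopes)}
    return "https://auth.example/authorize?" + urlencode(params)

# The agent's plan: list_notes then add_note -> a single flow for both.
scopes = aggregate_scopes(["list_notes", "add_note"])
url = authorization_url(scopes)
```

Compared with the reactive, challenge-triggered pattern, the user sees one consent prompt listing both scopes instead of two prompts interrupting the workflow.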

Speaker 1: So very good for the promoting of the hack demo happy hour tomorrow. Next up is LLM-driven automated. Yep.

Speaker 7: Hello, I'm Yunze from Tsinghua University. I'd like to share our project on LLM-driven automated network protocol testing with you. Network protocol testing tests protocol implementations such as switches and routers to ensure conformance, performance, security, etc. However, traditional methods are mainly labor-intensive, and they often have low efficiency and limited coverage. To address these issues, we designed and implemented an automated system that takes protocol specifications as input, generates test cases and test artifacts, and executes them in the test environment. On the right side, you can see our hardware and software testbeds. Now, Chuyi will share our demo with you.

Speaker 8 (Chuyi): We select a target RFC file, and the system will extract the functional modules. The system first generates test points with testing objectives, then it refines the test points into the detailed test cases with procedures and oracles. Using an agentic approach, the system iteratively interacts with the environment to produce executable artifacts. Finally, it executes the test code and produces the final test report. Thank you.

Speaker 7: We also have some related drafts and demonstrations. The draft is on Monday afternoon and the demonstration is on Tuesday. We sincerely welcome your comments. Thank you very much.

Speaker 1: Thank you. Next one, Optimizing Agent Context. Thank you.

Speaker 9: Good afternoon, everyone. I'm Zeze Zhang, and I'm presenting on behalf of the champions, me and Shuping Peng. I'm very happy to share our hackathon project, optimizing agent context interactions. So why are we doing this? Today, a lot of agent protocols, like Google's A2A, actually transmit plaintext information. This wastes too many tokens and causes long task execution times. We realized we need a smart way for agents to talk to each other which is structured, efficient, and scales to real-world applications. Our core idea is simple: instead of agents constantly resending plaintext, we give them a structured context to help the agents focus on the necessary information. Besides that, we also built a task context manager to keep track of the big picture. To test this, we picked a realistic scenario: analyzing the financial reports of seven new energy vehicle companies. We built a small multi-agent system with a master agent and eight specialized sub-agents. Then we ran two scenarios: one using our structured context exchange and another using plaintext. We measured the token usage and the execution time, and the numbers were pretty convincing. With our approach, token consumption dropped by about 70% in some cases, and execution time improved by 30%. And we want to thank all the participants for their contributions. Thank you.
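The core idea, storing shared context once and passing sub-agents a short structured reference instead of resending plaintext, can be sketched as follows. This is an illustrative Python sketch: the context-store shape, reference format, and sizes are assumptions, and character counts stand in for token counts.

```python
def plaintext_bytes(report: str, n_agents: int) -> int:
    # Baseline: the full report is re-sent verbatim to every sub-agent.
    return len(report) * n_agents

def structured_bytes(report: str, n_agents: int) -> int:
    """Sketch of the structured approach: a task context manager stores
    the report once; each sub-agent receives only a short reference plus
    the field names it needs (all shapes here are illustrative)."""
    context_store = {"ctx-001": report}   # stored once by the manager
    reference = {"ctx_id": "ctx-001", "fields": ["revenue", "deliveries"]}
    per_agent = len(str(reference))       # what actually goes on the wire
    return len(context_store["ctx-001"]) + per_agent * n_agents

# One master agent fanning out to eight sub-agents, as in the experiment.
report = "financial report line " * 500
baseline = plaintext_bytes(report, 8)
structured = structured_bytes(report, 8)
savings = 1 - structured / baseline
```

The saving grows with fan-out: the report is paid for once rather than once per sub-agent, which is the effect behind the token reductions reported in the talk.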

Speaker 1: Thank you. Next up, BGP Flowspec with Packet Content Matching. Yep.

Speaker 10: Hello, everyone. I'm Yu Jia from Zhongguancun Lab, and my topic at this hackathon is extending BGP Flowspec with packet content matching. BGP Flowspec is widely used in carrier networks for traffic handling and DDoS attack mitigation. As traffic intensity increases, the scrubbing center cannot defend against attacks well. Also, packet captures from operational networks show that some attacks have a specific payload pattern. So if we can filter this traffic in the network device through BGP Flowspec, it may reduce defense costs. But current Flowspec only supports header-based matching; that's why we propose packet content matching. This work is proposed in the IDR working group and is aligned with the Flowspec v2 exploration. In this hackathon, we defined the filter encoding and ordering, and also implemented and validated it on open-source software and commercial hardware. Here is the software implementation result. The test is on OpenBGPD and FRRouting, we tested it in some different scenarios, and the code is open source on GitHub. Here is the hardware deployment result: it works on an Arista switch. From this hackathon, we found that packet content matching can reduce traffic processing costs, but it requires careful deployment. Here are the team members of our hackathon, and if anyone is interested in this topic, please feel free to contact me. Thank you.
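The difference between header-based Flowspec matching and the proposed content matching can be sketched as follows. This is an illustrative Python sketch: the rule encoding (offset plus byte pattern) is a simplification for the example, not the draft's actual filter encoding.

```python
def header_match(pkt: dict, rule: dict) -> bool:
    # Classic Flowspec: match on header fields only (proto, ports, ...).
    return all(pkt.get(k) == v for k, v in rule["header"].items())

def content_match(pkt: dict, rule: dict) -> bool:
    """Proposed extension, sketched: in addition to the header fields,
    match a byte pattern at a given offset into the payload. The
    offset/pattern encoding here is illustrative, not the draft's."""
    if not header_match(pkt, rule):
        return False
    off = rule["content"]["offset"]
    pattern = rule["content"]["pattern"]
    return pkt["payload"][off:off + len(pattern)] == pattern

# Hypothetical rule: UDP/53 traffic whose payload carries a fixed marker.
rule = {"header": {"proto": 17, "dport": 53},
        "content": {"offset": 2, "pattern": b"\x80\x00"}}
attack = {"proto": 17, "dport": 53, "payload": b"\xab\xcd\x80\x00rest"}
benign = {"proto": 17, "dport": 53, "payload": b"\xab\xcd\x01\x00rest"}
hit = content_match(attack, rule)
miss = content_match(benign, rule)
```

The "careful deployment" caveat from the talk shows up even here: both packets share identical headers, so only the payload check separates them, and a badly chosen pattern would drop legitimate traffic.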

Speaker 1: Excellent, thanks. Next up, SRv6.

Speaker 11: Hello, everyone. I'm Minxue Wang from China Mobile. Our hackathon project is SRv6 for interlayer network programming. We defined the SRv6 End.IL interlayer interworking behavior to instruct a node to send packets through underlay links or connections. Here we demonstrate a multi-vendor interoperation with SRv6 End.IL in a slicing packet network scenario, enabling deterministic connectivity to the cloud private network. We validated SRv6-based EVPN service connectivity and performance. Each End.IL is associated with an underlay connection, such as an MTN or fine-grain MTN channel, pointing to a remote network node, enabling the integration of the underlay channels with SRv6 programmability. What we have done here: three vendors, Huawei, ZTE, and FiberHome, have already implemented End.IL with the MTN and FG-MTN underlay channels based on our draft in SPRING. So each piece of equipment can be configured with End.IL. The SRv6 EVPN service interoperation is also achieved across the SPN devices, and it demonstrates deterministic performance. Through traffic testing we measured the latency and jitter before and after congestion; both showed minimal variation. Here are our team members and our document links. Thank you.

Speaker 1: Okay. Next up, Relay Attacks.

Speaker 12: Yeah, hello. I am online.

Speaker 1: Nice, thanks. Just give me a cue when I want the next slide.

Speaker 12: Or maybe I can control it myself? Is that possible? That would be quicker for me. Can you give me control for that?

Speaker 1: Here we are. Okay, now you can control it yourself. Yep.

Speaker 12: Okay, fantastic. Thanks very much. So I, Viacheslav, and Jan-Peter worked on attested TLS. We have been presenting this for several meetings. There are three ways to combine TLS and remote attestation: pre-handshake, intra-handshake, and post-handshake attestation. Pre means you do attestation before the start of the TLS handshake, intra means in between, and post means at or after the conclusion of the handshake itself. Our AD, Paul, requested that we do an exhaustive exploration of intra-handshake attestation, and that's what we did. The formal analysis tool that we used is ProVerif. The devil is in the details, as the title says. Basically, we have the TLS handshake protocol shown here, client on the left, server on the right, with the client acting as the verifier and relying party from the RATS perspective and the server acting as the attester. There are two different keys inside the TLS key schedule that matter here: the client's handshake traffic secret and the client's application traffic secret. The key point is that the handshake traffic secret is used to encrypt the handshake messages, and that encryption is irrelevant for security; whether you encrypt these messages or not matters for privacy, not for security. The other argument is that if you derive attestation evidence from the handshake traffic secret, the server is not yet authenticated at the point where that secret is derived, that is, after the first three steps: ClientHello, ServerHello, EncryptedExtensions. At that point in time the server is not yet authenticated, so evidence bound there cannot stop the attacks.
The potential mechanisms are shown here. The first four are the base mechanisms, and the last three combine some of them. We only show the combinations that are interesting rather than exhaustively exploring everything, because, for example, an attestation nonce is required in all of them. We proved that basically all of them are vulnerable to relay attacks. Our conclusion is that intra-handshake attestation is not a suitable choice for standardization, and we propose the post-handshake attestation draft (vusa-tace-expect). We will present the results at various working groups and research groups as well as a BOF, so you will hear about this pretty much every day. We will also have a side meeting where we propose a new research group on Confidential AI; all of you who are interested are very welcome. These are some resources where you can get more information. Thanks very much, and I miss you all.
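The distinction the presenter draws between the two secrets can be made concrete with a toy version of the TLS 1.3 key schedule. This is a simplified sketch (single-block HKDF, placeholder transcripts and shared secret), not a conformant implementation: the handshake traffic secret is derived over only ClientHello..ServerHello, before the server authenticates, while the application traffic secret covers the full transcript.

```python
import hashlib, hmac

def hkdf_extract(salt, ikm):
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret, label, context, length=32):
    # Simplified HKDF-Expand-Label (one HMAC block suffices for 32 bytes).
    lbl = b"tls13 " + label
    info = length.to_bytes(2, "big") + bytes([len(lbl)]) + lbl \
           + bytes([len(context)]) + context
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

def derive_secret(secret, label, messages):
    return hkdf_expand_label(secret, label, hashlib.sha256(messages).digest())

# Toy inputs standing in for the real handshake values.
ecdhe = b"shared-ecdhe-secret"
zeros = b"\x00" * 32
early = hkdf_extract(zeros, zeros)
hs_secret = hkdf_extract(derive_secret(early, b"derived", b""), ecdhe)

# Handshake traffic secret: transcript only up to ServerHello,
# i.e. BEFORE the server has authenticated itself.
c_hs = derive_secret(hs_secret, b"c hs traffic", b"CH|SH")

# Application traffic secret: transcript includes the server's
# Certificate and CertificateVerify, so authentication is bound in.
master = hkdf_extract(derive_secret(hs_secret, b"derived", b""), zeros)
c_ap = derive_secret(master, b"c ap traffic", b"CH|SH|EE|CERT|CV|FIN")

print(c_hs != c_ap)  # True: the two secrets bind different transcripts
```

Evidence keyed to `c_hs` therefore proves nothing about which server you are talking to, which is the root of the relay-attack finding.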

Speaker 1: Thank you. Okay, let's go with the order: Fast Formation, and then maybe... that's you? Yep. And then next follows the data tracker. Yep. All right, there you go.

Speaker 13: Hello everyone. My name is Luo Yemin and I'm from the Atomic Lab in HKUST-GZ. My project is Fantas, fast network formation for 6TiSCH networks at large scale. The 6TiSCH join procedure follows a sequential process: synchronization, secure joining, routing establishment, resource scheduling. Each step starts only after the previous one is complete, so forming a large-scale network with hundreds or thousands of nodes can take hours. In our solution, we form sub-networks in parallel and merge the sub-networks with each other into a unified network. At the beginning, the DAG root forms the main network; some nodes might not join the main network. Then random nodes are selected to be sub-DAG roots and form sub-networks, and after that the sub-networks merge with each other into a unified network. Potentially this method can accelerate network formation at an exponential rate. This is how we form the sub-networks. Our plan is to evaluate the method in simulation and then deploy it on hardware. We also published our simulator code on GitHub; you can access it. Thank you.
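A back-of-the-envelope model shows why parallel formation can pay off. This is an editor's toy model, not the Fantas algorithm: it assumes one node joins per round per (sub-)root and that merging sub-networks takes roughly log2 of their count in rounds.

```python
import math

# Toy comparison of sequential joining vs. parallel sub-network
# formation followed by pairwise merging (assumed model, see lead-in).

def sequential_rounds(n):
    """One node joins per round through the single DAG root."""
    return n

def parallel_rounds(n, sub_roots):
    """Sub-networks of ~n/sub_roots nodes form concurrently, then
    merge pairwise in ~log2(sub_roots) merge rounds."""
    per_subnet = math.ceil(n / sub_roots)
    merge = math.ceil(math.log2(sub_roots)) if sub_roots > 1 else 0
    return per_subnet + merge

n = 1024
print(sequential_rounds(n), parallel_rounds(n, 16))  # 1024 68
```

Under these assumptions, 16 sub-roots cut formation from 1024 rounds to 68; real gains depend on radio contention and scheduling, which the project's simulations measure.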

Speaker 1: Thank you. Okay, one moment, I will go to... oh yeah, Exploring. It's up. There's some reshuffling, but it's almost the same order. Thanks. There you go.

Speaker 14: Hello everyone. I'm Dushenguang from Zhongguancun Lab. We will introduce our work on an agent naming and discovery mechanism. It's joint work from Tsinghua, Zhongguancun Lab, and CNNIC.

Speaker 15: Yeah, I'm Sui from CNNIC. The background is that right now, agents are a really hot topic, and agents need to interconnect across domains. There are two problems here. First, how do agents from different domains find each other? And second, how do we make sure their identities don't conflict? The mechanism we define sits in the resolution layer; it's not for discovery itself, we have another draft for that, and the application layer is out of scope here. The first part is naming: we use domain names as stable identifiers for agents. We also have trusted resolution, using the DNS infrastructure to map an agent ID to its endpoint securely. And the last part is basic support for discovery systems. The rest will be introduced by Chenguang.

Speaker 14: We use three DNS record types: the SVCB record carries the main information, including the agent endpoint, version, and protocol; the TXT record links some external metadata; and the A and AAAA records provide basic connectivity for clients that do not support SVCB. We keep the DNS side simple and lightweight. We deployed a simple prototype using CoreDNS: the client can register a new agent in DNS and view the agent information, and the DNS can resolve the agent name to the endpoint. We welcome comments, and we have a side meeting tomorrow. Please join us. Thank you.
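The three record types can be sketched as a tiny in-memory zone and resolver. Everything here (the zone name `assistant.example.`, the `agent-proto`/`agent-version` SvcParam keys, the metadata URL) is a hypothetical illustration, not the draft's actual schema.

```python
# Illustrative zone for one agent name, mirroring the three record
# types described: SVCB (main info), TXT (metadata link), A (fallback).

ZONE = {
    "assistant.example.": {
        "SVCB": {"priority": 1, "target": "api.example.",
                 "params": {"alpn": "h2", "agent-proto": "a2a",
                            "agent-version": "1.0"}},
        "TXT": "metadata=https://api.example/agent-card.json",
        "A": "192.0.2.10",  # basic connectivity for non-SVCB clients
    }
}

def resolve_agent(name):
    """Resolve an agent name to an endpoint, preferring SVCB."""
    rrset = ZONE.get(name)
    if rrset is None:
        return None
    svcb = rrset.get("SVCB")
    if svcb:
        return {"endpoint": svcb["target"], **svcb["params"]}
    return {"endpoint": rrset["A"]}  # connectivity only, no agent params

print(resolve_agent("assistant.example.")["endpoint"])  # api.example.
```

Using plain DNS like this is what keeps the resolution layer, as the presenters say, simple and lightweight.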

Speaker 1: Up next is Agent Inter-Domain Routing.

Speaker 16: Hello everyone. I'm Shenglin. Today I'd like to talk about agentic inter-domain routing. What does inter-domain routing look like in the AI era? Let's start with routing analysis. With multiple agents, the system can analyze and interpret routing events, and based on that information it can generate valid mitigation policies. The core modules of the system are knowledge management, event reasoning, and routing policy generation. In our experiments, various models achieved good performance on event recognition, and across all identified events the system generated valid policies. You're welcome to follow our work on a next-generation inter-domain routing protocol and architecture. Thank you.

Speaker 1: Thanks. Up next... you're welcome.

Speaker 17: Hello everyone. My name is Ziwei Li from Zhongguancun Lab. Today I will introduce our recent work on BGP security, named Minimal Exposure AS Path Verification Against BGP Post-ROV Attacks. Let's start with the problem. AS path manipulation and route leaks are still a major problem in today's internet. BGPsec and ASPA are the two main solutions to address them, but BGPsec faces high computational overhead and its benefit is quite limited under partial deployment, while ASPA can prevent route leaks but requires global publication of customer-provider relationships, exposing sensitive interconnection policies. To solve this, we designed Minimal Exposure AS Path Verification: it decouples validation from disclosure and employs a validator-assistant architecture with three phases: trust establishment, path verification, and secure route selection. We evaluated it, and the key results show better defense effectiveness in several deployment scenarios with negligible latency overhead. In conclusion, it offers a secure, pragmatic, deployable, low-exposure path toward securing inter-domain routing. Thank you.

Speaker 1: Thank you. Up next we have AI Agent Protocol Security.

Speaker 18: Hello everyone. This project started at the IETF 123 meeting in Madrid. What we want to do is study the identity, authentication, authorization, and privacy aspects of AI agent protocols. What we have achieved: last time, we developed a digital identity for AI agents based on W3C DID, and a simple authorization procedure based on W3C VC. Now we have started to study OAuth. MCP uses OAuth for authorization, as does A2A. That's enough for a single agent to call tools, but what about a multi-agent system? We think there are more requirements there: if every agent in a task group applies to the authorization server for an access token one by one, it causes a lot of signaling overhead and a heavy burden on the authorization server. After analysis, we think the coordinator in an agent group can apply for the access token on behalf of the group. That's what we achieved this time: we set up an MCP client as a coordinator that can apply for the access token, we added some new parameters to the token request message and the access token, and the coordinator can send the access token to another MCP client so it can call the tools. Next, our future plan: we want to study security for the A2A protocols and also develop some security tools. Any collaboration and comments are very welcome. Thank you.
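The group-token idea can be sketched as follows. This is a hedged toy, not the team's implementation: the `agent_group` and `group_members` parameters are hypothetical (not registered OAuth parameters), and the HMAC-signed token stands in for whatever token format the authorization server actually issues.

```python
import json, base64, hmac, hashlib

SECRET = b"auth-server-demo-key"  # stand-in for the AS signing key

def issue_group_token(request):
    """One coordinator request yields a token covering the whole group,
    instead of N per-agent token requests."""
    claims = {"scope": request["scope"],
              "agent_group": request["agent_group"],      # hypothetical
              "group_members": request["group_members"]}  # hypothetical
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def token_valid_for(token, agent_id):
    """A tool server checks the signature and group membership."""
    body, sig = token.rsplit(".", 1)
    if hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest() != sig:
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return agent_id in claims["group_members"]

token = issue_group_token({"scope": "mcp:tools",
                           "agent_group": "task-42",
                           "group_members": ["planner", "executor"]})
print(token_valid_for(token, "executor"), token_valid_for(token, "stranger"))
```

The point of the sketch is the signaling saving: the authorization server sees one request per group rather than one per agent.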

Speaker 1: Thank you. Next up, E2E over SRv6.

Speaker 19: Thank you. This is Fengyan from China Mobile; we have people from China Mobile and H3C. The background is that today's AI services are booming, which changes traffic patterns. Where we used to have human-to-LLM interaction, it now turns into machine-to-LLM interaction, with dozens of interactions between the LLM and agents. That puts requirements on the network, three of them: observation, quality assurance, and isolation. Our idea is an overlay solution that collaborates with the underlay network, which can guarantee agent access to services. The collaboration point is an ID subscribed by the user; it is put into the packet, and based on the packets, the backbone network steers the traffic onto the correct path, providing the service guarantee. That's what I wanted to show. Thank you.

Speaker 1: Thank you. So the updated slides will be in the Datatracker archive but not in MeetEcho; apologies for that. We download most of the slides before 2:00 and do our best to keep up with all the changes, but that's part of the challenge for all of us. Next presentation is by... here we go: Agent Communication Systems.

Speaker 20: Hello, I'm Yue from China Telecom. Our project is an Agent Communication System for network AIOps. Wait, sorry. The problem statement is that we want a multi-agent communication and collaboration framework that facilitates the coordination of multiple agents and supports intelligent, automated network operation and maintenance. We implemented two functions: the first is agent discovery, the second is agent communication. For the first function, we built an Agent DS solution with new ideas such as agent metadata management: we manage the key elements for each agent, including job responsibility and code of conduct. We also built identity authentication functions relying on the trust root of the operator network. For the second, we proposed a new design for the core capabilities: a unified Agent DS-powered gateway, integrating end-to-end resolution with DAG-guided orchestration across three function layers, namely resolution, matching, and orchestration. We proposed name-based resolution, which resolves a target agent by name for scenarios where the caller already knows which agent to invoke. We also proposed an agent gateway for pub/sub mode with a dual-NIC architecture; the unified agent gateway serves as the common entry point for AI agents, tools, and requests over protocols such as MCP and A2A. These are our team members, and we also welcome the newcomers on our team, Xinsong and Jiajing Li, at their first IETF meeting. We have a side meeting on Monday morning; more details will be presented there, and you are welcome to join. That's all, thank you.

Speaker 1: Thank you very much. Up to you.

Speaker 21: Hello everyone, this is Chuansong from ZTE, and our hackathon project is HPWAN, high-performance wide area networks. Thanks for the hard work from our team members. We successfully achieved our objectives: we checked the HPWAN state of the art, identified methods for end-to-end HPWAN service monitoring, discussed HPWAN deployment topologies for public networking scenarios, and tested the integration and simulation of the HPWAN functions. We also tested RDMA performance and QUIC-based optimization, and finally discussed requirements for HPWAN services. We provide hackathon results in three parts: first, simulation results and a performance comparison; second, we completed the QUIC-based implementation and provide simulation results; and third, lab simulation and celery testing. These three parts of the hackathon results are in the following backup pages. We greatly appreciate the participants who joined our project; if you are interested, you can get more information from the related documents and the materials in our GitHub. As the next steps, we will define the list of IETF work items to standardize, to enable the deployment of HPWAN services across multiple domains, and we will also define high-level HPWAN services. Thank you for your time.

Speaker 1: Thank you. Next up, the Knowledge Graph for...

Speaker 22: Yeah, okay. Everyone, I'm Mingzhuo Qing from Zhongguancun Lab. Our hackathon project is about using a knowledge graph to assist network management. We focus on the challenge that network data includes unstructured, discrete, and heterogeneous network knowledge from vendor-specific documentation, logs, and so on. Our proposed solution is to use a knowledge graph to model and correlate this network data. What we got done: we built a knowledge graph with a large language model and provide a retrieval interface. First, we use the large language model to analyze the network logs and build the knowledge graph; in this process, the model has shown its capability to accurately extract entities and relationships from this knowledge. We also provide a natural-language query interface for the knowledge graph. Here is the web interface of our knowledge graph, which displays its entities, relationships, and property distributions. On top of it, we implemented an interface where the operator can type questions into a chatbot; we retrieve relevant nodes and relationships, which act as context for the large language model to derive the final answer. Here is another example. In this hackathon we focused on how to use a knowledge graph in network management, and the large language model turned out to be a good tool for knowledge graph construction and interpretation. In the future, we may explore an agentic knowledge graph and its interface for network management. These are our team members; we welcome your comments through our email. Thank you.
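The retrieval step of the pipeline above can be sketched in a few lines. This is an editor's toy: the triples are hard-coded in place of the LLM extraction step, and the entity and relation names are invented.

```python
# Toy knowledge graph as (subject, relation, object) triples, standing
# in for what the LLM would extract from docs and logs.

TRIPLES = [
    ("router-R1", "runs", "os-7.2"),
    ("router-R1", "peers-with", "router-R2"),
    ("os-7.2", "affected-by", "bug-4711"),
]

def neighbors(entity):
    """Retrieve facts mentioning an entity: this is the context that
    would be handed to the LLM to derive the final answer."""
    return [t for t in TRIPLES if entity in (t[0], t[2])]

print(neighbors("router-R1"))
```

An operator question like "what is wrong with router-R1?" would first pull these triples, then let the model reason over them rather than over raw logs.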

Speaker 1: Up next, PQC Interoperability.

Corey Bonnell: Hi, I'm Corey Bonnell, and I'll be talking about the PQC Interoperability project. Just a brief summary of what we're doing: this is actually the 11th IETF at which we've met; our first meeting was back in London in 2022. We're working on adding PQC algorithm support to existing X.509 structures, keys, signatures, and certificates, and testing interoperability across various implementations. We also want to gain implementation experience and provide feedback to the standards developers. Here's a list of some of the specs we've been working on. We actually had an interim hackathon in January; the primary focus was on composite signatures and KEMs, with a lot of excellent progress there, and also some work on the Certificate Discovery draft. What we've done since: we pulled the NIST PKI test suite into our PKI certificates repository, discovered some issues with ML-KEM verifiers, worked on some COSE artifacts as well, and did additional work on the Certificate Discovery draft. We're still dealing with some algorithm implementation issues and corner cases, and we also cleaned up our PQC certificates repository to remove some of the old draft algorithm artifacts. Here's the information on the repo, and here we are: the list of contributors and our first-timers this time. All right. Thank you.

Speaker 1: Thank you. Up next, RATS... remote, Mike.

Mike: Yep. Hello.

Speaker 1: Hi. So, do you want to click yourself, or shall I click for you?

Mike: You can click for me, that's fine. So this work is RATS PKIX Evidence. This is remote attestation: essentially a file format for HSMs, cryptographic hardware, to prove to an outside party that it's in a good configuration state. Next slide. This is essentially one draft, and the authors figured it was time we had samples and test vectors for it. So we came to the hackathon with the goal of producing an open-source reference implementation that could also produce the appendix for the draft. Next slide. So obviously the thing we did was just feed the TXT version of the draft into Claude and ask it to make a reference implementation. This was actually wildly successful. Next slide. It produced really clean, elegant code, which wasn't entirely correct; we did need to make some changes, but as a starting point for a reference implementation it was extremely good. Next slide. Our conclusion is that feeding your draft into an LLM and getting it to produce a reference implementation is an extremely good way to check whether your spec is clear and covers all the edge cases: if an LLM can produce working code, then your spec is presumably well written. So this weekend, in about three hours of work, we got two independent implementations going, and our source code is available; it lives with the draft source. Thank you.

Speaker 1: Thank you. I'd almost ask for questions, but we don't have time. Okay, Task Discovery. There you go.

Speaker 23: Okay, I wish I'd known I could provide the draft to Claude and have it fix everything. What we're looking into is Task Discovery. The main idea: we looked into traditional agent discovery, which looks something like this. We have a discoverable entity, the agent, and the way we discover agents is through agent cards and a search process: you create your task, decide on its requirements, and based on those you search the repository of agent cards. We asked: is there an alternative way to do this? Here's what we're proposing. Rather than posting agent cards and having task owners search through them, task owners post their own tasks in the form of a discoverable object called a Task Card, and the agents are the ones doing the search. This way, task owners and their tasks become discoverable objects themselves. In this hackathon, we developed a platform where you can post tasks, and we showed a quick demo where agents can look up those tasks and decide for themselves, using their own intelligence (LLMs), which task is good for them. The next phase: can we build this as a complementary architecture? That's what we're hoping for. Final notes: we wanted to shed some light on other possible discoverable objects besides agent cards, in this case task cards; those discoverable objects should be separate from the discovery vehicle. And we want to explore establishing a Tasking Working Group within the IETF for AI discovery. These are some related work and drafts. We also have a side meeting tomorrow, AI Ecosystem, at 2:30 PM. Please join us. Thank you.
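The inverted model can be sketched as agents scanning a task board. The Task Card fields below are illustrative, not a proposed schema, and the skill-subset check stands in for the LLM judgment the demo used.

```python
# Task owners publish Task Cards; agents, not owners, do the search.

TASK_BOARD = [
    {"task_id": "t1", "description": "translate docs",
     "skills": ["translation"]},
    {"task_id": "t2", "description": "summarize an RFC",
     "skills": ["summarization"]},
]

def find_tasks(agent_skills):
    """An agent keeps the tasks it can handle; a real agent might ask
    an LLM to judge fit instead of this simple subset test."""
    return [t["task_id"] for t in TASK_BOARD
            if set(t["skills"]) <= set(agent_skills)]

print(find_tasks(["summarization", "translation"]))  # ['t1', 't2']
print(find_tasks(["summarization"]))                 # ['t2']
```

Note the symmetry with agent cards: the discoverable object changed, but the search machinery is the same, which is the separation the presenter argues for.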

Speaker 1: Thank you. Up next we have Bridging the Transparency Gap. Yep, right.

Speaker 24: Good afternoon, everyone. I'm Yuning from Huawei. Our hackathon project is on distributed remote attestation. The background is that in practice, remote attestation often happens across domains: usually there is a separate remote attestation service in each domain, and different domains have their own policies and their own verifier inputs. This creates two recurring problems. The first is cross-domain attestation transparency: one domain may need attestation artifacts, such as endorsements, from another domain. The second is that many verifiers may need multiple endorsements from different providers, and likewise providers may need to distribute endorsements or reference values to multiple verifiers. Our goal is to provide a publication mechanism to solve this, where attestation artifacts such as endorsements and reference values can be published, distributed, and reused efficiently. This is the first architecture we developed: distributed-ledger publication with off-chain attestation and verification. The attestation and verification flows follow the current RATS architecture, while the reference values and endorsements are published to the distributed ledger. We also have a second architecture; the difference is that the registration and verification logic also runs on the distributed ledger, for example through a smart contract, so verification can be done in a more transparent way. We did our demo and hackathon development with TLS attestation and Hyperledger Fabric. If you are interested, please come to our corner and we can show you the whole process step by step.
Thank you; we are looking for collaborators and feedback.

Speaker 1: Thank you.

Speaker 29: Hello everyone. My name is Tianyu Cui from Zhongguancun Lab. Today I will share our team's work on TrafficLLM, enhancing large language models for network traffic analysis. Applying LLMs to the traffic analysis domain is limited by the challenge of generalizing to heterogeneous traffic data, different downstream tasks, and new environments in dynamic networks. So we built TrafficLLM, a traffic-domain foundation model that learns generic traffic representations from heterogeneous raw traffic data. The framework contains three components. First, traffic-domain tokenization uses pre-prompts and heterogeneous feature extraction to build training data, and we train a traffic-domain tokenizer based on the BPE algorithm to get tokens from traffic data. Second, a dual-stage tuning pipeline separately fine-tunes the LLM with natural-language instructions and downstream-task traffic to learn instructions and traffic patterns. Third, extensible adaptation with PEFT splits different capabilities into different external parameters, which helps reduce GPU resources and training time. Finally, TrafficLLM was evaluated on 10 downstream tasks, and the code, papers, datasets, and models are all available to the community. That's all. Thanks for listening.

Speaker 1: Thank you.

Speaker 30: Hello everyone. I'm from Huawei, and I'm honored to share our project, the Agent Networking Framework. It is difficult for banking application parties to interconnect with banks due to the following issues: non-unified interfaces, inconsistent protocols, and incomplete service access. We developed an Agent program and an Agent Gateway program, implemented intelligent open-banking business based on the Agent and Agent Gateway, and simplified the interconnection between application parties. In our simulation, the sponsor agent provides users with the ability to query bank transaction records, the UnionPay agent shields the differences between bank agents and simplifies the interaction between the sponsor agent and different bank agents, and each bank agent returns the corresponding user transaction information based on the request. In our demo, the agent has the following functions: agent information modeling, registration with the agent gateway, task breakdown and execution, and discovering agents through the agent gateway. The agent gateway has the following functions: agent registration and discovery, information synchronization among multiple gateways, and agent traffic forwarding. This is the agent gateway page, and this figure shows how a sponsor agent queries bank account statements through the gateway. From this project, we gained valuable insights. First, compared with traditional API communication, semantic communication between agents provides stronger generalization capabilities, reducing adaptation cycles and costs. Second, an agent gateway can simplify the access and interconnection of cross-organization agents. In the future, we will continue to optimize the framework: the agent gateway will provide protocol translation to interconnect heterogeneous platforms and protocols, efficient information synchronization between gateways to simplify the discovery and interconnection of agents across large-scale organizations, and supervision and audit functions to ensure that agent communication is observable and traceable. That's all for my sharing. These are my team members. Thank you.

Speaker 1: Thank you very much.

Speaker 1: Thank you.

Speaker 31: Hello everyone. I'm presenting CCPipe, a Concurrent and Conflict-Free Pipeline for RPKI Relying Parties. We focus on systematic barriers in the RPKI data supply chain: traditional RPs block all updates from being distributed to routers until all data is synchronized and validated, delaying critical RPKI data updates. Our goal is to pipeline the RP validation run while preserving routing-state consistency and mitigating the router overhead this introduces. For consistency, we utilize a resource allocation rule; for router overhead, we aggregate updates and extend the VRP cache maintenance strategy. Here are some details of our design. The evaluation results show CCPipe reduces RPKI validation-run latency by up to 73% while introducing negligible router overhead. To explore standardization of CCPipe, we need more tests with more BGP router implementations. These are our team members; this is the first IETF meeting for all of us. Our code is available on GitHub. Thank you.
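The pipelining idea can be sketched as pushing per-publication-point VRP deltas as each point finishes validating, instead of waiting for a global barrier. This is an editor's toy under that assumption, not CCPipe's actual design; the per-point cache partitioning here merely illustrates why consistency needs care.

```python
def vrp_delta(old, new):
    """Announcements and withdrawals between two VRP sets."""
    return {"announce": new - old, "withdraw": old - new}

# Router cache tracked per publication point, so one point's update
# cannot withdraw VRPs owned by a point that is still being validated.
cache = {"pp1": {("AS64500", "192.0.2.0/24", 24)},
         "pp2": {("AS64999", "203.0.113.0/24", 24)}}

# pp1 finishes validating early; its delta is pushed immediately,
# while pp2 is still syncing.
new_pp1 = {("AS64500", "192.0.2.0/24", 24),
           ("AS64501", "198.51.100.0/24", 24)}
delta = vrp_delta(cache["pp1"], new_pp1)
cache["pp1"] = (cache["pp1"] | delta["announce"]) - delta["withdraw"]

full_view = set().union(*cache.values())
print(len(full_view))  # 3: pp2's VRP survives the incremental update
```

The latency win comes from not serializing all publication points behind the slowest one; the consistency rule decides which deltas are safe to ship early.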

Daniel: Hi there, I'm Daniel. XMPP is an IETF standard for federated instant messaging. Extensions to XMPP are managed not by the IETF but by the XMPP Standards Foundation. There is an XMPP extension that lets us query what time zone a contact or server is in. The extension is actually from 2006, and responding to those requests is widely supported in the XMPP ecosystem, but displaying that information is limited or even nonexistent. For example, in the popular XMPP client Gajim, you have to dig very deep into some menu to find that information. So during the hackathon, two XMPP clients, Dino (a Linux client) and Conversations (an Android client), worked on improving that a little. We refactored our code, added unit tests to the responding part, added support for requesting a contact's time zone information, and added display of what time zone a contact is in. And this is how it looks: if it's nighttime for my contact, I show a little warning in the chat screen so that I can think twice about whether or not I want to get in contact with them now. Thank you.

Speaker 1: Thanks. Up next is IVY, I think. Yep, it is.

Speaker 25: Good afternoon, everyone. This is Yan Xia from China Unicom. I'd like to share our project, which focused on the interoperability of the IVY model. The interop testbed is a multi-party testbed: two controllers from China Unicom and Ciena act as the inventory system, one controller from Huawei acts as the network controller, and the network is real hardware from the Wuhan lab. We use the IVY model to get real-time resource information from the controller. These are the IVY model and draft we used in this project, and this is part of the IVY system in China Unicom's operation, including the resource view and resource management information. We have 11 members in total from three different companies, and four of us are first-time hackathon participants. All materials are uploaded to our GitHub. Thank you.

Speaker 1: Thank you. Next up is I2I-CF.

Speaker 26: Hello everyone, I'm Shuduo Wang from Sungkyunkwan University. Our champions are Jehoon Jeong and Shuduo Wang, and our project is the Interface to Network Computing Functions project. This slide shows our hackathon poster. Our goal is to make a robot car that receives an intent, detects obstacles, and avoids them safely using camera and lidar through the I2ICF framework. Our I2ICF framework works in three steps: first, the user's intent is translated into network policies; second, the 5G core delivers them to the robot cars; and last, each robot car detects obstacles, stops or detours safely, and reports back. Our main goal is collision avoidance. This is our previous framework: if a person is too close, the robot car activates the stop function. Now we have added path planning at the edge server. The blue boxes show the added process: the user intent is sent to the edge server, and the path-planning state is sent to the robot car, activating the stop or detour function. Here is a demo: we built a real-time perception-to-avoidance system, where the robot car's camera and lidar measure distance and the robot car automatically plans a moving path when an obstacle is nearby. Our next step is to implement the intent translator and policy translator, and to extend the I2ICF testbed for the next IETF. Here is our source code on GitHub and our demo video on YouTube, and this is our team. Thank you very much for listening.

Speaker 1: Thank you very much. One moment. I will refresh the slides. All right. Thanks. Let's see. Next up: 5G I2NSF. There you go.

Speaker 27: Hello, my name is Jiseo Bang from Sungkyunkwan University. Our champion is Jaehoon Jeong, and I'm going to talk about integrating I2NSF with 5G networks. This is our poster; you can see it on our table. Our goal is to build an edge-based security system in 5G networks with I2NSF, and you can find the details in the Internet-Draft. This is the whole structure of our 5G I2NSF system; we are integrating the security system with the 5G core network. The original I2NSF takes a cloud approach, with the network security functions in the cloud, which causes long delays because traffic must detour through the I2NSF cloud for security service. We instead take an edge approach and place the NSF inside the UPF, which shortens the delay by taking the optimal security service path via the I2NSF edge. This is the whole structure of our 5G I2NSF testbed, and what we learned in building 5G I2NSF is about the cloud-native environment. This is a demonstration of what we've done so far. It is open source, and you can see the video clip too. Our next step is to design and implement the 5G protocol procedures for I2NSF, and especially to have the NSF launched proactively before the security policy migration. This is our hackathon team. Thank you so much.

Speaker 1: Very much. All right. I think we have one more. Oh, yours. Oh, sorry. Again. Your IP please? Yeah. Okay, I will go back and we go with the flow. Okay, is this yours? Yes.

Speaker 28: Good afternoon. My name is Cao Qian from Zhongguancun Laboratory. I'm going to walk through a very brief demo about making SAV information visible with IPFIX. So what is SAV? It's short for Source Address Validation, a simple checking mechanism inside network routers. When a packet arrives, the router checks its source IP address and asks whether that address is allowed on that interface. If not, the router considers the packet spoofed and may drop it. This SAV mechanism helps prevent source address spoofing attacks at the routers. But here's the problem: SAV enforcement is a black box. What happens in the data plane is totally silent, so operators have no idea what was just dropped and why. We want to open that black box by designing new IPFIX information elements to export SAV information through the standard IPFIX telemetry protocol, and we implemented this hackathon project to prove the idea. Here are the demo results: in the first data records, we see that on interface 5102, 256 packets were dropped. Why? Because no SAV rule in the allowlist matched that interface. We export these SAV rules to the operators as evidence of why the flow was dropped. In the next records, we see more detailed information about the flow: its source IP address and the destination IP address it was trying to reach. Together, these two records give operators the attack source, the victim, the reason, and the action taken. So that's it. Thank you.
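The per-interface allowlist check and the drop record the speaker describes can be sketched like this. It is a hypothetical illustration: the allowlist contents and the `savDropReason` field name are invented, and the record fields are only loosely modeled on IPFIX information elements, not on the actual IEs proposed in the draft.

```python
import ipaddress

# Hypothetical SAV sketch: each ingress interface maps to the source
# prefixes allowed on it (documentation prefixes used for illustration).
ALLOWLIST = {5102: ["192.0.2.0/24"]}

def sav_check(ifindex: int, src_ip: str) -> dict:
    """Validate a packet's source address against the interface allowlist
    and, on a drop, build an IPFIX-style export record explaining why."""
    prefixes = ALLOWLIST.get(ifindex, [])
    addr = ipaddress.ip_address(src_ip)
    if any(addr in ipaddress.ip_network(p) for p in prefixes):
        return {"action": "forward"}
    return {
        "action": "drop",
        "ingressInterface": ifindex,
        "sourceIPv4Address": src_ip,
        "savDropReason": "no-matching-allowlist-rule",  # invented field name
    }
```

A spoofed source such as `198.51.100.7` on interface 5102 would produce a drop record carrying the evidence, while `192.0.2.5` would be forwarded silently.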

Speaker 29: Hello everyone. I'm Kun Liu from the Chinese Academy of Sciences. Today I would like to introduce a privacy-preserving technology: Fully Homomorphic Encryption (FHE), also known as the holy grail of cryptography. In traditional encryption schemes, once data is encrypted, we can't process it in any way unless it is first decrypted. FHE breaks this limitation: it allows direct computation on data while it remains encrypted. Here are some applications of FHE in machine learning. The first is inference on the MNIST database under homomorphic encryption; it takes less than one second to infer and recognize a single image, both secure and efficient. Beyond this, FHE is fully capable of supporting more complex deep neural network computations, such as the well-known ResNet, which is my second demonstration. As you can see, inferring a single image takes about 10 minutes. Additionally, directly transmitting encrypted data to the server faces a challenge: it significantly increases communication costs. One method to solve this problem is transciphering; here are some examples of transciphering using Trivium and AES. If you are interested in discussing the application of FHE in AI further, you are more than welcome to join our side meeting, Tuesday at 11:15. Thank you everyone.
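The core idea of "computing on data while it stays encrypted" can be shown with a toy Paillier cryptosystem. To be clear about the hedge: Paillier is only *additively* homomorphic, not fully homomorphic like the schemes the speaker uses, and the tiny parameters below are wildly insecure; this is purely an illustration of the principle.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic only (NOT FHE),
# with insecure demo-sized primes. Illustration of the principle only.

def keygen(p=101, q=113):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # valid because we pick generator g = n + 1
    return (n,), (lam, mu, n)          # public key, private key

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:         # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

def add_encrypted(pk, c1, c2):
    (n,) = pk
    return (c1 * c2) % (n * n)         # ciphertext product = plaintext sum
```

Multiplying the ciphertexts of 12 and 30 yields a ciphertext that decrypts to 42, without the party doing the multiplication ever seeing either plaintext; FHE schemes extend this to arbitrary additions *and* multiplications, which is what makes encrypted neural-network inference possible.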

Speaker 1: Next up, Open Gateway.

Speaker 30: Hello everyone, I am Hu Tianshuo from Tsinghua University. Today I will present our project, Open Gateway, an open source agent gateway for cross-domain agents in complex collaborative tasks. First, some motivation. Increasingly, real-world tasks require multiple agents to work together, for example in smart factories. These scenarios bring challenges like hardware heterogeneity, workflow continuity, real-time response, and protocol translation, and most existing products struggle to meet them. Our proposed solution, Open Gateway, consists of four main parts: heterogeneous connectivity, task orchestration, working memory, and security assurance. This is our detailed project architecture. The current version adds three new capabilities to the gateway: working memory fingerprints requests and enables memory reuse; the semantic router routes messages based on complexity; and the observability dashboard monitors all of this in real time. This is our project demo: you can see the model output, and this is our monitoring dashboard. We are building this in an open source way: task orchestration and working memory are already available, and more functions such as agent registration and automatic protocol translation are coming soon. We have also submitted an active IETF draft, and we hope to grow this work together with the community. Thank you very much.
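The two gateway capabilities named above, request fingerprinting for memory reuse and complexity-based semantic routing, could be sketched as follows. This is a hypothetical sketch under stated assumptions: the class and function names, the word-count complexity heuristic, and the model labels are all invented here, not taken from the Open Gateway code.

```python
import hashlib
import json

class WorkingMemory:
    """Hypothetical sketch: fingerprint each request and reuse the cached
    result when an identical request arrives again."""

    def __init__(self):
        self.cache = {}

    def fingerprint(self, request: dict) -> str:
        # Canonical JSON so key order doesn't change the fingerprint.
        canonical = json.dumps(request, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

    def get_or_compute(self, request: dict, compute):
        key = self.fingerprint(request)
        if key not in self.cache:
            self.cache[key] = compute(request)   # miss: do the work once
        return self.cache[key]                   # hit: reuse the result

def semantic_route(request: dict) -> str:
    """Toy complexity heuristic (invented): long multi-step intents go
    to a heavier model, short ones to a lighter model."""
    return "large-model" if len(request.get("intent", "").split()) > 10 else "small-model"
```

With this design, two agents submitting the same task pay for one model call instead of two, and short requests never occupy the expensive model.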

Speaker 1: Right. Up next, the Agent Communication Gateway.

Speaker 31: Hello everyone. I hope to be the last gateway today, and its name is not "open"; you can call it the welcome reception gateway. Yeah. Okay, so the problem is easy to state: we want to achieve cross-domain grouping, communication-scope-based routing, dynamic tool invocation and registration, and parallel tool invocation, so you can invoke tools simultaneously to decrease workflow completion time. What we did: we built a platform for agents, tools, and models to register within an administrative domain, and we built gateway prototypes for information synchronization, grouping-channel implementation, and MCP server traversal. What we learned is that the whole agent problem space covers many levels. At the platform level, you need to manage and exchange trust across different administrative platforms. At the network level, if you want to achieve complex scope-based routing, you need to design the network framework and introduce network components like gateways to handle state, control policies, and manage traffic. And the bottom layer is the protocol layer: some of the protocols we design can, I think, be handled by existing working groups, for example OAuth for authorization and token management. Our next steps for the implementation will be to introduce session protocols for better calls, and DNS-based discovery for interoperability and better scalability. Okay, please contact us, and I hope everyone enjoys the week in Shenzhen. Thank you.

Speaker 1: Updated slides. I think, yeah. It shows a revision one. I think it's a... you can call it a success.

Speaker 32: Hello everybody. This is about integrating YANG Push into a message broker, to enable next-generation network analytics applications. This is not the first time we are here at the hackathon; there was a lot of preceding testing, mainly focusing on the publisher capabilities on the network nodes. This time we are focusing more and more on the message broker integration itself. We have multiple implementations on the publisher side, but also on the receiver side, the schema registry, and the message broker producer and consumer side. Everything is on GitHub: you will find the test results, the open source code, and more details on the verification of the different system components, so please feel free to have a look. These are all the IETF documents related to that system architecture and how they relate to each other. We are continuing on the publisher implementation; with Arrcus we have another implementation which has already implemented most of the documents. We are also making progress on what we can subscribe to: besides the OpenConfig and vendor-specific YANG models, we now have the first implementation supporting IETF and IEEE YANG models for operational metrics, and you will see in upcoming hackathons how we progress there.
On the message broker side, what has been implemented that is noteworthy is the net-cost real-time data collection: all the different system components have now been integrated there, with the Apache Kafka integration, the schema registry, and the serializers, and with Ciena Blue Planet and Cisco Crosswork we have the first two YANG message broker consumers we are testing interoperability with.
In the last IETF hackathon we had some open items, and all of them could be addressed. Many thanks to Paolo for all the implementation work he did on pmacct, and to Michal on libyang; there were many changes in the area of YANG structure and anydata validation, and also extensions in the YANG schema registry and in the net-cost data collection. We have new open items: we couldn't finish all the test cases we wanted to, so a few things remain. As a teaser for the next IETF hackathon, Maxon is working on a comparison of the different YANG libraries and their feature coverage, and we also intend to make some enhancements in the net-cost data collection itself. Many thanks to everybody who collaborated here. If you want to know more: Friday, 9:00 to 11:00, at the NMOP working group.
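The publisher-to-broker path the speaker describes can be sketched at its simplest: take a YANG Push `push-update` notification (RFC 8641 carries the subscription `id` and the pushed datastore contents) and publish it to a per-subscription topic. To hedge: `BrokerStub` stands in for a real Kafka producer, the topic naming scheme is invented here, and a real pipeline would also go through the schema registry and a serializer rather than raw JSON.

```python
import json

class BrokerStub:
    """Stand-in for a Kafka producer in this sketch; it just records
    what was sent to which topic."""

    def __init__(self):
        self.topics = {}

    def send(self, topic: str, value: bytes):
        self.topics.setdefault(topic, []).append(value)

def publish_push_update(broker, notification: dict) -> str:
    """Route one YANG Push push-update notification to a per-subscription
    topic (topic naming scheme invented for this sketch)."""
    update = notification["ietf-yang-push:push-update"]  # RFC 8641 notification
    topic = f"yang-push.subscription-{update['id']}"
    broker.send(topic, json.dumps(notification).encode())
    return topic
```

Consumers such as the ones being interop-tested would then subscribe to these topics and deserialize the updates against the registered schema.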

Speaker 1: Thank you. Okay, thank you. So, again, thank you for your flexibility and for keeping all the presentations within the given time; really impressed. It's 4:00, and we did a great job all together. Before I close, I want to mention two things. Tomorrow at 6:00 we have the Hackdemo Happy Hour; if you want to present your project to the IETF community at large, please put your name and your project on the wiki before 1:00 tomorrow, so we can prepare tables, flip charts, and so on. The other thing: I want to thank our sponsors, CNNIC and ICANN, for sponsoring Running Code. They make this weekend possible by catering drinks and food, with all the organization by the Secretariat; I also want to thank the Secretariat for doing a great job for us. And that's it. Enjoy the week, see you through the week, and... thank you.