Session Date/Time: 16 Mar 2026 06:00
This is the verbatim transcript of the entire audio recording for the ICNRG session at IETF 125.
Dave Oran: So, welcome to ICNRG. We have a pretty full session this time, so we're going to be running with careful timing. We have a whole bunch of research talks, which will be the focus of the session. We won't be talking about drafts or progression of documents very much. So, let me start.
(00:24-21:07 - Technical difficulties, silence, and organizational setup)
Dave Oran: Okay, I'm hearing we may have had an audio issue. Can you hear me? I'm getting a chat saying that there's an audio issue. Oh, hang on. Do I need to start over? Oh dear. All right, I'm... now that we have audio, I will start over. Sorry about that.
Quickly, the IRTF has a different intellectual property policy from the IETF, which is mostly the same, with some differences. Please check to make sure that you are following those rules, particularly if you're presenting in these sessions or submitting documents.
We make audio and video recordings of all the meetings. By participating online or, of course, in the main room, and by turning on your camera and microphone, you give implicit consent to appear in the recordings.
The IRTF has a privacy and code of conduct that you should be familiar with, please. If you have any doubts, please check the links on the slide.
Now, in general, ICNRG is one of the research groups of the IRTF, and we are a research organization, not a standards development organization. We publish both informational and experimental documents, but that's not our only goal. Our primary goal is to coordinate with the broad research community and collaborate with them on exploring research issues that are related to the internet. And there is a primer for IETF participants to understand how the IRTF has similarities to the way the IETF works, and some differences. So, please check that RFC if you're interested.
The session's being recorded. The general tips: please sign in with Meetecho, and for remote participants, please make sure your audio and video are off unless you're actually presenting during the session. That will be ditto true of us as chairs.
So, that's the introduction material. What I'd like to note here is we have a new co-chair, Dirk Kutscher, who is now chair of the entire IRTF, has been very careful about selecting a wonderful new co-chair, who will introduce himself momentarily. And also thanks to Jinchao Li, a student at Hong Kong University of Science and Technology, Shenzhen, who's agreed to be the note taker for our meeting. We do have a note taker website, and any of you are welcome, of course, to contribute to the notes, which I'll go over with Rio after the meeting before we get them published. So, Rio, over to you.
Rio Chiariello: All right. Hello, my name is Rio. I'm the new co-chair of the ICNRG. My main area of research has been network protocols, ICN, IPv6, mobility, multi-homing, and all sorts. I'm currently a lecturer at the University of St. Andrews. I was previously a postdoc, which is when I started engaging with the ICNRG.
So, do you want to pass... can you pass the control? I'll just take back the control myself. There we go. All right. So, without further ado, here's the agenda for today. We will then start with the talk from Kazuhisa Matsuzono on in-network retransmission control in ICN.
(Referencing: In-Network Retransmission Control in ICN)
Rio Chiariello: Let me switch over slides. All right. And I shall pass slide control. All right. Over to you. Kazuhisa should turn on his mic.
Kazuhisa Matsuzono: Hello. Can you hear me? Hello? Can you hear me?
Rio Chiariello: Yep, you... we can hear you just fine.
Kazuhisa Matsuzono: Ah, okay, thank you. Okay, I'm Kazuhisa Matsuzono from NICT, Japan. We revisited in-network retransmission scheme in ICN, so let me talk about this work.
Recently, we've seen a growth of low-latency and high-quality video applications like 4K or higher quality real-time video streaming. So, for example, teleoperation of robots requires real-time video feedback from the publisher, which generally requires a network bandwidth of 10 to 100 Mbps. And another important aspect we need to consider is that the end-to-end latency should be less than 200 ms. So, in order to enable reliable and safe operation, it's important to recover data loss in the network. But it's challenging because of this kind of latency requirement.
So let's now consider end-to-end retransmission-based loss recovery, which is commonly used by protocols such as TCP. The publisher initiates data retransmission based on feedback from the end consumer. In the end-to-end case, the recovery delay is likely to be long. In the hop-by-hop case, which we call in-network retransmission, the loss recovery is performed locally, so the hop-by-hop case is expected to accelerate loss recovery.
So let's now consider ICN, which adopts consumer-driven name-based communication. The consumer specifies the latest data name to obtain the real-time data from the publisher. ICN-based communication supports in-network caching, which enables intermediate nodes to retransmit the lost data from the node cache. ICN also supports hop-by-hop flow-aware transport, which enables intermediate nodes to perform in-network retransmission with consideration for the application-specified latency requirement. So, ICN can potentially accelerate in-network retransmission, but we need to consider how we can suppress duplicated data transmission while satisfying the latency requirement.
So there are some technical considerations, like how to detect loss, which intermediate node should send recovery interest, and when and how many times recovery interest should be transmitted. So basically, the challenge of this work is to deal with the trade-off between the network cost and recovery delay.
So in this context, we proposed an in-network retransmission scheme in ICN. Let me briefly explain the key features. The proposed scheme introduces a metric called the recovery budget. The total available recovery budget is determined by the application-specified acceptable latency and the end-to-end propagation delay. The consumer allocates the available recovery budget to each link, so that each link can execute per-link retransmission-based loss recovery. More specifically, the downstream node in this figure sends the recovery interest. This interest is not relayed to further upstream nodes; it's only used on this link. Based on the budget allocated to this link, the upstream node switches between two relay modes for recovery data.
One relay mode is stop-and-wait, where the received recovery data is not relayed to the downstream node; the recovery data is relayed downstream only when a recovery interest is received from the downstream node. In this case, duplicated data transmission is not likely to occur, but the recovery delay is likely to be long. The other relay mode is fast relay. In contrast, in this mode, the received recovery data is immediately relayed to the downstream node, so the recovery delay is likely to be low. But in this case, duplicate data reception is more likely to occur, because after relaying the recovery data, if this node receives a recovery interest from the downstream node, the upstream node retransmits the data again.
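The relay-mode trade-off just described can be sketched in a few lines of Python. This is only an illustration, not the scheme's actual decision rule: the 2×RTT threshold and all names here are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class LinkRecoveryState:
    """Per-link state at the upstream node (all names are illustrative)."""
    allocated_budget_ms: float  # recovery budget the consumer assigned to this link
    link_rtt_ms: float          # measured one-hop round-trip time

    def relay_mode(self) -> str:
        """Choose how to relay recovery data on this link.

        Stop-and-wait holds recovery data until a recovery interest arrives,
        avoiding duplicates but spending roughly an extra link round trip of
        budget; fast relay forwards immediately, trading possible duplicate
        transmissions for lower recovery delay.
        """
        # Assumed rule for this sketch: if the link's budget can absorb an
        # extra round trip, prefer stop-and-wait; otherwise fast relay.
        if self.allocated_budget_ms >= 2 * self.link_rtt_ms:
            return "stop-and-wait"
        return "fast-relay"
```

A link with a generous budget (say 20 ms against a 5 ms RTT) would hold data in stop-and-wait, while a budget-starved link would fall back to fast relay.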
As I mentioned, the consumer allocates the available recovery budget to each link. Ideally, it's best for each link to perform stop-and-wait to avoid duplicated data reception, but that needs more recovery budget, and in such a case the total available recovery budget can run out, which means the consumer cannot receive the recovery data within the specified acceptable latency. That's why the consumer collects two pieces of budget information: one is the budget consumption, and the other is the budget request. Budget consumption means the estimated recovery delay when obtaining the recovery data with a probability of over 99%, and budget request means the estimated recovery delay when performing stop-and-wait. As we can see in this figure, the intermediate nodes insert the two budget values into the real-time data so that the consumer can collect them.
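The budget split across links can be illustrated with a simple greedy pass. This is a sketch under stated assumptions, not the paper's actual allocation algorithm: links are simply funded in order (closest to the consumer first) from a total budget taken as the acceptable latency minus the end-to-end propagation delay.

```python
def allocate_recovery_budget(acceptable_latency_ms, e2e_propagation_ms, budget_requests):
    """Greedy per-link budget allocation (illustrative sketch).

    budget_requests lists each link's budget request (the recovery delay it
    would need to run stop-and-wait), ordered from the link closest to the
    consumer to the link closest to the publisher.  A fully funded link can
    run stop-and-wait; a link granted less than its request would fall back
    to fast relay.
    """
    # Total budget: how much recovery delay the application can tolerate
    # beyond the plain end-to-end propagation delay.
    remaining = max(0.0, acceptable_latency_ms - e2e_propagation_ms)
    grants = []
    for request in budget_requests:
        grant = min(request, remaining)
        grants.append(grant)
        remaining -= grant
    return grants
```

For example, with a 200 ms acceptable latency and 80 ms of propagation delay, three links requesting 40, 50, and 60 ms would be granted 40, 50, and 30 ms: the last link's budget runs short, so it would operate in fast-relay mode.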
(01:08:19-01:21:01 - Audio cutout, silence, and technical trouble)
Rio Chiariello: I think we might have had a presenter issue. Oh, he's back. Kazuhisa, can you hear us okay? He has to get off mute. When he came back in, he must be muted. The audio is still given to the individual. He still has audio rights, I think. So are you muted, Kazuhisa? It's not... He's talking and no audio. I see the border lighting up, but I don't hear anything. Is your mic turned way down?
Kazuhisa Matsuzono: Hello? Can you hear me?
Dirk Kutscher: He's now leaving and then getting back in again. Just a moment, please.
Rio Chiariello: Okay. We'll give it a minute.
Dave Oran: Yeah, we're on a very tight schedule, so let's try and catch up.
Dirk Kutscher: Ah, sorry. Can you hear me? Can you hear me?
Dave Oran: Yes! Yes!
Kazuhisa Matsuzono: Ah, very sorry. I couldn't enable the mic, but it's okay now. So, okay. You need to go back... yeah. All right, over to you.
Kazuhisa Matsuzono: Okay, and I'm moving to the slide. Oh, okay. So, the reason why the allocation prefers links closer to the consumer is that nodes closer to the consumer are more likely to detect the latest data loss, which makes duplicate data reception more likely there. So, as I mentioned, based on the allocated budget, each link performs stop-and-wait or fast relay.
(01:23:15-01:24:26 - Brief audio cutout, Kazuhisa speaks with heavy static and glitching)
Rio Chiariello: I'm wondering if we should jump to the in-person session and then we'll come back to the remote sessions.
Dave Oran: Well, we have one in-person talk.
Kazuhisa Matsuzono: Ah, sorry. Can you hear me?
Dave Oran: Yes.
Rio Chiariello: Can we try maybe... maybe you should restart your mic. Yeah. And then also maybe turn off the video on your end for this time just to make it easier. Okay. Let's try that and see how it goes.
Kazuhisa Matsuzono: Okay. I've used a lot of time, so I want to summarize my presentation. I conducted a prototype implementation using Cefore, which is an open-source software platform. Everyone can freely download the source code via this URL.
So I conducted an experimental evaluation using the prototype implementation, and we observed two performance metrics: one is the successful loss recovery ratio, and the other is the duplicated data reception ratio. And we compared our scheme with two other schemes: a baseline scheme and an existing scheme, like this.
So we investigated the impact of differences in acceptable latency. The left graph shows the successful loss recovery ratio. Our proposed scheme achieves more than 90% successful loss recovery, a higher recovery ratio than the other schemes. Our proposed scheme also effectively suppressed duplicated data reception, especially in the case where the acceptable latency is relatively high. This means that our proposed scheme achieved well-balanced performance according to the acceptable latency, enabling effective and efficient in-network loss recovery.
So, in summary, like this; and as future work, we'll develop an effective rate control scheme using this in-network retransmission scheme. Thank you so much.
Rio Chiariello: Right, thank you for the talk. Are there any questions? If you have any questions, please join the queue.
(Silence)
Rio Chiariello: Okay, seems like there's no questions. So let's move on. I'll take back the control for the moment. Thank you very much, Kazuhisa-san.
Next talk is by Yohei Okamoto on reflexive forwarding implementation in Cefore.
(Referencing: IETF125_icnrg_3_Reflexive-Forwarding_in_Cefore)
Rio Chiariello: For that, I need to give screen sharing control. Right.
Yohei Okamoto: Okay, I'll start my presentation. Today, I'd like to talk about the reflexive forwarding implementation on Cefore. I am Yohei Okamoto from ID Corporation, working as a member of the Cefore project.
Here is an outline of today's talk. We briefly explain the technical specification of reflexive forwarding and talk about the implementation in Cefore. We then show a demo of how reflexive forwarding works on Cefore.
Reflexive forwarding uses four messages: trigger interest (TI), reflexive interest (RI), reflexive data (RD), and trigger data (TD). Although the trigger interest (TI) is forwarded based on each forwarder's FIB, the reflexive interest (RI) is forwarded by the reflexive name prefix (RNP) specified in the TI. As you see in the left figure, when the consumer wants to push data, it sends a TI toward a producer by routable name. The TI includes the reflexive name prefix, which is uniquely assigned by the consumer and used as the name of the RI. When the producer receives the TI, it initiates the RI operation using the RNP. RI can support chunking if necessary.
Okay, so from here, I'd like to explain the operation using the Cefore implementation. First, the consumer sends the trigger interest. As you can see, the trigger interest has a name of abc.com plus an LMP ID, and the LMP ID can be generated by UUID, for example. This trigger interest is forwarded to forwarder A, which generates a TI-PIT (trigger interest PIT) entry, as for a general interest. It also generates a template PIT (T-PIT) entry, to be referred to by the reflexive interest transmitted later. The trigger interest is forwarded toward the producer, and then the producer sends back the reflexive interest.
This slide shows the returning packet, which is a reflexive interest. The reflexive interest is sent by the producer using the LMP ID. The routable name is excluded, and only the LMP ID itself is the name of the data to be retrieved by the producer. So this is something like a subscribe request. The producer sends the reflexive interest (RI) toward the consumer based on the T-PIT entry. When, for example, forwarder B receives the RI, it can just copy the T-PIT as a regular PIT entry, named the RI-PIT. The T-PIT is actually just a reference; it isn't used as a forwarding entry, it's only a reference from which to generate the RI-PIT. The reflexive interest is then forwarded back toward the consumer.
This slide shows the same reflexive interest, but with chunking: it includes a chunk ID. In that case, the producer sends reflexive interests with specified chunk numbers, like chunk 1, chunk 2, and so on. The difference from the previous slide is that when we support chunked content, the T-PIT, which consists of the LMP ID only, is copied to the RI-PIT together with the chunk ID, like chunk=1 or chunk=2. So for each chunked reflexive interest, each forwarder generates an RI-PIT entry with the chunk ID, and the interest is forwarded toward the consumer.
Next, we will explain a pub/sub operation using Cefore. This demo consists of three nodes: consumer, forwarder, and producer. The forwarder runs Cefore's forwarding component. First, the producer runs the cefsub5 command and waits for a TI from the consumer. Then the consumer runs the cefpub5 command with a name, demo1, to push data. After the consumer initiates the TI operation, the TI is sent toward the producer. The forwarder then creates the TI-PIT and T-PIT entries. After the producer receives the TI, it initiates the RI operation using the RNP. When the forwarder receives the RI, it creates the RI-PIT using the T-PIT, and the RI packet is sent toward the consumer. When the consumer receives the RI, it immediately sends the corresponding RD. The RD is forwarded based on the RI-PIT. If data is chunked, RIs and RDs are transmitted repeatedly in parallel. When the producer completes receiving the RDs, it initiates the TD operation. When the consumer receives the TD, it terminates this push operation.
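The pub/sub exchange just walked through can be summarized as a message sequence. This sketch serializes the RI/RD exchanges that the demo runs in parallel, and the routable name "abc.com/push" and field layout are illustrative, not Cefore's wire format.

```python
def reflexive_push_sequence(rnp, num_chunks):
    """Enumerate the messages in one reflexive-forwarding push.

    rnp is the consumer-assigned reflexive name prefix (e.g. a UUID);
    names are illustrative only.  The demo transmits the RI/RD pairs in
    parallel; they are serialized here for readability.
    """
    # The trigger interest travels by routable name and carries the RNP.
    msgs = [("TI", "consumer->producer", "abc.com/push/" + rnp)]
    for chunk in range(num_chunks):
        # The producer pulls each chunk with a reflexive interest named by
        # the RNP; the consumer answers with the matching reflexive data.
        msgs.append(("RI", "producer->consumer", rnp + "/chunk=" + str(chunk)))
        msgs.append(("RD", "consumer->producer", rnp + "/chunk=" + str(chunk)))
    # Trigger data closes the exchange and consumes the TI PIT state.
    msgs.append(("TD", "producer->consumer", "abc.com/push/" + rnp))
    return msgs
```

For a two-chunk push this yields the TI, RI, RD, RI, RD, TD order visible in the demo's packet capture.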
Today, we will show you two demonstrations. The first one is a demo pushing a simple small file. You can see the sequence of TI, RI, RD, and TD in the packet capture. Now, we will play the first demo video. Please look at the producer in the top right: the destination directory is empty. I'll execute the cefsub5 command. Next, please look at the consumer in the top left: I'll run the cefpub5 command, and then the packet transfer will begin. Please look at the bottom: you can see the TI, RI, RD, and TD packets have been exchanged. This is the RNP value. Finally, we can confirm that the producer has received the pushed text.
The next one is a push operation sending large chunked data; we will see that RIs and RDs are transmitted many times. Now, I will play the second demo video. Similarly to the first demo, let's confirm that the producer's directory is empty. Once I run cefsub5 on the producer and cefpub5 on the consumer, the packet flow will begin. Here is the RNP value for this session. You can see that the file has been successfully generated on the producer side. I will play the pushed video using VLC. As you can see, it plays without any issues. That concludes my demonstration. Thank you very much for your attention.
Rio Chiariello: Thank you very much. Because of the screen share there was a slight choppiness, but I trust that that is because of the screen share, not because of the actual data transfer. Does anyone have any questions? Please join the queue using Meetecho if you do. Not seeing any comments on the Zulip either. Suddenly we're catching up with time after the first delay. Good! All right, thank you very much. Let's move on to another talk.
The next talk is by Kenji Kanai, ICN service mesh for plug-and-play ICN.
(Referencing: ICN Service Mesh for plug-and-play ICN)
Kenji Kanai: Hi, hear my voice? Can you hear my voice?
Rio Chiariello: Yep.
Kenji Kanai: So actually, according to the meeting agenda, maybe the next one is the Information-Centric Wireless Sensor Network Platform talk. Oh, you changed? Ah, you changed. Okay, okay. Sorry. Okay, so now let's start my talk.
Hi, thank you for this opportunity to talk with you today. My name is Kenji Kanai from the Waseda University, Japan. Today, I'd like to talk to you about ICN service mesh for plug-and-play ICN. And this work was partly supported by the NICT, actually the Asada-san's group. And the position of this talk is actually not about research aimed to improve ICN performance, but rather research conducted from the perspective of an ICN user.
So here is today's content. Let me first give the background introduction, and then two topics: the first is the ICN service mesh, and the second is the DePIN-based Co-Digital Twin. DePIN is a Decentralized Physical Infrastructure Network, and Co-Digital Twin is co-creation of the digital twin. I think these two terms are very new, so I will talk about our vision later. Finally, I conclude my presentation.
So let me start with the introduction. As you know, blockchain and distributed ledger technologies are currently widespread, so the Web3 era is coming. In Web3, it is said that the internet structure shifts from a centralized architecture to a decentralized architecture. So what is the benefit of this kind of shift for users? Users can manage and operate their own data, rather than having it collected by the platformer.
The network infrastructure also follows this trend toward decentralization. There are two examples. The first is decentralized network storage, for which the current well-known de-facto standard is the InterPlanetary File System (IPFS). Most Web3 applications use this kind of software. In IPFS, broadly defined, Information-Centric Networking is adopted as the network protocol: not pure CCN or NDN, but IPFS uses content identifiers, or content names, to register or retrieve data from the internet.
Another example is the Decentralized Physical Infrastructure Network (DePIN). So what is DePIN? DePIN aims to build and maintain decentralized infrastructure using distributed ledger technologies, and the infrastructure management is based on a sharing-economy business model. So what does DePIN realize? Currently the user only uses the network, but with DePIN, users can join the project and build and manage their own infrastructure. This is very interesting.
As you know, ICN is a very attractive network protocol, and we think ICN could become a good network protocol to support various Web3 applications, including decentralized network storage, distributed computing, and in-network computing. AI is currently a very hot topic, and AI agents or AI chaining are also Web3 applications, I think. Another is a trust network for DePIN and the DePIN-based Co-Digital Twin. This is our current main activity, and I will introduce it later.
Now, the user's perspective regarding ICN implementations. There are several ICN implementations, like NDN and Cefore in Japan, but few have reached the level of providing an application programming interface. This means it is very difficult for application developers to develop applications based on ICN: developers currently must implement the ICN functionality itself in addition to the application itself. Ideally, the communication functionality should be transparent to the application, meaning that the upper layers, for example the application layer, should not be aware of the network protocol of the lower layers. In addition, it should be possible to seamlessly replace the protocols used in existing applications, like current TCP or HTTP, with ICN. If applying ICN requires re-implementing the application, developers have no motivation to use ICN, even if ICN is very good. So the research question is: is it possible to implement ICN in a plug-and-play manner?
We think we can adopt the service mesh approach. A service mesh is one of the cloud-native technologies; it can mediate the communication between services and resolve operational challenges regarding communication, like traffic control, fault tolerance, security and access control, and observability. Istio and Envoy, the well-known cloud-native software, are the de-facto standards for the service mesh, but this software primarily supports only HTTP.
We reuse this concept and design an ICN service mesh that aims to provide ICN functionality transparently to applications. Our goals are to enable plug-and-play utilization of ICN functionality, achieve autonomous and distributed network control, and realize trusted networks. Our research group is currently considering the architecture of the ICN service mesh. Here is a comparison of the architectures; there are four different models. The leftmost one is the baseline and very naive: the service layer, the application layer, requires the ICN library, and the service application itself implements the ICN functionality and connects to the network-side ICN router function. The next one is the straightforward way: the application layer no longer needs the ICN library, but on the network side an HTTP API interface is exposed, and the service connects to this HTTP API to reach the ICN router functionality. The next two models are sidecar, or service mesh, models. They are very similar to the HTTP model, but between the application layer and the network layer there is a service mesh layer, and in this layer we implement the ICN library to intermediate between the service and the ICN router function. The third model is of course quite naive, but the service mesh is easy to implement, because the ICN router functionality sits in this layer, so most of the code reuses the ICN library or router function. However, this service mesh layer requires large resources to run the router functionality. The rightmost one is the without-router model: the service mesh layer no longer needs the router functionality, and the ICN library directly intermediates between the service and the ICN router functionality. The ICN library exposes its own API, and the ICN router functionality also has an API, so we need to carefully design these blocks.
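The sidecar idea, an application that speaks plain HTTP-style requests while the mesh layer talks ICN on its behalf, can be sketched as a tiny translation shim. Everything here (the ccnx:/ name mapping, the dictionary request/response shapes, the icn_send callback) is a hypothetical illustration, not the actual Cefore or service mesh API.

```python
def sidecar_handle(app_request, icn_send):
    """Tiny sidecar shim: the application issues a plain HTTP-style request
    and never touches an ICN library; the sidecar maps it onto an ICN name
    and fetches the data through the shared router function.

    app_request, the ccnx:/ prefix, and icn_send are hypothetical names
    chosen for this sketch.
    """
    # Map the application's URL path onto an ICN name.
    name = "ccnx:/" + app_request["path"].strip("/")
    # Issue the interest via the ICN side and hand the payload back to the
    # application as an ordinary response body.
    payload = icn_send(name)
    return {"status": 200, "body": payload}
```

The design point is that only this shim changes when the underlying transport moves from HTTP to ICN; the application above it stays untouched.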
The next slide shows the deployment strategy. The left one, the baseline model, is, as I said, naive: an ICN router is required in the package for each service. There is a scalability issue, because as the number of services increases, the number of ICN routers needs to increase. The HTTP API model and the sidecar model are very similar, and the ICN router can be shared among several services that access the HTTP server or service mesh. However, additional resources are needed to run the HTTP server or service mesh.
This is a very simple performance comparison of memory usage among the four models. The left figure shows the memory usage comparison results for a two-service deployment scenario. The left bar is the baseline, and the green and yellow bars are the memory usage of the ICN router or the sidecar. From this result you can see that the sidecar model reduces memory usage to approximately 0.6 to 0.8 times that of the baseline model. And compared to the HTTP model, where, as I said, the HTTP resources are needed, the router functionality requires nine times more memory than in the sidecar model. The right figure shows the memory usage comparison results for one-, two-, and three-service deployment scenarios. From these results, the sidecar model, especially in the without-router case, reduces memory usage to 0.8 times that of the baseline model.
Another result is the content retrieval time. As you know, the HTTP core implementation is very mature, so HTTP's performance is good. However, the sidecar model is also comparable to the baseline. So we can say that the ICN service mesh has performance comparable to the baseline, but with lower memory usage.
Okay, so that is the ICN service mesh. We move on to the next topic, the DePIN-based Co-Digital Twin. This work is also supported by NICT. What is the motivation? It is to collect, update, and share the latest digital twin data effectively. Here is our problem statement. The digital twin, especially not a network digital twin but a city-level digital twin, is a critical enabling technology for constructing the smart city that helps improve citizens' lives and quality of life. Currently many, many research projects on the digital replica city are ongoing worldwide. However, most of the projects focus only on how to create a city model; there is no mechanism for maintaining a sustainable, up-to-date digital twin at large scale. We think there is a lack of mechanisms to involve the various key players, such as data providers, service providers, and application developers, in these projects. We think such a mechanism, an ecosystem, is essential for constructing the digital replica city and producing various city applications.
Our approach is very simple; there are three points. The first point is that city residents play an important role as data providers. As you know, most people have a smartphone, and a smartphone is a very powerful sensing device: it has many sensors and can capture images or video, and even 3D data using LiDAR. So city residents holding smartphones can act as data providers, collecting data and selling data. The second point is that we do not want a third-party data provider or data broker: the provided data is traded directly to the service provider. And the third point is that when the service provider buys the data provided by a resident, the service provider pays a reward to that resident. This is a kind of incentive mechanism, and it keeps residents collecting and updating data. Here is our vision: this circulation keeps the digital twin up to date through participatory sensing.
We did a feasibility test of the Co-Digital Twin. We collected 3D point cloud data using the iPad LiDAR sensor with a very small number of people: just six participants collected 3D data with their own devices. On the system side, these data are merged to create one large 3D map. Here is a comparison: the left one is our created 3D map, the middle one is the map provided by OpenStreetMap, and the right one is the local floor map. You can visually confirm that our co-created 3D map has nearly equivalent accuracy to the existing maps.
On the system side, the system is currently running on NICT's testbed. The system architecture is very straightforward. We defined a regional server, which has on-chain storage and off-chain storage. The on-chain storage uses two-layer blockchain technologies: the layer-one blockchain stores the transaction information, and the layer-two blockchain runs the smart contract for reward distribution. On the off-chain side, we use IPFS to save the 3D data itself. The data seller accesses this regional server through an API to upload and sell their data, and the data buyer accesses a front-end application to check the data catalog and purchase the data provided by the data provider.
This slide shows the deployment strategy. The point is that the data storage network, the metadata storage network, and the system endpoints sit at the network edge, or let's say these nodes are geographically distributed. And again, the regional server has the role of intermediating between users and these storage networks. The smartphone no longer needs to install IPFS or another specific application; it just uses the website, or HTTP.
We are actually running this prototype system, but through the feasibility test we found two technical issues. Issue one is, yeah, a straightforward security issue. As you know, we use IPFS, and IPFS uses the content name, the content identifier, to register or retrieve data. So if a content identifier (CID) registered with IPFS leaks to an outsider, the data can be freely accessed. Also, we have no content encryption, so someone who obtains the data can easily decode it. This is a very critical issue for the purpose of buying and selling content.
The second issue is storage network performance: content persistence and effective storage utilization. Maybe time is running out, so let me speed up. To deal with these issues, we are currently collaborating with NICT and using a NICT asset, namely UC-INC: User-Centric In-Network Caching. For the detailed features of UC-INC, please see the IEEE ICC paper, but there are three key features. The first is security and privacy: UC-INC adopts Ciphertext-Policy Attribute-Based Encryption. The second is a geographical limitation capability: cache data in routers within a permitted area and close to authorized users, which offers efficient data transfer and high-performance data download. The third is availability: high data availability by caching data in multiple intermediate routers using ICN technologies. So we are using this UC-INC and the ICN service mesh to solve issue one and issue two, plus a third issue: the implementation challenges.
In the remaining time, I'll focus on how we use ICN to deal with issue two. This is actually early-stage research, and we focus on the IPFS routing problem. IPFS data discovery involves two search steps based on the Kademlia algorithm. The first step searches for the PeerID of a node that has the content; the next searches for that peer's connection information, such as its IP address. In IPFS, pairs of PeerIDs and content IDs are managed in a table called the provider record, and provider records are exchanged only between neighboring nodes. Our idea is very simple: how about replacing Kademlia with ICN to distribute the provider records? By doing so, IPFS data discovery can be achieved in only one step — the ICN routing table enables discovering the provider records directly within the network. Here is a small test result: on a test network topology, we compared the original IPFS with ICN-IPFS. The result shows that the ICN-based IPFS is a significant improvement in both content retrieval and content publication.
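The two-step versus one-step discovery contrast described above can be sketched minimally as follows. The data structures and names here are invented stand-ins, not the real IPFS or Kademlia APIs: in reality each step involves an iterative DHT walk across many nodes, which is exactly the cost the one-step ICN lookup avoids.

```python
# Minimal sketch of the two discovery paths described in the talk.
# Original IPFS: CID -> provider record -> PeerID -> peerstore -> address
# (each arrow is an iterative Kademlia walk in reality).
# ICN-IPFS: the ICN routing table resolves the named content directly.
# All tables and identifiers here are invented for illustration.

provider_records = {"cid-1": "peer-A"}       # CID -> PeerID
peerstore = {"peer-A": "10.0.0.7:4001"}      # PeerID -> connection info
icn_fib = {"cid-1": "10.0.0.7:4001"}         # name -> provider, one lookup

def discover_ipfs(cid):
    peer = provider_records[cid]   # step 1: find who has the content
    return peerstore[peer]         # step 2: find how to reach that peer

def discover_icn(cid):
    return icn_fib[cid]            # single step: name-based forwarding

print(discover_ipfs("cid-1"))  # 10.0.0.7:4001
print(discover_icn("cid-1"))   # 10.0.0.7:4001, without the second walk
```

Both paths reach the same provider; the difference the measurements capture is the number of distributed lookups needed to get there.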
As for our next step: currently we don't use the in-network cache, so the question is how to use in-network caching effectively — for example, caching the provider records and also the content itself — and how to integrate with UC-INC. That is our next step, and this concludes my presentation. Thank you for your attention.
Rio Chiariello: Thank you. Just about enough time for maybe two questions. Anyone from the room on-site? Wow, those were very clear presentations, then. That's great. Or maybe I'll join the queue myself and ask one question. In the earlier part of the talk, one of the points was about having an HTTP API and how enabling that has some cost. On the pathway to enabling a wider ICN-based network, would you say this HTTP API gateway is a potentially worthwhile cost, to bridge the time between where we are right now and a move towards a more ICN-based transport underneath? Sorry, I... I can't catch the point of your question. Ah, so in short: would you say this cost is perhaps worth taking as a transition period — to bridge between the non-ICN network and the ICN network? Yeah, yeah.
At the beginning, the motivation for developing the ICN service mesh was to avoid changing the entire network. But in, for example, the cloud-native area — the pod network, or communication between microservices — we change HTTP into ICN. That is our motivation. And with that motivation, it's maybe not so costly, if we provide the ICN service mesh and the software. Yeah.
Rio Chiariello: Okay, any other questions? All right, thank you very much. I think we've managed to catch up now, so we shall swiftly move on to the next talk. I'll take the slide control. Thank you very much.
Next is the Information-Centric Wireless Sensor Network platform.
(Referencing: Information-centric wireless-sensor-network platform development in mmWave-band communications)
Rio Chiariello: All right, let's find... okay, Mori-san, we just need to give you the slides. Are you joined on Meetecho? Yes. Oh, there he is. Okay. Right, I've passed over the control, so the floor's yours.
Shintaro Mori: Okay, thank you for your kind introduction. My name is Shintaro Mori, from Fukuoka University, Japan. Today I'm going to talk about this title: the Information-Centric Wireless Sensor Network. My research theme is introducing ICN technology into wireless communications and wireless networks. Today I'd like to present the effectiveness of the Information-Centric Wireless Sensor Network and a demonstration of the feasibility of the scheme.
As everyone knows, IoT frameworks are widely used, and the platform is shifting from the cloud to edge nodes. In this situation, data and functions are constructed in a distributed and decentralized manner, so the wireless sensor network is a very important technology, I think. Adapting ICN technology to the wireless sensor network — the Information-Centric Wireless Sensor Network — can improve efficiency, latency, and energy consumption thanks to the caching mechanism. Another feature of ICN is abstraction. Different from wired networks, wireless communication systems are typically heterogeneous: many protocols exist. By using ICN technology to overlay the lower layers, the protocols can be abstracted and become easy to use for wireless communication researchers like me, or for other fields, I think.
The next slide shows the motivation for combining ICN and wireless sensor networks. We would like to introduce ICN into the wireless sensor network, and to implement this system we use Cefore as the ICN platform.
This slide shows the effective caching scheme. Different from wired networks, wireless communication systems have the significant feature of overhearing: nodes near the routing path can also obtain the transmitted data. Using this phenomenon, the ICWSN can achieve off-path caching without any particular mechanism. And as shown in this simulation result, the ICWSN can boost the caching mechanism.
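The overhearing effect described above can be sketched as follows: a wireless transmission is received not only by the intended next hop but by every node in radio range, and those bystander nodes can opportunistically cache the named data and serve later requests. The node names and topology below are invented for illustration; a real ICWSN node would run this inside an ICN forwarder such as Cefore.

```python
# Minimal sketch of off-path caching via wireless overhearing.
# A broadcast-medium transmission reaches every node in radio range,
# so bystanders can cache the named data without any extra protocol.
# Node names and the topology are invented for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}

    def hear(self, data_name, payload):
        # Opportunistic (off-path) cache insertion on overheard Data.
        self.cache[data_name] = payload

def transmit(nodes_in_range, data_name, payload):
    # On a wireless medium, everyone in range receives the frame.
    for node in nodes_in_range:
        node.hear(data_name, payload)

# The routing path is A -> B, but C is also within A's radio range.
b, c = Node("B"), Node("C")
transmit([b, c], "/sensor/temp/42", b"21.5C")
print("/sensor/temp/42" in c.cache)  # True: C can answer future Interests
```

The open question raised in the Q&A below — which of several overhearing caches should answer a later request — sits exactly at the boundary between this ICN-layer behavior and the broadcast MAC layer.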
By using ICN and the caching mechanism, the system can reduce energy consumption. As everyone knows, wireless sensor networks have strict hardware limitations, in particular battery resources, computation capacity, and wireless spectrum. Applying ICN, with its caching mechanism, to the wireless sensor network can improve these traditional issues, I think.
This slide shows some computer simulation results. The proposed method is the ICWSN; the conventional scheme does not use the caching mechanism. With the ICWSN, the system can reduce energy consumption.
Based on the previous fundamental research, we constructed prototype devices and a prototype network. The sensor nodes are shown in the photo on the left, and the network structure on the right-hand side. Of course, we use Cefore as the ICN platform in the system. With the ICN mechanism, the first data retrieval has high latency, high jitter, and low throughput, but from the second time onward, thanks to ICN's caching mechanism, we can dramatically improve the network performance.
On the other hand, in shifting from conventional host-centric networking to ICN, conventional and traditional systems will coexist for some time, I think. To handle this situation, we equipped a gateway that translates between the API-based traditional scheme and ICN, so we can guarantee compatibility with the ICN scheme.
Okay, so based on all the previous research, I'd like to construct a new platform for smart city as a service. In particular, I'd like to construct an mmWave Information-Centric Wireless Sensor Network platform to support a new ecosystem. In beyond-5G and 6G wireless communication systems, radio spectrum is in such high demand that the research field is shifting to higher frequencies. To demonstrate the feasibility of the Information-Centric Wireless Sensor Network on high-frequency spectrum such as mmWave, we constructed a test field and conducted experiments. As shown in this photo, we constructed two test fields, connected them to each other, and conducted the experiments.
One is on the ground and the other is a non-terrestrial environment. In beyond-5G networks, an active research topic is aerial nodes — aerial base stations. To demonstrate such an aerial base station environment, I ran the ICN scheme experiment in that setting, of course over mmWave communications.
These are some experimental results. We measured the throughput and demonstrated video streaming over air-to-ground communication. Of course, we have to consider the UAV's conditions, but those are technically very hard to demonstrate. So instead, in the multi-hop experiment, we used smartphones, and we could also demonstrate air-to-air wireless communication, of course in the ICN environment.
This slide shows another test result. The figure on the side is the network model of the implemented test network: the sensor node is the publisher and the end user is the subscriber, and there are three network sections — one is a cellular network with low bandwidth, another is a wired network with very high speed and very wide bandwidth. Using ICN in the IoT platform, when the user retrieves the data, the average throughput is low the first time, but from the second time onward it improves, because the cached data is located in the wired-network area.
As another scenario, in my research we conducted a smart agriculture project for deployment of our scheme. There are three solutions and topics: remote monitoring of greenhouses, an automatic harvesting robot, and a security and pest-prevention system for the farm. In the previous project, the network structure was very traditional and conventional, and it had some technical issues. I think these issues can be improved by using the Information-Centric Wireless Sensor Network. For example, in the conventional scheme, sensing data must be continuously sent to the cloud server, but with ICN and the ICWSN we can achieve higher efficiency through ICN's pull-based design.
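The push-versus-pull contrast above can be sketched in a few lines: the conventional scheme transmits on every sensing interval whether or not anyone consumes the data, while a pull-based ICN design transmits only on a cache miss for a requested name. The class, names, and counters below are invented for illustration and deliberately ignore real radio details.

```python
# Minimal sketch of the efficiency argument: conventional push uploads
# continuously; ICN pull transmits only on a cache miss for a named datum.
# All names and the transmission-count metric are invented for illustration.

class Sensor:
    def __init__(self):
        self.radio_sends = 0  # proxy for energy spent on the radio

    def read(self):
        self.radio_sends += 1
        return 21.5

def push_model(sensor, intervals):
    # Conventional: one upload per sensing interval, consumed or not.
    for _ in range(intervals):
        sensor.read()

def pull_model(sensor, cache, requests):
    # ICN: a named request is answered from cache when possible.
    for name in requests:
        if name not in cache:
            cache[name] = sensor.read()

s_push, s_pull = Sensor(), Sensor()
push_model(s_push, intervals=100)
pull_model(s_pull, cache={}, requests=["/farm/gh1/temp"] * 100)
print(s_push.radio_sends, s_pull.radio_sends)  # 100 1
```

Real deployments also need cache freshness policies for time-varying sensor data, which this sketch omits.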
This slide shows the demonstration results with the strawberry-harvest robot. Of course, the robot requires strict throughput and latency, and meeting these strict requirements is still an issue for the traditional framework. I think, and hope, that by using ICN and the ICWSN we can improve on this and remove the issue.
Based on this research project and work, I now plan to conduct a new project: the Information-Centric Wireless Sensor Network for deployment in smart agriculture. The overview of the project is shown on this slide and the research items on the following slide. One is a real-time data transmission system for the robot, and a real-time surveillance system in the greenhouse. Another is communication control in the ICWSN using AI, for example to improve the efficiency of the wireless communication and network system.
Okay, finally, I conclude this presentation with the ongoing work, the contribution of my mmWave Information-Centric Networking and wireless sensor network to future wireless networks, and lastly the future work. That's all for my presentation. Thank you.
Rio Chiariello: Thank you very much, that was actually quite cool. Does anyone have any questions? I put myself in the queue just to start it, but... Okay, so I'm going to use my chair privilege and go ahead. You mentioned earlier that overhearing can be utilized to pre-cache the data. Could you talk a little more about how that's done from the network-layer perspective — how that actually happens in terms of names and so on?
Shintaro Mori: Sorry, what is the question? Sorry.
Rio Chiariello: So in short, how is the overhearing effect utilized at the network layer — in terms of names and data content, how does that work?
Shintaro Mori: Yes, that's very important, because a wireless communication system is broadcast-based networking, I know. The network layer is unicast-based, but the ICN layer is also broadcast, or flooding. So the behavior differs between the ICN layer and the physical and MAC layers, and that is the challenge in implementing overhearing and this off-path caching mechanism. So far I have only demonstrated results in terms of the physical and lower layers, but that is a very important and significant aspect of the implementation, I think. Yes.
Rio Chiariello: I'll be curious to see how the caching mechanism itself adapts to utilize the overhearing phenomenon, I guess. I would like to see that happen in the future.
Shintaro Mori: Yes, overhearing is specific to wireless networks, I think. Wired networks do not have overhearing. Yes.
Dave Oran: Hi, this is Dave. I have a related question, but from a slightly different point of view. There have been previous efforts to use opportunistic caching on a broadcast channel with ICN, and the hard problem seemed, at least then, to be: if you have multiple of these cached versions on the same broadcast channel, how do you decide which one actually responds to the request, as opposed to them all trying to answer and clogging the channel? Have you given any thought to how you address that problem?
Shintaro Mori: I interfere... sorry, what?
Dave Oran: Well, I can... I can take the question offline with you. I'll just send you some email rather than taking up everybody's time. So, I'm sure you have a good answer, I just... would like to see how you've approached the problem. Thanks.
Shintaro Mori: Okay, yes. Okay, thank you.
Rio Chiariello: We'll discuss this further over the mailing list, I guess. All right, thank you. Thank you very much. Do we have Hongbin in the room? I could not find him in the participants list — I direct-chatted with Dirk.
Dirk Kutscher: Yes. Ah, okay. So it turns out it's actually not Hongbin giving the presentation.
Dave Oran: Oh, who is?
Dirk Kutscher: Okay, we are ready. Hang on, let me switch the slides to chair slide. Or hand the clicker to Dirk? Yes, why not. Let's pass the slide control to Dirk. Okay.
Shan Zhang: Okay. Thank you. So, good afternoon everyone. My name is Shan Zhang, and I'm from Beihang University. I work in a research group focused on new network architectures for heterogeneous networks and the integration of computing and networking; the group is led by Professor Hongbin Luo. I'm very honored to be here to share our recent work, HiCom: a hyper-ICN architecture for computing power networks at the edge.
(Referencing: HiCom: A Hyper-ICN Architecture for Computing Power Network in Edge)
Shan Zhang: Can I now go to the next slide? Yeah, it doesn't work. Dirk, you have control. Yeah, it doesn't work. Are you sure? So the clicker should... he needs to control it himself. Oh, okay, so it's the slide clicker that he needs? Okay. Can you try now? Oh yeah, good.
So let me briefly review the background. We all know that most data is generated by devices at the network edge, and these devices vary a lot. Many devices — cameras, sensors, IoT-type machines — generate lots of data but lack the capability to process it. Meanwhile, other devices, like intelligent cars, are usually equipped with abundant computing resources to process their data. So if we can let the devices find each other and collaborate, we can unlock the underutilized computing power to process the data at the edge, and computing efficiency can be greatly improved — by 30% according to recent research. But the thing is, if we open our cell phone, we can find the devices we can connect to, but we don't know what they can do or what their computing power is. So we need a new architecture to support this vision.
There are already some works on this path, including IP-based solutions and ICN-based solutions. The IP-based solutions usually adopt application-layer proxies to manage the computing resources and help the end devices select an appropriate server to provide the service; the network layer then builds an end-to-end path to transmit the data. This works well if the network condition is good. But at the network edge, devices may move and wireless connections may change, so the proxy may select a weakly connected server with sufficient computing power, because the proxy does not know the network conditions. The ICN-based solutions, on the other hand, naturally couple routing and addressing: they find the server while building the path, so they are promising for finding a good server with an acceptable connection. That's why we go with an ICN-based architecture.
Let me go to the three fundamental problems we met when trying to work out this architecture. The first is the management of heterogeneous resources. To complete a computing task, we need to find the source data, and a device with the container environment and sufficient computing power — heterogeneous resources. And computing power is different in kind from the others. For example, if I want to talk to Professor Oran, I have to reach Professor Oran's devices, even if he is very far away, because they are unique. But to complete a computing task, I don't need Professor Oran specifically — any device in proximity with sufficient computing power is enough. So naturally, computing power is not unique, and it's enough to maintain local information about it. The problem is: how can we maintain the information about these different objects, considering their different properties, at a low cost? That's the basic problem for the control-layer design.
And the second problem is: how can we support both server finding and result returning? When we try to find a server, we do addressing and routing together, because at that stage the destination is pending — we don't know which server will take the task. But for result returning, we know exactly who should receive the data: the user. In that phase, we believe IP-based push routing is more efficient. So we support both routing patterns by keeping a multi-domain namespace; specifically, we keep the mapping relationships between the SID, the IP address, and the NID.
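The multi-domain namespace just described can be sketched as two small mapping tables: service identifiers (SIDs) map to candidate node identifiers (NIDs) annotated with computing power, and NIDs map to IP addresses for the push-style return path. The identifiers, the power annotation, and the selection rule below are invented for illustration and are not HiCom's actual data structures.

```python
# Minimal sketch of the SID/NID/IP multi-domain namespace described above.
# Server finding is name-based (destination pending); result returning
# resolves the chosen NID to an IP for push routing. All identifiers and
# the greedy selection rule are invented for illustration.

sid_to_nids = {"/svc/video-analytics": [("nid-7", 8), ("nid-3", 2)]}  # (NID, power)
nid_to_ip = {"nid-7": "192.0.2.7", "nid-3": "192.0.2.3"}

def find_server(sid, min_power):
    # Server finding: routing-and-addressing by name; any sufficiently
    # powerful node is acceptable because computing power is not unique.
    for nid, power in sid_to_nids.get(sid, []):
        if power >= min_power:
            return nid
    return None

def return_address(nid):
    # Result returning: the destination is known, so push over IP.
    return nid_to_ip[nid]

chosen = find_server("/svc/video-analytics", min_power=4)
print(chosen, return_address(chosen))  # nid-7 192.0.2.7
```

Keeping both tables is what lets one request use name-based forwarding outbound and conventional IP delivery for the result.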
The third problem is how to support dependent computing tasks. End devices and edge servers are usually computation- or resource-constrained compared with a data center, so it happens that we need multiple devices or servers to support one task together. The current ICN architecture only supports single service or content retrieval. So how can we decompose the task, and how can we form this collaboration relationship? That's our third problem.
Following these problems, we designed our HiCom architecture. The first design is an integrated but differentiated control plane to manage the heterogeneous resources. We keep the SID, the NID, and the IP address; specifically, we attach the computing-power information to the node identifier. We use this multi-dimensional namespace for three reasons. First, in this way we can keep a global view of the distribution of services and devices. Second, we can simplify or truncate the computing-power information in the spatial domain based on its mapping relationship to the NID and IP. And third, we can support different routing patterns in the server-finding phase and the result-returning phase.
This slide shows the details. As I mentioned, when we want to find a server, we do addressing and routing together, because at that stage the destination is pending — we don't know which server will take the task. But for result returning, we know exactly who should receive the data: the user. In that phase, IP-based push routing is more efficient. So we support both routing patterns by keeping the multi-domain namespace, with the mapping relationships between the SID, IP, and NID.
The third design is about how to support dependent tasks. First, we use a directed acyclic graph-based description of a task. After the request is raised, we figure out the critical path and the critical subtask, and we first find an appropriate server for that critical subtask. From this subtask, we decompose the graph into multiple subgraphs: the preceding subtasks form graphs in which the server selected on the critical path acts as the final user, and for the following subtasks, the selected server behaves as the data source node. In this way, we can decompose the whole task recursively and finally build the collaboration relationship.
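The recursive split around a pivot subtask can be sketched as follows. Here a task DAG is a list of edges, and choosing a server for one pivot subtask splits the graph into the subgraph that feeds the pivot (where the pivot's server acts as consumer) and the rest (where it acts as data source). The task names, the pivot choice, and the split rule are invented for illustration, not HiCom's actual algorithm.

```python
# Minimal sketch of pivot-based task-graph decomposition: edges feeding
# the pivot go to one subgraph (pivot's server = final user there), and
# the remaining edges go to the other (pivot's server = data source).
# Task names and the split rule are invented for illustration.

def ancestors(dag, node, acc=None):
    """All subtasks whose output eventually reaches `node`."""
    acc = set() if acc is None else acc
    for src, dst in dag:
        if dst == node and src not in acc:
            acc.add(src)
            ancestors(dag, src, acc)
    return acc

def decompose(dag, pivot):
    ups = ancestors(dag, pivot)
    upstream = [(s, d) for s, d in dag if d == pivot or d in ups]
    downstream = [(s, d) for s, d in dag if (s, d) not in upstream]
    return upstream, downstream

# Video-analytics example: decode -> detect -> track, plus decode -> log.
dag = [("decode", "detect"), ("detect", "track"), ("decode", "log")]
up, down = decompose(dag, "detect")
print(up)    # [('decode', 'detect')]
print(down)  # [('detect', 'track'), ('decode', 'log')]
```

Applying the same split to each subgraph yields the recursive decomposition, and each server then only needs to coordinate with its immediate predecessors, which is the simplified status synchronization described below.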
So that's the main design of the HiCom architecture. HiCom actually borrows many ideas from the Compute First Networking (CFN) work, and we share many similarities — the overall design objective, the namespace, and the task-scheduling aspect. But we focus on three new design aspects. First, we identify the different property of computing power: it is not unique, so we cannot simply name it with an NID or SID. Second, we keep the IP address to speed up result returning in a push manner. Third, with the task decomposition method, we can help multiple devices collaborate, and in that collaboration the status synchronization is simplified: in the CFN architecture, there is usually a centralized controller to synchronize the status of all involved servers, but in our architecture, after decomposition, a server node only needs to talk with the preceding ones.
We realized our architecture as a prototype, with resource manager nodes and server nodes. The resource manager implements the control-plane functions, and the servers handle the data-plane functions. We built the platform shown in the bottom figure: one user raising video-analytics requests, and two resource managers representing different areas, each able to access different servers to handle the computing task. Meanwhile, the resource managers maintain the status of the servers.
In the experiment, we simulated different cases: the user moving from one resource manager to another, the traffic load of a server decreasing, and a server disconnecting. The results show that under these environmental dynamics, our proposed HiCom architecture adapts to the change very quickly and always switches to the best server with a good-enough connection.
So that's basically my talk for today. This is ongoing work, and there are many open issues in this architecture. For example, how should we model computing power and integrate the resource information with the network topology? And how should we describe a task? As AI agents arrive — a very hot topic — the computing pattern may change from human-only or machine-only to a human-and-machine integrated way. In that sense, how should we represent the computing request, how can we decompose it, perhaps with language models, and how can we support such complex tasks? Besides, there are security concerns in this zero-trust environment: how can we make it robust and fault-tolerant? Thanks again to Professor Kutscher and Professor Oran for the invitation, and thank you for listening.
Rio Chiariello: All right, thank you very much. Excellent. Right, I think that brings us to the end. Very good time management for the speakers. Thank you so much. Thank you very much indeed. Let me take back the control. And let me also take a moment to also thank all of the speakers today.
(Referencing: Chairs' Slides for ICNRG @ IETF125)
Dave Oran: Well, just to mention what's happening coming up.
Rio Chiariello: Yes. Okay. Right. So let me just come in for a second. There should be one more thing on this list: the submission deadline for ICNP, the International Conference on Network Protocols, is coming up in probably a little over a month. There is an ICN track in the call for papers for that conference, and I'm the area chair, so it'd be really nice if we could see some more ICN papers making their way into that conference. It is a tier-one conference, and you get a lot of points for getting papers in. So if you have something you're looking to submit and didn't meet the SIGCOMM deadline or the impending NSDI deadline, ICNP is a really good conference for showcasing your work.
All right, excellent. And beyond that, we should start planning for another meeting in Vienna. Hopefully we'll see more progress and maybe some follow-ups on the in-progress work we've heard about at this meeting — work that may have gotten a lot further along in reflexive forwarding and in the wireless space using ICN for opportunistic caching. So please stay tuned, and we appreciate everyone's participation today.
Dirk, did you want to say something?
Dirk Kutscher: Ah yes, and I am IRTF chair. Thank you! Thanks very much for chairing the session so well, and wish you a nice rest of the day in Tokyo.
Rio Chiariello: Thank you very much. See you soon. See you soon. All right.
(End of session)