Markdown Version

Session Date/Time: 17 Mar 2026 06:00

Michael Welzl: Okay, that puts us at 12 p.m. China time. And with that, I'm opening the session on SUSTAIN-RG. Welcome, everybody. A couple of notes here: the Note Well. As a participant in or at an IETF activity, yada yada yada, you acknowledge the privacy conditions that are laid out on this slide. You should have seen these slides at least once before already, since it's already Tuesday for you. You should always work respectfully with other participants and follow the Code of Conduct, the anti-harassment procedures, and so forth. Audio and video recordings are being made, of online and in-person meetings as well. Diego?

Diego Lopez: Yes.

Michael Welzl: If you don't mind speaking up a little bit, because we hear you as somewhat distant.

Diego Lopez: Okay, okay. That's good to know. Thank you. I could probably find a way to turn up the mic, or just speak louder. No, I hope your neighbors are happy with you. No, no, that's totally fine.

Michael Welzl: Okay, anyway, yeah, you know, if you don't want to be photographed or filmed, you should wear a red lanyard. If you wear a white one, then you consent to appearing in the recordings; you will find yourself on YouTube. There are IPR disclosure rules here. If you are aware that any contribution is covered by a patent or patent application, then you have to disclose that fact or not participate in the discussion. And, well, we expect you all to file such disclosures in a timely manner. There is more information in the RFCs, as always.

The IRTF is focused on longer-term research issues and not just on engineering. So we conduct research. We appreciate you bringing papers, research results, things like that. We're not directly a Standards Development Organization. We can, however, publish informational or experimental documents, but the primary output is really meant to be understanding, or research results. So, again, we have an RFC with more details.

Here are a couple of links with details, which we also have in the agenda. So if you need the direct links to the Meetecho tool or, and this is an interesting thing, the meeting materials, or the, sorry, the shared notes, where we would appreciate it if people are willing to take notes. We're not specifically assigning a note-taker; we're going to do it ourselves. But if you do take a look and have the opportunity to fix any mistakes that we might make, then we would appreciate that.

The vision of SUSTAIN-RG itself is to contribute to the advancement of the internet as a fundamental part of sustainable and resilient societies and the planet through conceptual and evidence-based multidisciplinary research collaboration. We have a long, very detailed charter if you want to know more, but that is it in a nutshell.

And this brings me already a bit early to the agenda. The agenda is quite packed. We would like to ask presenters to keep within their time slots, and indeed keep in mind that the time slots do contain some time for discussion, which can be quite intense and long in the IRTF sometimes.

So that's it from me. I would like to already hand over to the first speaker.

Eve Schooler: Although I would like to acknowledge that we're really grateful to Luis and Diego for helping us as delegates in the room. And I'd also like to remind the speakers that the time that we've allotted includes Q&A. So for the shorter talks, assume maybe 3 to 5 minutes for Q&A, and for the longer talks, at least 5 to 10. Okay? Great.

We are ready for the first talk, and I think that is for Wen to present in the room. Ah, you're here. Okay, please. Let me see if I can – I'm sharing your slides.

Wen Cai: Thank you.

Diego Lopez: Are you keeping the control, Michael, or?

Michael Welzl: I can give the control to the – I gave the control to the clicker. No, the clicker.

Diego Lopez: Okay, so you can use it. Try, try.

Wen Cai: Try real quick. Not really.

Diego Lopez: Michael, it looks like it's not working. Try, try if you can move it from your –

Michael Welzl: Yeah, then I have to take the control again and do that. Yeah, I can. So, well, we will advance the slides whenever you ask us. Just speak into the mic, that's the only thing.

Wen Cai: Okay. Good afternoon. I'm Wen Cai from the University of Oslo. One second, sounds like it doesn't – Can you check if there is a green light there? The level of this mic seems pretty low. Better? Yes. Thank you. Now it's better. Okay.

Slides: Towards a Sub-National Carbon Map

Wen Cai: So, once again, I'm Wen Cai from the University of Oslo, and I'm here with my supervisor Michael Welzl, and with Shafiqul Islam and Kristian Chico. Today we're presenting a more fine-grained carbon intensity map. Next.

Yes. So, from the green networking perspective, Electricity Maps is one of the most commonly used commercial sources for hourly, real-time carbon intensity. But there are drawbacks to this interactive database specifically. First, energy breakdowns are only available at zonal or national levels, and those zones are particularly large for local users. Secondly, when Electricity Maps uses flow tracing to calculate cross-border energy exchange, it assumes proportional sharing within the entire zone, which may or may not reflect the real situation in practice. Next. Next.

So, well, we have found that this apparently was not advanced enough, so probably Michael, can you change again the control here? Try, try. Done. Yes, it worked. Now you're relieved. You're relieved. You have less latency on the slides.

Wen Cai: Thank you, thank you. And then, motivated by those drawbacks, we are exploring the possibility of a more fine-grained carbon intensity map based only on freely available databases. On the right, we have listed the databases we use in our research. For energy utility locations, we use OpenInfraMap. For energy generation, demand, and trade data, we use ENTSO-E for the European region as a real-time database. More location information is drawn from Google Maps. And we also obtain detailed power plant information from transmission system operators in different countries.

So, to disaggregate the national carbon intensity into regional values, we need to define those regions. We are using two region types defined by Eurostat for statistical analysis. The first is NUTS-3 regions, and the second is LAU regions. A NUTS-3 region is typically a small region with a population of 0.15 to 0.8 million, commonly corresponding to small groups of districts, communes, counties, or small provinces. LAU, which is short for Local Administrative Units, refers to much smaller regions, usually small counties or towns. When we talk about local carbon intensity, it's more reasonable to use local data per se. In our method for local carbon intensity calculation, we divide the regions into two groups: self-sustained and not self-sustained. The map on the right shows a general map of Germany, highlighting self-sustained regions in green and not self-sustained regions in yellow. We also have these maps at the two region levels, specifically for mainland Spain at the NUTS-3 level and the LAU level, just to show how fine-grained our data can actually get.

This slide summarizes our methodology, which is quite simple. The local carbon intensity is divided into two parts: one for locally produced energy and one for energy imported from the parent level. For a NUTS-3 region, when it produces enough to cover its own consumption, we use the local carbon intensity directly. Otherwise, the shortfall is imported at the national-level carbon intensity. Likewise, LAU regions are treated in a similar way, but when an LAU region is under-producing, the imported part of its carbon intensity comes from its NUTS-3 region.
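The two-tier weighting described above can be sketched as follows. This is a minimal sketch based on the description in the talk: the function name, units, and the energy-weighted mix are my assumptions, not the paper's exact formulation.

```python
def local_carbon_intensity(local_gen_mwh, local_ci, demand_mwh, parent_ci):
    """Hourly carbon intensity (gCO2/kWh) of a region.

    Self-sustained regions use their local CI directly; under-producing
    regions mix local CI with the parent level's CI (NUTS-3 regions
    import at the national CI, LAU regions at their NUTS-3 region's CI).
    """
    if local_gen_mwh >= demand_mwh:  # self-sustained: local CI applies
        return local_ci
    imported_mwh = demand_mwh - local_gen_mwh
    # Energy-weighted average of local and imported carbon intensity.
    return (local_gen_mwh * local_ci + imported_mwh * parent_ci) / demand_mwh
```

For example, a region covering half of a 100 MWh hourly demand from zero-carbon local generation, with a parent-level intensity of 200 gCO2/kWh, would be assigned 100 gCO2/kWh for that hour.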

And here we can present our data, for different times on the same day and for different days throughout the year, based on our data from 2024. These are the mainland Spain maps. Our data and results give users the freedom to choose between the national-level carbon intensity and the local, regional-level carbon intensity.

And here is more information we can provide from our results. If we bring all three levels of carbon intensity together, Electricity Maps at the national level, the NUTS-3 level, and the LAU level, they all behave somewhat differently. We can observe from the diagram on the right that, even within the same NUTS-3 and LAU regions, local production, specifically from greener power plants, behaves differently hour by hour compared to the parent NUTS-3 level carbon intensity. So, during the daytime, the solar panels in the Cadiz municipality contribute to a greener carbon intensity than the province as a whole.

And what we are really trying to say about national versus regional is this: even within the same hour of the day, within the same country, two regions might cancel each other out in the national carbon intensity. So, for example, when planning a carbon-aware routing path, a path through a greener country may or may not give you lower carbon emission numbers as a result. That is worth considering when we plan routes and trees.

Based on our findings and research, there are several problems we can help solve, specifically carbon-aware routing, both intra- and inter-domain. We also provide greener energy options for domestic load shifting, specifically avoiding international legal or tax issues. And we help define the actual greenness of CDNs and data centers for people interested in regional numbers.

But we do acknowledge some limitations. We have excluded electricity price and power efficiency, which are two big factors in green networking. If we want to expand our regional real-time carbon intensity map beyond the European region, we would definitely need more real-time data. The electricity market is also still excluded from our research, but it is a big factor in the networking and commercial concerns around carbon-aware routing as well. Last but not least, validating our results is hard, because most existing carbon intensity databases are still at the national level, and we share a large part of our source databases with them, so such a comparison would not be fully independent. Our next step is to validate our results against UK data, and then maybe provide more insights from that comparison.

So, yes, that's the end of the presentation. We are open for any collaboration and then discussion.

Diego Lopez: So, I have a question. These regional maps are basically focused on the production of the energy, not the consumption, right? Because that could probably be unbalanced.

Wen Cai: It's a combination of both local production and consumption, taken per hour. So, yes, it does take both factors into consideration.

Diego Lopez: Thank you.

Wen Cai: Also, our results are in a paper that is under submission, but we do have a link to our data and results in a GitHub repository. If anyone is interested, we can share it afterward. Yeah. Sebastian is in the queue.

Sebastian: Yes, thank you for the presentation. So, networking is only one industry that would benefit from these maps. There are probably other industries looking for the same thing. Have you seen whether there are other efforts outside of networking doing the same?

Wen Cai: Within networking, yes, it's more toward carbon-aware routing or maybe load shifting between different sites. But if corporations, say, are looking for more energy-efficient and greener locations for their infrastructure, maybe that's another possibility. It's still a very fundamental finding, and we're looking for potential applications of it.

Michael Welzl: Any more questions? Okay, well, thank you very much for your presentation. And I can say that it would be great to get a pointer to your data and we can include it in the notes so others can have a look at it. Thank you.

Sebastian, you can take the floor and request slide presentation.

Slides: Traffic testbed and Carbon topography representation: tools to better measure, understand and analyse server lifecycle impacts

Michael Welzl: I mean, I – I shared the slides but I cannot – Ah, there you go. Now. Okay, now you should be able to share the slides. Yeah, I granted permission. Great.

Sebastian: All right. We're all good. You can see my screen, right?

Michael Welzl: Yeah, we see the screen. We hear you well.

Sebastian: All right, thank you. So, thank you for having me this morning. I have only 15 minutes to tell you about two papers we wrote recently. One is with Michael, who is here.

So, a quick stage setting. We all know these numbers about end users, data centers, and telecommunication networks. We have this pretty big share, about half of the impact according to this study (which is actually a survey; there are others): the cloud accounts for about half of the damages. The cloud is the thing we don't really see, the thing that is "in the cloud." And we also have these figures showing that data centers' power consumption and ecological footprint are exploding. But this is just the stage setting.

So, how do we reduce this cloud in the large sense, cloud data centers plus networks? Well, first, of course, there is the possibility of doing less of everything. This is probably what we should do, but that is more of a sociology problem, and so far I'm more of a computer scientist. Then you start to think: data centers and networks are there to meet a demand, so maybe we should look at this demand. What is its nature, its form? Are data centers and networks meeting this demand efficiently? You might also ask whether this demand is relevant and justified, but that brings us back to sociology. We need to look a little at what they are doing, and we came to the conclusion that high-level studies are not really looking at what data centers and networks do in practice, especially data centers. Most papers about data centers just treat them as a big box, and it's hard to know what they are doing.

So we started to think we should look at what they do, and understand how this infrastructure relates to each demand, each flow of demand, or each request. In some sense, this is asking how many infrastructure units we need per request, for instance per request to a service. We also analyze infrastructure efficiency and infrastructure impacts, which is the other question: how many units of ecological impact do we get per infrastructure unit? If we combine these two questions, we get how many impact units we get per request. That is the overall goal.

Also, talking with people, co-authors of the two papers, we realized there is a need for primary measurements, bottom-up experiments, on which we can ground the top-down studies.

So this was our motivation. We just said, okay, we take users, we take a cloud service, we send requests, the service sends responses, and we just measure the impact. That's the principle. So how are we doing this? First, we looked at power consumption, or rather electrical energy consumption, which is the easiest to do. What we've been doing is take a server, measure power consumption at the plug, and then use a controller to send load. On the server, we install a service; it can be service 1 through service N, and we can start different services. This is essentially what we've done in this first paper: TRAFFIC, a testbed for assessing energy efficiency in throughput computing.

We took four different machines: different years, different sizes and form factors (we even took a Raspberry Pi), and different CPU architectures. Here I'm referring to just one, the most recent machine. It has a big GPU that wasn't used in this study. When we start analyzing how power consumption depends on the load we send to different services, we get this measurement, and this is already very interesting. We see that depending on what we do on the server, the power consumption is very different. Also notice the vertical dotted line, which marks the maximum load. We actually calibrated the count service so that it reaches maximum load at the same time as the AI service; the maximum load is the point where requests start not being served. And what is interesting here is that the AI service's power consumption saturates: there is actually more work being done, but the power consumption is nearly the same from 600 requests per second to 800. We have the idle power here.

When we do this with all these different machines, we can put them on a chart like this, which compares the energy per request. Obviously, the more you use a machine, the better the energy per request. What is interesting is that the biggest machine gets the lowest energy per request, but if you don't have 800 requests per second, only 200, say, so if you don't have a big load to serve, then the biggest machine is not the most efficient one; the Dell machine would be better at 200 requests per second. And if you only have sporadic requests, like 2 requests per second, then the Pi is actually much, much more energy efficient.
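This load-dependent trade-off can be illustrated with a toy model. The power curves and capacities below are entirely hypothetical; only the qualitative shape (idle power plus a roughly linear dynamic part) follows the talk.

```python
# Hypothetical machines: idle power, full-load power, maximum request rate.
machines = {
    "pi":   {"idle_w": 3,   "max_w": 8,   "max_rps": 20},
    "dell": {"idle_w": 120, "max_w": 280, "max_rps": 400},
    "hpe":  {"idle_w": 240, "max_w": 500, "max_rps": 800},
}

def energy_per_request_j(m, rps):
    """Joules per request at a sustained load, assuming linear power."""
    if rps > m["max_rps"]:
        return float("inf")  # machine cannot serve this load
    util = rps / m["max_rps"]
    power_w = m["idle_w"] + util * (m["max_w"] - m["idle_w"])
    return power_w / rps

def best_machine(rps):
    """Most energy-efficient machine for a given request rate."""
    return min(machines, key=lambda k: energy_per_request_j(machines[k], rps))
```

With these made-up curves, the Pi wins at 2 requests per second, the Dell at 200, and the HPE only at very high loads, reproducing the shape of the argument.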

It was also interesting to show that the AI service is actually BERT, a machine learning model we can load even on the Raspberry Pi. And knowing that one inference with this model costs about a joule, we found that interesting, because it is often very hard to put an energy figure or order of magnitude on such a request. So in our experiment, it was about a joule per inference with a small language model.

When you start comparing power consumption and CPU usage (each data point here is one machine over a short time window), what we see is that CPU usage is actually not a good proxy for power consumption. If it were, the data points would be nicely aligned along the red line. Instead, there is again a big difference between the services. Apparently, the inference service is doing more work with the RAM, asking more from components other than the CPU. You can see pretty big deviations. For instance, here the prediction, if you use the red line as a predictor, going linearly from the idle power of 240 watts to the max power of 500 watts, would be 400 watts, while what we measure is rather 320, so we get a 25% deviation.
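The naive linear proxy and its deviation can be written out explicitly. The idle/max powers and the measured value are from the talk; the utilization level is an illustrative assumption chosen so the proxy predicts roughly 400 W.

```python
IDLE_W, MAX_W = 240, 500  # idle and full-load power of the machine (talk)

def linear_cpu_power_model(cpu_util):
    """Naive proxy: interpolate linearly between idle and max power."""
    return IDLE_W + cpu_util * (MAX_W - IDLE_W)

# Illustrative utilization at which the proxy predicts about 400 W.
predicted_w = linear_cpu_power_model(0.615)
measured_w = 320                                 # value reported in the talk
deviation = (predicted_w - measured_w) / measured_w  # roughly 0.25, i.e. 25%
```

The point of the slide is exactly this gap: a service that stresses RAM or other components breaks the assumption that power scales with CPU utilization.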

So this was a first proposition: measure equipment facing a modulable load, and actually measure equipment while it provides a service at a given load. We realized that what matters is not really hardware efficiency but rather service-level efficiency. And that brings us to the concept of the functional unit used in life cycle analysis. The functional unit here is one request to the service, and we try to relate everything to it.

Second, we started to look at CO2. We took the same testbed; to the CO2 of electricity, the CO2 of fabrication needs to be added. So we made a quick life cycle assessment with a quick method, and then we can combine the two numbers. For instance, take the machine on the left, assume 300 requests per second for 4 years, and that gives you about 1.7 tons of CO2 from electricity, which you add to the CO2 of fabrication to get about 2.7 tons over the lifetime of the equipment. Here it's pretty low because we assumed only 100 grams of CO2 per kilowatt hour.
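The combination can be sketched with simple arithmetic. The 100 g/kWh grid intensity is from the talk; the average power draw and embodied figure used in the example below are placeholder assumptions chosen only to reproduce the order of magnitude mentioned.

```python
HOURS_PER_YEAR = 8760

def lifetime_co2_kg(avg_power_w, grid_g_per_kwh, years, embodied_kg):
    """Operational plus embodied CO2 over the equipment's lifetime."""
    lifetime_kwh = avg_power_w * HOURS_PER_YEAR * years / 1000
    operational_kg = lifetime_kwh * grid_g_per_kwh / 1000
    return operational_kg + embodied_kg

def co2_per_request_g(lifetime_kg, requests_per_s, years):
    """Amortize lifetime CO2 over every request served."""
    total_requests = requests_per_s * HOURS_PER_YEAR * 3600 * years
    return lifetime_kg * 1000 / total_requests
```

With an assumed average draw of about 485 W at 100 g/kWh over 4 years, the operational part comes out near the 1.7 t mentioned in the talk; adding roughly a ton of embodied carbon and amortizing over every request at 300 requests per second gives a figure in the tens of micrograms of CO2 per request.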

So the question is: what dominates the CO2 of a server serving requests in the cloud? This is where we introduced this carbon topography representation. We figured: if you have very low-carbon electricity at your power plug, then fabrication dominates the server's CO2. If you have low load and high-carbon electricity, static power dominates. And if you have high load and high-carbon electricity, dynamic power dominates.

And when you plot everything, this is what you get; we plotted the two machines I mentioned previously. What is interesting is that if you are in a country with less than 100 grams of CO2 per kilowatt hour, the plot gets pretty blue, which means that fabrication basically dominates the CO2 of the full server life. What we see here is that it's pretty red. Only the HPE machine shows some decent energy proportionality, but most of the time it is dominated by static power.

This is something we presented last year at HotCarbon. From there, we can look at strategies. When you see this representation, the first step is probably to use low-carbon energy if you can, through contracts or through a provider with greener energy. That can be represented pretty nicely here, and you see you can very quickly get nice decreases of CO2 per request. If you are already at low-carbon electricity, the next obvious step is to maximize server load: if you increase your requests per second, the CO2 per request naturally gets better because the CO2 is better amortized. Note, though, that this needs more load, so more traffic (a kind of weak scaling scenario, something we elaborate on in the first paper). Last, if you still want to improve, the final strategy is to amortize your equipment over more years. Here is the comparison: going from 4 years to 10 years gives you another -35% on the CO2 per request. If you look quickly at the numbers, at the end we get 5.5 micrograms of CO2 per request, whereas at the beginning we were at 120. So combining the three strategies, you can get a pretty nice decrease of CO2 per request.

So, quickly, the conclusion. We need more measurements and more testbeds; we have one approach, and there are probably others. We came up with this idea of functional units. Be careful with simplistic models: we showed that CPU usage might be a bit overused as a proxy, and also server utilization (what exactly is server utilization?). Your machines are not necessarily better, especially if they remain unused or far from 100% utilized; here we have something related to the rebound effect. And CO2 impacts vary a lot depending on whether you are in China, Europe, France, or the United States, and whether you use your server at 100% or not. Thank you.

Michael Welzl: Do we have anyone in the queue now? No. Um, I have a question. Um, so, Sebastian, when you talk about your testbed and needing more testbeds, what are you imagining for that? Are you imagining scale up? Are you imagining other more sort of principled sheets of, you know, information of equipment, etc.? Share some more of your thoughts about that.

Sebastian: Yeah, well, it begins with just more wattmeters. A lot of papers use software wattmeters, but we have started to see that those also have deviations. So we should really measure the hardware; the hardware is what pollutes, so we need to measure the hardware. For everything related to energy consumption or power consumption at the plug, we need to think of measuring the hardware. Here we just measured at the plug, but we intend to measure inside the computer, trying to see what the power consumption of the RAM is, or of the fan, and that requires real measurement equipment: a testbed that can measure things in the wild. And also, more methodologically, a testbed that can run a lot of measurements, repeat them, try different parameters, and basically collect as much experimental data as we can.

Michael Welzl: Thanks. Is there anything else that you might want to share from your backup slides? We're a little bit ahead of schedule.

Sebastian: Good question. Yeah, well, if I can, yes. I requested the share screen again, so. I never get these requests. I think it's maybe Eve getting them. All right.

Yeah, so when you start measuring things in practice, you find surprising results. We calibrated, as I said, one of the services, the counting service, which just counts up to a number. We made sure this number makes the job as hard for the counting service as for the inference service on the A machine. And you see that it creates different scenarios on other machines: the Intel machines are actually much better at dealing with inference than counting, but the Pi is the opposite. So there is actually a lot for computer architects in the TRAFFIC paper, showing how different architectures react to load. That also shows that one-size-fits-all models for the energy efficiency of computers are risky, because there is a lot of diversity and these machines really react differently. So I thought this was probably worth mentioning.

Well, that's it. I think I don't want to take the floor more than – but if there are any more questions.

Michael Welzl: Anyone? Okay, well, thank you so much. Yes, thank you.

Noa, you may request the floor.

Slides: Small World Web of AI

Noa Zilberman: Okay, so hopefully you can see my slides and hear me. If you can please confirm.

Michael Welzl: Yes.

Noa Zilberman: Excellent. So allow me to start. Hello, everyone. My name is Noa Zilberman. I'm from the University of Oxford and this is joint work with my student Alexander Jackson, and I'm going to talk about the small world web of AI.

And the idea is that we are rethinking the web for today's world, because we are all using generative AI for various reasons, whether to summarize meetings, improve the writing quality of our papers, or just for coding, but we didn't change the way the web works. A lot of people are working on content generation, but what we are trying to focus on is the operational foundations of the web. And the idea is simple: instead of sending content, send prompts, and turn the prompts into content on end-user browsers.

Let's take planning a vacation as an example: you go on a travel blog and want to plan a hike. The server stores prompts that describe the content. For example, stock images can be turned into prompts; a description of the hiking route can be turned into bullet points; and content that is unique can be saved in its original form. The server sends only the prompts instead of files and content, and the user's browser turns the prompts into content and shows it to the user.

Why do it? First, because it reduces network load and improves scalability. Second, because it reduces storage requirements. And finally, and most important for this session, our goal is to improve sustainability. We want to do that with minimal changes but maximum impact. To this end, we modify HTTP, specifically HTTP/2: we look at the SETTINGS frame and add a new option using one of the unused values, and that's only for prototyping purposes. We use the value 7 as a gen-ability indicator.

And the idea is to negotiate generation ability with the client; if the client doesn't support gen AI, we fall back to the default. Basically, by setting the value 1 in the gen-ability setting, the server asks the user a query: do you support gen AI? The client answers either no (the value 0, which is the default) or yes (the value 1). And if the original content is a yellow duckling, then if gen AI is supported, the prompt "a yellow duckling" is sent; otherwise the file is sent. We didn't prototype it on HTTP/3, but generally it can be implemented in a similar manner.
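The negotiation logic can be sketched as below. This abstracts the HTTP/2 SETTINGS framing into plain dictionaries: the identifier 0x7 is the prototype value from the talk, but the function names and shapes are my assumptions, not the paper's implementation.

```python
SETTINGS_GEN_ABILITY = 0x7  # unused HTTP/2 settings identifier, prototype only

def negotiate_gen_ability(server_settings, client_settings):
    """True only if the server offers generation and the client supports it.

    A missing entry means 0, the default, i.e. no gen-AI support.
    """
    offered = server_settings.get(SETTINGS_GEN_ABILITY, 0)
    supported = client_settings.get(SETTINGS_GEN_ABILITY, 0)
    return bool(offered and supported)

def payload_for(client_can_generate, prompt, content_file):
    """Send the prompt when the client can generate, else the original file."""
    return prompt if client_can_generate else content_file
```

So a capable client receives the string "a yellow duckling" and generates the image locally, while a legacy client receives the duckling file itself.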

Also, in HTML, we add a generated-content class with two fields. One is the content type: what type of generated content is it, an image, text, or something else? The other is the metadata field, a JSON dictionary with information about the content to be generated. In this example, you can see that the prompt is a cartoon goldfish with certain dimensions (width and height fields) and the name of the generated file. And at the bottom, you can see that there is a generated image called goldfish.jpeg.
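A record along those lines might look like this in JSON; the exact field names and dimensions here are my guesses from the description on the slide, not the paper's actual schema.

```python
import json

# Hypothetical generated-content record: a content type plus a JSON
# metadata dictionary describing what the browser should generate.
generated_content = {
    "content_type": "image",
    "metadata": {
        "prompt": "a cartoon goldfish",
        "width": 512,    # placeholder dimensions
        "height": 512,
        "filename": "goldfish.jpeg",
    },
}

encoded = json.dumps(generated_content)  # what the server stores and sends
decoded = json.loads(encoded)            # what the browser would parse
```

The browser would then feed `metadata["prompt"]` (and the dimensions) to its local model and write the result out under `metadata["filename"]`.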

Web browsing today generates about 2 to 3 exabytes a month, so it's still important, but video streaming is a major percentage of internet traffic. Video streaming protocols run on top of HTTP (HLS or MPEG-DASH), meaning that with a similar settings negotiation we can do video upscaling. GPUs today already support video upscaling, but this is not visible to the content provider, so the content provider doesn't know whether the end user can use it. Video upscaling can allow us to increase the frame rate: we can send the video at 30 frames per second and have it upscaled to 60 frames per second, using half the bandwidth. And full HD can be upscaled to 4K, where again we get a saving of 130%.

We evaluated this approach. We implemented a simple generative server and client with HTTP/2 support and evaluated machine learning performance: CLIP, YOLO, generation time. The details are in our paper. But important for this discussion, we evaluated the compression ratio. For large images, the saving is 306-fold; for a short text paragraph, 1.93x. These promise significant storage savings for CDNs. And if a client doesn't support generation, one option to still benefit from the compression savings is to generate the content on the server and transmit the generated content. So if the server asks the client "do you support gen AI?" and the client replies no, then we keep only the prompt "a yellow duckling" on disk, generate the file on the server, and send it to the client.

But what are the implications of the small world web? There might be limitations and potential harms, and I still haven't discussed sustainability. Obviously, you might look at some of the numbers and say they are already outdated, and you would be correct; probably by the date we submitted the work they were already outdated, because AI is progressing so quickly. I'm not going to touch on all these aspects (you're welcome to read our paper for more details), but I'll still touch on two important elements. First, personalized content: client-side generation means the local model can generate different content for different clients, because that is likely to increase user engagement with the website. Say the prompt is "a view from the best university in the world": someone located in Cambridge may get this image, whereas someone located in Oxford will get this other image. So there is serious potential for harm: it might create echo chambers, amplify online harms, and enable malicious use. This needs to be addressed, and one of the questions is whether we can provide users with the required technology. I should say that personalized content is already used server-side, for instance by social networks.

Sustainability. To compare sustainability, we ran generation on a laptop (a MacBook Pro) and on a workstation with a mid-range GPU. Currently, generation is really, really slow: it takes several seconds to generate a large image on the workstation and several minutes on the laptop. But new models are faster, and the Flux model claims to generate a large image in about a second. That's still long if you are waiting for an image to load on a web page, but AI is, as I said earlier, moving quickly. And transmission energy is only 2.5% of generation energy, which is currently bad for us, because it means it's still more energy efficient to transmit data than to generate it on end-user devices. On the other hand, the compression provides significant embodied carbon savings, because every terabyte of disk that we save avoids between 6 and 7 kilograms of CO2 equivalent, and that's good.
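As a back-of-the-envelope illustration of the trade-off just described, here is a small calculation using the figures quoted in the talk (transmission energy ≈ 2.5% of generation energy; 6–7 kg CO2e embodied carbon per terabyte of storage). The per-asset generation energy and the amount of storage saved are assumed values, purely for illustration.

```python
# Back-of-the-envelope comparison using the numbers quoted in the talk.
# All inputs are illustrative assumptions, not measurements.

generation_energy_j = 1000.0  # assumed energy to generate one asset on-device
transmission_energy_j = 0.025 * generation_energy_j  # "only 2.5% of generation"

# Operational energy: today, transmitting the pre-generated asset wins.
operational_penalty_j = generation_energy_j - transmission_energy_j

# Embodied carbon: each terabyte of storage avoided saves ~6-7 kg CO2e.
saved_tb = 3.0                       # assumed CDN storage avoided via prompts
embodied_saving_kg = saved_tb * 6.5  # midpoint of the 6-7 kg/TB figure

print(f"operational penalty per asset: {operational_penalty_j:.0f} J")
print(f"embodied carbon saved: {embodied_saving_kg:.1f} kg CO2e")
```

The point of the exercise is that the two effects pull in opposite directions: generation costs operational energy today, while prompt storage saves embodied carbon regardless.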

So in summary, the web needs to change in order to support efficient generative AI. And as SUSTAIN-RG, I think it's important for us to look into the future and to influence the way the web will support generative AI. It may require small protocol changes, but it will allow us to save significant storage – and therefore reduce embodied carbon – and it will reduce network load. But there is still a lot of work to be done, including browser design and trust aspects: trustworthiness, compliance, security, accountability. There is already increasing work on energy-efficient models, on new types of accelerators for consumers – especially for mobile devices, as well as consumer devices at home such as TVs – and on new operating paradigms. But this work needs to be steered, and it needs to be steered by people who care about sustainability: influencing industry, policymakers, and standardization bodies to be more carbon and energy aware when adopting generative AI. And with that, I'm happy to take your questions. Thank you.

Michael Welzl: Thank you very much, Noa. Questions, anybody? We have Jen... We have, yeah. Sorry, I'll leave it to you, Diego.

Jen-Hsuan: Yeah, so from Huawei. I think it's a very interesting topic. Do you think this fits special scenarios like the agent-to-agent scenario? Because agent A on the server side could tell the local agent to generate this image or video based on some prompt and show it in the local browser.

Noa Zilberman: The model that you describe is possible. I'm not sure that this is what we meant – we were just thinking of web servers, if that helps.

Jen-Hsuan: Okay, thank you.

Michael Welzl: Sebastian.

Sebastian: Yeah, thank you, Noa. It is interesting what you said about storage – that on the storage side it saves carbon. And this makes me realize that in computer science we need basically three things: compute, storage, and transmission. And each of the three has a different impact – CO2, carbon, or energy. What you say here is basically that you trade transmission for CPU, and you also trade hard disk for CPU. So you get more compute – well, CPU/GPU – on the client in exchange for less transmission and less storage. And that makes me realize that if you don't go into gen AI but into content delivery networks, there you trade transmission for storage: you use more storage to get less transmission. So I don't know if you thought of comparing this with other techniques, like caching, that have been trying to address this problem, and how it might combine with caches.

Noa Zilberman: I do assume that caching is used for this content. I do assume that the transmission is probably from a cache located within an ISP or at an IXP – that the content is in the ISP – and not from a single server in the US or something like that. We do assume caches are used.

Sebastian: Yeah, thank you.

Michael Welzl: Dirk.

Noa Zilberman: They are always – they are always off, yeah, you know.

Dirk Trossen: I know. This is Dirk. Thanks for the interesting talk. So I'm thinking there might be a multidimensional optimization problem here, right? In the web we currently try to optimize everything – or many things, let's say – for low latency. And I'm not super convinced that adding this gen AI content helps with either low latency or, to be honest, energy saving. I mean, of course publishers and so on generate their content, but isn't it much more efficient to do this once and then just rely on the established CDN infrastructure to deliver the bits, instead of having it generated potentially at multiple locations in a larger network?

Noa Zilberman: So our experimental results support your hypothesis, but I tend to think that this is an answer for today and not for five years from now, because the amount of work being done right now on generation on user devices – regardless of the web; it might even be the agent on your phone where you ask to generate images or video – is already progressing rapidly. I actually find that the work on video upscaling is, surprisingly, probably the easiest to implement, because it's already being done for online gaming; it's just that currently it's controlled by the user. And if we are thinking about video streaming, then sending from the content provider side is something that can't be controlled – you don't know if the user can do the upscaling. That, I think, is an easy win, because it's done in real time today.

Dirk Trossen: Mm, sure, sure. Okay. Thank you.

Noa Zilberman: Thank you.

Jari Arkko: Yeah, this is a very interesting topic indeed. I'm also interested in this sort of multi-parameter optimization issue, and of course there are also aspects like: if you have this capability, say upscaling, then maybe you use it not to save bandwidth but to actually upscale your resolution, for instance. So you can have all kinds of interesting effects there. But I actually wanted to ask you about where you see this as most applicable in terms of use cases or applications. One could imagine, for instance, that a brand-name movie producer or company might or might not want to do this if the end result on the user's screen is slightly different from how they intended their brand or their movie to look. Or – and this was not your use case – I've thought about this for things like surveillance cameras, and there it's kind of obvious that you don't want a situation where you see a burglar entering and then the burglar looks like what the AI expects a burglar to look like, instead of the actual one that we should get a photo of. So there seems to be a dependency on what you are trying to do with this – where it applies either very well or maybe not so well. Any thoughts on that?

Noa Zilberman: Yes, we did consider that. In a web page, for example, we considered a tag that indicates that a piece of content is unique or important and shouldn't be converted to a prompt. I think that will be one element. Also, in the paper we considered how to convert existing content into prompts, because potentially we'd like to already convert existing web content, whether automatically or not. So these are certainly valid concerns, and I don't think that everything should be turned into prompts. For example, I don't think that this video stream, if we sent prompts, would generate people's faces correctly unless there's some preliminary information about that. So yes, there will be unique content that shouldn't be turned into prompts.

Sebastian: Yeah, thanks. This last one is more of a philosophical comment. Turning back to music: let's say I want to hear a Mendelssohn symphony. I can either go to Boston to get my favorite orchestra playing it, and then I get a very large transmission; or I can get any Mendelssohn symphony, and then I will probably get CDN-cached content; or maybe I can ask gen AI to play it live for me – I guess the result will be pretty catastrophic. But I think it's a little bit the same here. We are into information theory and how we want to trade precision – exactly what we want – against, well, something that just has some music. This was more of a comment. Thank you.

Noa Zilberman: Okay, thanks. I just want to go back to my first slide, which is that I think that there is a huge amount of work on content generation, but not enough work about how we should support it through our standards. And that was the goal of this work: to look into standards and what needs to be changed in order to support content generation.

Michael Welzl: Good. Thank you. I have one last question, which is simply to ask if you have thought about talking to some of the folks who work on HTTP, and what they think about all of this.

Noa Zilberman: Yes, but I didn't get to that yet.

Michael Welzl: Ah, okay. Good, well, we look forward to hearing about how that goes.

Noa Zilberman: Okay. Thank you.

Michael Welzl: Thanks so much. Great talk. Okay, our next talk up is Artur, who will be speaking about the EXIGENCE project and in particular its view on sustainability of ICT.

Slides: EXIGENCE View on Sustainability of ICT

Michael Welzl: And let's see. Would you like to share it, Luis? Artur is on site, so if you can open up the slides, Artur will have the clicker so he can go through the presentation.

Artur Hecker: Yes. Thank you. Thanks to the chairs, thanks to the audience, thanks for having me here. I'm very proud to represent the project here. This is some work from an existing, running project sponsored by the European Commission – or co-sponsored, at least – and there is also some sponsoring we receive from this 6G organization, a not-for-profit organization whose purpose is, let's say, to establish one single 6G as opposed to separate standards.

So thanks again for having us here. I will present a rapid view of what we believe should also be the case – not exclusively, but in addition – in terms of sustainability and energy of ICT, and ICT services in particular. I will not go too much into the introduction slides, because I guess the audience is very familiar with the situation in the ICT sector. It's very clear that digitalization boosts the usage of ICT, and while that's very good for everybody else – because decarbonization relies on it – ICT usage itself increases. And we cannot continue forever to say that it's good and never measure the ICT per se, right? Now, there have been some quite interesting talks before, presented by my colleagues, and I can only support and very much agree with them. It's not that simple to measure these things, but intuitively, from gut feeling, a physical quantity like joules or, say, grams or kilograms of CO2 should be measurable per se. That's very different from measuring complex ontologies like security, which might not be measurable; but this is for sure measurable – these are physical quantities. Then you need of course to ask yourself how you do it, and there was some very interesting work on that presented by my colleagues before.

But interestingly, there is also some movement from the regulatory and political side all around the globe, where these things are moving forward. As an example – I will not read the whole slide – France is quite advanced on this: when you provide a service as an MNO, at the end of the billing period you need to provide an estimation – and there are agreed models for this – of how much CO2 and how much energy was consumed by that service. So an access provider in France would need to show not only how many megabytes or gigabytes you downloaded, or how many minutes you consumed, but also how much CO2 that represents, right? And these things are advancing in many sectors. For example, the European Commission is about to publish a code of conduct for network operators – a voluntary measure for now, but they always start with voluntary measures – where network operators are expected to show how efficiently their networks are running. Okay? So this is going on. Of course you have all these carbon markets as well. Europe was number one with its carbon market, which proved to be very successful and is now increasingly being exported to other countries. But what is not yet the case: the telecommunication sector per se is not yet on the carbon market. So there is a big difference – for example, if I have an electric car in Europe, I can go to the carbon market and reclaim the unused carbon allowances that I supposedly get, right? This is not yet the case for ICT services, but if things advance, it might come. And we kind of go in this direction; I will explain what we mean by this.

So we did of course some state of the art work, which I will skip – I guess you know it. The most important thing is: if you scan the code on the slide, you will get the webpage where we published all the state of the art as the so-called ICT Green Digest. We specifically focused on normative and standards work – not only IETF, but also 3GPP, because it's very important (the access networks are quite energy intensive), and also ETSI work. So you can browse the standards there and see what exists, what is being covered, and how it's actually managed, if you're interested.

Now, the point is what we actually address – where we are a little bit different, or maybe a little bit original. As I said, the ICT share in the overall carbon footprint is increasing. It's still quite low – maybe about 5%, depending on which estimation you take, data centers and networks together – but it's certainly increasing. And since the other sectors are decreasing, the share will increase for two reasons: first, because there is more ICT – there are more energy-intensive services such as AI inference and training, and there is more usage – and second, because the other sectors are decreasing through decarbonization.

But ICT does not work in this kind of siloed fashion, right? It's all about services. We all know this – I mean, we are at an IETF meeting here, so you usually connect to something, and that something provides you with services. And that something is somewhere else. Because of the obvious separation into domains, it's impossible to measure on the other side – you don't have that control, right? And this is exactly what we try to get at, because as a matter of fact this is increasing. This is how we do things: everything is cloud-based, everything somehow uses virtualization, or call it disaggregation. This is the case even in our own infrastructures – for example, in 5G the whole core network is used today as a virtual thing, and no longer as devices that the operator would buy.

Well, the question is: how would I know what is better? Is it better to have something on-prem, or to have an outsourced version somewhere else? How do I compare this? Who gives me the data, and through which means? That's more or less what we try to address. In a nutshell, I believe these things can be measured, as complex as they are to measure. They can be measured, and there is constant advance on this. If you go back in time, like 14 years ago, people would probably insult me if I said let's measure just one process in a PC and the energy consumption of just one task, right? But today it's actually possible. It comes with some complexities – I think Sebastian's talk was very nice about this – but nevertheless you can do it. Then, for example – these are screenshots from an iPhone, but Android does something quite similar – you can have an attribution of how much power was drained from your battery and how it is used by the different tasks you are running locally. But now you have something like Google Maps there, and of course this is only the local part. Obviously it cannot account for the transport, let alone for all the other Google Maps parts running in the cloud, and all these drones collecting data so that you actually get the current traffic-jam situation, things like this.

So if we want to compare the greenness of ICT services, then we need to go to the service level and abstract a little from the domain level. I'm not saying the domain level becomes unimportant – I'm just saying it's not enough, because ICT is usually all about services. And it's very easy to trick ourselves into saying outsourcing is always better, because it seems to come for free – I never see it. Okay?

So in EXIGENCE we are saying: okay, let's look at these ICT services, let's try to understand how much is actually consumed on that other side – which I cannot see directly – through cooperation, obviously. And then maybe with this I can go beyond energy efficiency, meaning that I address consumption directly, not just efficiency. Efficiency is fine – I'm always interested in efficiency if I have some limited domain and I want to push for more services with less energy, or more requests per second for the same budget, something like this.

To achieve that, we essentially need to connect measurement, optimization, and of course some economic incentives. Why? Because – I will show later – it's not always the case that every involved stakeholder necessarily has the same goal. That's why the economic incentives might come in handy.

So, what did we imagine? Essentially, the difference is this: you have the standalone device at the top of the figure, where you have some realization of some function. This you can obviously measure, because it's all in your hands, all under your control. The situation is different when you only have, say, the display function – taking some kind of video streaming service as the example here – within your blue box. Then there is the transport function related to this, and maybe some other function where you have a streaming server, maybe some generation, whatever it is. And this green part, which is not in the blue box, you essentially don't see at all, and therefore the comparisons are tricky. But this part is also responsible for quite something, and it depends of course on the service, right? For services such as inference, it's actually responsible for the bigger part. For something like video streaming, it depends on the resolution and what exactly you do on the server, but it's 30/70, something like this. And for some services it will be mainly client side. So that needs to be considered.

But of course, as the service request runs, it goes through your network card – wireless or wired – and then through the whole transport network up to the server. And we usually only look at the local part, because everything else was simply ignored. Every domain involved in the overall service provision does its own estimation, but you see there is no cooperation, and we are losing quite some capability: the optimization could go way beyond what we are currently doing in a per-silo fashion.

In this example – maybe I mentioned this – if you have this nice cat video, you would have some indication of the current consumption. It's quite small, but if you download the slides you will see it: how many grams per second this is currently consuming, for example in CO2 if you have the CO2 information, or otherwise at least the joules – or kilowatt hours, if you prefer – that you are spending on that particular video stream.

So we use a methodology with three pillars: measurement, optimization, and incentivization, and we try to combine them in the project. How does it work? First, we obviously respect the constraints and the domain boundaries. I cannot measure somewhere else what is not mine – I don't have the control, so I cannot measure. But somebody else can measure, and he does. The same obviously applies to optimization: I cannot optimize something outside my control boundary, but somebody else can, and hopefully he will. I'm trying to show how this will work together. The incentivization works by saying: okay, maybe if you do that, you get this. I will explain it later. And these three pillars are to be brought together.

Now, how do we assume this works? We assume some kind of SFC model – service function chaining. It means there isn't necessarily a one-to-one, point-to-point relationship, but some kind of chain where several domains are involved in general. It can be a graph – you can apply this pattern as you like; it can be a whole graph. But at the end you have some domain B – let's say it's a service provider – and it provides some service SB. You have a client on your side to consume that service. There is some consumption going on in this client, which you can measure locally. That's okay. And then there is another part of the consumption in the green domain of provider B. Since you cannot measure it, we assume provider B will, and will supply the quantity – the energy consumed on behalf of user A for the service SB – along with the service. How exactly it is sent – in-band, out-of-band, all these modalities – is being investigated in the project, but the idea is that it's reported actively. It could be reported over some other interface or sent along with the service; it depends on the service and the situation. These are not mutually exclusive.

Now, once you have measured this – you need to attribute it at the service level – you send it along with the service, or somehow provide it as accounting information on the side. This provider is then already incentivized to optimize. Why? Because he is showing how good he is at provisioning the service. As an example, if there are different providers offering a video service – B, C, D – and they have the same movie, you can now get an estimate: okay, if I go with this provider it will cost me that much in terms of CO2 and/or energy, and if I go with that provider it will cost me this much. Just this difference creates some kind of energy competition, right? And this is very important, because suddenly you incentivize the providers to say: oh, wait, I need to do something about this. I cannot simply continue this way, because all the other providers are better.
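The "energy competition" idea can be sketched in a few lines: given energy-per-functional-unit figures reported by providers B, C, and D for the same movie, a client simply picks the lowest. The numbers and the dictionary-shaped reporting format are invented for illustration, not taken from the EXIGENCE interface.

```python
# Hypothetical per-provider ecodata for the same movie, expressed in an
# assumed functional unit (joules per minute of streaming).
reported_joules_per_minute = {"B": 42.0, "C": 35.5, "D": 58.0}

# The client-side choice: pick the provider with the lowest reported energy.
greenest = min(reported_joules_per_minute, key=reported_joules_per_minute.get)
print(greenest)  # provider with the lowest reported energy per functional unit
```

The competitive pressure comes purely from the comparison being possible at all: once providers report comparable numbers, the greenest one wins the client.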

Now, it's important to understand – we have thought about this – that we would separate this as follows: the idle energy goes to the provider, so the whole CO2 impact due to manufacturing the hardware he buys and all the maintenance he does, that goes to him. However, the provision of the service, that goes to the user. It's only fair to say: okay, this is provided to me; I could provide it to myself, but I'm not, so to compare, I take the user part for myself. And then the incentivization happens – it can happen in this domain or in that domain; this is not exclusive, just an example – at the third phase, where service provider B in this example could say: wait, I have two variants of the service, and they are equivalent. But this equivalent service is better in energy than that other one, so maybe you switch to it. And that can come with some degradation of, let's say, quality of service. Normally there is an SLA between these two parties. When you do classic optimization, service provider B is always constrained by the SLA he promised to the user, and therefore the optimization you do under these hard SLA constraints is limited in efficiency. Maybe in the best case, by all mathematical means, you get a factor of three out of it, but never more – and that probably only in the models, not in reality.
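The attribution rule Artur describes – idle and embodied energy stays with the provider, usage-driven energy is attributed to the user – might look like this minimal sketch. The function name and the figures are hypothetical, purely to make the split concrete.

```python
# Sketch of the attribution rule: the idle/embodied share of a measured total
# stays with the provider; the remainder (usage-driven) goes to the user.

def attribute(total_j: float, idle_j: float) -> dict:
    usage_j = total_j - idle_j
    return {"provider": idle_j, "user": usage_j}

# Example: 100 J measured in the provider domain, 30 J of which is idle load.
shares = attribute(total_j=100.0, idle_j=30.0)
print(shares)  # {'provider': 30.0, 'user': 70.0}
```

The design point is fairness of comparison: the user is charged only for what the service provision itself consumed, which is what they would have spent providing it to themselves.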

But if you cut the quality – if you go below the SLA – then you can obviously go way beyond that. Now, the provider cannot do it alone, because he signed the SLA. But the user can. Nobody forces you to always stream in 8K – nobody, even if you bought it. You might want to do it, but you probably don't have to do it all the time. And there are thousands of examples where you simply put your mobile in your pocket and forget about it, and it's still streaming the video – it doesn't shut off. And there are other examples where you might simply not be aware of the energy, or just not care. In these cases, the provider can incentivize you: it can give you something back, because you have some loss of utility in economic terms, and to compensate for this loss of utility the provider would give you something back.

So we imagine some kind of energy overlay over the SFC. We have some kind of agent – "agent" here stands not for an AI agent, but for a very simple communicating entity which can send requests and receive responses; the communication model is not constrained by this agent model. On the horizontal interface, these agents exchange the energy information, which we call ecodata. This ecodata consists of current readings of the energy spent for that particular service session and, if available, also the carbon information.

Then we have some definitions of what a domain is and what a service is and so on – quite classical, nothing special. There are several aspects these agents could be exchanging. The number one thing we need to cover is the ecodata report – the energy information, what is there, what is being sent; this is of course the mandatory aspect. Then there are some other things. You can have a prediction: before you start using a service, you simply ask how much it would cost me. For this we need some kind of functional unit for that particular service. For a video, we would say how much one minute of video streaming costs me in ecological terms; for something else, say inference, how much for that number of tokens or that number of prompts. So I can always express it in some functional unit, but it will of course differ per service. Then there are the service energy optimization hints: the providing domain can give you hints. You're using some service, and it could say: look, if you switch to that other one, it will cost you that much – it can be more, it can be less – so it's up to you to decide. And last but not least, there can also be forwarding and aggregation of this data, so if the domains are not directly connected, there can be some help between them.

And we define this interface quite precisely – well, it's not an interface, it's a reference point, let's say. It is specified using C-like type semantics, just because that's compact. The main structure is the ecodata struct, where we exchange this information. It binds the service instance – there is some identification of the service – to the current time and the energy consumed. The energy consumed is mainly in joules for that service – the current energy consumption. And it carries some information on the accuracy and the sampling interval and so on, so that you can also estimate how precise this information is.

Artur Hecker: It has this incentive – yes, please, Michael.

Michael Welzl: Artur, sorry, but you have only one and a half minutes left for the whole thing, including Q&A.

Artur Hecker: I'm fine. I will not go through all these slides – no problem, we can go to the questions. What we did is we mapped this – because we need to cover several domains but cannot do that within one research project – to some kind of meta-architecture. Between the domains you have the blue circles you just saw, represented by agents, and within each domain there are functional blocks. We spent quite some time identifying these functional blocks in different domains; I will obviously not go through all of this. We have mapped it to the IETF – this will be presented in the GREEN working group session later – and we also mapped it to MANO, to 3GPP, and to all these different domains, by simply identifying these very simple functional blocks which usually exist in one form or another. So I will conclude with this. Sorry, I took a little too long; maybe we switch to the questions.

Michael Welzl: Thank you, Artur, anyway. Yeah.

Jen-Hsuan: Yeah, okay. I think it's quite extensive and deep research; it could be valuable. My question is: you just mentioned service-level measurement and optimization and all these technologies – do they work as a standalone overlay that interacts with the underlay infrastructure, without the need to modify the underlay? So, no modification to the existing protocols – is that correct?

Artur Hecker: Yes and no – it depends on the precision you want. Obviously, I can always put myself in the position of saying: I don't know how you measure, you just give me a number. You can measure very simply, volume-based – for example, 3GPP always measures volume, so they can map the volume to some energy expression through some model. Whether it's correct or not is not my problem; if I'm the end customer, I simply compare what they give me. There can be some industry consensus on this, but in the end it could be done in a very rough way. Obviously it can also come with real measurements: you can go and measure what the base stations, the UPFs, the routers and so on actually spend in terms of energy for that particular flow. It's possible. What we have shown in the project is that it's in principle measurable – you can map these things. But yes, they tend to become very complex, and it depends very much on the service. For some services it's not that important to measure the usage part; for others it's super important. So we need to establish this and agree on it, and once we have that, it can be very simple to implement or very complex, depending on the case.

Jen-Hsuan: Okay, got it.

Michael Welzl: Okay, in the interest of time, to leave Luis a little bit of time for his talk, we are going to shift gears. But thank you very much, and let's continue the conversation on the mailing list. Thanks, that was super interesting.

Luis, what is the most expedient way for your slides to get loaded?

Luis Contreras: If you share, I will connect to the clicker and I can move the slides from here. But I'm afraid I'm not able to upload them; I don't have the button for it. Oh.

Michael Welzl: Michael, can you upload them? It says all the slots of requested media are already taken, so I can't share. Now I can. Okay. Um, yeah, I'm sharing the slides now.

Slides: Sustainability Holistic API for Path Energy Evaluation (SHAPE)

Luis Contreras: I will use the clicker and then I think I can move from here. Thank you. Okay, cool.

Hello, everyone. This is Luis from Telefonica. I will cover the last presentation today, and I will talk about this idea of an API that we have already presented under another name in the past. We have now renamed it the Sustainability Holistic API for Path Energy Evaluation, and I will present on behalf of my co-authors Adrian, Alberto, Marisol, and Jan.

So, the history of the API: there was initial work that was proposed in green, and in fact we are continuing to develop the API there; it will be presented in the green working group session at this IETF meeting on Thursday. The reference there is draft-petra-green-api. On top of that idea of an API, we initially proposed augmentations and took them to sustain. The idea was essentially to add additional parameters that are more related to sustain than to green, at least in terms of the green charter. After discussions, we reshaped it a little bit, and this is the draft that I will present today: basically orienting this API more towards the parameters that could fit better within the scope of SUSTAIN-RG.

The motivation is the definition of an API focused on sustainability-related information associated with network paths. The notion of a path here is somewhat broader than usual: initially the idea was to identify API endpoints defined by IP addresses, but the scope we have in mind is broader, covering whatever connectivity construct we could have between endpoints, whether identified by IP addresses or not.

The metrics that we are considering will go beyond power consumption and energy efficiency; we want to introduce a number of parameters that could be useful for taking sustainability into account end-to-end, and we will go through a few of them in the next slide.

What we have done, from a pragmatic approach, is essentially to take PETRA as the baseline API and augment it with the parameters that we will describe here.

Apart from that, we are looking at a number of use cases. The initial use cases that we have documented are the following. SD-WAN: from this API we can serve SD-WAN users so that they can also account for the energy or sustainability parameters associated with the service. Multilayer energy management: we could account for the sustainability parameters of a multilayer service, for instance IP plus optical, but multilayer in general. SLA negotiation for green services: we could allow customers to request what we call a decarbonization level agreement, so that the customer, through intent, expresses the sustainability goals of the service, and the network then maps that to a specific path that fulfils them. Energy-aware UPF and edge selection in 5G: basically identifying which computing facilities could be used for instantiating services, plus the UPF, to deliver the traffic towards the end user. And the final use case, sustainability reporting across leased backhaul and network sharing: one provider can report the sustainability parameters associated with the leased service to the provider leasing its infrastructure, somewhat similar to what Artur was commenting on before.

Going into the different parameters that we have considered so far, I will not cover all of them, just picking some for illustration purposes. We are talking here about the energy mix, the greenness degree, the availability of sleep mode along the path, the anomaly factor (that is, how the sustainability parameters vary over time), and so on and so forth. Essentially, parameters that go beyond the scope of green and could still be useful for customers to decide on the path they are consuming.
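As a way to picture the parameters Luis lists, here is a minimal sketch of what one SHAPE-style per-path answer might look like. The field names, types, and value scales are my assumptions for illustration; they are not taken from draft-petra-green-api or the SHAPE draft.

```python
from dataclasses import dataclass

@dataclass
class PathSustainabilityMetrics:
    """Hypothetical SHAPE-style record for one path; field names and
    scales are illustrative, not taken from the draft."""
    path_id: str
    energy_mix: dict          # e.g. share of renewable vs. fossil sources
    greenness_degree: float   # 0.0 (worst) .. 1.0 (best), assumed scale
    sleep_mode_available: bool  # sleep mode supported somewhere on the path
    anomaly_factor: float     # deviation of the metrics from their history

metrics = PathSustainabilityMetrics(
    path_id="A-to-B",
    energy_mix={"renewable": 0.7, "fossil": 0.3},
    greenness_degree=0.82,
    sleep_mode_available=True,
    anomaly_factor=0.05,
)
print(metrics.greenness_degree)  # 0.82
```

A customer-side consumer could compare several such records and, as Luis says, take a decision on which path to consume.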

So, how do we position SHAPE? Moving now to your right, and mapped against the green reference framework, SHAPE would be positioned as the API able to collect information at the network domain level, taking information from inventories, but also from the controller that is controlling a specific domain. So essentially reconciling information that could be either measured or collected from databases, inventories, and so on.

And now looking to your left, SHAPE could be positioned at different levels. SHAPE inherits a potential recursiveness: we could have the SHAPE API at the level of the controller of a single domain, collecting information on a path between network elements of just that one domain. Or we can go up and consider SHAPE at the level of the network orchestrator, in a multi-domain environment, where the domains may be technological or administrative. Or, at the top of the value chain, we could have SHAPE as the API that serves service orchestration, helping the final customer decide which network domains to involve for the service, somewhat similar to what Artur was commenting on before.
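The recursiveness Luis describes can be sketched as a higher-level SHAPE endpoint combining the per-domain answers it receives from lower-level SHAPE endpoints into one end-to-end figure. This is only an illustration of the idea: the function names, the record shape, and the summing and energy-weighting rules are assumptions, not taken from the draft.

```python
# Hypothetical sketch: an orchestrator-level SHAPE aggregates per-domain
# answers for the segments of one end-to-end path. Summing energy and
# energy-weighting the greenness are assumed rules for illustration.

def aggregate_path_metrics(domain_metrics: list[dict]) -> dict:
    """Combine per-domain SHAPE-style answers into one end-to-end view."""
    total_energy = sum(m["energy_kwh"] for m in domain_metrics)
    # Weight each domain's greenness by its share of the total energy.
    greenness = sum(
        m["greenness"] * m["energy_kwh"] for m in domain_metrics
    ) / total_energy
    return {"energy_kwh": total_energy, "greenness": round(greenness, 3)}

end_to_end = aggregate_path_metrics([
    {"energy_kwh": 2.0, "greenness": 0.9},  # domain 1 (illustrative)
    {"energy_kwh": 6.0, "greenness": 0.5},  # domain 2 (illustrative)
])
print(end_to_end)  # {'energy_kwh': 8.0, 'greenness': 0.6}
```

At the service-orchestration level the same function would simply be fed the outputs of the orchestrator-level instances, which is what makes the API recursive.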

Going to the very last slide, the next steps. We would like to continue gathering feedback from the research group and incorporate it into the draft. For instance, it would be very interesting to have your feedback on the parameters that we are considering for this sustain API. All of those parameters would be optional, and a single domain would probably not be able to provide all of them, but it would be interesting to discuss on the mailing list what could be of relevance for the research group to consider. We would also like to explore incentive scenarios: once we have ways of exposing and acting on the service from the sustainability angle, explore how users could be incentivized to react or to request changes to the service. And for sure, we want to keep the link with the green PETRA API. As said, this SHAPE API is growing on top of it, so the idea would be to cover the sustainability dimension here while keeping the parameters associated with energy efficiency in green. And that's all from my side. Thank you. If there are any questions, I would be glad to answer.

Jen-Hsuan: Yeah, just a very quick question. This API could be very important as a supplement to the YANG models. My question is: do you think these APIs should be generic across all devices, or could they be specific to certain devices, so that a wider variety of metrics could be collected? Thank you.

Luis Contreras: I would say that the parameters could be rather generic. The way in which the different devices are able to report and support those parameters would probably be particular to each technology, but our intention is to make the parameters somehow generic.

Jen-Hsuan: Okay, thank you.

Michael Welzl: Thank you. I would encourage folks to have a look at the draft and please do provide feedback; it looks excellent. Thank you for the update since last time, and we are very interested in hearing about the exploration of the incentive scenarios as well. So please come back.

So with that, thank you so much to all the speakers and to all of you, whether it's morning, afternoon, evening, or the wee hours when you should be asleep. We appreciate your participation, and we do plan to meet in Vienna, so please join us there. We are also due for our one-year review, so if you have feedback for us, we would welcome it; in fact, we will probably send a more official request for feedback to the mailing list. In the meantime, thank you for joining us today.

Diego Lopez: Thank you.

Michael Welzl: Yeah, thanks from me as well. In particular to the on-site chairs, thank you very much for helping, and the note-takers. A lot of people helping out, so thank you all.

Eve Schooler: Yes, and Luis and Diego for being our champions in the room, and all of the speakers.

Luis Contreras: It's a pleasure. Thank you.

Michael Welzl: Okay, enjoy the coffee break. Goodbye, everyone.

Noa Zilberman: Thanks. Bye-bye.

Sebastian: Bye-bye.