
Session Date/Time: 16 Mar 2026 03:30

Edward Birrane: All right, everybody, welcome to IETF 125 DTN working group. Just a reminder that—well, a few things. One is, my apologies, my camera is currently not working, so you get to see me as just a large gray circle at the moment.

This is the first of two DTN working group sessions. This is session one, which will be an hour now, and then we'll have session two later in IETF, which will be an hour and a half. I would like to apologize—neither Rick nor I were able to travel for this IETF, so we are both running this meeting remotely, but we think things will go smoothly otherwise. And I did want to take a moment and just kick things off. Rick, do you want to go through the working group chair slides, or should I do that?

Rick Taylor: All right, I can certainly start those. A couple of things: Number one is this meeting is being recorded. This is, I think, the first day or so of IETF, so we'll go through these in a little more detail.

Please Note Well all of our typical IETF policies, in particular the policies related to guidelines for conduct and anti-harassment. And additionally, make sure that we follow all of BCP 54, which says that we would like to extend respect and courtesy to our colleagues at all times. We want to make sure that we have discussions related to the technologies and not to individuals, and we want to make sure that we're working on solutions that are really going to work for everyone as we go forward.

If you are in the room, please make sure that you are logged in, please use the on-site tool, and if you come to the mic, please make sure that you announce who you are. If you are not at the mic, please make sure that your audio is off so that we don't have interruptions as we go through the rest of the session.

The agenda has been posted, and we're going to talk about it in just a moment. I would like to ask if people would be willing to jump onto the shared notepad and keep notes as we go. We obviously need at least one person who can help us take notes, but the more the better, and we've had really good participation in note-taking over the past several IETFs. Please take a moment and consider jumping over to the notepad and helping us track our progress.

Otherwise, this is the introduction and Note Well for where we are in the working group. And then the topics for discussion that we have today are: Encapsulation of OpenFlow over DTN using the Bundle Protocol; Brian Sipos has a variety of documents that have been progressing through the working group, and he's going to give a report out on those; we're going to talk about an update to BTPU; and then an update on DTN reliability with remaining time for open mic.

Before we roll into all of that, I just wanted to ask: Are there any additions or agenda bashing for this particular agenda for Session 1?

Edward Birrane: I do think that there is one quick agenda item added before we go into the technical content, which is—I see Eric Vyncke is here. Eric is our responsible AD, but I know that we are getting a new responsible AD. Eric, did you want to take a moment and introduce our new AD?

Eric Vyncke: Yeah, it's Tommy—how—I can't see, I don't have a see-participants—ah, good, there he is. Yes. So, it's been my pleasure to try to be a responsible AD for the past two years. Thanks for bearing with [us], and on Wednesday, I'll be stepping down and Tommy Pauly is incoming AD responsible for DTN. So, we welcome Tommy, and we'll all be here to help explain DTN stuff and get you up to speed.

Tommy Pauly: Really appreciate it. Looking forward to supporting everyone and making sure that you all are unblocked and can publish what you need to.

Edward Birrane: Well, that sounds fantastic. Yes, thank you. And Tommy, it's nice to see you in person. My apologies that my camera isn't working; that actually might be the kindest thing I can do all day today is keep my camera off. So, with that, let us go forward into our agenda for the day. Let's start with Encapsulation of OpenFlow over DTNs using the Bundle Protocol. And do we have the presenters in the room? Oh, excellent. Thank you.

(Xiaojing Fan approaches the mic and begins the presentation. There is a brief audio issue initially.)

Rick Taylor: Meetecho, we've got an audio problem. Ah, there we go. Thank you.

Xiaojing Fan: My name is Xiaojing Fan, representing Beijing Jiaotong University. Today, I will be presenting our work on "Encapsulation of OpenFlow over Delay-Tolerant Networking using the Bundle Protocol," which is documented in our current draft.

Today's agenda covers the problem statement, architecture overview, encapsulation rules, endpoint addressing, and our experimental verification.

As we know, OpenFlow enables centralized control in software-defined networking by relying on controller-switch communication. DTN environments similarly require centralized policy management. Traditional OpenFlow control channels rely on TCP/IP and stable end-to-end paths. However, DTNs lack such paths, and TCP-based control channels become unreliable. So, this work describes the use of BP, which provides store-and-forward communication and supports naming, addressing, and transport across heterogeneous links, to carry OpenFlow control messages.

The core challenge we are addressing is that OpenFlow assumes stable end-to-end connectivity. DTN violates this assumption. Control messages in this environment must tolerate delay and disruption, intermittent links, and non-ideal delivery behavior.

We focused on three use cases:

  1. Intermittent connectivity, where control-switch links are only available sporadically.
  2. Long-delay paths, such as control traffic over satellite or deep space links.
  3. Multi-hop forwarding, where messages are relayed via intermediate DTN nodes.

Let's look at the architecture overview. Both controller and switch are deployed on DTN nodes. OpenFlow logic runs at the application layer. OpenFlow messages are treated as application data units (ADUs). These ADUs are encapsulated into bundles by the BP agent. Convergence layers then map these bundles to underlying links via CLAs supporting multiple transport protocols like LTP or TCP.

Moving on to encapsulation and delivery rules, starting with the encapsulation overlay. OpenFlow PDUs are carried as the BP payload. The encapsulation consists of the underlying carrier headers, the BP primary block and optional extension blocks, and the bundle payload containing the OpenFlow PDU data. BP may operate over different CLAs, and no special tunneling protocol is defined or required.

We enforce one-to-one message mapping. Each bundle carries one OpenFlow message, and the messages are carried in their native wire format and treated as opaque data. There is no message aggregation unless defined and agreed.
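The one-bundle-per-message rule just described can be sketched as follows. This is a minimal illustration only: the dict fields are hypothetical stand-ins for a real BP agent's bundle structure (e.g. as built through an ION-DTN API), not the draft's normative encoding.

```python
import struct

def encapsulate(openflow_msg: bytes, src_eid: str, dst_eid: str, lifetime_s: int):
    """Illustrative one-bundle-per-OpenFlow-message mapping. The returned dict
    stands in for a real BP agent's bundle structure; field names are
    simplified, not normative."""
    # OpenFlow common header: version (1B), type (1B), length (2B), xid (4B).
    version, msg_type, length, xid = struct.unpack("!BBHI", openflow_msg[:8])
    # Exactly one complete message per bundle; no aggregation.
    assert length == len(openflow_msg), "exactly one complete OpenFlow message"
    return {
        "source": src_eid,        # stable, unique EID of the OpenFlow entity
        "destination": dst_eid,
        "lifetime": lifetime_s,   # finite lifetime: expired messages must not be applied
        "payload": openflow_msg,  # native wire format, treated as opaque data
    }
```

The message is carried whole and opaque, so the BP layer never needs to parse OpenFlow beyond what a deployment chooses to validate.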

Bundle fragmentation may be used if supported by BP. It must not be used if "do not fragment" is indicated or an anonymous source is used. Fragments must be fully reassembled, and partial OpenFlow messages must not be exposed.

Do not assume interactive request-response timing. Because duplicate delivery and re-ordering may occur, OpenFlow processing must tolerate them. To handle this, receivers should maintain a bounded duplicate cache using source and timestamp. Since arrival order does not reflect send order, use OpenFlow transaction identifiers and sequence numbers. Finally, each bundle must have a finite lifetime, and expired OpenFlow messages must not be applied.
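The receiver-side rules just listed (a bounded duplicate cache keyed on source and timestamp, plus a finite bundle lifetime) could be sketched as follows; the class and field names are illustrative, not taken from the draft.

```python
import time
from collections import OrderedDict

class DuplicateCache:
    """Bounded duplicate-suppression cache keyed by (source EID, BP creation
    timestamp), as the presentation suggests. Names are illustrative."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self._seen = OrderedDict()  # (source_eid, creation_ts) -> arrival time

    def accept(self, source_eid, creation_ts, lifetime_s, now=None):
        """Return True if the message should be processed, False if it is a
        duplicate or has exceeded its bundle lifetime."""
        now = time.time() if now is None else now
        # Expired OpenFlow messages must not be applied.
        if now > creation_ts + lifetime_s:
            return False
        key = (source_eid, creation_ts)
        if key in self._seen:
            return False  # duplicate delivery: drop silently
        self._seen[key] = now
        # Evict oldest entries so the cache stays bounded.
        while len(self._seen) > self.max_entries:
            self._seen.popitem(last=False)
        return True
```

Ordering, by contrast, is handled above the cache, using OpenFlow transaction identifiers rather than arrival order.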

To achieve routing, OpenFlow controllers and switches are identified by stable, unique BP EIDs. Payload is delivered to the local OpenFlow entity based on EID. The communication is bidirectional, including control instructions and telemetry.

We experimentally verified the architecture by building a prototype based on ION-DTN. We successfully carried OpenFlow messages over BP in satellite-to-ground scenarios, handling long delay, intermittent connectivity, and store-and-forward message delivery. This verified the feasibility of OpenFlow signaling, confirming encapsulation and delivery semantics. There is no modification required to the OpenFlow protocol.

Finally, regarding security, the draft does not modify OpenFlow or BP security properties. Because DTN store-and-forward operation may increase data exposure, we strongly recommend that deployments use existing BP security mechanisms.

Thank you. I would be very happy to take any questions or comments at this time.

Edward Birrane: I would just jump in to say thank you very much for that. And my apologies for not saying earlier that, since neither of us is in the room, I will need to flip the slides, because there is no laptop to plug the slide advancer into. But I did not otherwise have a question. Rick, I see you in the queue.

Rick Taylor: Thank you. Yeah, so first of all, thank you very much for this presentation. I read the draft with interest. Can I commend you on what I think is the correct way to take a standard Layer 7 application protocol—can you hear me? Quick audio check there. 1, 2, 3.

Edward Birrane: Yes, I can hear you online.

Rick Taylor: Okay, can you hear me in the room? Yes, I see a thumbs up. Great. Yeah, so thank you very much for doing this. I think you've done the right thing, which is understanding the OpenFlow PDUs and mapping them directly as an ADU within Bundle Protocol. That is absolutely the right way to do it. I know other people, when presented with the problem of how to port a traditional IP protocol onto BP, have done some odd things with capturing IP packets and things like that. So thank you; I think this is really the right approach, particularly when you're doing OpenFlow. And good to see that you have some experimental results. Thanks.

Edward Birrane: All right, thank you very much. Our next presentation or discussion is going to be with Brian Sipos. Brian, would you like me to project and flip slides, or would you like to share your screen instead?

Brian Sipos: I will try to get the slides. I may have to be granted permission.

Edward Birrane: Sure.

Brian Sipos: Thank you. And on the topic of the previous presentation, I also agree with Rick about that draft being a good representation, a good starting point of defining all those details that an application should define.

Now, I'm going to cover a few different topics, starting with the most mature and going down to the least mature with some discussions of last changes and some discussion of next steps on each of these separately.

So the first one is going to be the BPSec COSE Context draft, the next one is EID Patterns, then the non-experimental UDPCL Version 2, and SAND is the last working group draft, and then I have a little presentation about individual drafts.

So on the COSE Context (draft-ietf-dtn-bpsec-cose), this did complete working group last call in December. It elicited some good feedback and comments, and this also went through—I'll skip ahead to the last item—that it did go through interoperability testing with two separate implementations. So that gave some good evidence of maturity, but also it gave some feedback to the last draft revision.

The changes have all been either editorial or clarifying the requirements to agree with the intent, or with what the actual implementation is able to achieve. The big set of changes was really in the COSE profiling, which added an HKDF algorithm to the interoperability minimum table. And I'll speak to that—interoperability here means mandatory to understand, not mandatory to use, but it's really focused on what is the minimum that you would expect to need to operate these different types of key materials: symmetric keys and asymmetric keys.

The PKIX profile was updated to be consistent with the existing CA/Browser Forum. This is not something that's a hard requirement, but it is helpful to be able to say that we're in agreement with what's used in the Web PKI and the very, very large existing PKIX ecosystem that we can operate within.

And the last set of changes were updating examples to make sure that the example full BP PDUs had conforming CRC values and key strengths that were consistent with the CNSA Version 1 minimum, which is being kind of treated as a reasonable minimum security strength as a starting point for the COSE Context. The draft now leaves out mention of future capabilities enabled by COSE, especially post-quantum stuff, but definitely updates or other protocols can raise the interoperability minimum floor to include these other families of algorithms.

And then this is just a general comment on the profiling and use of COSE, is that by integrating the COSE message form into BPSec, this opens the door for individual users or networks or combinations of nodes to use whatever algorithms suit their need. And it just needs to be worked out ahead of time offline what those are, but the interoperability minimum that's presented in the document is meant to be a baseline of what to expect if you don't know any better.

And I will mention that at this point—there's not a specific item on this slide—I did make a call on the mailing list that this document I believe is ready to leave the working group and enter IESG review. So I can reiterate that on the mailing list, but I'm mentioning it here as well. So this is directed to our new AD, that some of these documents have some feedback requested and some have some actions requested.

Rick Taylor: So just jumping in, chair hat on there, Brian. Yeah, noted. We felt it was useful for you to do this presentation at this meeting, and then we'll probably press the button and get it through to IESG.

Brian Sipos: Thank you. And the good news is that this interoperability testing that was successfully completed did what interoperability testing should do: it showed the spots where the letter of the law was a bit weak, and so this last draft really strengthened things up so that the text agrees with what the behavior was and should be.

Edward Birrane: Ed, chair hat on. I believe the status of this document is "Working Group Consensus Waiting for Write-Up." So I would like to just take a moment and ask if there is anyone who is also familiar with this work that would agree to be the shepherd for this document. Please let us know, either in the chat or on the mailing list, preferably on the mailing list. Otherwise, we will start asking folks to find someone who would be willing to be a shepherd for the document.

Brian Sipos: Thanks. So I'll move on to the EID Patterns draft (draft-ietf-dtn-eid-pattern), which is slightly less mature than the BPSec COSE Context. So this document is meant to provide a framework for patterns on current and future EID schemes and provide an extensible IANA registry of patterns to go along with the existing registry of schemes. And this is focused on text forms and binary forms. And it already has multiple implementations currently in several different languages and different organizations as proof of existence.

There are some variations that I'm not going to dig into on this presentation in detail, but it covers details on IPN scheme-specific patterns, but also it enables doing any scheme-specific part patterns for the existing DTN scheme and any other future including private-use schemes. So it's meant to be a starting point, but a starting point that is based in existing use patterns.

So the last changes to this document were to add more explicit definitions for some of the terminology, to nail down and tighten up questions like "Are you allowed to match nothing?" (the answer is yes), and to add requirements about normalization and elision. All this means is that, especially in the text form, there are many different possible ways to put things into a piece of text, and the document is now clearer about what the normal form of any of these things is.

There's an improved binary form for the IPN range and a picture to show what this thing means. This is really a delta-compression-based encoding. And there are now a couple of extra subsections for security considerations that just relate to where this thing might be used and how it needs to be handled.
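The draft's actual CBOR layout for the IPN range form is not reproduced here, but the delta-compression idea it is described as being based on can be sketched hypothetically: sorted intervals are stored as small differences rather than large absolute numbers.

```python
def delta_encode(intervals):
    """Encode sorted, non-overlapping [lo, hi) intervals as deltas from the
    previous boundary, so large absolute node/service numbers become small
    integers. Illustrative of the idea only; the draft's real wire format
    may differ."""
    out, prev = [], 0
    for lo, hi in intervals:
        out.append(lo - prev)   # gap since the end of the previous interval
        out.append(hi - lo)     # interval width
        prev = hi
    return out

def delta_decode(deltas):
    """Inverse of delta_encode: rebuild the absolute intervals."""
    intervals, prev = [], 0
    for i in range(0, len(deltas), 2):
        lo = prev + deltas[i]
        hi = lo + deltas[i + 1]
        intervals.append((lo, hi))
        prev = hi
    return intervals
```

For example, the intervals `[(10, 20), (100, 105)]` encode to the small integers `[10, 10, 80, 5]`.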

There's one more open issue on this draft, and that's right now there's not an interoperability minimum support for number of items in one pattern or number of intervals in each IPN element range. I did throw a question on the mailing list without any feedback so far, but my—I do have some text drafted and could discuss this separately on the mailing list, but it would be something on the order of 10 as a minimum and a "should" for something on the order of 100 just as a round number that nobody should be running into as a limit. And—go ahead, Rick.

Rick Taylor: A very quick point on this, particularly on the interoperability. So in our Session 2, I've got a short presentation on using a subset of EID patterns to actually exchange routing information between different implementations, which I think might handle your interoperability request because that's actually a case where you need EID patterns to be interoperable. The other case being X.509 certificates which could carry EID patterns, but that seems quite a hard interoperability target. But most of the EID patterns otherwise are very much local machine or local implementation specific. So maybe that will help. And as a note to the working group, I'm trying to build on top of this already. I really like this work personally. So yeah, cool. Thank you. I'll shut up.

Brian Sipos: Thanks. Yeah, and that is what interoperability means in this context: It means if I write a pattern that has six items, how reasonable is it for me to expect that a system is going to be able to handle that pattern with six items? And if the minimum is 10, that means I am guaranteed that any conforming implementation will handle my pattern properly, and if I start to increase that number, it'll get into that range where maybe some implementations will and some may not. So it's something we can discuss on the mailing list. I do have a draft—I have a proposed update to a draft to address this. And I would like to see a Last Call either as written with this known issue or after this issue is addressed. If there are any opinions at all, I'd like to hear them, and I can post the link in the mailing list to the proposed changes just to get the exact words in front of the group in the list.

Edward Birrane: Ed with chair hat on. If we do have some changes and those changes are not difficult to make in a timely period, I would ask that we make known changes first because those changes may cause others doing Last Call review to think of additional things. So it's always better in my opinion to go in with known changes applied.

Brian Sipos: So I will put the changes in with, like, the minimum number of 10 there. Although 10 is not a strong justification, it is a round number that is pretty small and easy to implement, hopefully.

On the next topic of the UDPCL (draft-ietf-dtn-udpcl), there's not a real big set of changes in the document itself. There has been a trial implementation of all the features that are defined here, although there's—one of the last changes is a better, a deeper explanation of "optional to use." Again, the difference between mandatory to implement, mandatory to know what the code point means versus optional to use any of them, that's now better explained. And one of the things that is part of this optional to use is congestion control in-band, that I'll discuss in the next slide in a minute.

Another one is supporting the existing definition of packetization layer path MTU discovery. This is not inventing an algorithm; this is providing capability to exercise that algorithm in this packetization layer. And then there's a limited form of some other path characterization that the document discusses in detail, but it's really just things that are operationally useful on networks that you don't necessarily know all the details of or that can change while they're operating.

More detailed definitions were added, especially "IP reachable"—what does that mean and how does that relate to some of this characterization stuff. It updates—it does update the encoding of this CDDL rule to be able to give better compressibility. This is not a change in function, just a change of using a compressed encoding instead of an uncompressed one.

Again, there's more specific requirements under a new Section 3.8 discussing congestion notification, and there's now an Appendix B added to just explain why or how you would choose a congestion control algorithm and what the implications are.

There is a remaining TBD or two of them in this document related to IP multicast assignments for this purpose. There was a previous request for this; I'll mention it here and I can reiterate this on the mailing list and our new AD. This hopefully is not a big lift, but it is something that is remaining in this document to be assigned.

Edward Birrane: I do see Eric, your hand is up.

Eric Klein: Thank you. Yeah, I was going to ask how do you envision the multicast working with DTLS?

Brian Sipos: Not at all. And that is one of the existing considerations that's hopefully described here, is that we're not trying to tackle that specific behavior. The vision here for multicast is purely for a kind of bare minimum zero configuration sort of messaging with the known security properties that that entails.

Eric Klein: Right. Thanks.

Brian Sipos: And if there's feedback on the exact wording in the document, that would be great.

Rick Taylor: I'm just going to jump in at this point with chair hat on. The UDPCL Version 2 is a really important piece of work because there are a number of UDP-based CLs that have come across from the IRTF that are experimental, that do not tackle things like congestion control, are not good citizens on the public internet. This is a—I don't want to diminish Brian's work on this—this is in some ways a housekeeping exercise to pull together everything that needs to be done in order to get a UDPCL that actually works and is fit for purpose. So I really commend to the working group to read this document. We need to get this done. It would be really useful to get this into the IESG queue and properly standardized promptly before people start building more and more implementations on top of the previous generation of UDPCLs.

Brian Sipos: And that is a good and important point: that these things that are extensions here are, as Rick said, they are aspects of being a good citizen on a larger network, and that is why they are all optional to use and hopefully the document explains when you would need to or want to make use of them.

I have one other piece of information on this UDPCL, which is I did run a little congestion control experiment as a kind of proof of concept using very off-the-shelf, non-complicated test setup, but I did artificially limit a link using ECN markings and—didn't want to or need to or bother to try to simulate additional path effects. But the result is, as you can see in this somewhat small graph, is it did the right thing. It reached congestion control at its assigned limit. And one thing that's not in this plot is this resulted in zero packet drops, which is great because this is an unacknowledged protocol, so every packet drop is also a bundle failure to forward. But this is demonstration of "You can do congestion control in a friendly way." I chose a CCA that had a very straightforward implementation, but there's no restriction here of any operator choosing any CCA to use for any situation that is friendly in their environment. And the one I chose was one of these aggressive data center type ones, so this is meant to operate into very high rates. And I would love to see some other experiments in this kind of realm.

My time is running out, so I'll give a real brief discussion of the Secure Advertisement and Neighborhood Discovery (SAND) draft (draft-ietf-dtn-bp-sand), which is still being worked on, although there have not been recent changes; it has settled out pretty well. Here I can address Eric's question about the point of the UDPCL multicast: when used with this kind of application, it enables zero-configuration discovery. It enables a blind way for an advertisement to take place and a zero-configuration way for it to be received. So it's that kind of bootstrapping of the completely unknown situation. The capabilities of advertising are shown here, and I'm not going to get into details because I do want to cover, very briefly, one new topic: the Manifest Block (draft-sipos-dtn-manifest-block).

I'll skip past this a bit. The Manifest Block is a new thing. I did give a real brief explanation on the mailing list, but I do want to mention that it is a thing that now has an initial definition and the later session of this working group is going to present a security reason code that would make use of this manifest block. And so this is just a really simple explanation that the thing is here. It allows you to record at a node the blocks that existed at the time the manifest was created, and that means maybe blocks that were removed later on down the path or by that very same node. So I think this is a pretty straightforward starting point, but again, I'd like to receive any feedback and in relation to other drafts. So thanks for the time.
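A minimal sketch of the record-and-compare idea behind the Manifest Block, with field names invented for illustration (the draft's actual encoding is not reproduced here):

```python
def make_manifest(blocks):
    """Record the (block number, block type) pairs present in a bundle at the
    time the manifest is created. Field names are illustrative; the draft's
    real CBOR structure may differ."""
    return sorted((b["number"], b["type"]) for b in blocks)

def missing_blocks(manifest, current_blocks):
    """Return manifest entries whose blocks are no longer present, e.g.
    blocks removed later on the path (or by the recording node itself)."""
    now = {(b["number"], b["type"]) for b in current_blocks}
    return [entry for entry in manifest if entry not in now]
```

A downstream node comparing the manifest against the bundle it received can then tell which blocks disappeared in transit, which is the hook the security reason code in the later session would build on.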

Edward Birrane: Ed with chair hat on. I really support a manifest block structure. We've talked about it for years. I think we're getting to the point where we know how we would like it to look, why it will be beneficial for security. So if we can get more discussion about this on the mailing list as well, it would be appreciated. But a manifest block in my opinion is a great idea. So thank you so much.

Brian Sipos: Yep, and this is individual draft, this is just a starting point. So feedback is very welcome.

Edward Birrane: All right. If there are no other questions, then our next presenter will be Rick for BTPU updates. Rick, would you like me to flip slides or would you like to share the screen?

Rick Taylor: Can you bring up the slides because I'm struggling with Meetecho and early morning starts. And while you find that, I will apologize for the cage in the background—that is actually—I see some people in the queue. Ah, yes, queue.

Edward Birrane: Oh, I did not see that. Please, yes, let's go back. Fonyuan.

Fonyuan: Yeah, this is Fonyuan from China Mobile. One question about that presentation: as you know, DTN has quite long latency between the different hops. So the question is, how can we tell whether a traffic drop is due to congestion or a non-congestion cause? Thank you.

Brian Sipos: Sure. I can answer that very simply by saying the UDPCL is completely unacknowledged and so there is no intrinsic feedback mechanism to answer the question why did something get dropped. We simply don't address that. But it is an important topic, and please let me know on the mailing list if there's any language that makes sense to better explain that. Thank you.

Edward Birrane: We have Tony as well. Tony, go ahead.

Tony: Hi, I'm Tony from NICT in Japan and a member of IPNSIG. So I had a question regarding slide 6, where you talked about UDPCL V2 using the Prague CCA. My question is that, since Prague utilizes high-frequency ECN feedback to maintain stable throughput, how can you achieve predictable performance in high-RTT environments like space? And also, is there any mathematically defined stability boundary for RTT where the congestion control loop diverges? That's my question.

Brian Sipos: I think that's a really good question and an important topic. The use of a CCA in this case was really meant to show that the CL would operate in the other extreme environment, the data-center type situation. So that was the reason for using Prague: to say, here is a CCA that operates in a high-throughput data center environment—can we achieve these kinds of operating environments in a way that doesn't overwhelm other flows? So that was not trying to address the long-RTT situation, and you're absolutely right that that is definitely a consideration; at some point you would not want to use congestion control.

Edward Birrane: Ed with chair hat off. Tony, I'm sorry that I did not see you as Tony. The other point, following on from Brian's, is that when BP is used, we expect it to run end-to-end, including over segments of the network that don't have the same RTT issues we would expect on the longer-delay segments. And so I would imagine that UDPCL selection, or any CL selection, is going to be based on the characteristics of the segment carrying the bundle, and that there will be cases where certain CLAs are simply not going to be used for those reasons. So I think it's important to note what the characteristics are, but I don't think we have to try and make things like the CLAs work in all environments.

Rick Taylor: Thanks, Ed. Which brings me nicely onto talking about BTPU (draft-ietf-dtn-btpu). So again, another convergence layer protocol that's designed for specific segments. Let me see if I can make the slide clicker thing work through. Ah, there we go, I have control. Excellent.

Let me click. There we go. So I'll just start with a very quick recap and then talk through the changes that have been made since the previous versions of the draft. So again, convergence layer protocol designed for unidirectional unreliable frame-based link layers. Some examples of that are the CCSDS suite of USLP, TM, AOS, and all who sail in her; also DVB-S2X, 5G PDUs, and Ethernet. So these are all lossy, frame-based link layer protocols which can handle a single logical segment.

Unlike some of the other previous protocols that have come around, this has segmentation of bundles across link layer PDUs. It looks a lot like TCPCL and UDPCL in that way, but it also allows the interleaving of transfers so that you can avoid head-of-line blocking. It has message repetition, allowing loss protection because—and the clue is in the name—this is a unidirectional service. So there is no return path for acknowledgment, there is no send and ack, there is no round trip, it is simply unidirectional, blast it and assume that the other end gets it and repeat if you want to increase the likelihood of that happening.

Fundamental construction of the whole thing is TLV, so it's designed for line-rate hardware, software, FPGA—they kind of like TLVs. And the other important fact is, unlike UDPCL and TCPCL and the QUIC CLs that are starting to emerge for terrestrial delivery of bundles, this does not require an IP stack. So you can go straight down to whatever frame layer your optical inter-satellite links are using, which may be Ethernet-based, or you can package straight onto your legacy DVB, or use your deep space CCSDS link layer without requiring a full IP stack—particularly useful for future spacecraft architectures, etc.

So I've kind of summarized this. The other main advantage is that I can see how it would replace things like LTP in some cases, because LTP has the red and green flavors—for those who know it—and a state machine which requires a bidirectional link layer, which in turn requires EPP for those who know the CCSDS specifications, which then sits on top of the framing layer. If you're using BTPU, you can set up two unidirectional BTPU links that sit straight on top of AOS and SLE, which will allow you to massively reduce your stack and massively reduce your complexity, therefore improving SWaP (size, weight, and power), and you should be able to get somewhat better throughput and some hardware acceleration. These are statements that are open to challenge, and I welcome people challenging those assumptions.

So, things that have changed: in the last draft before this one, I made a change to descending segment indices, because it seemed like a nice feature to be able to know how many segments were going to arrive so you could pre-allocate buffers. This had a problem: the sender could not be as flexible with its sending rate, because it had to decide upfront how many segments to send rather than being more rate-controlled. It was pointed out to me that this was a dumb idea, so I switched it back round to the much more traditional approach: start at segment index zero, count up, and have an explicit "this was the last segment" marker. It's much more rational; I don't know what I was doing before and I can only apologize. That's the main change here. I'm going to blast through fast; I don't have a lot of time.

Because of that change, I've had to rename some of the message types. They're no longer called "Start" and "Segment"; they're now called "Segment" and "End," because you now mark the final segment rather than the first one. To answer Jorge's question—and this is a CCSDS-specific thing—if you use AOS without EPP, how can you tell the payload is a bundle? You just have to use the virtual channel identifiers. Sorry, I'm going to stop based on that question.
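The ascending-index scheme with an explicit End marker can be sketched as follows (message layout and names are illustrative, not the draft's wire format):

```python
def segment_bundle(bundle: bytes, mtu: int):
    # Ascending segment indices from zero; the final piece is sent as an
    # "End" message instead of specially marking the first one.
    chunks = [bundle[i:i + mtu] for i in range(0, len(bundle), mtu)] or [b""]
    last = len(chunks) - 1
    return [("End" if i == last else "Segment", i, c)
            for i, c in enumerate(chunks)]

def reassemble(messages):
    # Repetition-tolerant: duplicate messages simply overwrite the same
    # index with identical data, so repeats are harmless.
    data, total = {}, None
    for kind, idx, chunk in messages:
        data[idx] = chunk
        if kind == "End":
            total = idx + 1
    if total is None or len(data) < total:
        return None  # transfer still incomplete
    return b"".join(data[i] for i in range(total))
```

Note how the receiver only learns the segment count when the End message arrives, which is exactly the flexibility the sender gains over the old descending-index scheme.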

There are some assumptions written at the top of the document. BTPU is a generic way to slice up a bundle and carry it over a link layer. How it maps to a particular virtual channel source-and-destination pairing has to be determined by the exact mapping of BTPU to the underlying link layer. In Session 2, Eric Klein will be presenting how to run BTPU over Ethernet, and that will be a good demonstration of a document which says, "Take BTPU; here are the missing pieces to make it make sense over Ethernet." So, for example, if you want to run BTPU directly over USLP or AOS, there are some considerations, such as virtual channel identifiers or protocol types, that you need in the underlying link layer to demux effectively.

Other changes—and I know I'm going fast, I can only apologize: I have renamed "metadata items" to "hint items" because the name "metadata" was confusing. Basically, it is a side channel where a sender can add information it thinks is useful for the receiver. The good example worked up at the moment is "the total size of the bundle is X." That can be included in a number of the messages throughout the transmission so that the receiver can pre-allocate buffers or do something smart.

A second extension: if you are running two BTPU sessions, one from source to destination and a paired one coming back the other way, you can include a hint item—some sort of cookie—to say, "This unidirectional session is actually the return path for that unidirectional session." This is, of course, entirely sender- and receiver-specific, but a cookie could be exchanged, allowing a logical link between the two fundamentally unidirectional links. That's quite useful for people who want to do transfer cancel, etc., but I'll go into that in more detail at the next meeting, or take it to the mailing list. What else have we got? Yes, hints can now be chained, very much like IP headers: the ability to say "I have lots of hints," how many hints I have, and where the actual payload of the message starts. This is pretty standard technology; I haven't invented anything new. You mark hints, and the last hint says "no hint follows." Pretty simple stuff.

I've cleaned up the RFC keywords, because there were lots of "may"s and "must"s that should or shouldn't have been capitalized. I've also added a little extra text to explain the purpose of the transfer window. The point of the transfer window is that a receiver can just start listening to a stream and understand whether it is hearing segments that make no sense to it—because it was asleep when it wasn't planned to be, or has missed a lot of transmission—and understand how many of the transfers in progress are valid versus how many are repetitions, overhearing, or something outside the scope of the session it is trying to receive.

So this is a way of passively synchronizing a sender and a receiver without having a path from the receiver back to the sender to say "I have acknowledged." It's a lot like the processes in Raptor codes, Tornado codes, or fountain codes, so that a receive-only device can still receive in a reasonably reliable manner. It has a rolling transfer count with an applicable transfer window, which captures the ability to say "that is too old," "the window has moved forward," or "that is garbage." Please check the details in the document; it's mildly complicated, but there for a very good reason.
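The rolling-window acceptance test is essentially serial-number arithmetic over the transfer counter. A sketch of the idea, using the 2^12 rollover and default window of 16 mentioned below (the draft states the exact rule):

```python
MODULUS = 1 << 12  # the transfer counter rolls over at 2**12

def in_window(latest: int, candidate: int, window: int = 16) -> bool:
    # A candidate transfer number is acceptable if it lies no more than
    # `window` counts behind the newest one seen, modulo the rollover.
    # Anything older, or "from the future," is treated as garbage.
    return (latest - candidate) % MODULUS < window
```

This is what lets a receiver that wakes up mid-stream classify transfers as current, stale, or nonsensical without ever talking back to the sender.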

IANA registry updates, because I've fiddled with some message codes. Sorted out some minor typos. And some open questions for the working group: I have picked an arbitrary number for the recommended transfer window size because, like all good IETF specifications, you should recommend a default value for any tunable parameter. I picked 16 because it sounds nice; I have no empirical evidence on whether it is a good value or a bad one. I'm looking forward to somebody implementing this, apart from my scruffy little vibe-coded stuff, and working out whether 16 is a good number or whether we should recommend a much bigger or much smaller one. You've got a maximum of 2^12 due to the rollover mechanism. I'm also looking for more feedback on the hints mechanism; I've received some feedback on GitHub, which was really nice, and I'm looking for more people to weigh in on that.

Because I've updated BTPU, there is a parallel companion document called BTPU FEC (draft-ietf-dtn-btpu-fec), which allows you to do forward error correction on top of BTPU messages. It leverages the FEC framework, RFC 6363, so I am not inventing FEC; I'm literally porting an existing IETF FEC framework to sit on top of BTPU. Having made the changes to BTPU, I have done an editorial pass to correct the statements in the FEC document, and that's resulted in another up-ref. IANA, grammar, and updated security considerations to make it absolutely clear that there is no security at the link layer—or if there is, it is out of scope of this document. You should be using BPSec if you care about security; that's properly drawn out. I've also made a few semantic changes: I had overloaded words that meant one thing in the FEC framework document and another in the BTPU document, so I've aligned those to make it more readable for people familiar with one or the other. And that's it.

So, I'm interested in working group feedback on the open issues. I'm looking for more people to read it and I'm looking for more people to implement. This is a working group document. I don't think it's ready for Last Call; I would like more people to implement and bash it harder, but I think the specification is pretty stable. But until we implement it, we don't really know whether it's correct. And that's all I have to say. I will take any questions. I'm just watching the chat, actually. Eric Klein, do you want to take that question to the mic?

Eric Klein: Sure. Yeah, my guess is that whatever transfer window size is selected by the document, there will always be a better value on a per-deployment and per-link-adaptation basis. So, it's possible to just roll a D20 and pick a value.

Rick Taylor: Yeah, and I rolled 16. And it is really that—it may be the conclusion of the working group that 16 seems a sensible number and every single deployment goes out and tunes it properly.

Edward Birrane: In the interest of time, let's go to our last topic. We do have about six minutes remaining, and the last topic is an important one but too important to cover completely in five minutes, so I'm going to give a very brief overview here and then start taking some of the questions to the mailing list.

As some of you may remember from the last IETF, we started presenting discussion around the distinction between custody transfer and reliability. So, without going deeply into it, for those who are tracking Bundle Protocol from BPv6, there was a formal method for custodial transfer and that was removed from BPv7 with the idea that it would be subsumed with additional reliability discussions. And as we started working through that, we came to understand that there were probably multiple ways of discussing custody and custodial behaviors against reliability objectives.

The DTN reliability informational draft talks about this as a function of multiple layers working together. We talked before about things like congestion control in the UDP convergence layer: if it's available you can use it; if it's not, you shouldn't—but what reliability can we put at the bundle layer? In the previous version of the document—and a -01 is forthcoming—we hypothesized four classes of reliability, from no reliability to redundant custodians. We've expanded that thinking based on discussion since the last IETF. There is Class 1, which says we don't need any reliability; it's best effort—if you get it, you get it. Class 2, the default behavior of RFC 9171, is to store a bundle until it is ready to be transmitted, which we have termed "Store Until Forward."

(Pauses momentarily)

Edward Birrane: On the chat—Adam, are you able to see my slides? Okay, others can.

Edward Birrane: Class 3 is a guaranteed custodian, meaning somewhere along a path there will be at least one custodian made available for your traffic; possibly that is a path-selection criterion. And Class 4 is that you may have multiple paths, with each path having independent custodians. The idea behind some of this reasoning is that in Class 3—shown on the left with a sample network with a user source and destination across the network—individual network segments or network providers may choose to have one, two, or four custodians as part of the services they provide. The end user—the source and the destination—may not always know how custodial networks are deployed through the network, but they can say things like, "I want Class 3 service; I'm going to hold onto my data until someone tells me that custody has been taken."
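The four classes can be sketched as a toy source-side release policy (names other than "Store Until Forward" are paraphrased from the discussion, not drawn from the draft):

```python
from enum import IntEnum

class ReliabilityClass(IntEnum):
    BEST_EFFORT = 1             # no reliability: if you get it, you get it
    STORE_UNTIL_FORWARD = 2     # RFC 9171 default: store until transmitted
    GUARANTEED_CUSTODIAN = 3    # at least one custodian along the path
    INDEPENDENT_CUSTODIANS = 4  # multiple paths, each with its own custodian

def source_may_release(cls: ReliabilityClass, custody_signals: int) -> bool:
    # When may the source stop holding its own copy? A toy policy matching
    # the examples above ("until I get two independent custodians").
    if cls == ReliabilityClass.GUARANTEED_CUSTODIAN:
        return custody_signals >= 1
    if cls == ReliabilityClass.INDEPENDENT_CUSTODIANS:
        return custody_signals >= 2
    return True  # Classes 1 and 2 impose no custody condition on the source
```

The point of the sketch is only that the source's retention behavior is driven by the class requested, not by knowledge of where in the network the custodians actually sit.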

Or in Class 4, "I'm going to hold onto my data until I get two independent custodians for my data." But we then ask: what is custody signaling actually accomplishing? This is the heart of the discussion. Are we signaling custody, meaning that another bundle agent is going to store the bundle for us and therefore we can remove it from our own storage? Or are we signaling reliability, meaning that a network or a network segment is agreeing to carry our data?

If we say that we're only signaling custody, that implies a node-to-node relationship which may break information hiding across service providers. So if a user were talking to, say, a transit network, and that network were to put a custodian at the edge node, the user might expect that since the edge node gave them the custody signal, that same edge node would be doing all of the storage and retransmission—but it is probably unlikely that a single edge node would do all storage for all traffic coming through the network. It is more likely that, at an edge node, the network itself would signal a reliability capture and not otherwise lock that reliability down to a particular node in the interior of the network.

And so we are trying in this draft to address this difference between expressing reliability requirements separate from node-to-node behaviors that may cross network boundaries. So there were a couple of brief questions here, which we will put to the mailing list:

  1. Do we agree that "Store Until Forward" is a better distinction for RFC 9171's default non-custodial behavior?
  2. Is there a special case of a destination itself accepting custody if it can't attach to a downstream application immediately?
  3. And the larger question: Should we be talking about custody transfer and custodians, or should we be talking about the idea that a custody signal says some node within the network will serve as a custodian, in which case we start signaling reliability as opposed to node-to-node custody signaling?

Which is a lot to get into in the time we have, because we are at the end of Session 1—but please, if you are interested in this, participate in the conversation when these questions come up on the mailing list. Thank you.

Rick Taylor: Thank you very much, Ed. I believe we are at the very close, if not the close, of Session 1. I see no questions in the queue. So, thank you very much, everybody. Thank you to the people dialing in from random time zones, and thank you to everyone in the room and to the presenters. We look forward to seeing you tomorrow at roughly the same time, I believe, for a longer session. Please give us feedback on the list between now and the end of IETF week about whether the two-session format is working for you. Otherwise, I will say thank you very much and thank you for your attendance. Session over.

Edward Birrane: Thank you, everybody!

Rick Taylor: Cheers all.

(Session ends at 61:29. The remaining duration of the video is silence or administrative wind-down.)


Session Date/Time: 17 Mar 2026 06:00

Rick Taylor: That's it. That's it, we start. So, right. Good afternoon, evening, morning, wherever you may be in the world. Welcome to IETF 125 DTN working group session two—our second session, for the first time. I'm really hoping you're familiar with the Note Well by now. If you're not, please try to find a copy and read it through carefully. Fundamentally, it covers IPR disclosures, best behavior, and the fact that all public statements, recordings, and video are freely donated to the public domain via the IETF. Please check the privacy statement and make sure you are happy with what's happening. If you are not, there is an ombuds team who can help you with any complaints about conduct.

So, to repeat what has been said in previous sessions: conduct guidelines. Please be courteous. Please understand that people have different views, and that people may be working with English as not their primary language. So, for native English speakers, please try to control the pace of your discussion so that others can internally translate and digest. And there is an ongoing team within the IETF to handle any complaints or measures, who we would suggest are the best first stop.

Meeting tips. I'm assuming you have managed to get onto Meetecho. For people in the room, it is important—in order for us to gauge how many people and how many chairs we need in each session going forward—that you have scanned the QR code and are connected through the onsite tool, even if you are sitting in the room and enjoying all the audio-visuals. That is mostly for our own internal accounting, so that we can understand how large a session and how big a room we need next time. For those of you remote, please try to manage your audio so that we don't get a lot of echo, a lot of side chat, etc. And to save on bandwidth, if you could kill your video when it's not actively in use, that would be really helpful.
Headsets are again strongly recommended, but in these post-COVID days, I think people are pretty good at this sort of stuff, I hope. So, these are the links to the datatracker page, how to get into Meetecho, and how to report any issues. Adam, you are currently driving the shared notepad, but I strongly recommend that people also check what Adam is typing and add adjustments and corrections, because things can happen quite quickly—particularly around the correct spelling of names. People obviously say their own name nice and quickly because they say it a lot, but for others trying to pick it up and type it into Meetecho, corrections there are always helpful. Mailing lists: the DTN mailing list, and if you wish to reach the chairs, dtn-chairs@ietf.org. Please subscribe to those lists; useful discussion happens on them.

So, our agenda for session two. We have quite a packed agenda, so we're going to try to keep it reasonably tight. I didn't set a timer for myself, so I have no idea whether I'm already overrunning—not yet. We've got 10 minutes on CoAP, we've got some BPSec information, and then I am presenting for about half an hour on a number of things; I will try to keep that really quick. Then Eric is going to cover a bit of BTPU, and then we have what I think is quite important: a 20-minute discussion on a proposed bis for RFC 4838, which I think is overdue, and Ed will talk about that at some length. Then we've got a quarter of an hour for open mic, or to crash into as we overrun the agenda. So, would anyone like to bash this agenda, make a suggestion, make any changes? Now is your chance. Otherwise, we will switch across. Go ahead, Ed.

Ed Birrane: Sorry, just one last thing. I've asked Scott Burleigh if he would take the 4838bis discussion, because he is the one who's the primary editor on the bis draft.

Rick Taylor: Okay, perfect. Perfect. It's even better. It's good that Scott knows he's doing it, rather than just being told he's doing it. Excellent. So I think we'll move swiftly on to CoAP, I believe. Adam, can you find the slides? I am a slide idiot.

Adam Wiethuechter: I'll stop sharing these slides. A new deck is being shared. Yes, I had to click permission and stop sharing. There you go.

Ed Birrane: And then obviously, just to note, for those who are presenting, because we don't have anyone at the table presenting with the clicker, just please let us know when to advance the slides.

Rick Taylor: And I will monitor the queue and shout people out. If, Carles, for example, you're up first, can you let us know whether you want to answer questions during or at the end of your slot?

Carles Gomez: Either way would be fine with me. I don't know if you may have a preference.

Rick Taylor: Whatever the presenter prefers is my preference. Right, go.

Carles Gomez: Okay, thank you. So, hello everyone. My name is Carles Gomez. I'm going to present the latest update of the draft entitled draft-ietf-dtn-coap-over-bp, Constrained Application Protocol (CoAP) over Bundle Protocol. My co-author is Ana Calveras, and we are both from UPC. Next, please. On the status of the draft: it was adopted right after Madrid, and today I'm presenting version 02, which aims to address comments received at the last IETF and also to resolve the last pending to-dos in the draft. Next, please. About the table of contents: we added a bit of new content in section 11, as we'll see later, and we applied some internal structure within section 11—some subsections there—but the top-level structure of the draft remains the same as in the previous version of the document. Next, please. So now let's go through the updates in this latest revision. The first set of updates is in section 8.1, on CoAP block-wise transfer parameters. You may recall that CoAP Block is an option that allows carrying out application-layer fragmentation of large payloads. We already had some content about one parameter, called max payloads, which indicates the number of consecutive blocks an endpoint can transmit without eliciting a message from the other endpoint. The main specification for Block, RFC 9177, indicates a default value of 10 for this parameter. We've now added the content highlighted with the red rectangle here: the motivation in RFC 9177 for that default value is in turn based on RFC 6928's motivation for a TCP initial window of 10 segments, which reads as follows: "10 segments are likely to fit into queue space available at any broadband access link, even when there are a reasonable number of concurrent connections." We've added that, however, the previous statement assumes typical Internet characteristics and TCP segment sizes. So, now we're dealing with CoAP over BP in this document.
And for CoAP over BP environments, the characteristics of the paths and the CoAP message sizes involved will need to be considered when setting max payloads. For example, a deep-space environment running CoAP over BP will not necessarily have the same characteristics as a terrestrial wireless sensor network running CoAP over BP. If you have any comments on that, please let us know. Otherwise, next please. Then in section 11, which is about securing CoAP over BP, as I mentioned before, we've added a little bit of new content, currently subsection 11.3, and we have also applied some internal structure with subsection titles. Subsection 11.1 now has the title "DTLS versus OSCORE," but its content is the same: the main options available to secure CoAP. Then 11.2 is about OSCORE and BPSec, but again, the content is the same—nothing new in this part. What is really new is 11.3, entitled "Security Requirements of CoAP Requests and Responses over BP." Here, the new content is actually quite short: we state that when CoAP is carried over BP, a CoAP response should be protected with at least the same level of security as its corresponding CoAP request. And we are wondering whether this might be sufficient or not. Actually, a couple of days ago there was a message with a few comments from Marco Tiloca. About this content, he mentioned that we should possibly defer to DTLS and OSCORE on how these messages are protected, because they have their own rules for protecting responses. We agree with that, and we plan to mention it explicitly in the next revision of the draft. Next, please. Then in section 12.2, we have completed the content of an IANA request. Now that RFC 9758 is published, we would like to request the assignment of a well-known service number for CoAP over BP, in particular in the IPN scheme URI well-known service numbers for BP-bis registry.
So we have completed the content in this subsection, and also in a few other instances in the document that refer to this request. Next, please. In section 14, about security considerations, you may recall that there is a risk when a node uses message aggregation: the individual messages an aggregate message is composed of—called single messages—need to carry the payload-length option, which however is class U for OSCORE, meaning unprotected. This means that an attacker might infer some features of the communication based on the payload size of the messages. We received some suggestions at the last IETF that we might want to consider, as a possible mitigation, using the new padding option being defined in the cacheable-OSCORE document in the CORE working group. In that case, a possible mitigation is that a single message would first be padded, and then the padded message protected with OSCORE. The padding option is class E for OSCORE, which means it is protected with encryption and integrity protection. As a result, the OSCORE message would still have a visible payload length, but an attacker would not be able to tell which part of the payload corresponds to padding and which to the original data. Among his comments, Marco also mentioned that we might want to reduce the requirement level for this specific mechanism a little and cite the cacheable-OSCORE draft rather as an example of how padding can be added; in that way, it could be cited as an informative reference, because currently it is cited as a normative one. We agree with that suggestion, and we plan to do that for the next revision of the draft. Next, please. So, for next steps, we believe the document is getting mature.
We are not at the moment asking for working group last call, and we need to address the comments received from Marco, which are all important but, we believe, of rather minor nature. On the other hand, since the document is now reaching this mature state, we would like to ask for reviews at this time. In the past, we've had lots of feedback from both the DTN working group and the CORE working group, and we thank everyone very much for the great feedback provided. Perhaps in the current state of the draft, it would be great to receive some additional reviews. I believe that's my last slide. So yeah, thank you. I don't know if there may be any comments or questions.
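The pad-then-protect mitigation Carles describes can be sketched as follows (toy code: the real mechanism is the padding option of the cacheable-OSCORE draft inside an OSCORE-protected payload, and the 64-byte bucket and length-prefix layout here are arbitrary assumptions):

```python
BUCKET = 64  # assumed padding granularity, not from any specification

def pad(plaintext: bytes) -> bytes:
    # Prefix the true length, then zero-pad up to the next bucket boundary.
    # Because the pad travels inside the protected payload (class E), an
    # observer sees only the bucketed ciphertext size.
    pad_len = (-(len(plaintext) + 2)) % BUCKET
    return len(plaintext).to_bytes(2, "big") + plaintext + b"\x00" * pad_len

def unpad(padded: bytes) -> bytes:
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]
```

Two plaintexts of different lengths that land in the same bucket become indistinguishable by size once protected, which is the property being sought.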

Ed Birrane: Hey, really, really glad to see the work progress. Question with chair hat off. Have we started to see any reference implementations of this with BP?

Carles Gomez: Well, we have—Ana Calveras, one of the authors, supervised a student who developed an implementation. The implementation complies with not the latest version of the document but one of the previous ones; I think the last one before adoption. Other people are now taking that work and plan to continue it—to update the implementation to the current state of the draft. The implementation is publicly available, and that's the only one I'm aware of; I'm not aware of other implementation efforts.

Ed Birrane: Excellent. Thank you.

Carles Gomez: Thank you.

Rick Taylor: Okay, if there are no other questions. Oh, I actually have a final statement. Carles, thank you. Could you share the link to where that implementation is, so that people could grab it and try it themselves or contribute or whatever?

Carles Gomez: Sure, I will do that right now. Yeah.

Rick Taylor: Thank you. Thank you very much. Excellent. Right, let me try and grab some slides because I think I'm next. I don't want to manage the slides. Adam, can you do the magic, please? I'm being an idiot.

Adam Wiethuechter: Yeah, I've noticed. You are not next.

Rick Taylor: I'm not next. It's Bhagya next. Thank you.

Bhagya: Oh, can everyone hear me fine?

Rick Taylor: Yes, loud and clear. Audio check complete. Would you like to take control of the slides? Adam, do you want to pass the magical token? There we go.

Adam Wiethuechter: How does one pass the magical token? There is someone who has asked to share slides. I'm clicking like mad, nothing is happening.

Bhagya: You know what, I'm not going to—I'll do the thing you did before. Okay, can I start? Okay, so my name is Bhagya. I'm a postdoc at King's College London working with Benjamin Darling, and today I'll give a quick presentation about our SBAM draft. Starting with a quick recap, because we have changed it quite a bit since the last time we talked about it. SBAM provides specific end-to-end integrity guarantees between a BPSec bundle source and a BPSec payload destination, while preserving the BPSec default behavior, which allows intermediaries to process and discard blocks, including security blocks. The integrity guarantees SBAM provides apply to the security operations a BPSec bundle source adds to a bundle at origin. The SBAM design allows a BPSec bundle to maintain a verifiable record of all these security operations, even after an intermediary has processed and discarded the security block that contained them. Next slide, please. In order to provide these additional security guarantees on top of the existing BPSec design, we introduce two additional behaviors: reporting and auditing. For auditing, we require the original bundle source of a BPSec bundle to add an authenticator record of all of its security operations as part of the bundle. This authenticator record, which we call the auditing record, will contain the identifiable information for each security operation the bundle source adds to the bundle at origin. For reporting, we require any intermediary that processes and discards a security operation added to the bundle by its original source to replace the block it processes and discards with an authenticator record of the identifiable information for that security operation.
So, the idea is that whenever an intermediary processes and discards a source-added security block, there is a record of that for the benefit of the final payload destination of the bundle. A payload destination that receives a bundle containing this auditing and reporting information will need to verify it correctly, and if it can't, it will discard the bundle. Next slide, please. When we first proposed SBAM a while back, we required new distinct block types to implement the SBAM design, which was not ideal. But recently Brian got in touch with us about the new draft for the manifest block, which aligns almost perfectly with our design for SBAM, so we have rewritten our SBAM draft incorporating the design of this manifest block. With that in mind, we propose two different structures to facilitate our auditing and reporting functionalities. For auditing, we propose a structure called an audit pair, which internally is a manifest and a BIB calculated over the information inside that manifest. The manifest block will contain the audit record that the BPSec source adds at the beginning of bundle creation, and the BIB will protect that information using a key the source shares with the payload destination. This is created once at the beginning of bundle creation and verified at the end, when the payload destination receives it. Next slide, please. For reporting, we propose a second structure, similar to the audit pair in that it is also a manifest and a BIB over that manifest. But this manifest will contain the identifying information for any security block an intermediary processes and discards that was added to the bundle by the original source. This identifying information will again contain things like the block ID, security context information, and key identifiers.
And unlike the auditing record—which is mandatory, because under SBAM a payload destination that receives a bundle without the audit pair will discard the bundle—a report pair is only added when an intermediary processes and discards a security operation added by the bundle source. Next slide, please. Within our SBAM design, we have a heightened requirement for unique key identifiers, because we build a reciprocal trust relationship between sources, intermediaries, and bundle destinations, and for that we rely on them being able to correctly authenticate these auditing and reporting operations. Within the BPSec specification as it currently stands, local security policies define the keys to use for the security context, which unfortunately is somewhat nebulous and not specific enough for the requirements of our SBAM design: colliding key identifiers would impact the correctness of our execution, and if the participating nodes cannot identify which keys to use for authentication, the scheme may not function correctly. To prevent that type of scenario, we require the key identifiers to be included within the auditing and reporting pairs as a byte string, and we'd like your opinion as to whether this is feasible, and your general thoughts on it. Next slide, please. One last thing: because of the way we have designed SBAM, it adds some overhead to the existing BPSec design, because every time an intermediary processes and discards a security operation added by the bundle source, it is replaced with a report pair, which internally is a manifest and a BIB over it. We wanted to get your opinion on whether this is feasible and acceptable within the limitations and capabilities of existing infrastructure.
An alternative to this would be appending a special type of manifest: instead of replacing every security operation with a report pair, you could internally expand this one manifest every time an intermediary processes a security block and operation. In that case, it would require a MAC tag over every data map inside that manifest to authenticate the information. But we're not sure, from an engineering perspective, whether this is feasible, or whether one approach is better than the other, so in general we would really appreciate your opinion on that. Next slide, please. So that's pretty much it. We have almost completely rewritten our draft, so please have a look at it and let us know what you think, what should change, what is missing, and things like that. Thank you so much.
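To make the audit-pair structure concrete, here is a rough sketch in Python: a manifest listing the security operations the bundle source added, plus a BIB-like integrity value over that manifest, keyed with a secret the source shares with the payload destination. The field names and the HMAC stand-in for the BIB are illustrative assumptions, not the draft's actual encoding.

```python
import hashlib
import hmac

# Sketch of an SBAM-style audit pair: a manifest of the source-added
# security operations, plus an HMAC standing in for the BIB over it.
# Field names are assumptions for illustration only.

def make_audit_pair(operations, shared_key):
    # One entry per source-added security operation: target block ID,
    # security context info, and the explicit byte-string key ID.
    manifest = [{"block_id": op["block_id"],
                 "context_id": op["context_id"],
                 "key_id": op["key_id"]} for op in operations]
    encoded = repr(manifest).encode()  # stand-in for canonical CBOR
    bib_mac = hmac.new(shared_key, encoded, hashlib.sha256).digest()
    return manifest, bib_mac

def verify_audit_pair(manifest, bib_mac, shared_key):
    # The payload destination recomputes the value; per SBAM, a bundle
    # whose audit pair fails verification is discarded.
    encoded = repr(manifest).encode()
    expected = hmac.new(shared_key, encoded, hashlib.sha256).digest()
    return hmac.compare_digest(bib_mac, expected)
```

A report pair added by an intermediary would, under this sketch, follow the same manifest-plus-BIB shape, keyed for that intermediary instead of the source.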

Rick Taylor: So, Bhagya, I'm in the queue because I've got a question, and I'm really glad Brian is in the queue as well. If you go back to, I think it was slide five or six, you were asking about the key ID in the security context. Now, this is something I'm really keen on, because as part of my BPSec interop it's really annoying. Brian has suggested to me that for RFC 9173, the symmetric interoperable security contexts, really worrying about key ID is a bit pointless, because, I mean, symmetric key crypto, meh. But with a COSE context, the key ID is carried in the COSE data, so it doesn't need to be added as a security context parameter. Brian, I know you're behind me in the queue. Do you want to answer that question? Am I right in my assertion? Because I'm still unpicking this.

Brian Sipos: You are correct in what you stated. There is, I think, a slightly different need in what is presented here for the SBAM in that the key IDs here don't need to be used necessarily for a key lookup as much as they need to be used to correlate and verify that the nodes themselves agreed with each other.

Rick Taylor: So they're definitely key identifiers, just not key identifiers that can be used as the key for a lookup. Okay, but they're sort of serving the same purpose. So I think this answers the question that you have on this slide, which is: do we need a key ID as a security context parameter? Brian's suggestion to me for the same question is no, if you use COSE, and my question therefore to the working group, chair hat off, is: are we happy to say 9173 probably won't work with SBAM because of this weakness? It's really for interop testing, and COSE is the way forward.

Brian Sipos: The only thing I would say to that last point is that you can still, I think, use this with the default security context; it's just that whatever key IDs go in there aren't going to be conveyed in the data of the blocks that are being audited.

Bhagya: Okay.

Brian Sipos: And my direct comment was on the last question about needing to use pairs of blocks. To me, that's actually a benefit of this mechanism because you are reusing BPSec and all of its diversity and not trying to do something additional and special.

Bhagya: Yeah, that makes sense. Thank you.

Season: Hey, Season here. Thanks, Bhagya, for the presentation. So, I think the last point, about the keys being explicit or not explicit and working with the default security context versus a COSE context, might need to be addressed. Because I think what we mean here, if we can go back to the previous slide, is that if we don't have it specified, your correctness in even running this protocol may be in jeopardy. And I think that's something that should be hammered in here, because if you just rely on the implied fact that certain participants in the network will just know which key IDs to use, that may be tricky, especially if you're trying to also catalog, in our case in the record block here, the particular operations and which keys were used for what. So that's my comment.

Ed Birrane: I had two quick things. One is, I was really happy to see that we had migrated over to the manifest block here. Thank you so much for that. The second is, I do see the manifest block definition and behavior recreated in your draft as well. My guess is that at some point we would want to separate those and maintain them in one place rather than individually. And I apologize for not having given this a clean read, so this may be a question you've already covered here. But is it a manifest block per security source, or is the manifest block meant to only be applied at the bundle source? What is being manifested?

Bhagya: So we have two separate use cases for the manifest block. At the beginning, the bundle source adds a single manifest block plus BIB combo, which is the audit record. That manifest block will internally contain a few data maps, one per security operation it adds to the bundle. And we also have the report pair operation: every time an intermediary processes and discards a security operation, it also adds a manifest block plus BIB pair, which we identify as a report pair. And that is per intermediary that processes a security operation.

Ed Birrane: Oh good, so I did hear that correctly, and I agree with that approach. And just a plus one on Brian Sipos's point: keeping that diversity of BPSec seems like the right approach. So thank you so much.

Bhagya: Thank you.

Rick Taylor: I'll quickly jump in again with chair hat on. If we felt it was important, and this is following on from Season's comment, that a key ID as a context argument was useful, because it was something that would be expected by an SBAM participant, then the SBAM draft can request an IANA allocation for that specific thing. That is an extensible registry, extra security context parameters if I've remembered the name correctly; a parameter can just be defined, and as this goes forward as a standards track document, if it were to be adopted, that would be a way of introducing this without requiring everyone to implement it everywhere.

Bhagya: Okay, yeah that's good to know. I think we will continue this conversation over the mailing list. Thank you so much.

Rick Taylor: Perfect. Perfect. Thank you very much. Yes, now I think it's me. Sorry. It's fine. I will catch up time on this because I have 10 minutes to talk about Echo, which is...

Ed Birrane: Rick, you have three in a row, 10, 10, 10. Would you like three 10-minute timers or one 30-minute timer?

Rick Taylor: Can you give me a five-minute timer to start with, because I'm going to do Echo in five and then I'll do 10-10, because I think the last one is going to take longer. So I will grab, I've found out where the "grab the slides" button is. Grab number three. Take back control. Yes, I've got it. Okay, BP Echo service. So this is the world's simplest draft, driven by conversations I've had with various people, fundamentally to say: shall we have a well-known service number for the Echo service, so that all of us who are implementing BPAs and various DTN stacks have a default we can send ping packets to and expect them to come back? That's where this document started. Sorry, I should have started on this slide. As I started to work through the exact behavior and semantics of what an Echo should do, and I'm not particularly talking about ping at the moment, I started to dig into what we mean by Echo. So that's the second bullet here. I have defined a simple reflector that lives on that well-known endpoint. The purpose being that you can ping it in order to measure round-trip times and do connectivity verification and all the things you use IP ping for; this is the bundle protocol equivalent. Let's ask for a service number. And a point, actually, going back to Carles and his CoAP draft: what I have speculatively done here is, again, ask for an IPN service number from the well-known service numbers record, but I've also asked for a DTN demux. And this is an open question to the working group, whether that is a valid request or not. This is an individual draft. I'm going to ask for working group adoption on this, because I think it just needs to be done. So without too much ado, yeah, I've covered a lot of this. This is a repetitive slide. So fundamentally the Echo service receives a bundle and sends it straight back to whoever it received it from.
So literally clone the bundle, do whatever memory copying you actually want to do according to your implementation and how it does bundle processing, swap the source and destination, and then send it back. Important things to say about "send it back": this means go through the normal bundle processing pipeline. So go through whatever ingress security, firewalling, packet filtering, sorry, bundle filtering you may have on ingress, do the swap. It is effectively a service, although it feels a lot like a loopback device, that does that very simple swap, and then push it back out through your normal egress for your BPA. There's no special source, there's no shortcutting. It is absolutely testing whether ingress plus egress work on that BPA, so that the normal traffic routing, filtering, and shaping you may want to be testing by running this ping is actually performed. The Echo service itself does absolutely minimal processing. This is quite important. So, yeah, I have requested a service number. 128 is the current lowest. I'm happy, if CoAP gets out the door first, to have 129 or 150; it doesn't matter, there just needs to be a well-known number. Of course, that doesn't mean you have to use the well-known number; just like ports in TCP and UDP land, you don't have to use the well-known port, it's just there if you don't want to specify a custom one. And the second bullet here is that I suggest /echo makes a good demux for a DTN service. I'll take questions at the end if you don't mind, because I'll just blast through this; I've got too many slides on this piece. 7 has historically been used. I have no reason to stop 7 being used, but it happens to be in the private use space. So, an important thing about what this reflection does: it literally swaps the source and destination EIDs. It does not change the creation timestamp or the lifetime.
This allows clients to measure full round-trip time, and it also stops the Echo service having to invent a lifetime that it really doesn't have any clue about, because there's no real communication happening here. So if the ping tool sets a lifetime of 10 minutes, then the total end-to-end transmission is effectively 10 minutes. So really it's not a response; it is a literal reflection with mutation. Everything else is unchanged. All the extension blocks, all the bundle processing control flags, report-to EIDs, all that funky stuff is absolutely preserved, but you must recalculate the CRC, because you have just manipulated the source and the destination. Payload and extension blocks: yeah, don't do anything that you wouldn't normally do. Very simple. No special features here. Hop count, previous node, bundle age: do your usual thing. So if the hop count is 1, by the time you've reflected, you won't get the result back, because the hop count will have been decremented according to 9171. Standard behavior applies. If you are trying to ping simultaneously from multiple clients, you need to have unique endpoint IDs. This kind of doesn't need to be said, but it allows different ping paths to be demultiplexed. If you are using BIBs on the primary block, or using BIBs in general, do not include the "include primary block" flag, because the primary block will be manipulated when it is reflected, and your BIBs and your BCBs will no longer be valid and security filters will discard them. And do not fragment. You don't need to fragment this stuff; "must not be fragmented" should be said. If you want to do path MTU testing, we're going to have to work out how else to do it. In fact, actually, if you want to do path MTU testing, you do want the "do not fragment" flag set, so there we go. Yeah, the usual security considerations associated with ping apply. This is an OAM tool. There's all kinds of things you can do with this. Be sensible.
This is a NetOps administration consideration. You really should be careful how you run pings within your network; you probably shouldn't expose them externally. There is some boilerplate text in the draft. And I have added an appendix with some non-normative guidance for how one might want to write a ping service. There are some hints and tips in there for how to make it work much in the same way as the IP ping we know and love. It's best practice stolen from the IP world and ported across to BP, so that we can get some nice stats out and all that kind of stuff. I recommend reading it. And I have interop tested this. So my implementation has an Echo service that implements this spec, and I have tested it against the DTN7-Rust ping tool, the HDTN ping tool, and the DTNME ping tool, and they work without modification and without realizing that they're using my Echo service. And vice versa works: my ping tool, which thinks it's using an Echo service that meets this spec, will actually run without modification against the existing reflector or Echo service implemented in those three implementations. The reason I haven't tested ION and Micro-DTN is because they don't support TCPCL V4. So there we go. That's pretty much it. What else have I got? Questions for the working group. Do we think this is appropriate? I think it is useful even if we just get the number. We can argue about the exact details of reflection and what a reflection means, but I think it's really important to have a number. DTN demux: that's adding a column to that IANA registry. That might be something we push back on and say we don't really know what DTN EIDs really are beyond the 9171 descriptions, so perhaps we should hold off on that. Extra security considerations are probably things to add. That's it. I'll take the queue. Go ahead, Scott.
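The reflection Rick describes, swap the EIDs, keep the creation timestamp and lifetime, decrement the hop count, recompute the CRC, can be sketched as follows. The `Bundle` shape here is an illustrative simplification, not RFC 9171's CBOR encoding, and the CRC stand-in covers only the mutated fields.

```python
from dataclasses import dataclass, replace
from typing import Optional
import zlib

# Sketch of the Echo reflection: swap source/destination, leave the
# creation timestamp and lifetime untouched (so round-trip time eats
# into the original lifetime), and recompute the CRC because the
# primary block changed. Simplified model, not RFC 9171 wire format.

@dataclass(frozen=True)
class Bundle:
    source: str
    destination: str
    creation_timestamp: tuple   # (dtn_time, sequence_number), unchanged
    lifetime_ms: int            # unchanged by reflection
    hop_count: int              # simplified stand-in for the hop-count block
    payload: bytes              # reflected verbatim
    crc: int = 0

def with_crc(b: Bundle) -> Bundle:
    # Stand-in for the primary-block CRC: CRC32 over the mutated fields.
    data = f"{b.source}|{b.destination}|{b.creation_timestamp}".encode()
    return replace(b, crc=zlib.crc32(data))

def echo_reflect(b: Bundle) -> Optional[Bundle]:
    # Normal 9171-style hop-count processing: a bundle that has
    # exhausted its hops is discarded rather than reflected.
    if b.hop_count <= 0:
        return None
    reflected = replace(b, source=b.destination, destination=b.source,
                        hop_count=b.hop_count - 1)
    return with_crc(reflected)  # primary block changed: recompute CRC
```

Note how a ping sent with hop count 1 reflects but never makes it back, exactly the standard-behavior case mentioned above.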

Scott Burleigh: Just that the security coming back on the Echo is necessarily going to be different. The BIB is going to be different because the primary block is different and the hash is computed over the primary block, so it will not be the same. So, I think that's a doable thing; you can work around that with one of the provisos that were on one of the earlier slides. I think it's cleaner in a way to simply echo the payload and leave everything else as "here comes a bundle coming back," with an extension block that says where it came from. But I would be a little concerned about the special case, at least with security. And parenthetically, I also think that the DTN demux is a fine idea. I think it's way overdue to do that.

Rick Taylor: So can I just respond about the BPSec piece. BPSec has some really nice properties as a ping tool: if you can put a signature on something and say "I expect to see my payload come back unmodified," it gives you a certain idea that, oh, actually, you know, there are things along the path that are actually accepting BPSec. You could almost do some BPSec-level pinging to check that it gets through filters correctly. As long as you say "do not include the primary block" in the generated cipher text that goes into that block, which is a perfectly valid flag option, then you're absolutely fine if the primary block gets manipulated.

Scott Burleigh: I would be worried, though, that there would be some networks in which that wouldn't work, because it would always be mandatory to protect the primary block with the BIB.

Rick Taylor: I would suggest that's an operational challenge, and in those cases you probably say don't put BIBs on, you know. As I consider it, this is an internal network operations and management tool that you would run within your network, not try to run across the solar system internet at scale, much like the ping tool doesn't really leave administrative domains or local intranets.

Scott Burleigh: Okay. Thanks.

Rick Taylor: No worries. Next up, Brian, sorry.

Brian Sipos: Yeah, I see definite value in a payload-only Echo as an interop minimum. The other echoing I can see as being trouble and that's all.

Rick Taylor: What I'm trying to do is to say don't change anything; change the absolute minimum you have to change to make this work. So effectively it does payload echoing, but it maintains the total lifetime and the hop count. Actually, it doesn't maintain the hop count; the hop count is just decremented. But yeah, happy to take that to the list.

Brian Sipos: Okay.

Erik Nye: On the point of adding a DTN demux column to the IANA registry, I think that makes sense to me. I'm in favor of that. The only thing is, I think grabbing common words might not be great, and it might be better to have _ietf/ or .iana/ or something as a prefix, just so that future work doesn't conflict with terms somebody might be using.

Rick Taylor: A fair point. And that's kind of the can of worms about a DTN demux, and I'm very happy to push that out to when we actually tackle the DTN EID structure as a whole, and just go for an IPN well-known service number for now.

Erik Nye: Yeah. Thank you.

Rick Taylor: No worries. Felix.

Felix Walter: Yes, thanks, Rick. I think this is really something extremely useful, because it's used all the time when we set up testing. But I think in your draft you are really mixing two different concepts. We discussed in the past the simple ADU reflection, which for me is fully in line with the RFC: you receive an ADU, you put this ADU in a new bundle (you don't reflect the original bundle, you put it in a new bundle), you do the transmission, you do all the steps you do for normal bundles. And I think this is something quite simple to standardize and put forward, and it also gives you information about the processing a node is doing to usual bundles. And you kind of toggle between these things. You say it's all the usual processing of the bundle, but then you say, okay, but we keep some extension block, we might do a little bit here. So I think what you are proposing here with a reflection is something I really have difficulty describing in terms of RFC 9171, which may be a shortcoming of RFC 9171, but I don't see a really clear way to describe these things conceptually. And I also see some real difficulties, especially with the creation time: we have clear rules on creation time, on who provides the creation time, and now we have another node basically just copying something over, so it seems not to be a very clean way to do it. So my question would be, shouldn't we go forward and define a simple ADU reflection, have a well-known service number, and understand the use cases for the reflection better, so we understand what we need to put in place to have this reflection mechanism? One last note on the interoperability testing: I assume in the interoperability testing you get a bundle back, but I'm really wondering whether it fully implements the behavior you are specifying here, especially in terms of extension blocks and creation time.

Rick Taylor: So, yes, I am absolutely sympathetic to saying just reflect the payload; I completely understand that. The reason I have touched on the creation timestamp is because of status reporting, which is quite useful, and I know it's default false, but this is an admin tool. If you enable status reporting, you would actually like all the status reports, out and back, to go to the same report-to EID. And in order to correlate the status reports, you need to understand that the creation timestamp maps to, ironically, now the destination rather than the source, so that you can correlate these status reports into one sequence. That's the reason for not changing the creation timestamp. If the Echo service on the far end creates a new creation timestamp, then you get status reports sent to you and you have no idea why they're being sent to you. You have no idea which ping, and you may have multiple pings in flight, this status report corresponds to, unless you want to go into the ADU. And I'm happy to have that discussion on list. Simple is best, so I'm happy to start removing any of these suggestions to cut it down to really simple.

Ed Birrane: Yeah, with chair hat off. A few things. I like the idea of this service. I actually see value in both kinds of reflection. Perhaps it would be helpful to have a use cases section, or expanded use cases, in the draft. I had one specific question; I just couldn't see it as I was skimming the draft itself. If I send an Echo request to you and you only switch the EIDs and send it back to me, how do I determine that the thing coming back to me was the Echo, versus something sent to me that I need to Echo? It seems that there would need to be some signaling that says this is a reflection back and not an independent request from someone else that I myself now need to Echo. And I don't know how we avoid those reflections.

Rick Taylor: Because, and this is written in the appendix, the "how to implement a ping sensibly" non-normative section, it says your ping source EID should not be the Echo service. It should be some ephemeral number, so that when the source and the destination are swapped, the response message comes back to you as the ping service, not to the Echo service; otherwise you end up with an endless reflection.

Ed Birrane: Oh, I see. Okay. Makes sense and I missed it. So, thank you.

Rick Taylor: That's fine. If there's no one else, I will move on to the next, if that's okay. Adam, please do your slide magic. The battery in my mouse is dying.

Adam Wiethuechter: Okay. What are you? You're ARP next, right?

Rick Taylor: Yes, please. So this is part of Rick's series of filling in small holes in the road that we kind of need in order to make BP run sensibly at scale and cover those bases. So, let's request access to the slides. Yes, I want to take control of the slides. Perfect. So this is BP ARP, which looks an awful lot like IP ARP and does pretty much the same thing. Fundamentally, there are CLAs, and I'm looking at LTP here, and maybe this applies to UDP, but I haven't read the V2 document in enough detail to say, and definitely TCP, so we'll start with those two, which do not have an in-band mechanism to advertise the node ID of the other end of the link, the remote peer. So you get a CLA connection, you get a link, you know you can pass bits of bundles, in fact you can pass whole bundles between two CLAs, but you actually don't know who the next-hop neighbor is, to do anything smart in terms of setting up your routing tables, to establish "that guy down that CLA is actually this EID," unless it is pre-configured. You don't stand a chance. And that is exactly the hole this is trying to fill. So, here are some examples. There is a second example here as well: bundle protocol has two different addressing schemes, the IPN EID scheme and the DTN EID scheme. TCPCL V4 only allows you to advertise a single node ID, so you normally pick which one you want, and IPN is kind of the de facto at the moment, which means you have no way of advertising "Oh, I effectively have some other multi-homed addresses; I have a DTN node ID as well." So a second feature of this ARP protocol is to allow a peer to say, "Actually, I also have this DTN EID that I couldn't tell you about across this CLA link." Next slide. There we go. So the way it fits in is that it's kept very limited.
It will do the "I have a CLA adjacency, let me learn the node ID" piece, and then I can hand off to whatever more complex two-hop neighborhood discovery I'm doing, or I can correlate it with some sort of contact planning I'm doing. So it bootstraps SAND and other things, and it only does this one piece. I'm not trying to replace SAND, I'm not trying to cut into the early stages of SAND; I'm literally trying to say "I've got a CL address, I need to map that to an EID so that I can bootstrap everything else." It only works with singletons. There's no multicast business here; you give me your singleton node IDs. It starts to break down if you go multicast. This is not IGMP; this is the equivalent of ICMP. This is a BPA service. You are asking a BPA what its node ID is. So therefore, in my mind, that's an admin record, and you target the administrative endpoint. You're not asking the Echo service what it thinks its EID is; you're asking the actual BPA, because you want the node ID. It only requires a single admin record type, because you can tell whether it's a request or a response, because I introduce the local node as a destination, and this is a change. These bullets are in a bit of a strange order, actually. So it is an IPN-only probe mechanism: you have to use the IPN scheme in order to use BP ARP, although you can learn about DTN or other schemes (well, multicast and anycast aren't supported) via the IPN scheme bootstrap of ARP. So you must support IPN. Protocol flow. This should be really simple, but it hides a little detail which I should have drawn out in its own slide. So node A knows a CL address but doesn't know node B's ID. So it composes a BP ARP request, setting its source to the admin endpoint, because this is a BPA-to-BPA conversation, and it sends it to ipn:!.0. Now, for those of you who've read the IPN update RFC, that is the local node address.
The local node has some restrictions: as written in the IPN update, it is not allowed to leave the local node. It allows one service to address another service on the same node without having to learn the node's node ID. I am slightly abusing this to say "whoever receives this message, this is for you," which is a slight variation, and I've got a slide which talks about this in more detail. So, that is received by node B, and node B replies with the response, setting its source to one of its actual node IDs, in this case its IPN node ID, because this is an IPN-based conversation, with the destination set back to the source that sent it. That's it. The receiver then can, A, see the source of that response and, B, read the payload, and I'll talk about the payload format in a second, and therefore perform the discovery. Very simple: one exchange. Hop count equals 1. This is meant to be one hop; this is meant to be link local. Do not route this, and you do standard protection. I can't remember the name of the acronym, but it's a fairly common behavior in IP, to control one-hop propagation of IP packets, to set the TTL to max and therefore detect that it hasn't decreased. In BP land, you set the max hop count to 1 to prevent it going any further. Also, it is a direct CL transmission. You are specifically saying: using this convergence layer adjacency, I wish to probe; I need to send this bundle to the other end of this link. So don't go through the RIB and the FIB lookup, don't start using your contact graph; literally poke it down that CL hole. So again, that's another reason why it's very much an administrative, BP-native implementation detail. This can't be an external service. It's got delta query support, so you can say, "I know you've got this node ID, what other node IDs have you got?", which allows you to bootstrap off IPN and learn DTN node IDs.
And you should use BPSec BIBs in order to authenticate that you are actually who you say you are; particularly if you're using a COSE context, you can start to assert your C509 certificate with actual key material that maps to your node IDs, and you can put some proper security on the whole process. That's a "should" because I cannot make you do it, but you really should do it. So, this is the critical change: this updates 9758. 9758 says there is this concept of local node, and local node must not leave the local node. So any bundles that come in addressed to local node must be discarded, and bundles destined for local node must not leave that local node. It was basically a blanket security policy to say these are local, these make no sense on the network. I am proposing to loosen this to say: if you receive a bundle and it's an admin record and, if you want to put type filtering on it, it's a BP ARP message, then actually ipn:! is allowed for this specific case. Other destinations are discarded; all previous rules apply. Okay? So it doesn't open local node for general use. It literally just cracks the door open a little bit for this case. The alternative to this formulation is that we go and register a well-known IPN node ID in the default allocator in the IANA registry for this specific case, and it is a non-singleton EID, but it's not an anycast EID; it's a very specific magic number which means, a bit like zero means nowhere, it's another magic number. And I am proposing that we should use the local node, because we've already got this magic number, which is 2^32 - 1. Alternatively, we can go and register a new magic number, and that's an open question, and I see Felix in the queue. Felix, go ahead now, actually, if you don't mind.
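The request/response flow described above can be sketched as follows. Node A probes the far end of a CLA link with an admin-record bundle addressed to the local-node EID (ipn:!.0 in the talk), and node B answers from one of its real node IDs. The field names and message layout here are assumptions for illustration, not the draft's actual encoding.

```python
from typing import List, Optional

# Sketch of the BP ARP exchange. LOCAL_NODE stands in for the
# local-node EID from the IPN update RFC, abused here to mean
# "whoever receives this, it's for you."

LOCAL_NODE = "ipn:!.0"

def make_arp_request(admin_source_eid: str) -> dict:
    """Node A's probe: sent directly down the CLA, never routed."""
    return {
        "source": admin_source_eid,
        "destination": LOCAL_NODE,
        "hop_limit": 1,  # link-local: must not propagate further
        "admin_record": {"type": "bp-arp", "request": True},
    }

def handle_bundle_to_local_node(bundle: dict,
                                my_node_ids: List[str]) -> Optional[dict]:
    """Node B's side, with the loosened 9758 rule: a bundle addressed
    to the local-node EID is accepted only if it is a BP ARP admin
    record; everything else is discarded (returns None)."""
    if bundle["destination"] != LOCAL_NODE:
        return None
    if bundle.get("admin_record", {}).get("type") != "bp-arp":
        return None
    # Reply from a real node ID so the prober learns who we are, and
    # carry any additional node IDs (e.g. a dtn: EID) in the record.
    return {
        "source": my_node_ids[0],
        "destination": bundle["source"],
        "hop_limit": 1,
        "admin_record": {"type": "bp-arp", "request": False,
                         "node_ids": my_node_ids},
    }
```

Node A would then correlate the response by its source EID and read the node_ids list, learning, for example, a dtn: EID that TCPCL V4's single node-ID field could not convey.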

Felix Walter: Yeah, so because it's exactly on this topic. I'm wondering why you are not considering using the IEC anycast naming scheme for that, because it has the idea of well-known groups, which basically would say "any node," and you wouldn't have to do these tricks with the local node or an additional naming scheme or whatever. Because I'm a little bit worried we are putting all these small exceptions everywhere across, let's say, the BP world, which makes it quite difficult to understand. I think it's a typical use case we would have for anycast.

Rick Taylor: Frankly, because I started writing this before I read the anycast draft. I am very happy to use a well-known anycast group ID. I think that makes perfect sense, because you're doing exactly this: this would be an "all nodes" group, and everyone who supports BP ARP would be a member of that group. So, yeah, that's a very sensible suggestion. Very happy to make that change.

Felix Walter: Yeah, it seems we need to put this anycast personal draft into the working group as a working group draft sooner. I'm happy to work on this, yeah.

Rick Taylor: Okay, let's pencil that in as a topic for Vienna, and we can, between us all, collectively get that moving, if that's all right. We'll check how Camiro is going with it. Okay, so security considerations. Go on, Brian, did you want to say something?

Brian Sipos: I was just going to say that I am in big favor of any mechanism to address a neighbor non-specifically, I'm definitely in favor of that, and maybe decoupling that from the payload would be helpful.

Rick Taylor: What I'm trying to do is to say don't change anything. Change what is the absolute minimum you have to change to make to make this work. So effectively it does payload echoing but it maintains the total lifetime and the hop count. Actually it doesn't maintain the hop count, the hop count is just decremented. But yeah, happy to take that to the list.

Brian Sipos: Okay.

Erik Nye: On the point of adding a DTN demux column to the IANA registry, I think that makes sense to me. I'm in favor of that. The only thing is I think grabbing common words might not be great and it's might be better to have _ietf/ or .iana/ or something as a prefix just so that future work doesn't conflict with terms somebody might be using.

Rick Taylor: A fair point. And that that's kind of the can of worms about a DTN demux and I'm very happy to push that out to when we actually tackle the DTN EID structure as a whole and just go for an IPN well-known service number for now.

Erik Nye: Yeah. Thank you.

Rick Taylor: No worries. Felix.

Felix Walter: Yes, thanks Rick. I think this is really something extremely useful because it's used all the time when we set up testing. But I think in your draft you are really mixing two different concepts. We discussed in the past the simple ADU reflection, which for me is fully in line with the RFC: you receive an ADU, you put this ADU in a new bundle (you don't reflect the bundle itself, you put it in a new bundle), you do the transmission, you do all the steps you do for normal bundles. And I think this is something quite simple to standardize and put forward, and it also gives you information about the processing a node is doing to usual bundles. And you kind of toggle between these things. You say it's all the usual processing of the bundle, but then you say, okay, but we keep some extension block, we might do a little bit here. So I think what you are having here with a reflection is something where I really have difficulty describing it in terms of RFC 9171, which may be a shortcoming of RFC 9171, but I don't see a really clear way to describe these things you're proposing here conceptually. And I also see some real difficulties, especially with the creation time: we have really clear rules on creation time, on who is providing the creation time, and now we have another node basically just copying something over, so it seems not a very clean way to do it. So my question would be, shouldn't we go forward and define a simple ADU reflection, have a well-known service number, and understand the use cases for the reflection better, to understand what we need to put into place to have this reflection mechanism? Last note on the interoperability testing. I assume in the interoperability testing you get a bundle back, but I'm really wondering whether it fully implements the behavior you are specifying here, especially in terms of extension blocks and creation time.

Rick Taylor: So, yes, I am absolutely sympathetic to saying just reflect the payload. I completely understand that. The reason I have touched on the creation timestamp is because if you enable status reporting, which is quite useful, and I know it defaults to false, but this is an admin tool; if you enable status reporting, you would actually like all the status reports, out and back, to go to the same report-to EID. And in order to correlate the status reports, you need to understand that the creation timestamp maps to, ironically, now the destination rather than the source, so that you can correlate these status reports into one sequence. That's the reason for not changing the creation timestamp. If the Echo service on the far end creates a new creation timestamp, then you get status reports sent to you and you have no idea why they're being sent to you. You have no idea which ping, and you may have multiple pings in flight, this status report corresponds to, unless you want to go into the ADU. And I'm happy to have that discussion on the list. Simple is best, so I'm happy to start removing any of these suggestions to cut it down to really simple.
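
The correlation logic Rick is describing can be sketched as follows. This is purely illustrative, not taken from the draft: the class name, field layout, and EIDs are hypothetical, and it only shows why a preserved creation timestamp lets one ping client match status reports from both the outbound and reflected legs.

```python
# Illustrative sketch (not from the draft): how a ping client might
# correlate BP status reports when the Echo service preserves the
# original creation timestamp while swapping source and destination.

class PingTracker:
    """Tracks pings in flight, keyed by creation timestamp plus sequence."""

    def __init__(self, my_eid):
        self.my_eid = my_eid
        self.in_flight = {}  # (creation_time, seq) -> peer EID pinged

    def record_ping(self, peer_eid, creation_time, seq):
        self.in_flight[(creation_time, seq)] = peer_eid

    def on_status_report(self, subject_source, creation_time, seq, status):
        # A status report names the subject bundle's source EID and
        # creation timestamp. Because the reflected bundle keeps our
        # timestamp, both legs correlate to the same ping; the subject
        # source only tells us which leg the report is about.
        ping = self.in_flight.get((creation_time, seq))
        if ping is None:
            return None  # not one of our pings
        leg = "outbound" if subject_source == self.my_eid else "return"
        return (ping, leg, status)

tracker = PingTracker(my_eid="ipn:100.7")
tracker.record_ping("ipn:200.echo", creation_time=1234, seq=5)
# Report about our outbound bundle (subject source is us):
print(tracker.on_status_report("ipn:100.7", 1234, 5, "forwarded"))
# Report about the reflected bundle (source swapped, timestamp kept):
print(tracker.on_status_report("ipn:200.echo", 1234, 5, "delivered"))
```

If the reflector instead minted a fresh creation timestamp, the second lookup would miss, which is exactly the "status reports you can't explain" situation described above.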

Ed Birrane: Yeah, with chair hat off. A few things. I like the idea of this service. I actually see value in both kinds of reflection. Perhaps it would be helpful to have a use cases section, or expanded use cases sections, in the draft. I had one specific question; I just couldn't see it as I was skimming the draft itself. If I send an Echo request to you and you only switch the IDs and send it back to me, how do I determine that the thing coming back to me was the Echo, versus something sent to me that I need to Echo? It seems that there would need to be some signaling that says this is a reflection back and not an independent request from someone else that I myself now need to Echo. And I don't know how we avoid those reflections.

Rick Taylor: Because, and this is written in the appendix, the "how to implement a ping sensibly" non-normative section, it says your ping source EID should not be the Echo service. It should be some ephemeral number so that when the source and the destination are swapped, the response message comes back to you as the service, not to the Echo, otherwise you end up with an endless reflection.

Ed Birrane: Oh, I see. Okay. Makes sense and I missed it. So, thank you.

Rick Taylor: That's fine. If there's no one else, I will move on to the next, if that's okay. Adam, please do your slide magic. The battery in my mouse is dying.

Adam Wiethuechter: Okay. What are you? You're ARP next, right?

Rick Taylor: Yes, please. So this is part of Rick's series of filling in small holes in the road that we kind of need to make BP run sensibly at scale and cover those bases. So, let's request access to the slides. Yes, I want to take control of the slides. Perfect. So this is BP ARP, which looks an awful lot like IP ARP and does pretty much the same thing. So, fundamentally there are CLAs (I'm looking at LTP here, and maybe it applies to UDP, but I haven't read the v2 document in enough detail to say, and definitely TCP; we'll start with those two) which do not have an in-band mechanism to advertise the node ID of the remote peer at the other end. So, you get a CLA connection, you get a link, you know I can pass bits of bundles, in fact I can pass whole bundles between two CLAs, but I actually don't know who the next-hop neighbor is, to do anything smart in terms of setting up my routing tables, to set up "I understand that that guy down that CLA is actually this EID." Unless it is pre-configured, I don't stand a chance. And that is exactly the hole that this is trying to fill. So, here are some examples. There is a second example here as well, which is that Bundle Protocol has two different addressing schemes, the IPN EID scheme and the DTN EID scheme. TCPCLv4 only allows you to advertise a single node ID. So you normally pick which one you want (IPN is kind of the de facto at the moment), which means you have no way of advertising "Oh, I effectively have some other multi-homed addresses; I have a DTN node ID as well." So a second feature of this ARP protocol is to allow a peer to say, "Actually, I also have this DTN EID that I couldn't tell you across this CLA link." Next slide. There we go. So the way it fits in is it's kept very limited.
It will do the "I have a CLA adjacency, let me learn the node ID" and then I can hand it off to whatever more complex two-hop neighborhood discovery I'm doing, or I can correlate it to some sort of contact planning I'm doing. So it bootstraps SAND and other things, and it only does this one piece. I'm not trying to replace SAND, I'm not trying to cut into the early stages of SAND, I'm literally trying to say "I've got a CL address, I need to map that to an EID so that I can bootstrap everything else." It only works with singletons. There's no multicast business here. You give me your singleton node IDs. It starts to break down if you go multicast. This is not IGMP, this is the equivalent of ICMP. This is a BPA service. You are asking a BPA what its node ID is. So therefore in my mind that's an admin record, and you target the administrative endpoint. So you're not asking the Echo service what it thinks its EID is, you're asking the actual BPA, because you want the node ID. It only requires a single admin record type, because you can tell whether it's a request or a response: I introduce the local node as a destination, and this is a change. And these bullets are in a bit of a strange order, actually. The IPN-only probe mechanism: you have to use the IPN scheme in order to use BP ARP, although you can learn about DTN or other schemes (well, multicast and anycast aren't supported, but you can learn other schemes) via the IPN scheme bootstrap of ARP. So you must support IPN. Protocol flow. This should be really simple, but it hides a little detail which I should have drawn out in its own slide. So node A knows a CL address but doesn't know node B's ID. So it composes a BP ARP request, which sets its source as the admin endpoint, because this is a BPA-to-BPA conversation, and it sends it to ipn:!.0. Now for those of you who've read the IPN update RFC, that is the local node address.
So local node has some restrictions: as written in the IPN update, it is not allowed to leave the local node. It allows one service to address another service on the same node without having to learn the node's node ID. I am slightly abusing this to say "whoever receives this message, this is for you," which is a slight variation, and I've got a slide which talks about this in more detail. So, that is received by node B, and node B replies with the response, setting its source to one of its actual node IDs, in this case its IPN node ID because this is an IPN-based conversation, with the destination back to the source that sent it. That's it. The receiver then can, A, see the source of that response and, B, read the payload, and I'll talk about the payload format in a second, and therefore perform the discovery. Very simple, one exchange. Hop count equals 1. This is meant to be one hop. This is meant to be link local. Do not route this, and you do standard protection. I can't remember the name of the acronym, but it's a fairly common behavior in IP, to control one-hop propagation of IP packets, to set the TTL to the maximum and check on receipt that it hasn't been decremented. In BP land, you set the max hop count to 1 to prevent it going any further. Also it is a direct CL transmission. You are specifically saying: using this convergence layer adjacency, I wish to probe; I need to send this bundle to the other end of this link. So don't go through the RIB and the FIB lookup, don't start using your contact graph, literally poke it down that CL hole. So again that's another reason why it's very much an administrative, BP-native implementation detail. This can't be an external service. It's got delta query support, so you can say, "I know you've got this node ID, what other node IDs have you got?" which allows you to bootstrap off IPN and learn DTN node IDs.
And you should use BPSec BIBs in order to authenticate that you are actually who you say you are. Particularly if you're using a COSE context, you can start to assert your C509 certificate with actual key material which maps to your node IDs, and you can put some proper security on the whole process. That's a "should" because I cannot make you do it, but you really should do it. So, this is the critical change. This updates RFC 9758. So 9758 says there is this concept of local node, and local node must not leave the local node. So any bundles that come in addressed to local node must be discarded, and no bundle destined for local node may leave that local node. It was just an absolute blanket security policy, basically to say these are local, these make no sense on the network. I am proposing to loosen this to say: if you receive a bundle and it's an admin record, and, if you want to put type filtering on it, it's a BP ARP message, then actually ipn:! is allowed for this specific case. Other destinations are discarded; all previous rules apply. Okay? So it doesn't open local node for general use. It literally just cracks the door open a little bit for this case. The alternative to this formulation is we go and register a well-known IPN node ID in the default allocator in the IANA registry for this specific case, and it is a non-singleton EID, but it's not an anycast EID; it's a very specific magic number which means, a bit like zero means nowhere, it's another magic number. And I am proposing that we should use the local node because we've already got this magic number, which is 2^32 - 1. Alternatively we can go and register a new magic number, and that's an open question. And I see Felix in the queue. Felix, go ahead now actually, if you don't mind.
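
The proposed loosening of the local-node rule can be sketched as a receive-side check. This is a hypothetical illustration of the rule described above, not draft text: the admin record type number (99) is a placeholder, and the bundle representation is invented for the example.

```python
# Sketch of the proposed loosening of the RFC 9758 local-node rule as
# described above. The record-type value and bundle fields are
# placeholders; the draft would define the real ones.

LOCAL_NODE = "ipn:!.0"   # local-node address, service 0 (admin endpoint)
BP_ARP_RECORD_TYPE = 99  # hypothetical admin record type number

def accept_inbound(bundle):
    # Previously: any inbound bundle addressed to local node was
    # discarded. Proposed: admit it only when it is an administrative
    # record of the BP ARP type, arriving one hop over a CLA.
    if bundle["destination"] != LOCAL_NODE:
        return True  # normal destination, normal rules apply
    if not bundle.get("is_admin_record"):
        return False
    if bundle.get("admin_record_type") != BP_ARP_RECORD_TYPE:
        return False
    # Link-local guard: max hop count 1 keeps it one hop, analogous to
    # the IP trick of a TTL check on receipt.
    return bundle.get("hop_limit") == 1

print(accept_inbound({"destination": LOCAL_NODE,
                      "is_admin_record": True,
                      "admin_record_type": 99,
                      "hop_limit": 1}))                  # True
print(accept_inbound({"destination": LOCAL_NODE,
                      "is_admin_record": False}))        # False
print(accept_inbound({"destination": "ipn:100.1"}))      # True
```

The point of the sketch is how narrow the opened door is: everything except a one-hop BP ARP admin record addressed to local node behaves exactly as before.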

Felix Walter: Yeah, so because it's exactly on this topic. I'm wondering why you are not considering using the IEC anycast naming scheme for that, because this has the idea of well-known groups, which basically would say "any node," and you wouldn't have to do these tricks with the local node or the additional naming scheme or whatever. Because I'm a little bit worried we are putting all these kinds of small exceptions everywhere across, let's say, the BP world, which makes it quite difficult to understand. I think it's a typical use case we would have for anycast.

Rick Taylor: Frankly, because I started writing this before I read the anycast draft. I am very happy to use a well-known anycast group ID. I think that makes perfect sense, because you're doing exactly this: this would be an "all nodes" group, and anyone who supports BP ARP would be a member of that group. So, yeah, that's a very sensible suggestion. Very happy to make that change.

Felix Walter: Yeah, it seems we need to put this anycast individual draft into the working group as a working group draft sooner. I'm happy to work on this, yeah.

Rick Taylor: Okay, let's pencil that in as a topic for Vienna, and we can, between us all, collectively get that moving if that's all right. We'll check how Camiro is going with it. Okay, so security considerations. Go on Brian, did you want to say something?

Brian Sipos: I was just going to say that I am strongly in favor of any mechanism to address a neighbor non-specifically, and maybe decoupling that from the payload would be helpful.

Rick Taylor: Yeah, okay. Use BPSec, the very simple stuff. There are some security considerations, and I suggest you read the document; they are basically "be sensible." Be aware that this is revealing information. Use BPSec, all the usual stuff. And also there are some recommendations on what a BPA should do in terms of configurable policy, so a system administrator can say, "Actually, I don't want to turn this on for some things" or "I don't want to turn it on for my box." Yeah, IANA considerations: new service number, please. So the open questions I kind of covered on the way. Is there interest in adoption? Are people interested in the delta query semantics, the ability to say "I know one node ID, what other ones have you got?" We've kind of talked about local node. SAND: I'm not touching SAND. I'll let Brian talk about SAND, as he has already, but I'm not trying to do SAND. Are we happy to have this as a separate protocol, or bring it into SAND? I think you've kind of answered the first question, and I will stop. Ed, go ahead. I know I'm overrunning and I said I wouldn't.

Ed Birrane: No, you read my mind. So just chair hat on, we have eaten through much of what would have been our open mic time, so I would just say let us proceed and then try and keep to the 10 minutes for the next two topics.

Rick Taylor: Noted. Thank you. Right, Adam please do your slide magic.

Adam Wiethuechter: Okay. You're up next, right?

Rick Taylor: Yes, please. So this is part of Rick's series of filling in small holes in the road that we kind of need to make BP run sensibly at scale and cover those bases. So, DTN Peering Protocol. Let me request slide access. Okay, so this is my attempt to solve the problem that you have two independently administered DTN domains. So let's say, and I will use this purely for example purposes, but something everyone can understand: NASA operates a load of DTN-enabled stuff, and ESA operates a lot of DTN-enabled stuff. Now NASA and ESA in this example want to share information about how they may use each other's relays to deliver useful science from space to ground, but of course NASA doesn't want to allow ESA to manipulate the internal routing of the NASA network, and vice versa. So how can we have a mechanism where ESA and NASA can agree to share the relaying capability of their individually managed domains such that relaying can occur, because of course this is a store-and-forward system we're building? That is the problem space. So, in order to do that, you need to have, and there are too many slides here and I recommend you read them yourself: there cannot be a single global contact graph. We have to understand that there is a separation of administrative control between these two domains, and what we can do for two domains we can scale to N domains, and it should be a single mechanism that supports all of it. Now those of you who have been around the IETF for long enough go, "Ooh, this feels an awful lot like BGP." And yes, this is BGP for DTN, with a couple of little tweaks based on what the world has learned about BGP since it was first invented. So I'm going to blast through these slides for timeliness, because there are 19 of them, but there are some useful properties in here. Key design principles. Okay, it is transport agnostic. So it actually runs over gRPC.
And the reason it runs over gRPC is there is a separation, and it's the final point, between DPP speakers and the gateways that transit the bundles. The idea being that whatever NASA is running to manage its DTN network and whatever ESA is running to manage its DTN network is on the ground. And these two back-end systems can have a terrestrial back-end conversation about the gateways, the relay points, the transit points that may well be in deep space, but the ground system knows where these things are because it's their assets. And that is a reality at the moment. I'm not going to touch on what happens when we're all living in bubbles on Mars. I will be retired and that will be somebody else's problem. For now, I'm trying to solve the easy problem which is big orchestration systems sat on the earth are allowed to talk to each other about transit gateways that are off-earth. Nice simple architecture. So, we can use gRPC, which is great, which means we can be transport agnostic and we can just get the code running nice and easily. But in order to do this we need to unify the IPN and the DTN schemes into a single way of doing longest-prefix-first matching because this is lifted straight out of BGP if people know longest-prefix-first matching, so that we can work out which routes are more general or less general so that we can work out a prioritization between them, and we do that in this document. The other thing we need to determine is exactly whether the person who was saying "I know where the ESA transit routes are" is actually ESA. So there's got to be some identity. And the way I'm asserting this identity is I'm using DNS because DNS is also terrestrial, and remember we're talking about on the earth at the moment. So that means you can do all the good stuff with DNSSEC and you can start looking up cryptographic material in DNS and start to assert identity that way. So therefore you can separate who I am from where and what I can route. 
That's quite an important consideration. Speakers and gateways. Read this slide at your own speed. And this is a nice diagram explaining exactly the same thing. So a DSN would have a DPP speaker speaking to another autonomous region, which would be ESA; two speakers hold the session, they talk about gateways that manage the routes, and bundles travel between those gateways and off to the spacecraft. Nice simple architecture. DNS-based identity. Okay, so you publish an SVCB record for the DTN domain on the DNS name that you own, because you are a large organization who manages spacecraft and can therefore afford to get a DNS domain, and you can publish your public key and the algorithm you use into that DNS record, using DNSSEC to assert it, so that when two peers in DPP need to assert identity, they can do a DNS lookup, check the keys are valid, and therefore use that as part of the key exchange and the key agreement hello mechanism that's described in the protocol doc. Yeah, and this is the handshake flow for it. So you do your DNS lookup, you issue a challenge, you get a response, you do a verification on the cryptographic primitives, and therefore you know you're there, and then you can start sharing records. So after you've done that establishment, as long as you're happy with the lifetime of the cryptographic material, you don't need to constantly redo it; you've got an established session which you can trust. Obviously you're using gRPC, so you'll be using TLS, won't you? Everything else is pure BGP. So it's BGP updates, it's distance vector, you've got a list of hops for route loop avoidance, so you can understand that in a multi-domain environment where you've got ESA, JAXA, NASA, or private companies, Intuitive Machines and Rocket Lab, involved in this or whatever, you can work out that you're not going round in a big loop by bouncing ESA, NASA, ESA, NASA, ESA, NASA constantly.
That's what distance vector gives you. There are metrics for administrator preference, which is pretty much the MED from BGP, if people know MED. The route prefixes, and I haven't clarified this enough, are EID patterns. And I think on the next slide I come to that. So the EID patterns I have simplified to say there can only be a wildcard at the end of the pattern. So you have a fixed prefix and then you can say, effectively, star. So it is a strict subset of Brian's EID patterns (draft-ietf-dtn-eid-pattern), specifically for route sharing. So I'm not trying to say that Brian's draft is invalid; I'm saying for the route sharing part we need a stricter subset, which is what I talk about here. Extensible attributes, straight out of BGP: things which get stripped hop-by-hop, things which maintain all the kind of valid stuff in here. A bit of TVR has crept in, so "valid from," "valid until," so you've got contact windows as well as forwarding, so you can advertise a future contact and do your scheduling that way. Sorry, I'm trying to condense a very large slide. Specificity scoring. So this is how you map both DTN EID patterns with wildcards and IPN EID patterns with wildcards down to a single numerical score you can compare, to work out whether one route is more specific or less specific than another route. Because if you look at the underlying BGP rules, which I've kind of copied across into this, the specificity scoring is how you determine which routes override other routes, or duplicate or mask groups. So there is a basic algorithm for reducing EID patterns down to a score so that you can compare them correctly. I'll let you go through that in your own time; it's a little bit complicated. So route selection, very simple. Highest score, so prefer an exact over a wildcard. Fewer hops is the second; administrator preference metric is third; and the oldest route is better than a newer route, to maintain stability.
Of course, local policy can decide what it wants to do, but these are the general rules. I am going to stop now because there's going to be a lot of questions and I have a lot more slides, but I'm conscious of time. Does anyone have any questions on this or have I just baffled you with 19 slides of deep detail? The draft is now on the data tracker. I had to wait until the window opened before I could push it because I was still churning it until quite recently. Go ahead, Brian.
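
The route-selection rules just described can be sketched as a comparator. This is illustrative only: the scoring function is a stand-in for the draft's real specificity algorithm, the route records are invented, and the direction of the administrator-preference tie-break (higher wins here) is an assumption.

```python
# Illustrative sketch of the DPP route-selection rules described above:
# highest specificity score first, then fewer hops, then administrator
# preference (assumed higher-wins here), then the oldest route.

def matches(pattern, eid):
    # Restricted EID-pattern subset for route sharing: a fixed prefix
    # with an optional trailing wildcard.
    if pattern.endswith("*"):
        return eid.startswith(pattern[:-1])
    return eid == pattern

def specificity(pattern):
    # Stand-in score: exact patterns beat wildcards; longer fixed
    # prefixes beat shorter ones. The draft defines the real algorithm.
    exact = 0 if pattern.endswith("*") else 1
    return (exact, len(pattern.rstrip("*")))

def best_route(routes, eid):
    candidates = [r for r in routes if matches(r["pattern"], eid)]
    if not candidates:
        return None
    # Sort key: higher specificity first, then fewer hops, then higher
    # admin preference, then older (smaller learned-at time) first.
    return min(candidates,
               key=lambda r: (tuple(-s for s in specificity(r["pattern"])),
                              len(r["hops"]), -r["pref"], r["learned_at"]))

routes = [
    {"pattern": "ipn:200.*", "hops": ["esa"], "pref": 100, "learned_at": 10},
    {"pattern": "ipn:200.5", "hops": ["esa", "jaxa"], "pref": 50, "learned_at": 20},
]
print(best_route(routes, "ipn:200.5")["pattern"])  # exact match wins: ipn:200.5
print(best_route(routes, "ipn:200.9")["pattern"])  # only the wildcard matches
```

The hops list in each record is also what drives the distance-vector loop avoidance: a speaker that sees its own domain in the list would simply discard the route.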

Brian Sipos: Thanks. I think using DNS to assert some ownership is a good strategy, and I think there's enough detail in here to be able to start picking over those details as a draft.

Rick Taylor: I strongly recommend people grab the slides off the data tracker, read the slides, and then go to the draft because there's quite a lot of content in here and the slides give you a pretty good overview. And on that note I will shut up. Thank you for your patience. I yield the floor. Ed, take over.

Ed Birrane: No, I was just going to say, thank you Rick. And then I think it's Erik for BTPU, Ethernet and QUIC Convergence Layer. draft-ietf-dtn-btpu

Erik Nye: Do you mind flipping slides? I'm inside Chrome on an iPad. I don't know what that would look like.

Ed Birrane: I can absolutely do that. So these look promising. Can we start here?

Erik Nye: That's the BTPU one, not the Ethernet one. We can start with that if you like.

Ed Birrane: Well, sorry. There we go. Let's start with that one.

Erik Nye: Sure. Thank you. And you can put 10 minutes on for the two together if that's okay. Thank you. Yes, so this is the latest version of BTPU over Ethernet. If anybody hasn't read it, it's basically requesting some Ethernet properties from IEEE and from IANA to allow transmitting bundles directly over Ethernet, for links that don't have IP configuration and for other Ethernet and Ethernet-like links, some of which are in the document. I added a reference to the fact that you can have Ethernet bearer sessions in 3GPP, and also that the US Space Development Agency's optical terminal specification uses Ethernet, although if memory serves in a slightly unusual way. So originally I had tried to think about replicating all of the UDPCL semantics over Ethernet, but then along came BTPU. Next slide please. It has all of the required issues sorted: it's got multiplexing and segmentation, and it has FEC support. Draft 04 is out with a bunch of updates responding to feedback from Brian principally, I believe. And I added a little bit more Ethernet security stuff. We talked about virtual channels, how you identify one stream of Ethernet BTPU packets from another, including how that applies to the MAC address, and I reordered the sections as well based on Brian's feedback. Next slide please. So this has been around for a little while. I think it's actually beyond ready for adoption at this point, probably ready for last call. I don't know what else is missing, but I'll just leave that for the group. The document is informational. Very short. It just requests the IESG to authorize talking to the IEEE to request an EtherType, and also to talk to IANA to request a multicast MAC address.

Ed Birrane: Yeah, chair hat on, we can certainly take that to the mailing list. Also Brian.

Brian Sipos: Yeah, I'll just mention that I think this is a necessary part of proving out BTPU. So it's a good thing and a needed thing.

Erik Nye: Great. Thank you. And that's all for Ethernet.

Season: I had a quick question or quick comment. Yes, please. Yeah, so the QUIC protocol here, being a convergence layer adapter for bundle protocol: if it's purely being used over traditional comms, or terrestrial comms, is there still consideration for then using this over space links as well, like space link to space link?

Erik Nye: Okay, so you're on this presentation now, not the Ethernet one. Your question is about QUIC. Yes, there isn't any reason why you could not use the QUIC timer adjustment work that's being discussed in TIPTOP to control how the QUIC CL behaves.

Season: Yeah, I guess so. I have an active, or we tried to propose, a draft in there as well to address the security concerns with the handshake mechanism in QUIC, which uses TLS, and we said that there are some issues with essentially using that synchronous type of handshake mechanism that QUIC uses to establish a secure session in these kinds of DTN environments. And if it's going to be used as a convergence layer adapter for BP, then we have the same issue again, except now magnified for most of the BP uses for deep space. So that's one of the concerns I would have with adopting it as-is, and perhaps considering asynchronous-style connections instead.

Erik Nye: It's not coming to mind exactly what it is you're referring to security-wise. I probably need to go find it and read it.

Season: I'll put a link in the chat and maybe move this to the mailing list then.

Erik Nye: Okay. Yes, Ed.

Ed Birrane: Oh, sorry, just to address the question about that point with QUIC and the security part of it. I certainly see how that's an issue if it is only a QUIC layer, but if QUIC is the CLA underneath and you are running a bundle layer that is getting and applying BPSec and different keying strategies at its layer, does that not relax your concern about the QUIC security issues?

Season: I think that actually is a little bit duplicative. You would then have two different keying services. I guess you'd have to choose one over the other as to which particular keys to use. Do you want to use the ones that were established by QUIC, which is a session-based style using ephemeral keying information over the links, or do you want to use the ones provided through BPSec, which isn't specified, at least through the default security context?

Rick Taylor: Chair hat on for a second because I'm conscious of time. Erik, can you get further through the presentation? Because I think this might answer some of Season's questions, because it should describe some of the use cases. And then this is a longer discussion, and Season, I absolutely understand your concerns and I think they should be addressed carefully, at length, on the mailing list. And yes, I absolutely want to address these, because a lot of them are applicability concerns. It's like: if you use it in these cases, with these sorts of round-trip times and delays, then you have a lot of these concerns. I think what Erik is proposing is a fairly generic mechanism, and we'll get into some more of the specialization in later revisions. So, Erik, go ahead please.

Erik Nye: Yeah, I must also admit that I had principally had in mind terrestrial relay here, and I had thought that some of the TIPTOP timer adjustment might be suitable for other deployments, but yeah. I think maybe I should, in the interest of time, skip past what QUIC is. If you don't know what QUIC is, here's a list of RFCs. It's a UDP-based transport that has mandatory TLS; it's got some anti-ossification properties that the DTN community hasn't probably worried about for a while. It can have both reliable and unreliable datagram delivery, and it has multiple streams in the same connection, so there's no head-of-line blocking. Next slide please. So I know there was some other work to propose a QUIC convergence layer, but this is a very short document that just tries to provide a very straightforward mapping of bundle transfer mechanics to QUIC semantics. It basically says, for reliable transfer, just one and only one bundle per stream. You get transport layer ACKs that confirm transport layer delivery. Just to hang on a point here for one second: that's not the same thing as agent-to-agent delivery. I think we actually need a separate agent-to-agent message to confirm one-hop delivery. Obviously it supports multiple simultaneous streams, and basically you're never going to run out of streams. It's also possible to use unreliable datagrams if you're somehow, for some reason, not worried about dropping them, and the document says you should just use BTPU in that case. Notably the document requests an ALPN for use in the TLS handshake, and it also requests an attrleaf _qbcl for use in DNS service discovery, again principally for environments where DNS works, terrestrial mostly. Next slide please.
And yes, because the intent here is to get to this place where you could have a mars-orbiter.example name that you know you want a delivery for, and you can do the standard _dtn.bundle lookup in SRV that's already specified and get an SRV record that points you to the cloud agent here, or you could use the new service binding (RFC 9460) lookups to _qbcl.mars-orbiter to get a reference to this other information. And there you can find a service binding record that will give you all the connection parameters you need to bootstrap a QUIC connection straight away: get the port number, confirm the ALPN is QBCL, get some IP addresses, and away you go. And that's basically the end of it. Yeah, I'm out of time. So, just one more slide, but not really. Oh, if we do actually get an ipn.arpa, then it will also be possible to take an IPN number, put it into an ipn.arpa form and then just look it up directly, get a CNAME to rs.orber, and then begin to do all the same lookups, kind of thing. So that's sort of where this is headed. I think this will make terrestrial delivery quite easy, to discover things without having to have a lot of manual config. But it's all just sketched out. Anyway, the document requests these parameters. So I think that's out of time for me. Thank you.
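
The discovery flow above can be sketched as name construction. Everything here is an assumption except the _qbcl label the document requests: the SRV service label, the zone names, and the ipn.arpa layout are all illustrative and not yet standardized.

```python
# Sketch of the DNS discovery names described above. Only "_qbcl" comes
# from the document; the SRV service label and the ipn.arpa layout are
# assumed for illustration.

def srv_name(host):
    # Existing bundle-agent SRV lookup (service label assumed here).
    return f"_dtn-bundle._tcp.{host}"

def qbcl_svcb_name(host):
    # RFC 9460 service-binding (SVCB) lookup for the QUIC bundle CL;
    # the answer would carry port, ALPN, and address hints.
    return f"_qbcl.{host}"

def ipn_arpa_name(node_number):
    # Hypothetical reverse form for an IPN node number, if ipn.arpa is
    # ever delegated; the real layout is an open question.
    return f"{node_number}.ipn.arpa"

print(srv_name("mars-orbiter.example"))        # _dtn-bundle._tcp.mars-orbiter.example
print(qbcl_svcb_name("mars-orbiter.example"))  # _qbcl.mars-orbiter.example
print(ipn_arpa_name(200))                      # 200.ipn.arpa
```

The SVCB answer is what removes the manual configuration: one lookup yields enough parameters to open the QUIC connection directly.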

Ed Birrane: All right, Erik, thank you so much. And last, which will make for quite a summary, is a short presentation. Scott, I'm happy to flip slides if you want to work through this and talk about the existence of a 4838bis and some of the things that are on our mind for it.

Scott Burleigh: Sure. Very briefly, this is only about five slides. The most important one is probably this one: the motivation for doing an RFC 4838bis, which is that there are some concepts that have evolved over the last 19 years since 4838 came out, and it's worthwhile to document those changes. But more important than just the desirability of keeping up to date is that as delay-tolerant networking becomes more broadly used and adopted in spaceflight operations, more and more people are becoming interested in it, want to know something about it, and will look for a document that tells them what this is all about. And if they pick up RFC 4838, they may get some confusing information that differs from the way the protocols work now. It would be good to bring the architectural overview into alignment with the protocol definitions. So, let's go to the next slide. I think what we've got here is a summary of the rationale for the DTN architecture, and most of these are items that we've discussed a lot over the last couple of decades. There has been some rethinking, and the next three slides summarize it. There are things that are retained; we think these are still as true as they were in 2007. Next slide: there are some things that may need to be reviewed, revised, or removed. All of these would of course be fun to talk about right now, but we don't have time. And then, last slide: I think these are additional concepts significant enough that we ought to present them to people who are looking for information about delay-tolerant networking, and they belong in that introductory document, which is really what we're trying to do with this architecture draft. Again, these are all interesting things to talk about, and we'll have to do that later and on the mailing list. I should stop right there and ask if there are any questions.

Ed Birrane: Scott, thank you for that. And I would just chime in, chair hat on. Exactly what you said at the beginning: we're getting a lot of interest in DTN. We have more than filled two and a half hours at this IETF. 4838 is a thing that comes up quite readily when people search for DTN, so having an update to it is going to be important. Let us take some of this to the mailing list, and please keep the conversation active there.

Erik Nye: Erik. Don't just say thank you and goodbye.

Ed Birrane: We are done. Thank you. Thank you, everyone, for a really packed two sessions.

Rick Taylor: Thank you all for your patience. Thank you guys. See you in Vienna. Bye. Bye bye.