The IEPG meets each Sunday at the start of the IETF week. The intended theme of these meetings is essentially one of operational relevance in some form or fashion — although the chairs will readily admit that they will run with an agenda of whatever is on offer at the time! One of the presentations at the recent IEPG meeting was on the topic of deep-space networking.
It has been an enduring fascination to see how we could use packet networking in the context of digital communications in space. Why can’t we just use the IP protocol suite and declare success? The tricky issue with space is that it is really very big! The combination of large-scale distances and the limiting factor of the speed of light means that round-trip delays are significantly longer than in terrestrial scenarios.
It takes some 2.4 to 2.7 seconds to send a signal to the Moon and back. If we are talking about sending a signal to Mars and back, then the comparable delays are between 10 and 45 minutes. There is also the factor of extended interruption when an orbiting spacecraft passes behind the object it is orbiting. If we look at communications with other planets in the solar system, there is a periodic interval when the planet passes behind the Sun as seen from Earth. For example, every two years or so the Sun blocks the Earth’s view of Mars for an interval of around two weeks.
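A quick back-of-the-envelope calculation makes the point. The following sketch derives these delays directly from the speed of light, using representative distances (the Earth-Mars separation varies continuously with the orbital positions of the two planets, so the figures are indicative only):

```python
# Back-of-the-envelope signal round-trip times.
# Distances are representative values only.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

distances_km = {
    "Moon": 384_400,                       # mean Earth-Moon distance
    "Mars (near opposition)": 90_000_000,  # a representative close approach
    "Mars (near conjunction)": 400_000_000,
}

for body, km in distances_km.items():
    rtt = 2 * km / C_KM_PER_S              # there and back again
    print(f"{body:24s} RTT = {rtt:7.1f} s ({rtt / 60:5.1f} min)")
```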
Such protracted Round-Trip Time (RTT) intervals are well beyond what we experience in the everyday Internet, even in the most bizarre of fault scenarios! For an end-to-end reliable protocol, the sender must retain a copy of the sent data until it is acknowledged as received by the other end. We have become used to a network where RTT intervals are a few tens of milliseconds, so simple interactions, such as a TCP three-way handshake or a DNS query and response, can happen within the limits of human perception. When such interactions blow out to some 30 minutes or so, is an end-to-end interaction model the right architectural choice?
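To get a feel for what ‘retain a copy until acknowledged’ implies at these timescales, consider the bandwidth-delay product of the path. A minimal sketch, assuming an illustrative 1Mbps link rate:

```python
# How much sent-but-unacknowledged data must an end-to-end reliable
# sender retain? At minimum, one bandwidth-delay product's worth.
# The 1 Mbps link rate is illustrative only.

def retained_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight awaiting acknowledgement."""
    return link_bps * rtt_s / 8

for rtt_s, label in [(0.03, "terrestrial, 30 ms"),
                     (2.6, "lunar, ~2.6 s"),
                     (1800.0, "Mars, ~30 min")]:
    mbytes = retained_bytes(1_000_000, rtt_s) / 1e6
    print(f"{label:20s}: sender retains >= {mbytes:10.4f} MB of unacked data")
```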
Hop-by-hop relay protocols can mitigate this to some extent, as each intermediate system performs a reliable handover to the next intermediate system on the path. This was a feature of many message-based communications protocols in the 1980s that used a store/forward messaging system layered on top of intermittent links provided by analogue modem connections over voice calls. When data networking migrated to permanent circuits, the relay-based approach was largely abandoned in favour of the efficiency and cost advantages that are an intrinsic aspect of end-to-end protocols.
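As a minimal sketch of this hop-by-hop custody model (the node names are invented for illustration), each relay stores the message and acknowledges the previous hop, which can then discard its copy:

```python
from collections import deque

class RelayNode:
    """A store/forward relay that takes custody of messages."""
    def __init__(self, name: str):
        self.name = name
        self.held = deque()          # messages currently in custody

    def accept(self, message: str) -> bool:
        self.held.append(message)    # store durably before acknowledging
        return True                  # acknowledgement: custody transferred

def relay(path, message: str) -> None:
    for node in path:
        if node.accept(message):     # previous hop may now discard its copy
            print(f"custody of {message!r} now at {node.name}")

relay([RelayNode("ground-station"), RelayNode("orbiter"), RelayNode("lander")],
      "telemetry-request")
```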
The IETF published RFC 4838 in 2007, describing an architecture for an inter-planetary communications framework. This informational memo noted that the existing Internet protocols do not work efficiently in such environments, and attributed this conclusion to some fundamental assumptions built into the Internet’s architecture, specifically that:
- There are no store/forward relays, so an end-to-end network path must exist between source and destination for the duration of a communication session
- (For reliable communication) retransmissions based on timely and stable feedback from data receivers are an effective means of repairing errors
- End-to-end loss is relatively small
- All routers and end stations support the TCP/IP protocols
- Applications need not worry about communication performance
- Endpoint-based security mechanisms are sufficient for meeting most security concerns
- Packet switching is the most appropriate abstraction for interoperability and performance
- Selecting a single route between sender and receiver is sufficient for achieving acceptable communication performance
The Delay Tolerant Networking (DTN) architecture was conceived to allow most of these assumptions to be relaxed, and it appears to be a return to the earlier store/forward relay form of communications. These DTN design principles led to the Bundle Protocol (RFC 5050, RFC 9171), using a relay design with its own transport, routing, naming, security, neighbor discovery, application API, and network management components.
This Bundle Protocol is in itself an interesting exercise in revisiting a now 40-year-old past and applying subsequent learnings to the design of a store/forward relay system. Like the original Internet, the DTN approach is that of an overlay network that passes units of application data termed ‘bundles’. The delivery of each bundle should allow an application to make progress on its desired task. This extends the original framework, which was designed solely with space communications in mind, to many forms of heterogeneous communications environments where connectivity is intermittent, such as can be found in many ad hoc networks.
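As a rough illustration of the information a bundle carries, here is a simplified sketch of the RFC 9171 primary block. The real protocol encodes these fields in CBOR and includes processing flags and CRC fields omitted here, and the endpoint IDs below are invented:

```python
# A simplified sketch of the fields carried in a Bundle Protocol v7
# (RFC 9171) primary block. This dataclass illustrates the information
# model only; it is not the wire format.

from dataclasses import dataclass

@dataclass
class PrimaryBlock:
    version: int           # 7 for RFC 9171
    destination: str       # endpoint ID the bundle is addressed to
    source: str            # endpoint ID of the originating node
    report_to: str         # where status reports are sent
    creation_time_ms: int  # bundle creation timestamp
    lifetime_ms: int       # how long relays should retain the bundle

bundle = PrimaryBlock(
    version=7,
    destination="dtn://lander/telemetry",  # invented endpoint IDs
    source="dtn://earth-gw/ops",
    report_to="dtn://earth-gw/ops",
    creation_time_ms=0,
    lifetime_ms=86_400_000,                # a one-day lifetime
)
print(bundle)
```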
However, this generalization of the DTN work has also prompted a re-evaluation of the constraints of space-based communications. The basic question posed in this effort is: ‘Is it possible to fit a more conventional IP architecture into the context of a deep space environment?’ This presentation and the associated study explore this question in further detail.
In the context of deep space, local caching of a packet may occur not only because of link congestion but also because of temporary link unavailability due to orbital mechanics or similar. Not only does this necessitate deep queues in intermediate routers, but also the use of active queue management, Bidirectional Forwarding Detection (BFD) with large timers, and other aspects of time-variant routing. Extended in-flight storage of IP packets may exacerbate bufferbloat issues, and a carefully managed queuing discipline is necessary.
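A minimal sketch of this modified forwarding behaviour, in which the outbound queue holds packets across a link outage and releases them in the next contact window (the contact times are invented), might look like this:

```python
class ContactQueue:
    """Outbound queue that buffers packets while the link is down."""
    def __init__(self, contact_windows):
        self.windows = contact_windows   # (start, end) times the link is up
        self.buffer = []

    def link_up(self, now: float) -> bool:
        return any(start <= now <= end for start, end in self.windows)

    def enqueue(self, packet: str) -> None:
        self.buffer.append(packet)       # hold rather than drop

    def drain(self, now: float) -> list:
        if not self.link_up(now):
            return []                    # link down: keep holding packets
        sent, self.buffer = self.buffer, []
        return sent

q = ContactQueue(contact_windows=[(100, 200), (500, 600)])
q.enqueue("pkt-1")
q.enqueue("pkt-2")
print(q.drain(now=50))    # [] - next hop occluded, packets retained
print(q.drain(now=150))   # ['pkt-1', 'pkt-2'] - contact window open
```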
As for routing, a link-state approach, where each node computes its own view of the network topology, seems to be a better fit for this environment than dynamic distance-vector routing.
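Because the topology here is a function of time, the link-state computation becomes a search over a contact plan rather than over a static graph. The following sketch, in the general spirit of contact graph routing and using an invented contact plan, computes earliest-arrival paths:

```python
# Dijkstra-style earliest-arrival search over a time-varying topology.
# Each link is usable only during a contact window; the node's local
# link-state view is the full contact plan. All values are invented.

import heapq

# (from, to, window_start_s, window_end_s, one_way_delay_s)
contacts = [
    ("earth", "orbiter", 0, 1000, 600),
    ("orbiter", "lander", 1200, 2000, 2),
    ("earth", "relay", 0, 3000, 900),
    ("relay", "lander", 0, 3000, 900),
]

def earliest_arrival(src, dst, t0=0.0):
    best = {src: t0}
    pq = [(t0, src)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dst:
            return t
        for u, v, start, end, delay in contacts:
            if u != node:
                continue
            depart = max(t, start)       # wait for the window to open
            if depart > end:
                continue                 # missed this contact entirely
            arrive = depart + delay
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(pq, (arrive, v))
    return None

print(earliest_arrival("earth", "lander"))  # 1202.0 via the orbiter
```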
It is feasible to place a reliable transport on top of this packet transport, although there is a need to use extended timers and a multi-stream approach to mitigate the head-of-line blocking issues that come from single-stream TCP. The stream multiplexing architecture used by QUIC would appear to be a better fit in this respect, although timer adjustments on QUIC’s inner reliable transport module would require adaptation to a deep space environment.
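To illustrate why timer rescaling, rather than wholesale redesign, may be what is needed, here is the classic retransmission timeout calculation (after RFC 6298; QUIC’s loss recovery in RFC 9002 uses a broadly similar smoothed-RTT estimator) fed with illustrative deep-space RTT samples:

```python
def initial_rto(first_rtt_s: float, granularity_s: float = 0.1) -> float:
    """First-measurement RTO per RFC 6298: SRTT + max(G, 4 * RTTVAR)."""
    srtt = first_rtt_s
    rttvar = first_rtt_s / 2
    rto = srtt + max(granularity_s, 4 * rttvar)
    return max(rto, 1.0)                 # RFC 6298 one-second lower bound

for rtt_s, label in [(0.03, "terrestrial"), (2.6, "lunar"), (1800.0, "Mars")]:
    print(f"{label:12s} RTT {rtt_s:8.2f} s -> "
          f"initial RTO {initial_rto(rtt_s):8.2f} s")
```

The arithmetic is unchanged; only the inputs and the resulting timer values move from milliseconds to minutes or hours.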
Some local adaptations to the DNS would be appropriate, but this could be as simple as pre-provisioning names in the local hosts file to avoid protracted name-resolution delays for commonly used DNS names.
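A minimal sketch of such a pre-provisioned resolution path, consulting a static table before falling back to a conventional lookup (the names and addresses below are invented):

```python
# Resolve names from a pre-provisioned static table before issuing a
# DNS query, avoiding a multi-minute round trip per lookup.

import socket

STATIC_HOSTS = {
    "relay.mars.example": "2001:db8::1",
    "lander.mars.example": "2001:db8::2",
}

def resolve(name: str) -> str:
    addr = STATIC_HOSTS.get(name)
    if addr is not None:
        return addr  # local answer: zero network round trips
    # Fall back to a conventional (and, here, very slow) DNS lookup.
    return socket.getaddrinfo(name, None)[0][4][0]

print(resolve("relay.mars.example"))
```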
In this context of looking at the feasibility of porting the IP protocol suite to the deep-space environment, it is useful to remember the history of mobile data, where at one point in its evolution the industry had convinced itself that IP could not work over mobile radio systems, and it devised an entirely new framework, the Wireless Application Protocol (WAP).
WAP never became established in the mainstream mobile market. Early efforts by Nokia and DoCoMo showed that it was possible to run IP over mobile radio systems and bring the rich ecosystem of the emerging Internet to mobile users. Once it was established that the IP transport protocols could operate stably within the mobile environment, the entire rich Internet application ecosystem was accessible to the mobile user, and at that point WAP was in trouble. The mobile IP market quickly gathered momentum, and the release of Apple’s iPhone in 2007 sealed WAP’s demise.
The question here is whether the assumptions in RFC 4838 hold to the extent that it’s necessary to construct an entirely novel deep-space digital environment. The attraction of being able to leverage the existing IP environment and apply it to the deep space domain is undeniable, assuming that it can be configured to work seamlessly.
The combination of QUIC and TLS 1.3 with its fast open capability removes much of the overhead of connection establishment, and QUIC’s framing concepts eliminate the ambiguity over retransmitted segments. So far, the project has been working with delay units that emulate the delay characteristics of deep space, and it has been able to adapt a QUIC implementation to operate over a connection with an RTT of up to 10 days.
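A trivial form of such a delay unit can be sketched as a UDP relay that holds each datagram for a configured light-time interval before forwarding it. The addresses and delay value here are invented, and testing at multi-day RTTs would call for persistent storage rather than in-memory timers:

```python
# A one-way delay emulator: hold each UDP datagram for a fixed
# interval before forwarding it to the next hop.

import socket
import threading

LISTEN = ("127.0.0.1", 9000)   # where traffic arrives
FORWARD = ("127.0.0.1", 9001)  # where delayed traffic is sent
ONE_WAY_DELAY_S = 5.0          # illustrative; deep-space tests use far more

def emulate():
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(LISTEN)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        data, _ = rx.recvfrom(65535)
        # Deliver the datagram only after the configured light-time delay.
        threading.Timer(ONE_WAY_DELAY_S,
                        tx.sendto, args=(data, FORWARD)).start()

if __name__ == "__main__":
    emulate()
```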
By temporarily storing IP packets in deep-space forwarding nodes until links become available again (a modification of existing IP forwarding behaviour, where packets are discarded if they cannot be forwarded immediately), setting appropriate QUIC timer parameter values, or using UDP with application-level timers appropriate to the deep-space environment, it is possible to consider a deep-space communications environment that is far closer to IP than the Bundle Protocol used by DTN. The contention is that the use of IP can significantly decrease costs and leverage existing IP technologies and associated operational expertise. It’s an interesting direction to pursue, although obviously more work is needed to test various scenarios and the viability of tools and applications.
More information on this project can be found at IP Protocol Stack for Deep Space.