Two concerns need to be voiced about the direction in which the TCPSAT
group appears to be heading. First, let me say where I'm coming from; I run
NASA's space mission operations standardization program. Our goal is to be
able to "seamlessly" integrate emerging commercial technologies into our
space missions in order to lower their cost. By definition, our missions
involve dialog over a space link that is broadly characterized by:
- Significant propagation delay;
- A potentially large number of non-congestion-related losses, e.g.,
channel-induced errors and outages;
- Asymmetry in its forward and return channel transmission capacity.
In contrast, TCP is designed for a "fiber-like" environment of:
- Low propagation delay;
- Negligible non-congestion-induced errors;
- Symmetric channel transmission capacity.
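Some illustrative arithmetic (my own numbers, not the group's) shows why "long-delay fiber" understates the problem. Even a modest, hypothetical T1-class channel over a geostationary hop has a bandwidth-delay product that a classic 64 KB TCP window cannot fill:

```python
# Back-of-the-envelope numbers for a GEO "bent-pipe" hop.
# The link rate below is an assumption for illustration only.
SPEED_OF_LIGHT = 3.0e8           # m/s
GEO_ALTITUDE = 35_786_000        # m, geostationary orbit

one_way_delay = 2 * GEO_ALTITUDE / SPEED_OF_LIGHT  # ground->sat->ground
rtt = 2 * one_way_delay                            # full round trip, ~0.48 s

link_rate_bps = 1.5e6            # assumed T1-class channel
bdp_bytes = link_rate_bps * rtt / 8

print(f"RTT ~ {rtt * 1000:.0f} ms, "
      f"bandwidth-delay product ~ {bdp_bytes / 1024:.0f} KB")
# A standard 64 KB TCP window is already too small for this modest pipe,
# before error and asymmetry effects are even considered.
```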
The TCP solutions for satellite applications that have so far been proposed
by the group appear to be solely focused on the delay problem, i.e., they
treat satellites as "long-delay fibers". Furthermore, they seem to require
that the end user have a priori knowledge of the precise end-to-end
transmission path, and in particular of the detailed characteristics of the
space channel. My first concern, therefore, is that any dialog which
requires the user to tune the communications parameters is hardly "seamless".
Instead of pushing the problem off to the global end-user community,
wouldn't a better solution be to handle the problem locally - where it
occurs, i.e., over the satellite link - via a transparent gateway approach
(at least in the near term)?
My second concern is that the error-prone nature of space channels is being
glossed over by the assertion that you can simply "FEC the problem away".
Yes, forward error correction can do a wonderful job of progressively
bootstrapping crummy channels so that they begin to look clean. However, as
at least one member of the group (Bill Sepmeier) has observed,
"unfortunately, in the real world of satellite communications, one can be
assured of one thing: your path will degrade, or be degraded, to below
'fiber quality' more often than you would expect". That is certainly the
situation in most of our space missions, where the coding will eventually
start to break down, either producing marginal data quality (a 1E-05 or
1E-06 bit error rate) or simply giving up and causing an outage.
an outage. I have a strong and uneasy feeling that there are a lot of
satellite providers out there who know this is a problem yet cannot admit
it in public for fear of exposing a chink in their armor to the fiber
advocates.
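To put numbers on "marginal data quality": assuming independent bit errors and 1500-byte packets (both simplifying assumptions of mine), even those residual bit error rates corrupt a noticeable fraction of packets, every one of which a standard TCP will misread as congestion:

```python
# Rough packet-loss estimate from random bit errors.
# Assumptions (mine, for illustration): independent bit errors,
# 1500-byte packets, any corrupted bit loses the whole packet.
PACKET_BITS = 1500 * 8

def packet_loss_prob(ber: float) -> float:
    """Probability that at least one bit in a packet is corrupted."""
    return 1.0 - (1.0 - ber) ** PACKET_BITS

for ber in (1e-6, 1e-5):
    print(f"BER {ber:.0e} -> ~{packet_loss_prob(ber):.1%} of packets lost")
# Roughly 1% of packets at 1E-06, and on the order of 10% at 1E-05 --
# far above anything TCP's congestion machinery was tuned for.
```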
Furthermore, we have to deal with other sources of intermittent data outage
- such as low-horizon tracking passes and mechanical antenna obscurations -
on a routine basis. Any TCP implementation that treats all sources of
outage as congestion-induced is going to perform very poorly in a space
application. Since we mostly have very strong indicators available from the
link protocol when such errors or outages occur, what's wrong with
exploiting that information to tell TCP that the problem *wasn't* caused by
congestion?
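The idea can be sketched in a few lines. None of these names come from any real TCP stack; this is purely a hypothetical illustration of a sender that consults an explicit link-layer corruption/outage indication before cutting its window:

```python
# Hypothetical sketch of explicit non-congestion loss notification.
# Class and parameter names are invented for illustration only.

class Sender:
    def __init__(self, cwnd: int = 10):
        self.cwnd = cwnd  # congestion window, in segments

    def on_loss(self, link_says_corruption: bool) -> None:
        if link_says_corruption:
            # Loss attributed by the link protocol to channel error or
            # outage: retransmit, but leave the congestion window alone.
            pass
        else:
            # No link-layer indication: assume congestion and apply the
            # standard multiplicative decrease.
            self.cwnd = max(1, self.cwnd // 2)

s = Sender()
s.on_loss(link_says_corruption=True)
assert s.cwnd == 10   # non-congestive loss: window preserved
s.on_loss(link_says_corruption=False)
assert s.cwnd == 5    # congestive loss: window halved
```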
Even though this group has no power to make changes to TCP itself, it does
have a strong "consciousness raising" role within the IETF community and a
clear responsibility to represent the concerns of the commercial satellite
communications industry and (perhaps) related enterprises such as the
wireless/mobile providers. We have a very real opportunity to effectively
advocate a research direction for TCP evolution, yet we seem to be
abdicating this role in order to avoid potential controversy. By hiding
satellite problems in order to fend off the short-term fiber advocates,
aren't we in danger of doing long-term damage to the industry?
Best regards,
Adrian J. Hooke
NASA-JPL
+1.818.354.3063