In response to Tom Lynch: I feel we may be getting somewhere now.
Since I helped fuel this line of thought, I thought I'd try to
bring it towards agreement:
> TCP assumes all loss is due to congestion. How true is this
> assumption? How would the use of ECN improve the way that
> TCP responds to loss and congestion?
I think you could generalise this to differentiation of the type of loss,
be it explicit or implicit (if such schemes ever achieve widespread
acceptance?) - notification may relate to the link and/or to congestion -
but either way, partial information is a problem. Adaptive bit-rate
systems further complicate this - satellite links do not inherently
operate at a fixed data rate.
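As a rough illustration of why that distinction matters, here is a minimal
sketch (the class and method names are purely illustrative, not from any
real stack) of a sender that slows down on an explicit congestion mark but
only retransmits on actual loss - with an explicit mark the data itself
arrived, so only the rate needs to change:

    # Hedged sketch: how a sender might react differently to an ECN mark
    # versus a detected loss. Names (Sender, on_ecn_echo, on_loss) are
    # illustrative only.
    class Sender:
        def __init__(self, mss=1460):
            self.mss = mss
            self.cwnd = 10 * mss        # congestion window in bytes

        def on_ecn_echo(self):
            # Congestion signalled explicitly: slow down, but nothing
            # was lost, so no retransmission is needed.
            self.cwnd = max(self.mss, self.cwnd // 2)

        def on_loss(self, segment):
            # Implicit signal: could be congestion OR link corruption.
            # Without more information the sender must both slow down
            # and retransmit.
            self.cwnd = max(self.mss, self.cwnd // 2)
            return segment              # queue it for retransmission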
FEC and/or link-layer ARQ are important for protection against loss.
Loss may also result from a lack of "tracking", e.g. in a burst modem.
Link-layer ARQ is beneficial at low rates, and most useful if it
considers the type of traffic AND does not adversely impact TCP.
At low rates, concatenated FEC (e.g. RS) may introduce considerable
coding delay. At high bit rates (>2 Mbps), FEC or adaptive FEC is
probably beneficial and may be "fibre quality" (e.g. RS) or may
still leave residual errors in worst-case conditions (e.g. Turbo).
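To put a rough number on the coding-delay point, a back-of-the-envelope
sketch (assuming an RS(255,223) code and an interleaving depth picked
purely for illustration - these are not figures from any particular modem):

    # Rough coding-delay estimate for a block FEC at low link rates.
    # RS(255,223): 255-byte codeword carrying 223 bytes of data.
    codeword_bits = 255 * 8
    interleave_depth = 8          # illustrative interleaver depth

    for rate_bps in (9600, 64_000, 2_000_000):
        # Time to accumulate one interleaved block before decoding
        # can even start (ignores processing time and framing).
        delay_s = (codeword_bits * interleave_depth) / rate_bps
        print(f"{rate_bps:>9} bit/s -> {delay_s * 1000:7.1f} ms buffering delay")

At a few kbit/s the buffering alone runs to seconds, whereas above 2 Mbps
it is down in the milliseconds.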
This area is concerned with doing things to HELP TCP work better;
it may not be wise to make fundamental changes to TCP for this.
> Is loss that IS caused by congestion distributed fairly? What
> about non-TCP streams, are they reducing their throughput when
> congestion is indicated? Some of the research on this issue
> seem to indicate the answer is NO in both cases. What can be
> done to correct this? Is RED the answer?
This is a key issue, since many satellite links operate at SPEEDS
less than the fibre network. Congestion control IS therefore
very important. Agreed - RED needs to be studied, particularly to
understand the impact of lots of short sessions sharing a congested
router buffer.
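For anyone who hasn't looked at RED closely, the core decision is only a
few lines - a sketch along the lines of Floyd and Jacobson's algorithm
(the parameter values here are illustrative, not recommendations):

    # Minimal RED-style drop decision (illustrative parameters).
    import random

    MIN_TH, MAX_TH = 5, 15        # thresholds in packets
    MAX_P = 0.02                  # max drop probability at MAX_TH
    WEIGHT = 0.002                # EWMA weight for the average queue

    avg_q = 0.0

    def should_drop(current_queue_len):
        """Update the average queue size and decide whether to drop."""
        global avg_q
        avg_q = (1 - WEIGHT) * avg_q + WEIGHT * current_queue_len
        if avg_q < MIN_TH:
            return False                      # no congestion
        if avg_q >= MAX_TH:
            return True                       # persistent congestion
        # Drop probability rises linearly between the thresholds.
        p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p

The question with lots of short sessions is whether that averaged drop
signal ever reaches flows that only send a handful of segments.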
> TCP recovers from congestion at a rate that is related to the RTT.
> This would seem to give the advantage to low latency paths when
> shared resources are in use. I have seen some suggestion for
> changing the congestion recovery scheme of TCP so that it is no
> longer linked to RTT but would increase by some constant rate. I
> haven't seen any research on what the negative implications of
> this might be.
> The only way to improve throughput without dealing with the
> losses is to lower the latency of the link. This is not possible on a
> GEO satellite without seriously breaking a few laws of physics.
> What can be done is to give the APPEARANCE of lower latency
> by spoofing acks. This approach has its limits though, it only
> works well when the traffic is asymmetrical, it does nothing for
> interactive applications, and may have a serious impact on
> security. These need to be well documented so that those who
> choose the spoofing route are fully aware of the risks as well as
> the benefits.
i.e. GEO delay is a fact of life for GEO satellites. This is an
End-to-End issue.
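To put a rough number on the RTT point quoted above, a toy sketch (figures
purely illustrative) of how long congestion-avoidance recovery takes after
a loss halves the window, over a 50 ms terrestrial path versus a ~550 ms
GEO path:

    # How long does AIMD recovery take after halving the window?
    # Congestion avoidance grows cwnd by ~1 MSS per RTT, so regaining
    # W/2 segments takes W/2 round trips regardless of the link rate.
    target_window = 100            # segments before the loss (illustrative)
    segments_to_regain = target_window // 2

    for name, rtt_s in (("terrestrial", 0.05), ("GEO satellite", 0.55)):
        recovery_s = segments_to_regain * rtt_s
        print(f"{name:>14}: {recovery_s:5.1f} s to regain full window")

Which is presumably why a constant-rate increase looks attractive, though
as the quote says, the side effects don't seem to have been studied.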
Interactive applications may find this tough - as do badly written
applications, e.g. X-windows calling XFlush all the time.
Spoofing may be a solution, or may be too ugly, and it raises security
issues.
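For completeness: one way such spoofing is approached is as a split
connection, roughly like the relay sketched below (hypothetical and
greatly simplified). It acknowledges data locally - so the sender sees
terrestrial-RTT ACKs - and forwards it over a second connection across
the satellite hop. The structure makes it clear why end-to-end semantics
suffer, and why it sits badly with end-to-end security such as IPsec.

    # Greatly simplified split-connection relay (illustrative only -
    # real PEPs may spoof ACKs within a single connection rather than
    # relaying at application level, but the consequence is similar:
    # the sender's ACK clock runs at the terrestrial RTT).
    import socket, threading

    def pipe(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()                # tear-down handling omitted

    def relay(listen_port, sat_host, sat_port):
        srv = socket.socket()
        srv.bind(("", listen_port))
        srv.listen(1)
        local, _ = srv.accept()    # local stack ACKs at terrestrial RTT
        remote = socket.create_connection((sat_host, sat_port))
        threading.Thread(target=pipe, args=(local, remote), daemon=True).start()
        pipe(remote, local)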