In a message dated 6/16/98 9:23:42 PM Eastern Daylight Time, [email protected]
writes:
> In modern TCP stacks the timeout values are updated based on round-trip
>  time measurements, so they _should_ self-adjust to the long delay.  Some
>  stacks have bugs in this area.  They still work, but not as efficiently
>  as they should.
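For anyone unfamiliar with how those stacks self-adjust: the usual mechanism is the Jacobson/Karels estimator, which keeps a smoothed RTT and a variance estimate and derives the retransmission timeout from both.  A minimal sketch (the gains, clamps, and sample values below are the textbook defaults, not anything measured on this network):

```python
# Sketch of the standard Jacobson/Karels adaptive RTO estimator.
# alpha/beta are the conventional 1/8 and 1/4 gains; the RTT
# samples fed in at the bottom are illustrative, not measured.

class RtoEstimator:
    def __init__(self, alpha=1/8, beta=1/4, min_rto=1.0, max_rto=60.0):
        self.alpha = alpha      # gain for smoothed RTT
        self.beta = beta        # gain for RTT variance
        self.srtt = None        # smoothed round-trip time (seconds)
        self.rttvar = None      # RTT variance estimate (seconds)
        self.min_rto = min_rto  # floor on the timeout
        self.max_rto = max_rto  # ceiling on the timeout

    def sample(self, rtt):
        """Feed one RTT measurement; return the updated RTO."""
        if self.srtt is None:
            # First sample initializes both estimators.
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            # Update variance first (against the old SRTT), then SRTT.
            self.rttvar = (1 - self.beta) * self.rttvar \
                + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        rto = self.srtt + 4 * self.rttvar
        return max(self.min_rto, min(rto, self.max_rto))

# A geostationary hop adds roughly half a second of RTT; after a
# few samples the timeout settles comfortably above the delay, so
# a correct stack adapts on its own.
est = RtoEstimator()
for r in [0.55, 0.56, 0.54, 0.57]:
    rto = est.sample(r)
```

A stack with bugs in this area typically mishandles the variance term or clamps the timeout too aggressively, which is how it keeps working but loses efficiency on long-delay paths.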
TTI has a lot of experience with the Mexican satellite network
circa 1993.  At that time the satellites all contained
X.25 switches.  Link layer protocols were terminated in
the satellite so that backward error correction on each
link could handle the relatively high bit error rate.  As I remember,
one could get fairly good performance with a link-layer window k between 16
and 32 with 128-byte L3 data.  (I think they might have
been better off with SDLC and selective acknowledgement,
but the option was not available.)  Because of the constraints
of some PAD devices they rarely ran level 3 windows larger
than 4, but routers that could open multiple VCs between
two endpoints could overcome this restriction.
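A back-of-the-envelope calculation shows why those window sizes matter so much: a window-limited connection can never exceed window/RTT, so a level-3 window of 4 x 128-byte packets starves a satellite path, while a link-layer k of 16-32 (or several parallel VCs) raises the ceiling proportionally.  The RTT and VC count below are assumed round numbers for illustration, not figures from the network in question:

```python
# Sketch of the window-limited throughput ceiling: window / RTT.
# RTT and the VC count are illustrative assumptions for a
# geostationary path, not measurements.

def max_throughput_bps(window_packets, packet_bytes, rtt_s):
    """Upper bound on throughput for a stop-until-acked window."""
    return window_packets * packet_bytes * 8 / rtt_s

RTT = 0.55  # assumed geostationary round-trip time, seconds

# PAD-constrained level-3 window: 4 packets of 128 bytes.
l3 = max_throughput_bps(4, 128, RTT)    # roughly 7.4 kbit/s

# Link-layer window k = 32 with the same 128-byte frames.
lapb = max_throughput_bps(32, 128, RTT)

# Opening n parallel VCs multiplies the level-3 ceiling, which is
# how routers worked around the window-of-4 PAD limit.
n_vcs = 8
l3_parallel = n_vcs * l3
```

No amount of TCP tuning can push a connection past these link- and packet-layer ceilings, which is the point about local link considerations dominating end-to-end tweaking.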
With careful link layer and router configuration,
good performance was possible.  Without such
careful configuration, TCP performance
would be miserable no matter how much the TCP
parameters were tweaked.  The local link considerations
far outweighed the effect of end-to-end
TCP parameter tweaking.
Joachim Martillo
<A HREF="http://members.aol.com/Telford001/">Telford Tools, Inc.</A> 
This archive was generated by hypermail 2b29 : Mon Feb 14 2000 - 16:14:44 EST