making satellite channel loss transparent

From: ketan bajaj ([email protected])
Date: Thu Jul 22 1999 - 02:50:24 EDT


Hi,
Please refer to section 3.3.4 of
http://www.ietf.org/internetdrafts/draft-ietf-tcpsat-res-issues-09.txt

"
Differentiating between congestion (loss of segments due to router buffer
overflow or imminent buffer overflow) and corruption (loss of segments due
to damaged bits) is a difficult problem for TCP. This differentiation is
particularly important because the action that TCP should take in the two
cases is entirely different .... "
"....
Perhaps the greatest research challenge in detecting corruption is getting
TCP (a transport-layer protocol) to receive appropriate information from
either the network layer (IP) or the link layer. Much of the work done to
date has involved link-layer mechanisms that retransmit damaged segments.
The challenge seems to be to get these mechanisms to make repairs in such a
way that TCP understands what happened and can respond appropriately."
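
The distinction matters because a sender that knew the cause of a loss could react very differently in the two cases. The toy function below is my own illustration (not from the draft): it contrasts the standard TCP congestion response, which halves the window, with what a corruption-aware sender could safely do, namely retransmit and keep its current rate.

def on_loss(cwnd, ssthresh, cause):
    """Return (cwnd, ssthresh) after a lost segment, given its assumed cause."""
    if cause == "congestion":
        # Standard TCP assumption: every loss signals congestion,
        # so the sender halves its window and slows down.
        ssthresh = max(cwnd // 2, 2)
        cwnd = ssthresh
    elif cause == "corruption":
        # If the loss were known to be bit corruption on the satellite
        # link, backing off buys nothing; just retransmit the segment
        # and keep the current sending rate.
        pass
    return cwnd, ssthresh

print(on_loss(cwnd=20, ssthresh=64, cause="congestion"))  # -> (10, 10)
print(on_loss(cwnd=20, ssthresh=64, cause="corruption"))  # -> (20, 64)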

The issue I'm trying to raise here is: why burden TCP with distinguishing between
packet loss due to corruption and loss due to congestion? Why not make the
corruption loss transparent to the sending TCP? Some of the work I've seen that
does this uses split-connection and Berkeley Snoop designs (see the sketch
below), but those are limited to particular satellite network topologies, where
the satellite hop is the last hop of the end-to-end path. What about more
generic topologies?
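
For reference, here is a minimal sketch of the snoop-style idea mentioned above. The class and method names are my own and this is not the actual Berkeley Snoop code: an agent at the router on the wired side of the satellite hop caches unacknowledged segments, retransmits locally when duplicate ACKs indicate a loss on the link, and suppresses those duplicate ACKs so the sending TCP never reacts to what was really corruption.

class SatelliteLink:
    """Stand-in for the satellite hop toward the receiver."""
    def send(self, seq, segment):
        print(f"sat-link tx: seq={seq}")

class SnoopAgent:
    def __init__(self, link):
        self.link = link          # satellite link toward the receiver
        self.cache = {}           # seq -> segment, not yet acknowledged
        self.last_ack = -1
        self.dupacks = 0

    def from_sender(self, seq, segment):
        """Data segment arriving from the fixed host: cache it, forward it."""
        self.cache[seq] = segment
        self.link.send(seq, segment)

    def from_receiver(self, ack):
        """ACK coming back over the satellite link."""
        if ack > self.last_ack:
            # New data acknowledged: purge the cache, pass the ACK on.
            for seq in [s for s in self.cache if s < ack]:
                del self.cache[seq]
            self.last_ack = ack
            self.dupacks = 0
            return ack            # forwarded to the sender
        # Duplicate ACK: a segment was lost on the satellite hop.
        self.dupacks += 1
        if ack in self.cache:
            # Retransmit locally and suppress the duplicate ACK, so the
            # sender never invokes congestion control for a corruption loss.
            self.link.send(ack, self.cache[ack])
        return None               # nothing forwarded to the sender

agent = SnoopAgent(SatelliteLink())
agent.from_sender(1, b"data-1")
agent.from_sender(2, b"data-2")
agent.from_receiver(2)   # segment 1 acked, ACK forwarded to the sender
agent.from_receiver(2)   # dup ACK: segment 2 retransmitted locally, ACK suppressed

The obvious limitation, which is the point of my question, is that such an agent only works if it sits directly in front of the satellite hop, which is why the published designs assume the satellite link is the last hop.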

ketan



