The ability to differentiate between congestion loss and loss due to
errors in the physical layer would be very desirable. No matter how much
FEC is used, some channels (e.g. mobile radio) will still exhibit a
certain amount of packet loss, and distinguishing the two cases would
certainly improve TCP throughput. Moreover, since wireless spectrum is a
scarce resource, it would also let that spectrum be used more
effectively.
I don't know how one could implement this without a major tweak to the
upper layers, or without letting the network layer snoop into
device drivers, but it would certainly be nice.
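To make the idea concrete, here is a rough sketch (in C, with made-up
names such as loss_cause_t and tcp_handle_loss; no real stack exposes
such an interface today) of how a sender's loss response might branch if
it could tell the two cases apart. Treat it as an illustration under
those assumptions, not a proposal for an actual implementation:

/* Hypothetical sketch only: how a TCP sender's loss handler might
 * branch if it could learn whether a segment was dropped due to
 * congestion or lost to link-level corruption. */

typedef enum {
    LOSS_CONGESTION,   /* e.g. signalled by a router-set congestion bit */
    LOSS_CORRUPTION,   /* e.g. inferred from link-layer error reports   */
    LOSS_UNKNOWN       /* no signal available                           */
} loss_cause_t;

struct tcp_conn {
    unsigned cwnd;      /* congestion window, in segments   */
    unsigned ssthresh;  /* slow-start threshold, in segments */
};

void tcp_handle_loss(struct tcp_conn *c, loss_cause_t cause)
{
    switch (cause) {
    case LOSS_CONGESTION:
    case LOSS_UNKNOWN:
        /* The conservative choice Mark describes: treat the drop as
         * congestion and back off (halve the window). */
        c->ssthresh = (c->cwnd / 2 > 2) ? c->cwnd / 2 : 2;
        c->cwnd = c->ssthresh;
        break;
    case LOSS_CORRUPTION:
        /* Corruption on a lossy link (e.g. mobile radio): retransmit
         * but leave the congestion window alone, so the scarce
         * wireless capacity is not wasted by an unnecessary backoff. */
        break;
    }
    /* Retransmission of the lost segment would happen here in
     * either case. */
}

int main(void)
{
    struct tcp_conn c = { 16, 32 };
    tcp_handle_loss(&c, LOSS_CORRUPTION);  /* cwnd stays at 16 */
    tcp_handle_loss(&c, LOSS_CONGESTION);  /* cwnd drops to 8  */
    return 0;
}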
Bora
Steve Goldstein <[email protected]> wrote:
> At 8:45 AM -0400 10/7/97, Mark Allman wrote:
> >...  Some people have said that
> >we can/should design mechanisms whereby we can tell why a packet was
> >dropped (congestion or corruption).  Personally, I remain
> >unconvinced this will work well (it can never completely work; who
> >does a machine tell when it receives a corrupted packet?  it can
> >never be sure the right host is being told, as the packet is
> >corrupt).  I may be wrong, it is a research area at best.  In the
> >absence of such a mechanism, we must choose conservatively and
> >therefore take the drop as congestion and backoff.
> 
> I hear from the IPv6 folk that the latest protocol specs talk of the
> routers setting a congestion bit.  If that were done, and if everybody were
> to speak IPv6 (let's not hold our breath), TCP would be able to tell the
> difference between a packet dropped because of congestion and one lost to
> corruption.
> 
> --Steve G.
> 
> ____________________________________________
> Steve Goldstein, National Science Foundation
>          +1(703)306-1949  Ext. 1119
>  "Let's not procrastinate until next week!" 
> 
--
Bora Akyol
BBN Technologies
GTE Internetworking