Re: BER and TCP/IP performance

From: Anil Agarwal ([email protected])
Date: Wed Mar 31 1999 - 18:22:42 EST


Jon,

Here is some more TCP throughput data that you might find useful.
Some of the numbers do not match those sent by Fred or Bob Durst,
but I have verified these numbers by re-measuring them on our testbed
today. They also match the values predicted by theory, which suggests
that there is probably a lot of variation in performance across
different implementations.

The maximum throughput for a single TCP connection is given by -

        tput = sqrt(MSS * 6) / RTT / sqrt(BER) bps

        where RTT is in seconds, MSS is in bytes, tput is in bits/sec (bps).

This is really the same equation derived in the paper by Mathis, except
that I have used BER instead of packet loss probability, taking the loss
probability to be roughly 8 * MSS * BER (a segment is lost if any of its
bits is in error). This equation assumes delayed acks, fast retransmit and
recovery, no SACK, and no TCP timeouts. Bit errors are assumed to be
uniformly distributed.
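
In Python terms, the bound looks like this (just a sketch; the function
name and default parameters are arbitrary, chosen to match the example
below):

    import math

    def tcp_tput_bound(ber, mss=1500, rtt=0.6):
        # Upper bound on single-connection TCP throughput, in bits/sec.
        # Equivalent to the Mathis formula with p = 8 * MSS * BER.
        return math.sqrt(6 * mss) / (rtt * math.sqrt(ber))

    print(tcp_tput_bound(1e-7) / 1e3, "kbps")   # ~500 kbps for these parameters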

For MSS = 1500 bytes, RTT = 0.6 sec, we get

        tput = 158 / sqrt(BER) bps

        ------   --------------   --------------
        BER      Predicted tput   Measured tput
                 (uniform BER)    (random BER)
        ------   --------------   --------------
        10^-4    15.8 kbps
        10^-5    49.9 kbps
        10^-6    158 kbps         140 kbps
        10^-7    499.6 kbps       510 kbps
        10^-8    1.580 Mbps       1.494 Mbps
        10^-9    4.996 Mbps
        10^-10   15.800 Mbps
        10^-11   49.963 Mbps
        10^-12   158.000 Mbps
        ------   --------------   --------------

   Measurements were done using Sun Solaris 5.6 (window scaling, MSS = 1460,
   send/recv buffers = 256 kbytes, fast retransmit and recovery,
   delayed ack, no SACK), Cisco router, a satellite link simulator and
   very large file transfers ( > 30 minutes).

As an additional note, bursty errors can actually improve throughput. If
error bursts are of the 40- to 100-bit variety (which is typical of the
errors seen after Viterbi decoding on geo-satellite links), most error
bursts result in the loss of only one segment. The packet loss
probability is therefore correspondingly smaller (by a factor of 10 to 20,
depending on the number of actual bit errors in the burst). Hence, for the
same BER value, a link with bursty errors will provide better TCP throughput
than a corresponding link with random bit errors. If the burst lengths
get large enough that the probability of more than one consecutive packet
error becomes non-negligible, throughput will drop dramatically, since fast
retransmit will not be able to recover from multiple packet losses per
round trip.
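
To make the burst argument concrete, here is a rough back-of-the-envelope
sketch in Python (my own approximation, not a measurement; the assumed 15
bit errors per burst is just illustrative):

    import math

    mss, rtt, ber = 1500, 0.6, 1e-7     # bytes, seconds, bit error rate
    errors_per_burst = 15               # assumed typical bit errors per 40-100 bit burst

    # Random errors: every bit error corrupts a different segment.
    tput_random = math.sqrt(6 * mss) / rtt / math.sqrt(ber)

    # Bursty errors: each burst still corrupts only one segment, so the
    # packet loss probability (and hence the effective BER in the formula)
    # is about errors_per_burst times smaller.
    tput_bursty = math.sqrt(6 * mss) / rtt / math.sqrt(ber / errors_per_burst)

    print(round(tput_random / 1e3), "kbps")   # ~500 kbps
    print(round(tput_bursty / 1e3), "kbps")   # ~1936 kbps, about sqrt(15) times better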

So what is the right BER to engineer for? It depends on what you want the
throughput of any single TCP user to be. If that number is, say, 500 kbps,
then 10^-7 seems right. If that number is larger (assuming users use window
scaling and large windows), you can improve the link BER or get such users
to use SACK.
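
Inverting the same equation gives the BER needed for a given target rate
(again just a sketch; the 2 Mbps case is a hypothetical example):

    def ber_for_target(tput_bps, mss=1500, rtt=0.6):
        # Solve tput = sqrt(6 * MSS) / RTT / sqrt(BER) for BER.
        return 6.0 * mss / (tput_bps * rtt) ** 2

    print(ber_for_target(500e3))   # 1.0e-07 -- consistent with the 10^-7 suggestion
    print(ber_for_target(2e6))     # ~6.3e-09 for a hypothetical 2 Mbps per-user target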

Regards,
Anil Agarwal
COMSAT Labs
Clarksburg, MD 20878
[email protected]


