Re: BER and TCP/IP performance

From: Robert C. Durst ([email protected])
Date: Wed Mar 31 1999 - 12:53:12 EST


Jon,

There's been a fair amount of traffic here already, but let me chime
in to provide some theory, some opinion, and some data.

Theory: Matt Mathis and colleagues developed and validated an equation that
represents the steady-state behavior of TCP in its congestion avoidance mode
[1]. Roughly, the achievable throughput of a TCP connection (BW) is
proportional to the Maximum Segment Size (MSS) and inversely proportional to
the Round Trip Time (RTT) and to the square root of the probability of packet
loss (p):

   BW < (MSS/RTT) * (1/sqrt(p))

The probability of packet loss in their paper was based on congestion loss,
but the equation works equally well with corruption loss, since TCP doesn't
distinguish between the two. If you add the losses due to bit errors to the
probability of congestion-related loss, you get an idea of the bound on the
throughput that a TCP connection may experience. One can use a Bernoulli
process as an estimator of packet loss probability if you assume
uniformly-distributed errors (different coding schemes yield different error
distributions -- your mileage may vary!). With a Bernoulli process, the
packet loss probability (due to corruption) is

   p = 1 - (1-BER)**(8*L)

where L is the length of the (entire) packet in bytes. (Note that
(1-BER)**(8*L) by itself is the probability that all 8*L bits arrive intact;
the loss probability is its complement.)
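
To make the arithmetic concrete, here is a small Python sketch (mine, not
from the Mathis paper) that combines the two equations above. The MSS,
header overhead, and RTT values are illustrative assumptions only:

import math

def corruption_loss_probability(ber, packet_bytes):
    # p = 1 - (1 - BER)^(8L): the chance that at least one of a packet's
    # 8L bits is corrupted, assuming independent, uniformly-distributed
    # bit errors (the Bernoulli-process assumption above).
    return 1.0 - (1.0 - ber) ** (8 * packet_bytes)

def throughput_bound_bps(mss_bytes, rtt_s, p_loss):
    # Mathis et al. steady-state bound, BW < (MSS/RTT) * (1/sqrt(p)),
    # converted to bits per second.
    return (8.0 * mss_bytes / rtt_s) / math.sqrt(p_loss)

mss = 1400           # bytes; assumed segment size
packet = mss + 40    # assume 40 bytes of TCP/IP header on the wire
rtt = 0.56           # seconds; roughly a GEO round trip (assumed)
for ber in (1e-8, 1e-7, 1.5e-6, 1.2e-5):
    p = corruption_loss_probability(ber, packet)
    bw = throughput_bound_bps(mss, rtt, p)
    print("BER %.1e: p = %.2e, bound = %.0f kbps" % (ber, p, bw / 1e3))

Bear in mind that this is only as good as the uniform-error assumption; a
rate-1/2 decoder tends to emit bursty errors, so the Bernoulli estimate can
overstate the packet loss rate considerably.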

One thing to keep in mind is that this equation depends on the *full*
round trip time, not just that of the satellite link. It's also a function
of the complete probability of packet loss (congestion *and* corruption).
Note that if you mix long-RTT connections with short-RTT connections (say,
in the wired portions of the Internet), the long-RTT connections will lose
to the short-RTT connections. That's one of the things that motivates the
LEO brigade, as well as the folks who advocate "impedance-matching" gateways.
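
To put numbers on that (again a sketch with assumed RTTs, in the same style
as above):

import math

# Same Bernoulli loss estimate on both paths; only the RTT differs. The
# bound scales as 1/RTT, so the GEO ceiling is lower by the ratio of the
# round-trip times: 0.56 / 0.02 = 28x here.
p = 1.0 - (1.0 - 1e-7) ** (8 * 1440)      # 1440-byte packets, BER 1e-7
for name, rtt_s in (("terrestrial, ~20 ms RTT", 0.02),
                    ("GEO,        ~560 ms RTT", 0.56)):
    bound_bps = (8.0 * 1400 / rtt_s) / math.sqrt(p)
    print("%s: bound = %.2f Mbps" % (name, bound_bps / 1e6))

When flows with those two RTTs share a congested queue, the short-RTT flows
also recover and re-open their windows far faster after each loss, which is
why the long-RTT flows lose.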

Note also that this equation applies to the *congestion avoidance* phase of
a TCP connection. If you're running with delayed acknowledgments, the TCP
congestion window opens at a rate of one segment every two round trips in
congestion avoidance, and doubles essentially every two round trips (not
quite, with long delays) in slow start. The exponential growth of slow start
is important for getting the congestion window open quickly, and you're at a
real disadvantage if a TCP connection takes a corruption hit early in slow
start on a long-RTT, high (potential) bandwidth path, because it will fall
out of slow start and into congestion avoidance. That ends the exponential
rate increase and begins the linear rate increase of congestion avoidance.
Bummer.
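
A toy model (my assumptions: delayed acknowledgments throughout, no losses
other than the one shown, and growth rates exactly as described above) puts
numbers on how much that early hit costs:

def rtts_to_open(target_cwnd, loss_at_rtt=None):
    # Count round trips until cwnd (in segments) reaches target_cwnd.
    # With delayed ACKs: cwnd doubles every 2 RTTs in slow start and grows
    # by one segment every 2 RTTs in congestion avoidance. A corruption
    # loss during slow start halves cwnd and drops the connection into
    # congestion avoidance.
    cwnd, rtts, slow_start = 1.0, 0, True
    while cwnd < target_cwnd:
        rtts += 2
        cwnd = cwnd * 2 if slow_start else cwnd + 1
        if slow_start and loss_at_rtt is not None and rtts >= loss_at_rtt:
            cwnd, slow_start = max(cwnd / 2.0, 1.0), False
    return rtts

# A 2 Mbps, 560 ms path holds about 100 segments of 1400 bytes in flight.
print("no loss:             %3d RTTs" % rtts_to_open(100))
print("corruption at RTT 6: %3d RTTs" % rtts_to_open(100, loss_at_rtt=6))

At 560 ms per round trip, that's the difference between filling the pipe in
about eight seconds (14 RTTs) and taking nearly two minutes (198 RTTs).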

Opinion: Unless you're the only game in town, a commercial satellite provider
would be well advised to pay close attention to the performance achievable by
any single customer's TCP connections. While it *may* be true that you can
maintain reasonable aggregate utilization of your satellite capacity if you're
supporting hundreds of simultaneous users, the performance seen by each one of
them has a large effect on whether they think that the service is worth the
cost. Having suffered with very poorly performing trans-Atlantic TCP
connections, I can assure you that given the opportunity, I would have voted
with my feet! Also, the ability to fully utilize the satellite channel via
lots of users isn't necessarily a given, since each user attempting to
consume additional satellite channel capacity must also successfully consume
additional capacity in the wired portions of the Internet. The availability
of SACK is a Good Thing (tm Martha Stewart, no doubt), but its reduction in
retransmission timeouts means that router buffers will tend to remain full,
so additional capacity will be harder for other users to come by. [2]

Data: The URL cited by Will, below, is a good reference. The same web site
also has a report on testing done using the ACTS satellite; the URL for that
report is http://www.scps.org/scps/assets/images/SCPS_ACTS97.pdf
Unfortunately, we did not include tables of raw TCP throughput performance in
the document. Here is the TCP corruption-performance data; it was taken
between 11 and 14 August 1997 at Rome Labs in New York.

Conditions: TCP (window scaling, 200 kbyte send and receive buffers, no
SACK); 2 Mbps channel (rate-1/2, constraint-length-7 coding; transmit power
adjusted to achieve the measured BER).

                            Throughput (kbps)
BER       Eb/No (dB)   1400-byte MSS   512-byte MSS
1.00E-08  6.4-6.6      1480.14 avg     820.52 avg
            Run 1:     1475.60         821.45
            Run 2:     1487.36         819.24
            Run 3:     1481.17         821.69
            Run 4:     1480.74         819.34
            Run 5:     1475.84         820.90
1.00E-07  5.8-6.2      1440.20 avg     757.13 avg
            Run 1:     1478.61         820.36
            Run 2:     1482.89         512.31
            Run 3:     1339.78         819.11
            Run 4:     1477.87         820.27
            Run 5:     1422.00         813.58
6.00E-07  5.4-5.7      1410.99 avg     671.09 avg
            Run 1:     1479.45         815.04
            Run 2:     1131.43         558.14
            Run 3:     1482.92         433.10
            Run 4:     1481.44         742.71
            Run 5:     1479.69         806.44
1.50E-06  5.2-5.4       527.25 avg     469.62 avg
            Run 1:      449.73         725.14
            Run 2:      520.04         402.25
            Run 3:      697.94         234.97
            Run 4:      383.60         164.94
            Run 5:      584.95         820.82
4.50E-06  4.9-5.1       298.57 avg     207.44 avg
            Run 1:      246.06         230.28
            Run 2:      351.45         191.26
            Run 3:      260.74         159.09
            Run 4:      388.68         240.61
            Run 5:      245.93         215.98
1.20E-05  4.6-4.8       158.74 avg     102.99 avg
            Run 1:      182.78         103.07
            Run 2:      148.47         114.44
            Run 3:      169.59          99.91
            Run 4:      143.64          79.52
            Run 5:      149.22         117.99
5.00E-05  4.1-4.3        71.35 avg      40.77 avg
            Run 1:       79.03          34.90
            Run 2:       73.10          44.69
            Run 3:       61.91          42.71

This same report has performance data for the SCPS transport protocol, which is
described on the SCPS web page (www.scps.org/scps). Appendix A of that report
discusses the mechanics of a TCP connection as it responds to corruption and
congestion, and does the same for SCPS-TP (the SCPS transport protocol).

Best regards,

Bob Durst
[email protected]

[1] M. Mathis, J. Semke, J. Mahdavi, T. Ott, "The Macroscopic Behavior of the
TCP Congestion Avoidance Algorithm", Computer Communication Review, volume 27,
number 3, pp. 67-82, July 1997.

[2] M. Mathis, private communication.

William D Ivancic wrote:
>
> At 03:09 PM 3/30/99 -0800, Jon Mansey wrote:
> >Hi,
> >
> >Can anyone point me to some research or tables showing how BER affects
> >TCP/IP throughput?
> >
> >At the same time, does anyone have any real-world experience of what is an
> >acceptable BER to shoot for in designing end-to-end sat links for TCP/IP?
> >
>
> 1E-8 or better. It really depends on the length of the transfer. As soon
> as you get an error, your window is going to halve in size due to
> congestion control. So for EXTREMELY long file transfers, you need
> near-error-free links. For short file transfers, you'll never get past
> slow start. Also, for hundreds of connections consisting of UDP and TCP,
> when one stream reduces its rate, another will fill in the gap.
>
> The following report has a graph showing TCP vs SCPS in an errored
> environment. Remember, this is for a single stream. SCPS performs very
> well here, but was optimized for a specific environment that does not
> correspond to that of a commercial satellite.
>
> http://www.scps.org/scps/assets/M22.pdf
>
> Regards,
>
> Will
> *********************************************
> William D. Ivancic
> NASA Glenn Research Center
> 21000 Brookpark Rd. MS 54-8
> Cleveland, Ohio 44135
> USA
> Phone: 1 216 433 3494
> FAX: 1 216 433 8705
> Email: [email protected]
> [email protected]
> http://ctd.grc.nasa.gov/5610/5610.html
> *********************************************


