The Internet Congestion Control Problem


Today's TCP/IP-based Internet relies on the congestion control in TCP for stability. After NSFnet experienced network-wide congestion collapse in the 1980s, Van Jacobson devised the congestion control algorithms for TCP [Jacobson]. The tremendous benefit of these algorithms is that they allow a virtually unlimited number of users to share a bottleneck in a stable manner.

There are shortcomings to Van Jacobson congestion control, however. When large bandwidth-delay product (BDP) flows pass through a bottleneck router, the Van Jacobson algorithms yield poor utilization of the bottleneck link [Partridge]; flows with long round-trip times (RTTs) are often unable to get a fair allocation of bottleneck bandwidth [Henderson]; and packet loss due to errors is incorrectly interpreted as a sign of congestion [Dawkins]. While these were acceptable tradeoffs for gaining network stability in the 1980s, the growth of the Internet has brought faster links and increased diversity in network access technologies. Today, supercomputer, satellite, and mobile phone Internet users experience performance penalties because their expensive links interact poorly with TCP.

We describe three scenarios in which TCP performs poorly.

• Poor Link Utilization at High BDP

Obtaining high TCP throughput and, hence, good link utilization for high bandwidth-delay product flows is quite difficult. As described in [Floyd], a single TCP flow can saturate a 10 Gbps link, albeit while incurring large, oscillatory queues. However, sustaining such high flow rates requires an unreasonably low packet loss rate. For example, consider a 10 Gbps flow with 1500-byte packets and a 100 ms round-trip time (RTT). Sustaining this throughput requires at most one error or congestion event per 1 2/3 hours – or one packet lost in 5 billion. More realistic loss rates yield significantly lower TCP throughput: a loss rate of one packet in 55 seconds, or 0.000002, limits TCP throughput to at most 100 Mbps.
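
These figures can be sanity-checked against the well-known steady-state TCP throughput approximation of Mathis et al., throughput ≈ (MSS/RTT) · C/sqrt(p), where p is the packet loss probability and C ≈ 1.22 for standard TCP. The short Python sketch below is ours, for illustration only – the constants and helper names are assumptions, not taken from any XCP code – and it approximately reproduces both numbers:

    # Back-of-the-envelope check of the loss rates quoted above, using the
    # Mathis et al. steady-state approximation:
    #     throughput ~= (MSS / RTT) * C / sqrt(p)

    MSS_BITS = 1500 * 8   # 1500-byte packets
    RTT = 0.100           # 100 ms round-trip time
    C = 1.22              # constant for standard TCP AIMD

    def loss_rate_for(throughput_bps):
        """Loss probability p needed to sustain a given throughput."""
        return (C * MSS_BITS / (RTT * throughput_bps)) ** 2

    def seconds_between_losses(throughput_bps):
        """Average time between losses at that throughput and loss rate."""
        pkts_per_sec = throughput_bps / MSS_BITS
        return 1.0 / (pkts_per_sec * loss_rate_for(throughput_bps))

    for rate_bps in (10e9, 100e6):
        p = loss_rate_for(rate_bps)
        print(f"{rate_bps / 1e6:>8.0f} Mbps: p = {p:.1e} "
              f"(one loss every {seconds_between_losses(rate_bps):.0f} s)")

Running this prints p ≈ 2e-10 (one loss per 1.6 hours) for 10 Gbps, and p ≈ 2e-6 (one loss per 56 seconds) for 100 Mbps.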

Additionally, recent work applying control theory to congestion control [Low02] shows that as per-flow delay or bandwidth grows, TCP congestion control becomes less stable and more oscillatory.

• Unfair at Long RTT

TCP flows with long round-trip times, found in satellite and in cellular telephony networks, have a hard time obtaining their fair share of bandwidth on a bottleneck link. Once TCP enters congestion-avoidance mode, the congestion window opens by only one packet per RTT, so long-RTT flows open their congestion windows slowly. When sharing a bottleneck with short-RTT flows, the short-RTT flows capture the available bandwidth more quickly, as the sketch below illustrates.

When the RTT is much longer than the duration of a congestion event (typical for RTTs in the hundreds of milliseconds), these flows exhibit the perverse behavior of continuing to increase their sending rate during the congestion event (because the sender has not yet received the loss indication) and reducing the sending rate only after the congestion event has passed, thus exhibiting exactly the opposite of the desired behavior.
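
The unfairness can be demonstrated with a toy fluid model of two AIMD flows that share one bottleneck and see synchronized losses. This is our illustrative sketch, not XCP or TCP code, and all parameters (capacity, RTTs, step size) are made up for the example:

    # Toy fluid model: two AIMD (TCP-like) flows share one bottleneck.
    # Each flow adds one packet to its window per RTT; when the combined
    # rate exceeds the link capacity, both flows see a loss and halve
    # their windows (synchronized losses, an idealization).

    CAPACITY = 1000.0                 # bottleneck capacity, packets/sec
    RTT_SHORT, RTT_LONG = 0.02, 0.20  # 20 ms vs. 200 ms round-trip times
    TICK = 0.01                       # simulation step, seconds

    def simulate(duration=300.0):
        cwnd = {RTT_SHORT: 1.0, RTT_LONG: 1.0}      # windows, in packets
        delivered = {RTT_SHORT: 0.0, RTT_LONG: 0.0}
        t = 0.0
        while t < duration:
            rates = {rtt: cwnd[rtt] / rtt for rtt in cwnd}
            if sum(rates.values()) > CAPACITY:
                for rtt in cwnd:                    # multiplicative decrease
                    cwnd[rtt] /= 2.0
            else:
                for rtt in cwnd:                    # additive increase: +1 pkt/RTT
                    cwnd[rtt] += TICK / rtt
                    delivered[rtt] += rates[rtt] * TICK
            t += TICK
        return delivered

    totals = simulate()
    print(f"short-RTT flow delivered {totals[RTT_SHORT] / totals[RTT_LONG]:.0f}x "
          f"as many packets as the long-RTT flow")

Running it shows the short-RTT flow capturing a large multiple of the long-RTT flow's share, even though both run the same algorithm.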

• Confused by Lossy Links

TCP uses packet loss as a binary indicator of congestion; there is no feedback on the intensity of congestion. Packet loss is a reasonable congestion indicator on links that have negligible non-congestion-related packet loss. However, many wireless links, for example, are subject to uncorrectable bit errors. Wireless link designers have been advised to make their links as error-free as possible through the use of link-layer retransmission and forward error correction [Karn], but there are important cases, e.g., legacy, military, and some capacity-constrained systems, where low error rates are not achievable. At this time there is no way to tell a TCP sender that a packet was lost for reasons other than congestion and so avoid the congestion control response. As shown in the earlier example, recovery from any loss in a large BDP flow takes many round-trip times; non-congestive loss punishes these flows unnecessarily.
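
The penalty for a single loss, congestive or not, is easy to quantify: after halving its window, the sender regains it at one packet per RTT, so recovery takes cwnd/2 round trips. A back-of-the-envelope sketch (ours, reusing the 10 Gbps figures from the first scenario):

    # Recovery time after one loss for a high-BDP TCP flow.
    MSS_BITS = 1500 * 8              # 1500-byte packets
    RTT = 0.100                      # 100 ms round-trip time
    rate_bps = 10e9                  # 10 Gbps

    cwnd = rate_bps * RTT / MSS_BITS   # window in packets (~ the BDP)
    recovery_rtts = cwnd / 2           # regained at one packet per RTT
    print(f"window   ~= {cwnd:,.0f} packets")
    print(f"recovery ~= {recovery_rtts:,.0f} RTTs "
          f"= {recovery_rtts * RTT / 60:.0f} minutes")

A single bit error therefore costs this flow roughly an hour of below-capacity operation.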

Historically, transport protocol performance in the environments described above has not been of great concern. However, as network capacity increases and new link technologies such as terrestrial and satellite wireless become more pervasive, the limitations of TCP congestion control will increasingly inhibit large file transfers and good link utilization.

XCP provides improved performance in all three of these problem areas.


References

[Dawkins] S. Dawkins, G. Montenegro, M. Kojo, V. Magret, N. Vaidya. “End-to-end Performance Implications of Links with Errors,” Request for Comments 3155, Network Working Group, August 2001.
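
[Floyd] Sally Floyd. “HighSpeed TCP for Large Congestion Windows,” Internet draft, work in progress, 2003.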

[Henderson] Tom Henderson, Emile Sahouria, Steven McCanne, and Randy Katz. “On Improving the Fairness of TCP Congestion Avoidance,” Proc. IEEE Globecom '98, Sydney, Australia, November 1998. http://daedalus.cs.berkeley.edu/publications/globe98.ps.gz

[Jacobson] Van Jacobson. “Congestion Avoidance and Control,” Proc. ACM SIGCOMM '88, August 1988.

[Karn] Phil Karn. “Advice to Subnet Designers,” Internet draft draft-ietf-pilc-link-design-13.txt, work in progress, February 2003. ftp://ftp.rfc-editor.org/in-notes/internet-drafts/draft-ietf-pilc-link-design-13.txt

[Low02] S. H. Low, F. Paganini, and J. C. Doyle. “Internet Congestion Control: An Analytical Perspective.” IEEE Control Systems Magazine, 22(1):28-43, Feb. 2002.

[Partridge] Craig Partridge. “Gigabit Networking,” Addison-Wesley, Reading, Massachusetts, 1994, pp. 241-244.

