A very interesting experiment, and good to hear you can get such high throughput. However, it is important to consider what *type* of TCP/IP data you will be sending over such a highly asymmetric link. The type of application data can make a *huge* difference in the effective throughput you can get. If the asymmetry of the link (i.e., the ratio of forward to return bitrate) is significantly greater than the asymmetry of the traffic (i.e., the ratio of forward bits to return bits), throughput will be throttled. This has been called "return channel starvation", meaning that the narrow return pipe can't send enough ACKs, etc., to allow the TCP sender to keep the forward pipe full.
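To make that rule of thumb concrete, here is a minimal back-of-the-envelope sketch in Python. The helper name and the packet sizes are illustrative assumptions on my part (delayed ACKs, i.e., one 40-byte TCP/IP ACK per two full-size data segments, and no link-layer overhead), not measurements:

    # Rough bound on forward throughput once the return channel can no
    # longer carry the ACK stream fast enough ("return channel starvation").
    # Assumptions: one 40-byte TCP/IP ACK acknowledges two 1500-byte data
    # segments; link-layer overhead is ignored.
    def ack_limited_forward_rate(return_bps, segment_bytes=1500,
                                 ack_bytes=40, segments_per_ack=2):
        """Best-case forward rate (bps) the return ACK stream can clock out."""
        traffic_asymmetry = (segments_per_ack * segment_bytes) / ack_bytes
        return return_bps * traffic_asymmetry

    # Example: a 64 kbps return channel can clock out at most ~4.8 Mbps
    # of bulk-transfer data in the forward direction.
    print(ack_limited_forward_rate(64_000))   # 4800000.0

Whenever the link asymmetry exceeds the traffic asymmetry, this bound, and not the forward channel's bitrate, sets the effective throughput.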
Web (HTTP) traffic is much more symmetric than bulk-transfer FTP-type traffic (the type apparently used in the experiment reported below). In the FTP case, the forward/return bit ratio can be up to about 75:1, depending on the ACK frequency, the size of the data packets, and the number of header bytes. HTTP traffic, however, averages around 10:1 in the packet traces we have captured. There is much more return-channel data, due to the GET requests (often 300-400 bytes) issued by the Web client for every page and every in-line image on the page. On a highly asymmetric link, this return traffic hits a logjam in the narrow return channel, and thus arrives at the server spread out in time.
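For a rough sense of where those ratios come from, here is the arithmetic with illustrative packet and object sizes (my assumptions, not figures taken from our traces):

    # Bulk transfer: two full 1500-byte segments per 40-byte delayed ACK.
    ftp_ratio = (2 * 1500) / 40
    print(ftp_ratio)                    # 75.0 -> the ~75:1 bulk-transfer case

    # HTTP: assume a ~4 KB object fetched with a ~350-byte GET, plus two ACKs.
    http_ratio = 4000 / (350 + 2 * 40)
    print(http_ratio)                   # ~9.3 -> close to the ~10:1 we measure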
A large TCP window helps the situation for bulk FTP traffic, if the file is not too big, as it allows the sender to keep pumping while the ACKs struggle their way up the return channel (if the file is very large, the window will eventually fill, and the sender's data will be clocked out by the slow return ACK stream). But the large TCP window doesn't help HTTP, which also relies on the timely arrival of GETs at the server to keep it pumping, so the server has to stop and wait between GETs. We have seen and measured this return channel starvation effect in experiments running HTTP traffic (10:1) over a 1.5 Mbps/64 kbps ADSL link. The throughput for that link was roughly half of what it was over a 1.5 Mbps/384 kbps ADSL link. (The TCP window was kept larger than the bandwidth*delay product in both cases.)
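Applying the same return-channel bound to those two ADSL links with the ~10:1 HTTP traffic ratio reproduces roughly what we measured (a sketch, not an exact model):

    # Return-channel ceiling for HTTP (~10:1 traffic) on the two ADSL links.
    http_ratio = 10
    print(64_000 * http_ratio)    # 640 kbps  << 1.5 Mbps forward -> starved
    print(384_000 * http_ratio)   # 3.84 Mbps >  1.5 Mbps forward -> not starved

A ceiling of roughly 640 kbps against a 1.5 Mbps forward channel is consistent with the roughly-half throughput we observed on the 64 kbps link.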
IN SUM: return channel starvation is expected to be a significant factor in highly asymmetric links when HTTP traffic is involved. You should consider running experiments over your 200:1 link using real, typical HTTP traffic (i.e., including a number of complex pages with many in-line image objects).
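Finally, some quick cross-checks on the numbers in the experiment quoted below. The round-trip time is my reading of the stick-figure diagram (I am assuming the 537 ms shown there is the round-trip delay), so treat this as a sanity check rather than a definitive analysis:

    # Cross-checks on the 118x results quoted below (assumptions noted).
    window_bytes = 32 * 2**20        # 32 MB congestion window
    rtt_s = 0.537                    # assuming 537 ms is the round-trip delay
    print(window_bytes * 8 / rtt_s)  # ~5.0e8 bps: matches the ~500 Mbps
                                     #   sustained with no traffic contract

    asymmetry = 211                  # measured return:forward ratio, 211:1
    print(622_000_000 / asymmetry)   # ~2.9 Mbps forward to fill a 622 Mbps
                                     #   return link (the "about 3 Mbps" sizing)
    print(150_000_000 / asymmetry)   # ~711 kbps -> the "750 Kbps range" quoted
    print(300_000_000 / asymmetry)   # ~1.42 Mbps -> a little less than a T1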
= Peter
>Many thanks to Dave Beering and David Brooks for obtaining this information.
>
>>Subject: FW: Answer to your TCP/IP Asymmetry Question - Experimentally Verified
>>
>>Some very useful information from Dave Beering about the ratio of
>>return-to-forward link capacity using TCP/IP over long satellite delays.
>>This approximate ratio is important for those of us looking at commercial
>>services such as Direct Data Distribution and GEO relays. It should help
>>us size the forward link sufficiently to let the TCP/IP protocol work,
>>without excessively burdening the communications hardware.
>>
>>For example (if I understand correctly), using a ratio of 200:1, a D3
>>subsystem would have to have about a 3 Mbps forward link (uplink) for
>>each 622 Mbps return link (downlink).
>>
>>
>>>From: "GILBERT, CHARLENE E. (JSC-TE)" <[email protected]>
>>>To: "'Jim Budinger'" <[email protected]>,
>>>    "COSTELLO, THOMAS A. (TOM) (JSC-TE)" <[email protected]>
>>>Subject: FW: Answer to your TCP/IP Asymmetry Question - Experimentally Verified
>>>Date: Sat, 5 Sep 1998 15:40:54 -0500
>>>
>>>FYI -
>>>CEG
>>>
>>>> ----------
>>>> From: David R. Beering[SMTP:[email protected]]
>>>> Sent: Saturday, September 05, 1998 3:17 PM
>>>> To: SMITH, JON M. (JSC-TA); SEYL, JACK W. (JSC-TA); Mike Smith
>>>> (Personal)
>>>> Cc: David R. Beering; GILBERT, CHARLENE E. (JSC-TE);
>>>> [email protected]; [email protected]; Pete Vrotsos;
>>>> [email protected]
>>>> Subject: Answer to your TCP/IP Asymmetry Question -
>>>> Experimentally Verified
>>>>
>>>>
>>>> Mike & Jack:
>>>>
>>>> I wanted to write to follow up on the telephone conversation
>>>> we had two weeks ago regarding TCP/IP and asymmetric
>>>> satellite channels. Yesterday, David Brooks and I set up a
>>>> test at LeRC on the 118x network to answer the question:
>>>>
>>>> "For a given forward satellite channel, how much
>>>> return data rate can I support using TCP/IP?"
>>>>
>>>> Just to make sure we're on the same page -- in
>>>> the TDRSS world, everything is seen from the White Sands
>>>> hub looking out. So a remote spacecraft sending data to
>>>> White Sands is "returning" data, and the link is called a
>>>> Return Link. Conversely, if a link is mapped from White Sands
>>>> out to the remote spacecraft, they call it a Forward Link.
>>>> As you are aware, most of the time, there is no Forward
>>>> Link employed on TDRSS, at least in the Ku-Band. On the
>>>> Shuttle, I'm pretty sure they use S-Band to go forward at
>>>> very low data rates, and Ku-Band Single Access to
>>>> support very (relatively) high speed Return Links.
>>>>
>>>> For the 118x tests ...
>>>>
>>>> We established a Forward Link on the 118x test network
>>>> operating at 1 Megabit per second (1,000,000 bits per
>>>> second). That link speed was enforced at the Sun
>>>> workstation's ATM interface by a rate queue set to one
>>>> Megabit per second, and also in the FORE Systems
>>>> ATM switch by a Constant Bit Rate traffic contract set
>>>> to 1 Megabit per second.
>>>>
>>>> The Return Link was not constrained, so it could
>>>> theoretically go up as high as 622 Megabits per second.
>>>> The experiment configuration stick figure is below.
>>>>
>>>> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
>>>>
>>>>   Forward Data Rate = 1 Megabit per second --------------->
>>>>
>>>>    Sun             FORE                            FORE            Sun
>>>>  Ultra 2 ------->  ATM  ----> Satellite  ------>   ATM  ------> Ultra 2
>>>>   Wkstn <-------- Switch <--- link w/ 537 <------ Switch <------  Wkstn
>>>>                                ms delay
>>>>
>>>>   <------------ Return Data Rate = 622 Megabits per second
>>>>
>>>> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
>>>>
>>>> We had the Sun workstations set up to support TCP congestion
>>>> windows as large as 32 Megabytes. When we ran the tests
>>>> between the two workstations without the traffic contracts
>>>> enforced, the workstations were able to support data rates
>>>> of 500 Megabits per second sustained.
>>>>
>>>> We then ran the tests with the traffic contract enforced for
>>>> the forward channel at one Megabit per second, and then again
>>>> at two Megabits per second.
>>>>
>>>> THE ANSWERS:
>>>>
>>>> For the 1 Mbps Forward Link Case
>>>> Best sustained return link data rate = 211 Megabits per second
>>>>
>>>> For the 2 Mbps Forward Link Case
>>>> Best sustained return link data rate = 422 Megabits per second
>>>>
>>>> So, for this configuration, we were able to run successfully with
>>>> an asymmetry of 211 to 1. The 211 number will depend on
>>>> several factors, most importantly the end-to-end channel delay,
>>>> and the protocols used on the satellite channel, since the
>>>> asymmetry is a function of protocol overhead. For very small
>>>> Geostationary return links, therefore, the number would be
>>>> smaller. The numbers we generated are a good approximation
>>>> for TDRSS, since the delays and channel speeds are very
>>>> similar (within 20-30 milliseconds on the delay and within 100
>>>> Megabits per second on the speed).
>>>>
>>>> Therefore, for a TDRSS half channel of 150 Mbps, you would
>>>> need a forward link in the 750 Kbps range to successfully
>>>> support TCP/IP data transfer at line rate. For a whole TDRSS
>>>> channel of 300 Mbps, you would need a little less than a T1
>>>> (1.544 Mbps) forward.
>>>>
>>>> Please feel free to forward this to other interested parties. We also
>>>> have all of the TCP/IP stack traces from the test runs if
>>>> anyone wants to look at them to confirm the validity of the
>>>> tests.
>>>>
>>>> We plan to do one or two additional tests using a physical
>>>> interface operating at 1.544 Mbps to constrain the forward
>>>> traffic. I expect that we will see results in the range of 300 Mbps
>>>> for that case. More as it develops. Please contact me with
>>>> questions or issues.
>>>>
>>>> Best regards,
>>>>
>>>> Dave Beering
>>>
>>>>
>>>> [email protected]
>>>> 630-665-1396 (Office)
>>>> 630-235-7965 (PCS)
>>>
>>--------------------------------------------------------
>>James M. Budinger
>>6100/Space Communications Office
>>216.433.3496 Fax: 216.433.6371
>>--------------------------------------------------------
>>
>>
>*********************************************
>William D. Ivancic
>NASA Lewis Research Center
>21000 Brookpark Rd. MS 54-8
>Cleveland, Ohio 44135
>USA
>Phone: 1 216 433 3494
>FAX: 1 216 433 8705
>Email: [email protected]
> [email protected]
>*********************************************
Peter G. Warren
Performance Analysis Group
Advanced Systems Laboratory
GTE Laboratories
40 Sylvan Rd., Waltham, MA 02254
(617) 466-4142