Here are some comments on the I-D, almost all minor.
I think an additional mechanism should be documented, which is acking every
packet. This is fully allowed by the existing standards, and results in
the congestion window opening appreciably faster.
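To make the point concrete, here's a back-of-the-envelope sketch (my own, not from the I-D; segment-counting model, names are mine): during slow start cwnd grows by one segment per ACK, so acking every packet roughly doubles cwnd each RTT, while delayed ACKs (one ACK per two segments) grow it by only about 1.5x per RTT.

```python
def slow_start_cwnd(rtts, segs_per_ack):
    """Return cwnd (in segments) after `rtts` round trips, starting from 1."""
    cwnd = 1
    for _ in range(rtts):
        acks = cwnd // segs_per_ack   # ACKs received for this window of data
        acks = max(acks, 1)           # delayed-ACK timer guarantees >= 1 ACK
        cwnd += acks                  # slow start: +1 segment per ACK
    return cwnd

print(slow_start_cwnd(8, 1))  # ack every packet -> 256 segments
print(slow_start_cwnd(8, 2))  # delayed ACKs     -> 28 segments
```

Over a 558 ms RTT path, that difference in window growth is several seconds of ramp-up time.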
Vern
> corresponding reply (one round-trip time or RTT) would be no more
> than 558 ms.
"would be no more" -> "could be at least" - the satellite is just
one link contributing to the total RTT
> time. The RTT can be increased by other factors in the network,
"can be" -> "will be"
> bit error rates for a satellite link today are on the order of 1
add "(BER)" after "bit error rates"
> error per 10 million bits (1 x 10^-7) or better. Advanced error
It's not quite clear whether "better" means "more frequent" or "less
frequent"
> ... Ku band
> transponders are typically around 50 MHz. Furthermore, one
> satellite may carry a few dozen transponders.
A few dozen transponders isn't enough then to fill the 12 GHz bandwidth.
Is there excess capacity, or am I missing something?
> TCP must assume the drop was due to network
> congestion to avoid congestion collapse [FF98].
Suggest citing [JK88] here instead.
> Another common situation arises when both
> the incoming and outgoing traffic are sent using a satellite
> link, but the uplink has less available capacity than the
> downlink. This asymmetry can have an impact on TCP performance.
I was confused about whether this refers to the incoming and outgoing
traffic using the *same* satellite link, with the asymmetry coming from
uplink/downlink capacity differences, or to their using different
satellite links.
> appropriate window sizes). The impact link level retransmissions
Typo, "The impact [that] link"
> However, in practice, Path MTU Discovery does not consume a large
> amount of time due to wide support of common MTU values.
You might mention that caching can avoid needing to do it at all.
> The relationship between BER and segment size is likely to vary
> depending on the error characteristics of the given channel.
Worth adding a sentence here to explain what the dependence might be.
> Choosing the maximum packet size to be used on the satellite link
> should be based on the characteristics of the channels and the
This paragraph is redundant with the preceding one.
> However, at the present time there is no accepted method for
> detecting corruption loss.
"accepted" implies that there are methods but they haven't been standardized
yet. I don't know of any feasible methods.
> [PS97] the link should be made as clean as possible to prevent TCP
Add comma after the cite.
> satellite links. There are some situations where FEC cannot be
> expected to solve the noise problem (such as military jamming, deep
> space missions, noise caused by rain fade, etc.).
It's not obvious why deep space missions have this problem. It's clear
for the others that you can suffer from large outage blocks, rendering
FEC ineffective. Is that the same problem with deep space?
> Furthermore, TCP has
> an end-to-end feedback loop; therefore, noise in other parts of a
> high delay connection will cause problems even if the satellite link
> portion is error-free.
This struck me as a non sequitur in the context of the rest of the
paragraph.
> Further research is needed into mechanisms that allows TCP to
"allows" -> "allow"
> transmission rate accordingly can lead to congestive collapse
> [FF98].
Again, I'd prefer [JK88].
> the current network conditions, during a connection, TCP employs
delete comma after "connection"
> value of cwnd. However, if cwnd is greater than or equal to
> ssthresh the congestion avoidance algorithm is used. The initial
Actually, some implementations use cwnd >= ssthresh, others use
cwnd > ssthresh. BSD uses the latter; I think that's what RFC 2001
says, too. Part of the RFC 2001 revision will be to allow either.
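The two conventions only disagree at the boundary; a trivial sketch of the difference (mine, not from the draft or RFC 2001; function names are made up):

```python
def in_slow_start_bsd(cwnd, ssthresh):
    # BSD reading: still in slow start when cwnd == ssthresh
    return cwnd <= ssthresh

def in_slow_start_strict(cwnd, ssthresh):
    # The other reading: congestion avoidance once cwnd reaches ssthresh
    return cwnd < ssthresh

# The only case where the implementations diverge:
print(in_slow_start_bsd(10, 10), in_slow_start_strict(10, 10))
```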
> Furthermore, the value of ssthresh is reduced when congestion is
> detected.
"reduced" -> "set", strictly speaking. It might become larger, if
the window has grown appreciably during congestion avoidance.
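A sketch of why "set" is the right word (assuming the usual half-the-window rule from RFC 2001-era stacks; names and numbers are mine):

```python
MSS = 1460  # assumed segment size in bytes

def ssthresh_on_congestion(cwnd):
    """On detecting congestion, ssthresh is *set* to half the current
    window, floored at two segments; it is not necessarily reduced."""
    return max(cwnd // 2, 2 * MSS)

old_ssthresh = 8 * MSS
cwnd = 20 * MSS   # window grew well past ssthresh in congestion avoidance
new_ssthresh = ssthresh_on_congestion(cwnd)
print(new_ssthresh > old_ssthresh)  # True: the new value is *larger*
```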
> [Ste97]. Slow start begins by initializing cwnd to 1 segment. This
Worth noting that draft-floyd-XXX is in the pipeline as an
Experimental RFC.
> packets. This continues until cwnd meets or exceeds ssthresh, or
> loss is detected.
same comment as before, sometimes it's just "exceeds"
> When the value of cwnd is greater than or equal to ssthresh the
ditto
> Therefore, if one ACK is received for every data segment, cwnd will
> increase by 1 segment per round-trip time (RTT).
"increase by" -> "increase by (about)"
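To illustrate why "(about)" is warranted (my own sketch, not the draft's): the common per-ACK increment cwnd += MSS*MSS/cwnd opens cwnd by slightly less than one full MSS per RTT, since each increment shrinks the next one.

```python
MSS = 1460  # assumed segment size in bytes

def ca_growth_one_rtt(cwnd):
    """Apply the congestion-avoidance increment once per outstanding
    segment, i.e. one ACK per data segment over one RTT."""
    segments = cwnd // MSS
    for _ in range(segments):
        cwnd += MSS * MSS // cwnd   # per-ACK increment
    return cwnd

start = 10 * MSS
growth = ca_growth_one_rtt(start) - start
print(growth)  # a bit less than one MSS (1460)
```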
> the dropped segments must wait for the RTO to expire, which causes
"must wait" -> "must usually wait"
> Using large windows requires applications or TCP stacks to be hand
> tuned (usually by an expert) to utilize large windows.
Is this always the case? Might change "requires" -> "often requires"
> capacity as needed is currently underway [SMM98]. This will better
> allow stock TCP implementations and applications to better utilize
> the capacity provided by the underlying network.
This reads awkwardly: "better ... better"
> TCP senders that do not use SACKs must infer which segments have not
> arrived and retransmit accordingly. This can lead to unnecessary
> retransmissions, in the case when the sender infers incorrectly.
> When utilizing SACKs, the sender does not need to guess which
> segments have not arrived, thus eliminating the majority of
> unnecessary retransmissions. Furthermore, when SACKs are used, the
> sender gets information about which segments need to be
> retransmitted more rapidly (within the first RTT following the loss)
> than without SACKs.
>
> Some satellite channels require the use of large TCP windows to
> fully utilize the available capacity, as discussed above. With the
> use of large windows, the likelihood of losing multiple segments in
> a given window of data increases (either due to congestion or
Both of these paragraphs are redundant with the (long) one that comes
right before them.
> Mechanism Use Section Where
> +------------------------+-------------+------------+--------+
> | Path-MTU Discovery | Recommended | 3.1 | S |
> | FEC | Recommended | 3.2 | L |
> | TCP Congestion Control | | | |
> | Slow Start | Required | 4.1.1 | S |
> | Congestion Avoidance | Required | 4.1.1 | S |
> | Fast Retransmit | Recommended | 4.1.2 | S |
> | Fast Recovery | Recommended | 4.1.2 | S |
> | TCP Large Windows | | | |
> | Window Scaling | Recommended | 4.2 | S,R |
> | PAWS | Recommended | 4.2 | S,R |
> | RTTM | Recommended | 4.2 | S,R |
> | TCP SACKs | Recommended | 4.3 | S,R |
> +------------------------+-------------+------------+--------+
> Table 1
IMHO, add:
> | Ack every packet | Recommended | 4.X | R |
> [JK88] Van Jacobson and Michael Karels. Congestion Avoidance and
> Control. In ACM SIGCOMM, 1988.
The SIGCOMM '88 paper lists just Van as the author. I think the revision
published in ACM TOCS lists Mike too. I'm almost certain the version
available off of ftp.ee.lbl.gov lists Mike (I'm away on travel so it's
not trivial to check).