At 6:51 PM -0700 4/15/98, Jacob Heitz wrote:
>This algorithm runs only while the sender is in the congestion avoidance
>phase. During the slow start phase, if meas_rtt > (min_rtt + rto)/2,
>slow start is terminated and congestion avoidance begins. Slow start
>will also terminate when cwnd reaches ssthresh, as usual.
I wonder if Van has any comment; if dim recollection serves, his 1988 paper
suggested something along these lines, but it hasn't survived in the code.
The issue that comes to mind is the definition of min_rtt. Let's imagine
that a TCP session is blazing along, has developed a reasonable rto and
min_rtt value, and now routing changes. There are two possible cases.
Case 1, routing got better, and delay has decreased, so a new min_rtt value
is recognized. Case 1a, the improvement isn't large enough to trigger the
algorithm, and the TCP simply adapts. Case 1b, it does trigger the algorithm.
In this latter case, why does it not trigger on every message until the
adaptation has progressed enough to be within range? I should think that we
want to adapt like this at most once per RTT.
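To make the once-per-RTT rate limit concrete, here is a toy sketch of the trigger check with that gating added. The class and variable names are mine, not from the proposal, and the gating interval (one measured RTT) is just the policy I'm arguing for:

```python
# Hypothetical sketch of the proposed delay trigger, with once-per-RTT
# gating added.  All names here are mine, not from the proposal.

class DelayTrigger:
    def __init__(self):
        self.min_rtt = None       # least RTT seen so far (connection lifetime)
        self.next_check = 0.0     # earliest time the trigger may fire again

    def on_rtt_sample(self, now, meas_rtt, rto):
        """Return True if this sample should trigger the algorithm."""
        if self.min_rtt is None or meas_rtt < self.min_rtt:
            self.min_rtt = meas_rtt
        # Rate-limit: at most one trigger per RTT; otherwise a single
        # routing change could fire on every ACK until we have adapted.
        if now < self.next_check:
            return False
        if meas_rtt > (self.min_rtt + rto) / 2:
            self.next_check = now + meas_rtt   # hold off for roughly one RTT
            return True
        return False
```

Without the `next_check` hold-off, every measurement above the threshold fires, which is exactly the case-1b worry above.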
Case 2, routing got worse, so the min_rtt is no longer a reasonable
comparison value. Unlike case 1, there is probably a hiatus here - a line
failed or something, and routing isn't fixed until the failure is detected
and routed around. There are again two sub-cases. 2a, it isn't sufficiently
worse to trigger the algorithm, but what we have done is create a hair
trigger - the measured RTT now sits close to the average of the two values,
so even a moderate further increase will fire it. 2b, it does trigger the
change.
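Some illustrative arithmetic for case 2a (the numbers are mine, not measured anywhere): a path with a 100 ms floor and a 150 ms rto reroutes over a slower path. The stale min_rtt keeps the threshold low, so a modest RTT increase already clears it:

```python
# Hypothetical numbers for the case-2a hair trigger.
min_rtt = 0.100                   # stale floor from the old, faster route
rto     = 0.150
threshold = (min_rtt + rto) / 2   # 125 ms

# After rerouting, even a moderate RTT clears the stale threshold, so
# every measurement looks like congestion until min_rtt is refreshed.
new_rtt = 0.130                   # only a 30% increase over the old floor
print(new_rtt > threshold)        # True
```

A genuinely congested path would look the same to this test as a merely longer one, because the comparison value never forgot the old route.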
It seems to me that you want min_rtt to be the least rtt measured over the
past few (10-20) RTTs, as opposed to the least RTT measured during the life
of the connection. And you want to trigger this change at most once per RTT
(which you may have said or intended, but I didn't find that in your note).
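A windowed minimum is cheap to sketch; here the window size and names are mine, chosen from the "past few (10-20) RTTs" range above:

```python
from collections import deque

# Sketch of a windowed minimum: min_rtt over roughly the last N RTT
# samples, rather than over the life of the connection.
class WindowedMinRtt:
    def __init__(self, window=16):           # "past few (10-20) RTTs"
        self.samples = deque(maxlen=window)  # oldest samples fall out

    def add(self, rtt):
        self.samples.append(rtt)

    def min_rtt(self):
        return min(self.samples)
```

With this, a stale floor from a dead route ages out within a window's worth of samples, and the threshold (min_rtt + rto)/2 tracks the path the packets are actually taking.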
Something else. It seems (I'm working from memory and not code, so someone
staring at code will surely correct me) that rto is calculated as mean rtt
plus a small constant times mean variance. Mean and min rtt are initialized
as the duration of the SYN/SYN-ACK exchange, and mean variance is
initialized as zero. Generally, I would guess that the typical RTT is less
than that initial estimate, so the variance is likely to rise until well after
the mean rtt has been deduced. But if stable times are achieved, then rto
should approach min_rtt, as min_rtt and mean_rtt are presumably close to
each other and the mean variance reduces. Does the fact that the rtt
measurement has stabilized mean that we trigger a congestion-detected event?
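Working from the same memory (so the constants may be off), the usual estimator looks something like the sketch below, initialized per the description above: srtt from the SYN exchange, variance at zero. Feed it a perfectly stable rtt and rttvar decays geometrically, so rto does indeed fall toward srtt, and with srtt near min_rtt the threshold (min_rtt + rto)/2 sits barely above the floor:

```python
# Sketch of the familiar srtt + 4*rttvar retransmit-timer estimator,
# initialized as described above (from memory; constants may be off).
class RttEstimator:
    ALPHA, BETA, K = 1 / 8, 1 / 4, 4

    def __init__(self, syn_rtt):
        self.srtt = syn_rtt      # duration of the SYN/SYN-ACK exchange
        self.rttvar = 0.0        # mean variance initialized as zero

    def sample(self, rtt):
        err = rtt - self.srtt
        self.srtt += self.ALPHA * err
        self.rttvar += self.BETA * (abs(err) - self.rttvar)
        return self.rto()

    def rto(self):
        return self.srtt + self.K * self.rttvar
```

On a long run of identical 100 ms samples, rttvar shrinks toward zero and rto approaches srtt, which is the stabilization scenario the question above is about.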
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
If at first you don't succeed, call it version 1.0.