
[ns] Peculiar TCP/Reno behaviour



Hello ns-users,

While experimenting with ns-2.1b7a, I stumbled across a very peculiar
behaviour of TCP/Reno. I have searched the mailing list archives for
earlier messages on this topic, but haven't found any -- my apologies
in case I have overlooked them.

The way I see it, what happens here is a Slow Start followed by a Fast
Recovery, but without an intermediate Fast Retransmit. Please see for
yourself:

The TCL script containing the scenario can be downloaded at
	http://www.cs.uni-bonn.de/IV/Mitarbeiter/dewaal/ns-reno-peculiar.tcl
and a graphic visualizing the strange behaviour can be viewed at
	http://www.cs.uni-bonn.de/IV/Mitarbeiter/dewaal/ns-reno-peculiar.gif
(the cwnd_ and dupacks_ plots show the traces of the corresponding
TCPAgent variables; the actual send window is therefore the sum of the
two whenever dupacks_ >= 3).
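
For reference, if I read tcp-reno.cc correctly, the usable window is
computed roughly as below, with dupwnd_ tracking the duplicate-ACK
count during recovery (a simplified sketch from memory, so please
correct me if the details differ):

	/* RenoTcpAgent::window(), simplified: the send window is the
	 * congestion window inflated by one segment per duplicate ACK */
	int RenoTcpAgent::window()
	{
		int win = int(cwnd_) + dupwnd_;  /* dupwnd_ > 0 only in recovery */
		if (win > int(wnd_))
			win = int(wnd_);         /* cap at the configured max window */
		return win;
	}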

The scenario in which this behaviour occurs is very simple: two nodes
are connected through a link using RED queuing, and an FTP agent uses a
TCP/Reno agent to send packets from one node to a TCPSink on the other
node. At the beginning of the transfer, the sender transmits segments
at full speed up to segment no. 300, when a triple duplicate ACK causes
a Fast Retransmit of segment no. 150. (This causes the recover_
variable to be set to 300 in RenoTcpAgent::dupack_action().)
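
For reference, the relevant part of RenoTcpAgent::dupack_action() looks
roughly like this (a simplified sketch from memory, not the literal
tcp-reno.cc code):

	void RenoTcpAgent::dupack_action()   /* simplified */
	{
		/* we count as "recovered" from the last Fast Retransmit only
		 * once the cumulative ACK has passed the old recover_ point */
		int recovered = (highest_ack_ > recover_);

		if (recovered || !bug_fix_) {
			/* ordinary Reno Fast Retransmit / Fast Recovery */
			recover_ = maxseq_;                       /* here: 300 */
			slowdown(CLOSE_SSTHRESH_HALF | CLOSE_CWND_HALF);
			reset_rtx_timer(1, 0);
			output(last_ack_ + 1, TCP_REASON_DUPACK); /* resend no. 150 */
			dupwnd_ = numdupacks_;
			return;
		}
		/* otherwise: the "bugfix" case -- do nothing at all */
		return;
	}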

Since this retransmitted segment is also dropped and the cwnd remains
too small for new segments to be sent, a timeout finally causes a Slow
Start at segment no. 150. Early in this Slow Start, another segment
(no. 191) is dropped, and subsequently three duplicate ACKs are
received for no. 190. Now, since 191 < recover_ == 300, the
RenoTcpAgent::dupack_action() function decides that we have not yet
recovered from the previous Fast Retransmit and does nothing; the
sender nevertheless simply continues sending, which is possible because
for each further duplicate ACK received, the usable window is
effectively increased by one segment size.

This "recover" variable is actually introduced in RFC 2582 for
TCP/NewReno, but it is also used in the tcp-reno.cc code for the
"bugfix" that avoids multiple Fast Retransmits, which is likewise
described in RFC 2582. There it reads:

"If the duplicate ACKs don't cover "send_high", then do nothing. That
is, do not enter the Fast Retransmit and Fast Recovery procedure...",

but in the simulation, a Fast Recovery actually is started, since
dupacks_ is continuously increased (and the usable window inflated
accordingly) for every further duplicate ACK.
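
As far as I can tell, this happens because the per-dupack accounting
lives in recv(), independently of what dupack_action() decides (once
more a condensed sketch from memory):

	/* in RenoTcpAgent::recv(), on a duplicate ACK: */
	if (++dupacks_ == numdupacks_) {
		dupack_action();       /* "bugfix" branch: returns without
		                        * retransmitting or reducing cwnd  */
		dupwnd_ = numdupacks_; /* ...but window inflation starts anyway */
	} else if (dupacks_ > numdupacks_) {
		++dupwnd_;             /* one more segment per further dup ACK */
	}

So the window keeps opening and new segments go out just as in a normal
Fast Recovery, even though the Fast Retransmit itself was suppressed by
the bugfix -- which seems to be exactly what the plot shows.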

Any thoughts, comments, ideas, ... on this behaviour are greatly
appreciated. Thanks a lot in advance.

Regards,
Christian de Waal