[ns] FullTCP: 2nd connection establishment fails (?)
Hi,
I've been playing with FullTCP and found something that IMHO doesn't
seem right (a bug?), but since I'm neither an ns expert nor a TCP
guru...
I was running the enclosed script, for testing the opening & closing of
a connection. I wanted to do the following:
(1) open the connection,
(2) send some data in one direction,
(3) close the connection,
(4) wait for some time,
(5) repeat from (1), k times
The "src" agent would do an active open, and the "sink" agent would act
as the "passive" side.
The first "cycle" (1)-(4) runs fine. But here's what happens at the
second cycle:
- "src" sends a SYN;
- "sink" answers with a SYN+ACK;
- "src" sends an ACK, followed by the first data segment;
- "sink" balks at the received ACK (and at the data segment as well),
and replies by sending two "reset" (*) segments (!?).
(*) I mean, these are what I guess in reality would be segments with the
RST bit on -- in ns, *all* flags are set to 0 in these 2 segments.
The problem seems to be that, after a close, some members of the "sink"
object (t_seq_no_, ...) are set to their starting values, while
highest_ack_ and maxseq_ keep their *old* values (i.e. the values
attained during the previous cycle).
So, when the ACK arrives at "sink", this code in FullTcpAgent::recv()
goes wrong (tcp-full.cc, lines 1356-1365):
	/*
	 * Ack processing.
	 */
	switch (state_) {
	case TCPS_SYN_RECEIVED:	/* want ACK for our SYN+ACK */
		if (ackno < highest_ack_ || ackno > maxseq_) {
			// not in useful range
			goto dropwithreset;
		}
even though the ACK is a perfectly good one.
I noticed too that the agent doing an active open gets a reset() in
FullTcpAgent::advance_bytes() when it goes from CLOSED to SYN_SENT,
whereas the passive agent is *not* reset when going to either CLOSED
or LISTEN.
Inserting a call to reset() between lines 1613 & 1614 (method:
FullTcpAgent::recv()) seems to solve the problem:
(starting at line 1606)
	case TCPS_LAST_ACK:	/* passive close */
		// K: added state change here
		if (ourfinisacked) {
			newstate(TCPS_CLOSED);
#ifdef notdef
			cancel_timers();	// DOES THIS BELONG HERE?, probably (see tcp_close())
#endif
			finish();
			// ---------->>> BEGIN CHANGE <<<------------ //
			reset();
			// ---------->>> END CHANGE <<<------------ //
			goto drop;
		} else {
I tested this and now the successive re-openings of the connection work
fine, but I'm not sure whether this will introduce unexpected side
effects and, as I said before, I'm not a {ns,TCP} expert.
Any comments are welcome. (I'm running ns 2.1b6 on Solaris, BTW.)
TIA,
David.
=================================================================
David ROS mailto:[email protected]
IRISA/INRIA, Campus de Beaulieu, Phone: +33 2 99 84 72 98
35042 Rennes Cedex, France Fax: +33 2 99 84 25 29
"Any sufficiently advanced
technology is indistinguishable from magic." --- Arthur C. Clarke
test.tcl