
Re: [ns] when dupACK keeps arriving, TCP Reno doesn't send packets even though there are no outstanding packets. Why?



Hi Liang and Huike,

Thanks for the inputs.

I guess I didn't describe the problem clearly, which is why Liang thinks
there may be a problem with my setup. I checked the points Liang
mentioned. I think my setup is a general case, not a special one, and I
see this happen quite often in my tests. I give part of a TCP trace at the
end, so you can see in the trace how it happens.
Please check the explanation below.

Since something similar happened to Huike before, I guess there may be a
small bug in the TCP Reno implementation. I checked the NS code but
couldn't figure it out.

Folks on the NS team, please help us out.


On Sat, 29 Jul 2000, Guo, Liang wrote:

> I'm not sure whether I understand your question correctly. How
> big is the buffer on the link? And how big is the packet size?
> If you used the default 1000 bytes, the pipe can only allow 1 packet
> to pass through. So I assume your buffer size is 100 packets.

The buffer size is 62 pkts. The packet size is 1000 bytes for both flows.
The TCP flow is an infinite FTP and the other flow is CBR at 0.35Mb.
The link capacity is 0.5Mb and the link delay is 2ms.
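
Just to put numbers on this setup, the rates it implies are below (a tiny
standalone C++ snippet, nothing ns-specific, only a sanity check on the
arithmetic):

// Back-of-the-envelope rates for a 0.5 Mb/s link, a 0.35 Mb/s CBR and
// 1000-byte packets. Plain arithmetic, no ns code involved.
#include <cstdio>

int main() {
    const double link_bps = 0.5e6;       // link capacity, bits/sec
    const double cbr_bps  = 0.35e6;      // CBR rate, bits/sec
    const double pkt_bits = 1000 * 8.0;  // packet size, bits

    double link_pps = link_bps / pkt_bits;  // 62.5  pkt/s
    double cbr_pps  = cbr_bps  / pkt_bits;  // 43.75 pkt/s
    double tcp_pps  = link_pps - cbr_pps;   // ~18.75 pkt/s left for TCP

    std::printf("link %.2f pkt/s, CBR %.2f pkt/s, TCP residual %.2f pkt/s\n",
                link_pps, cbr_pps, tcp_pps);
    return 0;
}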

> Even with such a setup, TCP
> should eventually adapt its window to the actual network pipe
> size by adjusting parameters like ssh_threshold, estimated RTT, 
> etc. (I'm assuming that you omitted the intermediate
> cwnd values in your description).

Yeah, TCP does adjust ssthresh and cwnd to a steady state. It is just not
the normal sine-curve-like steady state we usually see.

But cwnd isn't changed on dupACK arrivals. As I mentioned in my last
email, this sounds reasonable, since dupACKs indicate congestion or packet
corruption (no reason to increase cwnd). Only when a real ACK arrives does
cwnd start to grow again.
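
To make explicit what I mean, the split I have in mind looks roughly like
the little C++ sketch below. The names are made up for illustration and it
only follows the textbook Reno rule, not the ns code, so any mismatch with
tcp.cc/tcp-reno.cc is my assumption.

// Sketch of Reno-style ACK handling (illustrative names, NOT the ns API):
// cwnd is only opened when a *new* ACK arrives; a duplicate ACK is just
// counted, and the third one triggers fast retransmit plus a window cut.
void on_ack(int ackno, int &last_ack, int &dupacks,
            double &cwnd, double &ssthresh)
{
    if (ackno > last_ack) {            // real (new) ACK
        last_ack = ackno;
        dupacks  = 0;
        if (cwnd < ssthresh)
            cwnd += 1.0;               // slow start
        else
            cwnd += 1.0 / cwnd;        // congestion avoidance
    } else if (ackno == last_ack) {    // duplicate ACK
        ++dupacks;
        if (dupacks == 3) {
            ssthresh = cwnd / 2.0;     // halve ssthresh on the 3rd dupACK
            cwnd     = ssthresh;       // cut cwnd; retransmit last_ack + 1
        }
        // no increase of cwnd here, no matter how many dupACKs arrive
    }
}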


> And I don't understand how you can get 44 duplicate ACKs. Does

Since the window size was 46 pkts, if packet #i is dropped, every packet
sent after #i and before the retransmission will generate a dupACK.

> that mean you have lost 44 packets? Or you have 44 packets

Only one packet drop.

> sitting behind an extremely large packet sent by the CBR agent?
> (BTW, what's the packet size of the CBR?) 

CBR has the same packet size, 1000 bytes. 
 
> In a word, I guess you are simulating TCP under a very strange
> setup, namely, a very narrow pipe with a big
> buffer, and TCP is transmitting together with extremely
> bursty UDP traffic. In such a case, it makes sense that TCP

The UDP traffic is CBR, so the injection of UDP packets is very regular.

> congestion control behaves strangely. However, it still
> maintains the ability to transmit at the desired rate
> (around 30 packets/RTT, which can be computed by (0.5-0.35)/0.5 * (100 +
> 1)), although the cwnd varies quite drastically
> due to the highly variable queueing behavior.

Since I checked all the TCP actions in my traces, I don't think my setup
is the cause of this problem. I still think there may be a problem in the
TCP Reno implementation. We need help from the NS team.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ The following is part of a TCP trace from another case.
+
+
+ . Link capacity 0.5 Mb (62.5 pkts/sec), delay 2 ms, queue size 31 pkts.
+ 
+ . Two flows on the same link, with the same packet size of 1000 bytes:
+ 
+   -- UDP: an infinite CBR flow at 0.300Mb, 37.5 pkts/sec
+
+   -- TCP: an infinite FTP flow
+
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
At time 922.40464, cwnd goes from 18 to 9, with 18 outstanding pkts.
At time 922.75664, dupACK 10 arrives with 8 outstanding pkts, so TCP could
send one pkt, which it did after receiving dupACK 10.

So, for every dupACK received, one packet is sent out. cwnd doesn't change
until a real ACK arrives at time 923.17264. 8 outstanding pkts < cwnd == 9,
but TCP didn't send any pkts even when dupACKs 1, 2 and 3 arrived.

At time 923.52464, cwnd goes from 9 to 4 and ssthresh goes from 9 to 4.

When dupACK 6 arrives at time 923.68464, there are fewer than 4 outstanding
pkts, so TCP should be able to send at least one pkt.
 
But TCP waits until time 924.08464, when there are 0 outstanding pkts, and
then it sends a burst of 4 pkts.

The reaction to dupACKs between 922.40464 and 923.52464 is different from
the reaction to dupACKs between 923.52464 and 924.08464.

That is my question. When the number of outstanding pkts is less than cwnd,
or even equal to 0, TCP should be able to send pkts. It does in the first
period, but it doesn't in the second period. The only difference I can see
is that ssthresh decreases in the second period.
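
To state what I think should happen, here is the send test I have in mind,
again as a small C++ sketch with made-up names. It follows the textbook
Reno fast-recovery rule (each dupACK means one more segment has left the
network, so the usable window is cwnd inflated by the dupACK count); I am
not claiming this is literally how ns implements it, although tcp-reno.cc
seems to keep a similar inflation in a separate dupwnd_ variable.

// Send-eligibility test I expect during fast recovery (textbook Reno).
// Illustrative names only, not the ns API.
bool can_send_new_pkt(double cwnd, int dupacks, int outstanding)
{
    double usable = cwnd;
    if (dupacks >= 3)                // in fast recovery
        usable = cwnd + dupacks;     // inflate by one pkt per dupACK
    return outstanding < usable;     // room to send one more packet?
}

Under this rule the first period behaves exactly as in the trace (one new
pkt per dupACK once the outstanding count drops below the usable window),
so the question is really why the same test apparently stops firing between
923.52464 and 924.08464.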

thanks,


yingfei


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The following is part of the trace
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

922.10064 0  1  1  1  cwnd_ 18.701
922.10064 0  1  1  1  maxseq_ 17832
922.14864 0  1  1  1  ack_ 17815
922.14864 0  1  1  1  cwnd_ 18.755
922.14864 0  1  1  1  maxseq_ 17833
922.19664 0  1  1  1  ack_ 17816
922.19664 0  1  1  1  cwnd_ 18.808
922.19664 0  1  1  1  maxseq_ 17834
922.22864 0  1  1  1  ack_ 17817
922.22864 0  1  1  1  cwnd_ 18.861
922.22864 0  1  1  1  maxseq_ 17835
922.29264 0  1  1  1  dupacks_ 1
922.35664 0  1  1  1  dupacks_ 2
922.40464 0  1  1  1  dupacks_ 3
922.40464 0  1  1  1  cwnd_ 9.000 
922.45264 0  1  1  1  dupacks_ 4
922.50064 0  1  1  1  dupacks_ 5
922.56464 0  1  1  1  dupacks_ 6
922.61264 0  1  1  1  dupacks_ 7
922.66064 0  1  1  1  dupacks_ 8
922.72464 0  1  1  1  dupacks_ 9
922.75664 0  1  1  1  dupacks_ 10
922.75664 0  1  1  1  maxseq_ 17836
922.78864 0  1  1  1  dupacks_ 11
922.78864 0  1  1  1  maxseq_ 17837
922.83664 0  1  1  1  dupacks_ 12
922.83664 0  1  1  1  maxseq_ 17838
922.88464 0  1  1  1  dupacks_ 13
922.88464 0  1  1  1  maxseq_ 17839
922.94864 0  1  1  1  dupacks_ 14
922.94864 0  1  1  1  maxseq_ 17840
922.99664 0  1  1  1  dupacks_ 15
922.99664 0  1  1  1  maxseq_ 17841
923.02864 0  1  1  1  dupacks_ 16
923.02864 0  1  1  1  maxseq_ 17842
923.17264 0  1  1  1  dupacks_ 0
923.17264 0  1  1  1  ack_ 17819
923.17264 0  1  1  1  cwnd_ 9.111 
923.42864 0  1  1  1  dupacks_ 1
923.47664 0  1  1  1  dupacks_ 2
923.52464 0  1  1  1  dupacks_ 3
923.52464 0  1  1  1  ssthresh_ 4
923.52464 0  1  1  1  cwnd_ 4.000 
923.57264 0  1  1  1  dupacks_ 4
923.63664 0  1  1  1  dupacks_ 5
923.68464 0  1  1  1  dupacks_ 6
923.71664 0  1  1  1  dupacks_ 7
924.08464 0  1  1  1  dupacks_ 0
924.08464 0  1  1  1  ack_ 17842
924.08464 0  1  1  1  cwnd_ 4.250 
924.08464 0  1  1  1  maxseq_ 17843
924.08464 0  1  1  1  maxseq_ 17844
924.08464 0  1  1  1  maxseq_ 17845
924.08464 0  1  1  1  maxseq_ 17846
924.48464 0  1  1  1  ack_ 17843
924.48464 0  1  1  1  rtt_ 0.400 
924.48464 0  1  1  1  srtt_ 0.600 
924.48464 0  1  1  1  cwnd_ 4.485 
924.48464 0  1  1  1  maxseq_ 17847
924.50064 0  1  1  1  ack_ 17844
924.50064 0  1  1  1  cwnd_ 4.708 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

> > 
> > hi, folks,
> > 
> > I found a situation in TCP reno congestion control which is hard to
> > understand.
> > 
> > I have a TCP flow and a CBR flow on a same 0.5Mb-2ms link. The CBR is
> > 0.35Mb. the TCP flow is a ftp. TCP is set to ack every packet recv.
> > 
> > When the TCP flow is in steady state, cwnd changes regularly in this way:
> > 
> >     k RTT	1 RTT        m RTT 		k RTT      1 RTT
> > 46---------> 23 ----> 11 ----------------> 46 ---------> 23 ---> 11
> > 
> > ------>... (loop)
> > 
> > 
> > 1) When cwnd reaches 46.xxx, 3 dupACKs happen. There are 46 outstanding
> >    packets. cwnd is set to 23, and ssthresh is set from 11 to 23
> >    (ssthresh=11 is left over from the last steady-state period).
> > 
> >    Then, more dupACKs (up to dupACK 44) come. The number of outstanding
> >    packets keeps shrinking. Until the number of outstanding packets
> >    drops below 23, no packets are sent.
> >    Once the number of outstanding packets is less than 23, the TCP sender
> >    starts to send a packet whenever a dupACK comes.
> > 
> >    Only when a real ACK comes after dupACK 44 is cwnd increased, from
> >    23.000 to 23.043. Then 3 dupACKs happen again, and cwnd and ssthresh
> >    are set to 11.
> > 
> > 2) There are 21 dupACKs after cwnd and ssthresh are cut from 23 to 11.
> > 
> > 
> >    Same as above: dupACKs keep arriving because of the 23 outstanding
> >    packets, and the number of outstanding packets keeps decreasing.
> > 
> >    But the TCP sender doesn't send any packets, even though the number
> >    of outstanding packets is smaller than 11, and even when the number
> >    of outstanding packets is 0.
> > 
> >    Only when a real ACK comes is cwnd set to 11.091, and the TCP sender
> >    then sends a "burst" of 11 packets.
> > 
> > 
> > My questions are: 
> > 
> > 1) Why doesn't the TCP sender send any packets even though
> >    the number of outstanding packets is 0 in the latter case?
> >    It does send packets in the first case, whenever the number of
> >    outstanding packets is less than cwnd.
> > 
> >    The only thing I can think of is that in the first case ssthresh is
> >    changed from 11 to 23, while in the latter case ssthresh is changed
> >    from 23 to 11. But that should be irrelevant.
> > 
> > 2) cwnd doesn't increase when receiving a dupACK. That sounds reasonable,
> >    but it is not standard stuff, right? When estimating the RTO, Karn's
> >    algorithm doesn't count ACKs for retransmitted packets. Is there a
> >    similar rule for dupACKs, or is it just an optimization in the NS
> >    implementation?
> > 
> > 
> > Please drop me few lines if you have some ideas. Thanks a lot.
> > 
> > 
> > Yingfei 
> >    
> > 
> > 
> > 
> 
> Guo, Liang 
> 
> [email protected]                     Dept. of Comp. Sci., Boston Univ.,
> (617)353-8924 (O)                  111 Cummington St., MCS-217,
> (617)375-9206 (H)                  Boston, MA 02215
> 

===================================================
Yingfei Dong
4-192 EECS Building,
200 Union Street, SE            Tel: 612-626-7526
Minneapolis, MN 55455           FAX: 612-625-0572
===================================================