
NS2 modifications for enhanced recovery



Hello all,

I have modified the Reno and NewReno code to implement enhanced 'right-edge' recovery for clients with small windows (see www.eecs.harvard.edu/~htk/paper/infocom-tcp-final-198.pdf), and have noticed the expected decrease in timeouts.  However, the drop in RTOs persists even with large windows (above 20).  This is not what I expected; any ideas why?

I believe the only code that needed changing was in the recv() function, in the places shown below.  Can someone look at these changes and tell me whether they are correct (and sufficient to implement this modification)?  Basically I just increase the congestion window so the extra packets go out, but I'm not sure exactly when to shrink cwnd_ back again, or whether we need to adjust dupacks_.

Note that NUMDUPACKS = 3, and er_sent_ counts the packets that have been 'slipped in' to improve the chances of getting 3 dupacks and entering recovery mode (for example, with a window of 3 and the first segment lost, at most 2 dupacks can come back, one short of the fast-retransmit threshold).

Thanks,
Tim

------------
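
For reference, the extra state the changes rely on is declared roughly like this (a sketch only; the exact header, base class and OTcl binding may differ):

#include "tcp-reno.h"   // for RenoTcpAgent; adjust to wherever it is declared in your tree

#define NUMDUPACKS 3    // standard dupack threshold for fast retransmit

class ERRenoTcpAgent : public RenoTcpAgent {
public:
  ERRenoTcpAgent() : er_sent_(0) {}
  virtual void recv(Packet *pkt, Handler *h);
protected:
  int er_sent_;   // packets "slipped in" while trying to force dupacks
};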


void ERRenoTcpAgent::recv(Packet *pkt, Handler*)
{
....existing code....

// received a good ack
if (tcph->seqno() > last_ack_) {
  
  // ***** Added for enhanced small-window recovery (START)*****
  // Number of segments this ack covers; compute it before
  // recv_newack_helper() advances last_ack_.
  int numacked = tcph->seqno() - last_ack_;
  // ***** Added for enhanced small-window recovery (END)*****

  // Let Reno take care of its business
  recv_newack_helper(pkt);

  // ***** Added for enhanced small-window recovery (START)*****
  if (numacked < er_sent_) {
    // if the number acked didn't cover those "slipped in" packets, then 
    // we need to deduct them from the current number of dupacks.
    dupacks_ -= numacked;
  }
  else   // otherwise it is safe to reset the dupack counter
     dupacks_ = 0;

  // in any case, reset the number of "slipped in" packets
  er_sent_ = 0;
  // ***** Added for enhanced small-window recovery (END)*****
  
....existing code....
  
 // received a duplicate ack (bad)
 } else if (tcph->seqno() == last_ack_)  {
    
....existing code....
  
  // if we have reached our dupack limit (3), then start recovery!
  if (dupacks_ == NUMDUPACKS) {
  
     // ***** Added for enhanced small-window recovery (START)*****
     // before we start recovery, "undo" any packets that were slipped in
     cwnd_ -= er_sent_;
     // ***** Added for enhanced small-window recovery (END)*****
     
     dupack_action();
     dupwnd_ = NUMDUPACKS;
  }   

  // More than 3 dupacks? Then just increase the dup window
  else if (dupacks_ > NUMDUPACKS) {
     ++dupwnd_;   
  } 
  
  // ***** Added for enhanced small-window recovery (START)*****
  // If we have only seen the first or second dupack, then let's try
  // to increase the congestion window by one and send out an additional
  // packet, to prompt the receiver into sending more dupacks.
  else if (dupacks_ >= 1) {

    // Try to open the congestion window (cwnd_) by 1
    if (opencwnd() == true) {
      // if we could, then send an additional packet
      output (t_seqno_++,0);

      er_sent_++;   // increase our 'slipped-in' counter
    }
  }
  // ***** Added for enhanced small-window recovery (END)*****  
 }

....existing code....

 if (dupacks_ == 0 || dupacks_ > NUMDUPACKS - 1)
        send_much(0, 0, maxburst_);
}
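
On the question of when to shrink cwnd_ back: besides the "undo" when recovery starts (above), I'm guessing the inflation also has to be undone if a retransmission timeout fires while packets are still slipped in.  Something like the following, an untested sketch that assumes ERRenoTcpAgent derives from RenoTcpAgent (the base-class call may need adjusting):

// Untested sketch: undo any "slipped-in" inflation before the normal
// timeout processing, so slowdown() starts from the real congestion window.
void ERRenoTcpAgent::timeout(int tno)
{
  if (tno == TCP_TIMER_RTX && er_sent_ > 0) {
     // remove the packets we slipped in while fishing for dupacks
     if (cwnd_ > er_sent_)
        cwnd_ -= er_sent_;
     er_sent_ = 0;
  }
  RenoTcpAgent::timeout(tno);
}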


The scenario I used for testing is shown below:

8 high-speed links (s1-s8 to r1) feeding a single low-bandwidth bottleneck (r1-k1), with a router queue limit of 50 packets.

Class Topology/netmany -superclass NodeTopology/manynodes
Topology/netmany instproc init ns {
    $self next $ns
    $self instvar node_
    $ns duplex-link $node_(s1) $node_(r1) 8Mb 15ms DropTail
    $ns duplex-link $node_(s2) $node_(r1) 8Mb 15ms DropTail
    $ns duplex-link $node_(s3) $node_(r1) 8Mb 15ms DropTail
    $ns duplex-link $node_(s4) $node_(r1) 8Mb 15ms DropTail
    $ns duplex-link $node_(s5) $node_(r1) 8Mb 15ms DropTail
    $ns duplex-link $node_(s6) $node_(r1) 8Mb 15ms DropTail
    $ns duplex-link $node_(s7) $node_(r1) 8Mb 15ms DropTail
    $ns duplex-link $node_(s8) $node_(r1) 8Mb 15ms DropTail
    $ns duplex-link $node_(r1) $node_(k1) 800Kb 100ms RED
    
    # Router Queue size is 50 
    $ns queue-limit $node_(r1) $node_(k1) 50
    $ns queue-limit $node_(k1) $node_(r1) 50
}

Is this a reasonable scenario?

Thanks,
Tim
-----------------------------------------------------------------------
Tim Schweitzer                     [email protected]
Enterprise Solutions               ph: ESN 667-8151 (612-932-8151)