
DRR (Deficit Round Robin), FQ, bug in SFQ



Hi, all,

I have asked several questions but got no answers so far. I am wondering if
anyone is familiar with these files. Also, the archive search is not
working, so I have to ask many questions here; I am sorry if that bothers
you.

As I understand it, if I set MASK to 1, I can use DRR to distinguish
flows from different source nodes. This is what I want, because I want all
source nodes' flows to go through a multiplexor. I also notice that what DRR
tries to do is send a nearly fixed number of bytes for each flow in every
round, while FQ sends packets for each flow fairly according to their
finish times. So they just use different algorithms to be fair to each
queue, right?
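To make my question concrete, here is my understanding of one DRR service round as a minimal sketch (this is my own illustration of the algorithm, not the actual ns-2 drr.cc code): each flow earns a fixed byte quantum per round and sends head-of-line packets while its deficit covers their size.

```cpp
#include <cassert>
#include <deque>
#include <vector>

// One active flow: a queue of packet sizes plus leftover byte credit.
struct Flow {
    std::deque<int> pkts;   // packet sizes in bytes
    int deficit = 0;        // unused byte credit carried between rounds
};

// Serve one DRR round over all flows; returns total bytes sent this round.
int drr_round(std::vector<Flow>& flows, int quantum) {
    int sent = 0;
    for (Flow& f : flows) {
        if (f.pkts.empty()) { f.deficit = 0; continue; }  // idle flows keep no credit
        f.deficit += quantum;
        while (!f.pkts.empty() && f.pkts.front() <= f.deficit) {
            f.deficit -= f.pkts.front();
            sent += f.pkts.front();
            f.pkts.pop_front();
        }
        if (f.pkts.empty()) f.deficit = 0;  // standard DRR: empty queue forfeits credit
    }
    return sent;
}
```

So over many rounds each backlogged flow gets close to `quantum` bytes per round regardless of its packet sizes, which is the "almost fixed number of bytes" behavior I described.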

Can FQ put all flows from the same source node in one queue? In which file
can I find flowid(), which FQ uses to dispatch packets? Can I define this
myself? I mean, can I give packets from the same source node the same
flowid so that they go into the same queue?
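What I have in mind is something like the following classifier (the names here are mine for illustration, not the actual ns-2 functions): when masking is on, only the source address feeds the hash, so every packet from the same source node lands in the same queue.

```cpp
#include <cassert>

// Illustrative flow classifier (hypothetical names, not ns-2's API):
// with mask_src_only set, the bucket depends only on the source address,
// so all flows from one source node share one queue.
int queue_for(int src, int fid, bool mask_src_only, int nbuckets) {
    int key = mask_src_only ? src : (src * 31 + fid);  // toy hash for the sketch
    return key % nbuckets;
}
```

With the mask on, two flows with different flowids but the same source map to the same bucket; with it off, they generally land in different buckets.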

In SFQ, can I use a hash() method similar to the one in drr.cc? Does the
difference from DRR still lie in the way it is fair to each queue? When a
flow exceeds its fair share, SFQ will just drop the packet, right?

Also, I think there is a bug in sfq.cc: in SFQ::clear(), there should be a
--i; after the ++q; in the while (i) loop, otherwise the loop never
terminates. But this function does not seem to be used much.
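To show what I mean, here is the loop shape I believe SFQ::clear() has (reconstructed from memory, not the verbatim sfq.cc source): it walks an array of buckets, and without the decrement the counter in while (i) never reaches zero.

```cpp
#include <cassert>

// Reconstructed sketch of the SFQ::clear() loop, with the fix applied.
// The types and names are placeholders for the real sfq.cc structures.
struct Bucket { int len = 0; };

void clear_buckets(Bucket* q, int i) {
    while (i) {
        q->len = 0;   // empty this bucket
        ++q;          // advance to the next bucket
        --i;          // the fix: without this, while (i) loops forever
    }
}
```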

Please give me some thoughts. Thanks a lot.

Huiwen

> -----Original Message-----
> From: Sandeep Bajaj [mailto:[email protected]]
> Sent: Monday, August 02, 1999 2:18 PM
> To: [email protected]
> Cc: [email protected]
> Subject: Re: DRR (Deficit Round Robin)
> 
> 
> In reply to Gamze Tunali:
> > 
> > Hi all,
> > 
> > I tried to use DRR and it always gave errors like (when I 
> have 31 TCP
> > connections between 2 nodes, there is only 1 hop):
> > 
> >   Collisions between 1 and 11 src addresses
> >   Collisions between 257 and 267 src addresses
> 
> The problem is because the default number of DRR buckets is 
> 10 and you have 31
> sources. So the hash function modulo the number of buckets 
> produces collisions. Please increase the number of DRR 
> buckets appropriately using :
> 
> Queue/DRR set buckets_ <some suitable number greater than 31>
> 
> 
> Sandeep.
> 
> > I think the problem may be the hash function. Why does it use this
> > function:
> >   ((i + (i >> 8) + ~(i>>4)) % ((2<<23)-1))+1;
> > 
> > If this function produces the same number for two different 
> flowids than
> > I guess collision error is given. Could somebody help me to 
> understand
> > what is going on?
> > 
> > Thanks
> > 
> > Gamze Tunali
> > 
> > 
> > 
> 
>