
loss-models for computational bottlenecks?



Hi all,

is there support in ns-2 for loss models in which packets get lost due to a
processing-power bottleneck rather than a queue overflow? I know there were
link types like "lossy-uniform" and "lossy-det" etc. in ns-1.4, but I can't
find anything similar in ns-2.

In my simulations, packet loss appears to be _very_ low (with pure TCP
traffic) while bandwidth utilization is still pretty good, which is a
strong indicator that TCP does a pretty good job. Also, real measurements
show that packet loss occurs mainly close to the leaves of the MBone (where
workstations etc. are often used for routing and can become a performance
bottleneck if they are timesharing systems) rather than near the backbone
links (where dedicated routers are employed).

There are some "session-level loss" classes, e.g. in chapter 10 of the
latest ns-2 draft documentation, but I'm looking for something at the node
level (or link level).
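In case it helps to show what I mean, the closest sketch I can come up with
is something like the following (assuming an ns-2 snapshot that includes the
ErrorModel class; n0, n1 and the 1% rate are only placeholders). It only
gives a fixed random loss on the link, though, not a loss that depends on
how loaded the router is:

    # Attach a packet-level error model to the link n0 -> n1; the 1% rate
    # is a stand-in for whatever loss the processing bottleneck would cause.
    set em [new ErrorModel]
    $em unit pkt                               ;# drop whole packets, not bits
    $em set rate_ 0.01                         ;# placeholder loss probability
    $em ranvar [new RandomVariable/Uniform]    ;# uniform loss, cf. "lossy-uniform"
    $em drop-target [new Agent/Null]           ;# dropped packets are discarded
    $ns lossmodel $em $n0 $n1                  ;# insert the model into the link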

    -Chris.