
Re: HELP !!! RP node failure ...( ctrmcast )



On Wed, 16 Feb 2000, Nader Salehi wrote:

> There is also another problem.  You are not supposed to hard-code your
> multicast address.  Instead of assigning the address (e.g. 0x8000)
> yourself, you should call:
> 
> set group [Node allocaddr]

I would have thought that hard-coding multicast addresses was dead as
of ns 2.1b6, since they shouldn't work in the now-default 32-bit
address space.
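
For reference, the allocaddr route looks something like this - an
untested sketch with made-up node/agent names, but the [Node allocaddr]
call is the point:

set ns [new Simulator -multicast on]
set n0 [$ns node]
set n1 [$ns node]
$ns duplex-link $n0 $n1 1.5Mb 10ms DropTail
set mrthandle [$ns mrtproto CtrMcast {}]
set group [Node allocaddr]     ;# let ns allocate a valid group address
set udp0 [new Agent/UDP]
$ns attach-agent $n0 $udp0
$udp0 set dst_ $group          ;# sender targets the allocated group
set cbr0 [new Application/Traffic/CBR]
$cbr0 attach-agent $udp0
set rcvr1 [new Agent/Null]
$ns attach-agent $n1 $rcvr1
$ns at 0.2 "$cbr0 start"
$ns at 0.3 "$n1 join-group $rcvr1 $group"
$ns at 1.0 "exit 0"
$ns run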

But a quick look at current CVS shows:

http://www-mash.cs.berkeley.edu/cgi-bin/cvsweb/ns-2/tcl/ex/srm-session.tcl?rev=1.4
[..]
set group 0x8000

http://www-mash.cs.berkeley.edu/cgi-bin/cvsweb/ns-2/tcl/ex/srm-star-session.tcl?rev=1.5
[..]
set group 0x8000

and then there's everything in tcl/ex/newmcast/, which does the same.
Such as, oh, for CtrMcast:
http://www-mash.cs.berkeley.edu/cgi-bin/cvsweb/ns-2/tcl/ex/newmcast/cmcast.tcl?rev=1.11
cmcast.tcl:$udp0 set dst_ 0x8003
cmcast.tcl:$ns at 0.3 "$n1 join-group  $rcvr1 0x8003"
cmcast.tcl:$ns at 0.4 "$n0 join-group  $rcvr0 0x8003"
cmcast.tcl:$ns at 0.45 "$mrthandle switch-treetype 0x8003"
cmcast.tcl:$ns at 0.55 "$n3 join-group  $rcvr3 0x8003"
cmcast.tcl:$ns at 0.65 "$n2 join-group  $rcvr2 0x8003"
cmcast.tcl:$ns at 0.7 "$n0 leave-group $rcvr0 0x8003"
cmcast.tcl:$ns at 0.8 "$n2 leave-group  $rcvr2 0x8003"
cmcast.tcl:$ns at 0.9 "$n3 leave-group  $rcvr3 0x8003"
cmcast.tcl:$ns at 1.0 "$n1 leave-group $rcvr1 0x8003"

so we can see where the idea of hard-coding multicast addresses is
still coming from.

This teaches Entirely The Wrong Thing, and bad examples bite users. As
long as this example code is there, people will keep wasting their time
getting stuck on this in ns 2.1b6.

We need something that executes _every piece_ of the example code
every so often - say every two months - so that a human in the VINT
loop can inspect the output and pick up on all these things that
they'd normally happily ignore.
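
Something as dumb as the following, run from cron, would be a start (a
hypothetical sketch - assumes tclsh plus an ns binary on the path, run
from the top of the ns source tree):

# sweep-examples.tcl: run every example script and dump the output
# somewhere a human will actually look at it.
foreach script [lsort [concat [glob -nocomplain tcl/ex/*.tcl] \
                              [glob -nocomplain tcl/ex/newmcast/*.tcl]]] {
    puts "=== $script ==="
    if {[catch {exec ns $script} output]} {
        # keep going - one broken example shouldn't stop the sweep
        puts "FAILED: $output"
    } else {
        puts $output
    }
}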

(Can anyone list all the various multicast tcl commands that actually
 work? The example code is less than helpful here.)
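
For a start, here's what at least appears in the examples and in this
thread - whether each of these actually works as advertised is exactly
the question ($node, $agent and $mrthandle being whatever your script
calls them):

set group [Node allocaddr]
$node join-group $agent $group
$node leave-group $agent $group
$mrthandle switch-treetype $group
$mrthandle compute-mroutes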

L.

ns - just because you've Read It and it's Offered As An Example 
doesn't mean it's True.

<[email protected]>PGP<http://www.ee.surrey.ac.uk/Personal/L.Wood/>


> Nader
> 
> Polly Huang wrote:
> 
> > On Tue, 15 Feb 2000, Vijayan Ramakrishnan wrote:
> >
> > >
> > > Hi all
> > >
> > >   Suppose I configured node1 & node2 as candidate-RPs
> > >   for a group, say 0x8000 ( using CtrMcast model )
> > >   ( in this case node2 would be the RP for this group )
> > >
> > >   If at some time t, there is a node failure for node2,
> > >   will CtrMcast automatically choose node1 as RP and
> > >   resume the multicast data forwarding ???
> >
> > no. node failure means the data couldn't reach the
> > supposed core (called rp in cmcast). Unless a detailed dynamic core
> > selection mechanism is implemented, you'll have to play with the
> > candidate core (called c-rp) list instead.
> >
> > >
> > >   ( I tried this scenario, but node1 wasn't chosen as RP
> > >      and all the data was lost - is this expected behaviour ? )
> >
> > Yes.
> >
> > >
> > >   I have attached the code.
> > >   Also I get errors when I call the method manually
> > >     ( _o162 unable to dispatch method "compute-mroutes" )
> > >
> > >   NOTE: compute-mroutes works in simple scenarios and there
> > >         is no problem with installation.
> >
> > I ran your m5.tcl. This is what I got:
> > >../../ns m5.tcl
> > using backward compatible Agent/CBR; use Application/Traffic/CBR instead
> > ns: _o162 compute_mroutes: _o162: unable to dispatch method
> > compute_mroutes
> >        ^
> >     while executing
> > "_o162 compute_mroutes"
> >               ^
> > As opposed to what's in your email:
> > ( _o162 unable to dispatch method "compute-mroutes" )
> >                                           ^
> >
> > -Polly
> > >
> > >   ----------------------------------------------
> > >   Any help is greatly appreciated !!!
> > >   I have been stuck on this problem for a few days !!
> > >   ----------------------------------------------
> > >  - vijayan