
Re: [ns] compute-mroutes



Hi,

Maybe this can help: I have noticed that multicast routing and
hierarchical addressing are not fully compatible.  To my understanding,
you cannot have a multicast source with an address X.Y.Z where X and Y
are different from 0.  In that situation, there is no TCP output in the
trace.

Could someone confirm this?

I suspect the problem lies in classifier-mcast.cc, specifically in the
classify() function.


classify() receives packets and identifies the source.  The source is of
type nsaddr_t.

It then calls the OTcl interpreter:
tcl.evalf("%s new-group %ld %ld %d cache-miss",  name(), src, dst,
iface);

"src" is therefore passed to the OTCL interpreter which translates it as
the INDEX  in array Node_.  This work fine for flat addressing but fails
in hierarchical addressing.

Could the ns experts at ISI have a look at this, please?


Thanks,
Thierry.





> When running a multicast example using dynamic links I call the function
> compute-mroutes that should recompute all the multicast entries.
> However, when I establish a TCP connection, one of the sinks stops
> receiving packets.
> I'm attaching my script, maybe someone found something similar.
> Cheers,
> Ana
> _____________________________________________________________________
> 
> #Simulator instance
> set ns [new Simulator]
> 
> #We allow multicast
> $ns multicast
> 
> #We define hierarchical format
> $ns set-address-format hierarchical
> 
> #Initial setup - set addressing to hierarchical
> AddrParams set domain_num_ 9
> lappend cluster_num 1 1 1 1 1 1 1 1 1
> AddrParams set cluster_num_ $cluster_num
> lappend eilastlevel 1 1 1 1 1 1 1 1 1
> AddrParams set nodes_num_ $eilastlevel
> 
> #We establish the nodes
> set node0 [$ns node 0.0.0]
> set node1 [$ns node 1.0.0]
> set node2 [$ns node 2.0.0]
> set node3 [$ns node 3.0.0]
> set node4 [$ns node 4.0.0]
> set node5 [$ns node 5.0.0]
> set node6 [$ns node 6.0.0]
> set node7 [$ns node 7.0.0]
> set node8 [$ns node 8.0.0]
> 
> #Open a trace file
> set namtrace [open multicast2.nam w]
> $ns namtrace-all $namtrace
> set trace [open multicast2.tr w]
> $ns trace-all $trace
> 
> #We define a multicast group
> set group [Node allocaddr]
> 
> #We define the links between the nodes
> $ns duplex-link $node0 $node1 1.5Mb 10ms DropTail
> $ns duplex-link $node1 $node2 1.5Mb 10ms DropTail
> $ns duplex-link $node1 $node3 1.5Mb 10ms DropTail
> $ns duplex-link $node2 $node4 1.5Mb 10ms DropTail
> $ns duplex-link $node2 $node5 1.5Mb 10ms DropTail
> $ns duplex-link $node3 $node6 1.5Mb 10ms DropTail
> $ns duplex-link $node3 $node7 1.5Mb 10ms DropTail
> $ns duplex-link $node4 $node8 1.5Mb 10ms DropTail
> $ns duplex-link $node5 $node8 1.5Mb 10ms DropTail
> $ns duplex-link $node6 $node8 1.5Mb 10ms DropTail
> $ns duplex-link $node7 $node8 1.5Mb 10ms DropTail
> 
> #We define the multicast protocol
> set mproto CtrMcast
> set mrthandle [$ns mrtproto $mproto]
> 
> $mrthandle set_c_rp $node1
> 
> #We establish the TCP connection
> set tcp0 [new Agent/TCP]
> $ns attach-agent $node0 $tcp0
> 
> set sink0 [new Agent/TCPSink]
> $ns attach-agent $node1 $sink0
> 
> $ns connect $tcp0 $sink0
> 
> set ftp0 [new Application/FTP]
> 
> $tcp0 set dst_addr_ $group
> $tcp0 set dst_port_ 0
> 
> $ns attach-agent $node2 $sink0
> $ns attach-agent $node3 $sink0
> $ns attach-agent $node4 $sink0
> $ns attach-agent $node5 $sink0
> $ns attach-agent $node6 $sink0
> $ns attach-agent $node7 $sink0
> $ns attach-agent $node8 $sink0
> 
> $ftp0 attach-agent $tcp0
> 
> #When the ftp connection starts
> $ns at 1.1  "$ftp0 start"
> $ns at 20.0  "$ftp0 stop"
> 
> #When the nodes join the tree
> $ns at 1.1 "$node2 join-group $sink0 $group"
> $ns at 1.1 "$node3 join-group $sink0 $group"
> $ns at 1.1 "$node4 join-group $sink0 $group"
> $ns at 1.1 "$node5 join-group $sink0 $group"
> $ns at 1.1 "$node6 join-group $sink0 $group"
> $ns at 1.1 "$node7 join-group $sink0 $group"
> $ns at 1.1 "$node8 join-group $sink0 $group"
> 
> #$ns at 6.5 "$node4 leave-group $sink0 $group"
> #$ns at 11.5 "$node5 leave-group $sink0 $group"
> #$ns at 16.5 "$node6 leave-group $sink0 $group"
> 
> #When the simulation finishes
> $ns at 20.0 "finish"
> 
> #Define a 'finish' procedure
> proc finish {} {
>     global ns trace namtrace
>     $ns flush-trace
>     close $namtrace
>     close $trace
>     exec nam multicast2.nam &
>     exit 0
> }
> 
> #Link dynamics: bring links up and down during the run
> $ns rtmodel-at 0.5 up $node8 $node4
> $ns rtmodel-at 0.5 down $node8 $node5
> $ns rtmodel-at 0.5 down $node8 $node6
> $ns rtmodel-at 0.5 down $node8 $node7
> 
> $ns rtmodel-at 6.6 down $node8 $node4
> $ns rtmodel-at 6.4 up $node8 $node5
> $ns at 6.5 "$mrthandle compute-mroutes"
> 
> $ns rtmodel-at 11.6 down $node8 $node5
> $ns rtmodel-at 11.4 up $node8 $node6
> $ns at 11.5 "$mrthandle compute-mroutes"
> 
> $ns rtmodel-at 16.6 down $node8 $node6
> $ns rtmodel-at 16.4 up $node8 $node7
> $ns at 16.5 "$mrthandle compute-mroutes"
> 
> #Start the simulation
> puts "Starting Simulation..."
> 
> #Run the simulation
> $ns run

--
* mailto:[email protected]  Tel +33 (0) 4 76 61 52 69 
* INRIA Rhone-Alpes Projet PLANETE       (fax 52 52) 
* http://www.inrialpes.fr/planete/people/ernst/Welcome.html