
Re: [ns] Yet Another Seg. Fault




Hi Debo,
    Thanks for your enthusiasm.
    
    I've made lots of modifications since I sent that mail 
to the ns mailing list; here I've attached what you might like to see.
    
    Below I describe my problem again, so that my points
don't get lost in a poor exposition.
<1> First I downloaded wf2q+.cc and added wf2q+.o to the Makefile
before running "make". The source code was missing the declaration
"int off_cmn_;", but that one was trivial to fix.
<2> Then I got a segmentation fault after changing Agent/CBR to an
Agent/UDP connection with Application/Traffic/CBR and running the
scripts. gdb showed that something went wrong after the line
"hdr_ip*  hip  = (hdr_ip*)pkt->access(off_ip_);" was executed
in enque() of wf2q+.cc.
    <2_a> Simply adding an argument named "cbri" (any name is fine)
to the "build-udp" argument list and turning $cbr in the original
utils.tcl into $cbr(cbri) made it work. A surprise, though.
    <2_b> Alternatively, I changed all the lines using
pkt->access(off_ip_) or pkt->access(off_cmn_) to the hdr::access(pkt)
form and reverted the changes made in the step above. That passed too.
    
    Hence I can't figure out why these two "access" methods differ
from each other, or why the change in step <2_a> happened to make it work.

> Could you send me your script. I will check it out.
> 
> debo
#
# Carnegie Mellon University
# 1999-2000
# Shahzad Ali
#
# Test script for wf2q+.
# Illustrates the behavior of the worst-case-fair weighted fair 
# queueing (WF2Q+, implemented by wf2q+.cc) policy in the presence 
# of 3 UDP flows with different parameters 
# 
#     source, flow 0 (UDP)         sink, flow 0
#        \                              /
#         \                            /
#          n0 ----------------------- n1
#         /                            \
#        /                              \
#     source, flow 1 (UDP)         sink, flow 1
#     source, flow 2 (UDP)         sink, flow 2

source utils.tcl

# create an instance of the simulator
set ns [new Simulator]
set nf [open out.nam w]
$ns namtrace-all $nf
# create nodes
set n0 [$ns node]
set n1 [$ns node]

$ns color 0 red
$ns color 1 blue
$ns color 2 black
$ns duplex-link $n0 $n1 1Mb 100ms WF2Q

# set sizes of queues allocated to flows 0, 1, and 2 at the output
# port of n0 to 5000 bytes
# 
# NOTE: first we get a handle to the WF2Q queue discipline
set q [$ns get-queue $n0 $n1]
$q set-queue-size 0 5000
$q set-queue-size 1 5000
$q set-queue-size 2 5000
# Set the weight of the flows
$q set-flow-weight 0 2 
$q set-flow-weight 1 1
$q set-flow-weight 2 1


# build the three UDP flows.  The first two send 1000-byte packets at
# 1 Mbps and the third sends 500-byte packets at roughly 0.6 Mbps
build-udp $n0 $n1 1000 0.008 0 0.1 10
build-udp $n0 $n1 1000 0.008 1 0.2 10
build-udp $n0 $n1 500 0.0067 2 0.3 10

#
# Create a trace and arrange for all link
# events to be dumped to "out.tr"
#
set tf [open out.tr w]
$ns trace-queue $n0 $n1 $tf
$ns at 10.0 "finish"

# flush all trace data when finish
proc finish {} {
        global ns tf nf
        $ns flush-trace
        close $nf
        close $tf
        exit 0
}

# start simulation
#debug 1
$ns run
#
# Carnegie Mellon University
# 1999-2000
#
# Shahzad Ali ([email protected])
#
# File implementing various procedures (e.g., creating TCP and UDP flows)

# Create a TCP flow 
# - type - TCP type (TCP is by default TCP Tahoe) 
# - src - TCP's source node
# - dest - TCP's destination node
# - pktSize - packet size
# - window  - maximum window size; if window is passed as 0, then
#             the default window size is set to 20 
# - flowid    - id associated with the TCP flow
# - startTime - time when the TCP source starts transmitting
# - stopTime  - time when the TCP source stops transmitting

proc build-tcp { type src dest pktSize window flowid startTime stopTime } {
    global ns

    # build tcp source
    if { $type == "TCP" } {
      set tcp [new Agent/TCP]
      set snk [new Agent/TCPSink]
    } elseif { $type == "Reno" } {
      set tcp [new Agent/TCP/Reno]
      set snk [new Agent/TCPSink]
    } elseif { $type == "Sack" } {
      set tcp [new Agent/TCP/Sack1]
      set snk [new Agent/TCPSink/Sack1]
    } elseif  { $type == "Newreno" } {
      set tcp [new Agent/TCP/Newreno]
      set snk [new Agent/TCPSink]
    } else {
      puts "ERROR: Invalid tcp type"
      exit 1
    }
    $ns attach-agent $src $tcp

    #build tcp sink
    $ns attach-agent $dest $snk

    # connect source to sink
    $ns connect $tcp $snk

    # init. tcp parameters
    if { $pktSize > 0 } {
      $tcp set packetSize_ $pktSize
    }
    $tcp set class_ $flowid
    if { $window > 0 } {
      $tcp set window_ $window
    } else {
      # default in ns-2 version 2.0
      $tcp set window_ 20
    }

    set ftp [new Source/FTP]
    $ftp set agent_ $tcp
    $ns at $startTime "$ftp start"
    $ns at $stopTime "$ftp stop"

    return $tcp
}

# Create a UDP flow 
# - src - UDP's source node
# - dest - UDP's destination node
# - pktSize  - packet size
# - interval - the average interval between two packets (seconds)
# - flowid    - id associated with the UDP flow
# - startTime - time when the UDP source starts transmitting
# - stopTime  - time when the UDP source stops transmitting

proc build-udp {src dest pktSize interval id startTime stopTime} {
    global ns

    # build udp source
    set udp [new Agent/UDP]
    $ns attach-agent $src $udp

    set cbr [new Application/Traffic/CBR]
    $cbr set packetSize_ $pktSize
    $cbr set interval_ $interval
    $cbr set random_   1
    $cbr attach-agent $udp

    # build the null sink
    set null [new Agent/Null]
    $ns attach-agent $dest $null

    # connect the UDP source to the null sink
    $ns connect $udp $null

    $udp set fid_ $id
    $ns at $startTime "$cbr start"
    $ns at $stopTime "$cbr stop"

    return $udp
}

#
# Return the reference to the link between node1 and node2
#
Simulator instproc get-link { node1 node2 } {
    $self instvar link_
    set id1 [$node1 id]
    set id2 [$node2 id]
    return $link_($id1:$id2)
}

#
# Return the reference to the output queue at node1 
# for the link between node1 and node2
# 
# NOTE: In most cases the output queue coincides with the 
# scheduling discipline associated with that output
#
Simulator instproc get-queue { node1 node2 } {
    set l [$self get-link $node1 $node2]
    return [$l queue]
}