27.2 Write a Test Suite

You can take one of the test suites under ~ns/tcl/test as a template when writing your own, for example the test suite for the wireless LAN module (test-all-wireless-lan, test-suite-wireless-lan.tcl, and test-output-wireless-lan).

To write a test suite, you first need to write the shell script (test-all-xxx). In the shell script, you specify the module to be tested, the name of the ns Tcl script, and the output subdirectory. You can run this shell script in quiet mode. Below is an example (test-all-wireless-lan):

   # To run in quiet mode:  "./test-all-wireless-lan quiet".

   f="wireless-lan"             # Specify the name of the module to test.
   file="test-suite-$f.tcl"     # The name of the ns script.
   directory="test-output-$f"   # Subdirectory to hold the test results.
   version="v2"                 # Specify the ns version.

   # Pass the arguments to test-all-template1, which will run through
   # all the test cases defined in test-suite-wireless-lan.tcl.
   ./test-all-template1 $file $directory $version "$@"
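
With this script in place, the whole suite can be run from the command line, either with full output or in the quiet mode mentioned in its header comment:

   # Run every test case defined in test-suite-wireless-lan.tcl:
   ./test-all-wireless-lan

   # The same, but with the per-test output suppressed:
   ./test-all-wireless-lan quiet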

You also need to create several test cases in the ns script (test-suite-xxx.tcl) by defining a subclass of TestSuite for each test. For example, in test-suite-wireless-lan.tcl, each test case uses a different ad hoc routing protocol. They are defined as:

   Class TestSuite

   # wireless model using Destination-Sequenced Distance Vector (DSDV) routing
   Class Test/dsdv -superclass TestSuite

   # wireless model using Dynamic Source Routing (DSR)
   Class Test/dsr -superclass TestSuite
   ... ...

Each test case is basically a simulation scenario. In the superclass TestSuite, you can define functions such as init and finish to do the work required by every test case, for example setting up the network topology and ns tracing. The test-specific configuration is defined in the corresponding subclass. Each subclass also has a run function to start the simulation.

   TestSuite instproc init {} {
     global opt tracefd topo chan prop
     global node_ god_
     $self instvar ns_ testName_
     set ns_         [new Simulator]
      ... ...
   }

   TestSuite instproc finish {} {
     $self instvar ns_
     global quiet

     $ns_ flush-trace

     puts "finishing.."
     exit 0
   }

   Test/dsdv instproc init {} {
     global opt node_ god_
     $self instvar ns_ testName_
     set testName_       dsdv
     ... ...

     $self next
     ... ...

     $ns_ at $opt(stop).1 "$self finish"
   }

   Test/dsdv instproc run {} {
     $self instvar ns_
     puts "Starting Simulation..."
     $ns_ run
   }

All the tests are started by the runtest procedure in the ns script:

   proc runtest {arg} {
     global quiet
     set quiet 0

     set b [llength $arg]
     if {$b == 1} {
        set test $arg
     } elseif {$b == 2} {
        set test [lindex $arg 0]
        if {[lindex $arg 1] == "QUIET"} {
           set quiet 1
        }
     } else {
        usage
     }
     set t [new Test/$test]
     $t run
   }

   global argv arg0
   runtest $argv
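
Given the argument handling in runtest, a single test case can also be run by hand; test-all-template1 presumably builds a similar command for each test (the exact invocation is an assumption here):

   # Run the dsdv test case directly, with full output:
   ns test-suite-wireless-lan.tcl dsdv

   # Run it quietly, as test-all-template1 presumably does:
   ns test-suite-wireless-lan.tcl dsdv QUIET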

When you run the tests, trace files are generated and saved to the output subdirectory. These trace files are compared against the reference traces that come with the test suite. If the comparison shows a difference, the test fails.
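
The comparison itself is handled inside test-all-template1; the following is only a minimal sketch of the idea, with hypothetical file names (dsdv.tr for the freshly generated trace, test-output-wireless-lan/dsdv for the reference):

   # Hypothetical pass/fail check; the real logic and file names live
   # in test-all-template1.
   if cmp -s dsdv.tr test-output-wireless-lan/dsdv; then
      echo "Test output agrees with reference output"
   else
      echo "Test failed: dsdv"
   fi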
