
a lesson about memory consumption in large-scale simulations



In this email I'll briefly explain a lesson we learned from tuning
the memory usage of a large-scale (wrt traffic) simulation. Hopefully
it'll be useful for others. Here, 'ns' refers to the current snapshot.

Originally, tcl/ex/large-scale-web-traffic.tcl implemented the
self-similar web traffic model used in "Dynamics of IP Traffic: A Study
of the Role of Variability and the Impact of Control", SIGCOMM'99. It
was implemented completely in otcl (tcl/http/http-mod.tcl). With 400
HTTP sessions and the parameters given in that script, memory usage
increases linearly with time, and the simulation ends using over 800MB
of memory (see http://www.isi.edu/~haoboy/old-usage-400.ps).

Two factors contributed to this. First, http-mod.tcl allocates various
split objects with 'new' but does not _explicitly_ 'delete' them in
otcl. It turns out that this leaks memory every time a variable holding
such a new-ed split object is reassigned. Second, implementing
everything, including HTTP pages and objects, completely in otcl takes
much more memory than maintaining those complex data structures in C++.
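
To make the first point concrete, below is a minimal otcl sketch of the
leak and its fix. The names (Http, Page/HTTP, page_, new-page) are made
up for illustration and are not the actual http-mod.tcl code; the point
is only the delete-before-reassign pattern for split objects.

    # Hypothetical sketch; class/variable names are illustrative only.
    Http instproc new-page {} {
        $self instvar page_
        # Leaky: reassigning page_ orphans the previous split object,
        # so its C++ shadow object is never freed:
        #     set page_ [new Page/HTTP]
        # Fix: explicitly delete the old split object before reassigning.
        if {[info exists page_]} {
            delete $page_
        }
        set page_ [new Page/HTTP]
    }

The same pattern applies to any instance variable that is repeatedly
assigned a freshly new-ed split object during a run.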

After redesigning the HTTP traffic model and implementing the data
structures completely in C++, we cut memory usage down to a stable
level of around 62MB for a 400-session simulation
(http://www.isi.edu/~haoboy/new-usage-400.ps).
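
As a rough illustration of what this looks like from the script side,
here is a hypothetical sketch (PagePool/Example, Page and set-num-pages
are made-up names, not the actual redesigned API): instead of building
one otcl object per page, the pages live as compact C++ structures
inside a single split object that otcl only configures.

    # Illustrative only; the names below are hypothetical.
    #
    # Memory-hungry style: one full otcl object per page.
    #     for {set i 0} {$i < 10000} {incr i} {
    #         set page($i) [new Page]
    #     }
    #
    # Redesigned style: page records are kept in C++ inside one split
    # object, and otcl only issues configuration commands.
    set pool [new PagePool/Example]
    $pool set-num-pages 10000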

However, the bad news is that moving things into C++ does not help much
in reducing execution time. This is because packet passing in ns is
already done completely in C++, and that is the main cause of the long
execution time (assuming there is enough memory).

The lessons we learned are: (1) be very careful with 'new' and 'delete'
of split objects, and (2) always maintain complex data structures in
C++.

Hope this will be helpful for others. 

- Haobo