615 Project 1
Fall 2002
The goal of this project is to learn to use the ns2 network
simulator. Below is a set of exercises, in increasing order
of difficulty, to accomplish this task.
First, you will need to acquire and install the ns2 sources.
You can obtain them from the ns2 distribution site.
Once you have installed and compiled ns2, it's time to run some
simulations through it. In the examples below, replace the
ns2 version number (2.1b8a) with the version number that
came with the ns2 installation you are using.
- Run the following examples to get a rough idea of how ns and nam work:
- ns ns-allinone-2.1b8a/ns-2.1b8a/tcl/ex/nam-example.tcl
- This is just a demo of nam's capabilities. What happens at time 4?
- ns ns-allinone-2.1b8a/ns-2.1b8a/tcl/ex/tg.tcl
- This showcases the different default traffic generation schemes
in ns. What are the traffic patterns being generated in this example?
- ns ns-allinone-2.1b8a/ns-2.1b8a/tcl/ex/mcast.tcl
- This shows the operation of a wired multicast algorithm. What
happens at time 1?
- ns ns-allinone-2.1b8a/ns-2.1b8a/tcl/ex/wireless-test.tcl
- This simulates a wireless ad hoc network! The resulting trace
is in out-test.tr
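After running the wireless example, a quick way to get oriented in the trace is to tally events by type. Below is a small sketch, assuming the old CMU wireless trace format, where the first field of each line is the event code (s = send, r = receive, D = drop, f = forward); newer ns2 trace formats differ, so check a few lines of your trace first.

```shell
# Count trace events by type. Assumes the first whitespace-separated
# field of each trace line is the event code (true of the old CMU
# wireless format; verify against your ns2 version's trace output).
event_counts() {
    awk '{ count[$1]++ } END { for (e in count) print e, count[e] }' "$1" | sort
}

# usage: event_counts out-test.tr
```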
- Simulate 10 CBR source-sink pairs communicating over DSR, using a field
size of 1000x1000m, a 250m radio range, random waypoint mobility, and
constant bit rate sources. Measure the simulation time for 10, 30, 50, 100,
150, and 200 nodes. You can use the Unix time command for the
measurements. How does ns2 scale with increasing numbers of nodes?
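The timing runs can be scripted so that each node count is measured the same way. A sketch follows; the scenario script name wireless-dsr.tcl and its node-count argument are illustrative assumptions, not part of ns2 — substitute your own scenario script and its interface.

```shell
# Time one simulator run per node count. The simulator command is
# passed in as a single string, e.g.: run_timings "ns wireless-dsr.tcl"
# (wireless-dsr.tcl is a hypothetical scenario script assumed to read
# the node count from its first argument).
run_timings() {
    sim="$1"
    for n in 10 30 50 100 150 200; do
        echo "timing $n nodes"
        # the Unix time command reports real/user/sys on stderr
        time $sim "$n" > /dev/null
    done
}

# real use: run_timings "ns wireless-dsr.tcl"
```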
- The DSR paper examined a field size of 1500x300m, with a 250m
radio range. First, duplicate one of their protocol comparison
graphs. Next, change the field size to 1500x1500m (while also increasing
the number of nodes to keep density constant) so the network is no
longer linear and caching is less likely to be useful. Is there a
qualitative difference in the results?
Some people have noticed that keeping density constant at a field size of
1500x1500m requires 250 nodes, at which point ns2 runs too slowly to collect
any measurements (as your timing measurements from the previous question will
show). So, you can scale the field size down (say, to 1000x1000m or
750x750m), which should make the simulation run much faster and hopefully
complete before the deadline!
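The DSR comparison graphs plot packet delivery ratio, which can be recovered from a trace by comparing agent-level sends against receives. A sketch, assuming the old wireless trace format in which field 4 is the layer (AGT) and field 7 the packet type; field positions vary across ns2 versions, so check a few lines of your trace before relying on these.

```shell
# Packet delivery ratio: agent-level cbr receives / sends.
# Assumes old-format wireless trace lines like:
#   s 1.00 _0_ AGT --- 0 cbr 512 [...]
pdr() {
    awk '$4 == "AGT" && $7 == "cbr" {
             if ($1 == "s") sent++
             else if ($1 == "r") recvd++
         }
         END { if (sent) printf "%.3f\n", recvd / sent; else print "no sends" }' "$1"
}

# usage: pdr dsr-run1.tr
```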
- Set up a linear, static (i.e. no mobility) network of six
nodes, connected together over 802.11b. That is, A can talk to B can
talk to C can talk to D can talk to E can talk to F, and there are no
other links in the network. Set up a constant-bit-rate (CBR) source on
node B, with a corresponding sink on E. Set up another CBR source-sink
pair between A and F. Run the system for a sufficiently long time to
determine statistically significant usage measurements, and compare the
bandwidth achieved by the B-E flow to the bandwidth achieved by the
A-F flow. Is there a substantial difference between the bandwidths
achieved by the two flows? Are they efficiently using the maximum
bandwidth available under 802.11b? Describe why your findings make
sense. (Hint: think about the consequences of the RTS/CTS exchange
used by the 802.11 MAC layer protocol.)
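Each flow's achieved bandwidth can be estimated by summing the bytes in agent-level receive events at that flow's sink node. A sketch, again assuming the old wireless trace format (field 2 = time, field 3 = node id, field 8 = packet size in bytes); adjust the field positions to match your trace.

```shell
# Achieved bandwidth (kbit/s) of a CBR flow, measured at its sink.
# Assumes old-format wireless trace lines like:
#   r 12.34 _4_ AGT --- 7 cbr 512 [...]
flow_kbps() {
    awk -v sink="_$2_" '
        $1 == "r" && $3 == sink && $4 == "AGT" && $7 == "cbr" {
            bytes += $8
            if (first == "") first = $2
            last = $2
        }
        END {
            if (last > first) printf "%.1f\n", bytes * 8 / (last - first) / 1000
            else print 0
        }' "$1"
}

# usage: flow_kbps out.tr 4    # if node 4 is E, the sink of the B-E flow
```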
- Set up a simulation, using a field size of 1000x1000m, a 250m
radio range, random waypoint mobility, the 802.11b wireless physical
layer, 30 nodes, TCP for all communication, and {DSR, AODV and TORA}
underneath. The nodes repeatedly engage in TCP transactions (at a
reasonable rate, e.g. 10 per second), where one node initiates a TCP
connection to another node and sends {1KB, 6KB, 30KB, 60KB} of
data. Determine the mean latency and bandwidth as a function of
routing protocol and transaction size. Does the interaction of any of
the routing protocols with TCP cause any anomalies?
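Once per-transaction completion latencies have been extracted from the traces (one number per line, by whatever post-processing you prefer), the mean latency and the implied mean bandwidth reduce to simple arithmetic. A sketch; the latency file name and byte count in the usage line are illustrative.

```shell
# Mean of a one-latency-per-line file (latencies in seconds).
mean_latency() {
    awk '{ sum += $1; n++ } END { if (n) printf "%.3f\n", sum / n }' "$1"
}

# Mean per-transaction bandwidth in kbit/s, given the latency file ($1)
# and the transaction size in bytes ($2).
mean_kbps() {
    awk -v bytes="$2" '
        { sum += bytes * 8 / ($1 * 1000); n++ }
        END { if (n) printf "%.1f\n", sum / n }' "$1"
}

# usage: mean_latency dsr-6KB.lat ; mean_kbps dsr-6KB.lat 6144
```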
You should now be a fairly decent ns2 user.