Tips and tricks

Getting root access

sudo su

I want internet access on my nodes

What do you really want?

Getting NATted IPv4 access to the internet on w-iLab.t

For w-iLab.t, this is enabled by default.

Getting NATted IPv4 access to the internet on Virtual Wall 1

For physical machines on the Virtual Wall 1, use the following route changes:

sudo route del default gw 10.2.15.254 && sudo route add default gw 10.2.15.253
sudo route add -net 10.11.0.0 netmask 255.255.0.0 gw 10.2.15.254
sudo route add -net 10.2.32.0 netmask 255.255.240.0 gw 10.2.15.254

If you are connected through the iGent VPN, you can also add these routes to enable direct access (ssh, scp):

sudo route add -net 157.193.214.0 netmask 255.255.255.0 gw 10.2.15.254
sudo route add -net 157.193.215.0 netmask 255.255.255.0 gw 10.2.15.254
sudo route add -net 157.193.135.0 netmask 255.255.255.0 gw 10.2.15.254
sudo route add -net 192.168.126.0 netmask 255.255.255.0 gw 10.2.15.254
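
The route command comes from the legacy net-tools package. If your image only ships the newer iproute2 tools, the first set of changes above can be made with ip route instead (a sketch; the gateways and subnets are the same as above):

sudo ip route replace default via 10.2.15.253
sudo ip route add 10.11.0.0/16 via 10.2.15.254
sudo ip route add 10.2.32.0/20 via 10.2.15.254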

For VMs on the Virtual Wall 1, use the following route changes:

sudo route add -net 10.2.0.0 netmask 255.255.240.0 gw 172.16.0.1
sudo route del default gw 172.16.0.1 && sudo route add default gw 172.16.0.2

If you are connected through the iGent VPN, you can also add these routes to enable direct access (ssh, scp):

sudo route add -net 157.193.214.0 netmask 255.255.255.0 gw 172.16.0.1
sudo route add -net 157.193.215.0 netmask 255.255.255.0 gw 172.16.0.1
sudo route add -net 157.193.135.0 netmask 255.255.255.0 gw 172.16.0.1
sudo route add -net 192.168.126.0 netmask 255.255.255.0 gw 172.16.0.1
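
After applying these changes, you can check that the new default route is active and that the node actually reaches the internet (a quick check, assuming outbound ICMP is not blocked):

route -n
ping -c 3 8.8.8.8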

Getting NATted IPv4 access to the internet on Virtual Wall 2

For physical machines on the Virtual Wall 2, use the following route changes:

sudo route del default gw 10.2.47.254 && sudo route add default gw 10.2.47.253
sudo route add -net 10.11.0.0 netmask 255.255.0.0 gw 10.2.47.254
sudo route add -net 10.2.0.0 netmask 255.255.240.0 gw 10.2.47.254

If you are connected through the iGent VPN, you can also add these routes to enable direct access (ssh, scp):

sudo route add -net 157.193.214.0 netmask 255.255.255.0 gw 10.2.47.254
sudo route add -net 157.193.215.0 netmask 255.255.255.0 gw 10.2.47.254
sudo route add -net 157.193.135.0 netmask 255.255.255.0 gw 10.2.47.254
sudo route add -net 192.168.126.0 netmask 255.255.255.0 gw 10.2.47.254

For VMs on the Virtual Wall 2, use the following route changes:

sudo route add -net 10.2.32.0 netmask 255.255.240.0 gw 172.16.0.1
sudo route del default gw 172.16.0.1 && sudo route add default gw 172.16.0.2

If you are connected through the iGent VPN, you can also add these routes to enable direct access (ssh, scp):

sudo route add -net 157.193.214.0 netmask 255.255.255.0 gw 172.16.0.1
sudo route add -net 157.193.215.0 netmask 255.255.255.0 gw 172.16.0.1
sudo route add -net 157.193.135.0 netmask 255.255.255.0 gw 172.16.0.1
sudo route add -net 192.168.126.0 netmask 255.255.255.0 gw 172.16.0.1
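
Note that none of the route changes above survive a reboot. If you need them to be re-applied automatically, one option is to add them to a startup script, for example /etc/rc.local on the older Ubuntu images (a sketch for a Virtual Wall 2 VM; adapt the commands to your node type and init system):

#!/bin/sh -e
route add -net 10.2.32.0 netmask 255.255.240.0 gw 172.16.0.1
route del default gw 172.16.0.1 && route add default gw 172.16.0.2
exit 0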

Using custom images on Virtual Wall 1, pcgen1 nodes

If you use the default image UBUNTU12-64-STD or create your custom image from that one, then there is no issue, and you can ignore this.

If you want to use older images or other custom images on the pcgen1 nodes with networking (more specifically, the NVIDIA MCP55 forcedeth interfaces on the 6-interface machines), please make the following changes.

We have to load the forcedeth driver with some options.

Become root on your image and create a file /etc/modprobe.d/forcedeth.conf with the following contents:

options forcedeth msi=0 msix=0 optimization_mode=1 poll_interval=38 max_interrupt_work=40 dma_64bit=0
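
For example, as root you can create this file in one go (a sketch, handy when scripting the image setup):

cat > /etc/modprobe.d/forcedeth.conf << 'EOF'
options forcedeth msi=0 msix=0 optimization_mode=1 poll_interval=38 max_interrupt_work=40 dma_64bit=0
EOF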

To load this also in the initrd, do the following:

uname -r
update-initramfs -u -k 3.2.0-56-generic
(replace 3.2.0-56-generic with the kernel version reported by uname -r)
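
If you prefer not to copy the kernel version by hand, both steps can be combined (a sketch):

update-initramfs -u -k "$(uname -r)"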

ethtool and mii-tool should be renamed or removed from your image (otherwise the Emulab startup scripts try to configure settings which are not needed).
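
For example, as root (a sketch; the exact paths may differ per image, so locate the tools with "which ethtool" and "which mii-tool" first):

mv /sbin/ethtool /sbin/ethtool.disabled
mv /sbin/mii-tool /sbin/mii-tool.disabled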

Then create your new image from that node, and use that image.

This should solve the link issues with these cards.

If you still see link issues, please contact vwall-ops .a.t. atlantis.ugent.be and LET YOUR EXPERIMENT RUN so we can inspect this.

Sharing a LAN between experiments

A LAN can be shared between different experiments.

Behind the scenes, this is implemented by assigning the same VLAN to links in the different experiments. From the perspective of the experiments, this detail is hidden, and it just looks like all nodes on the link are connected to the same switch.

Note that this is a layer 2 link, so the IP addresses of the nodes connected to the shared LAN in both experiments must be on the same subnet, and must all be unique within that subnet.

To use this feature, first a LAN in an existing experiment must be shared. Then, additional experiments can be started with a connection to this LAN.

Step by step instructions:

1. Create an experiment (e.g. 2 nodes) with a LAN that you want to share, and run it. Once it is running (in jFed) and completely ready, right click on a LAN you want to share, and select “Share/Unshare LAN”. In the dialog, give the LAN a unique name. Anyone who knows this name will be able to connect links to the LAN, in the second step.

2. Create an experiment (but do not run it yet) that includes a LAN that should be connected to the shared LAN created in step 1. This experiment has to be on the same testbed. (So you cannot use this feature to share LANs between Virtual Wall 2 and Virtual Wall 1.) Make sure that the IP addresses of the nodes connected to the LAN to be shared are in the same subnet that is used on the shared LAN. In this experiment, you can right click the LAN when you design the experiment, go to configure link, Link type, and then check ‘Shared Lan’ and type in the same name you gave in step 1.

In the RSpec, the following link_shared_vlan tag will be added:

<link client_id="link2">
    <component_manager name="urn:publicid:IDN+wall2.ilabt.iminds.be+authority+cm"/>
    <link_type name="lan"/>
    <sharedvlan:link_shared_vlan name="shared_lan_mylanname"/>
</link>

Run the experiment, and the LANs will be connected. This second step can be repeated with other experiments if needed.
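
A quick way to verify that the experiments are really connected is to ping a node from the other experiment over its IP address on the shared subnet (an example; 192.168.1.2 is a hypothetical address of a node in the first experiment):

ping -c 3 192.168.1.2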

Note that when the first experiment is terminated, or the “Unshare LAN” feature is used in the first experiment, all experiments using the shared LAN will be disconnected from it.

Fetching node information (interface names etc.) in scripts

When automating experiment setup and/or experiments themselves, information such as the name of the local machine, the control interface or the local experiment interface names is often needed.

An example: Consider an experiment with a node named xenNode, which is connected to a link named link0 by an interface xenNode:if0. A setup script that runs on this node might need to know the Linux interface name of that interface, which might for example be eth2. Note that each time an experiment is created, even though the same request RSpec is used, the Linux interface name may be different (it might for example be eth1 or eno1 the next time).

The most generic method to retrieve this info is by using the geni-get command. See http://groups.geni.net/geni/wiki/GeniGet for detailed info.

Two common commands are:

#Get name of local machine in RSpec
geni-get client_id

#Get manifest RSpec
geni-get manifest

To use this info to retrieve interface names, processing of the manifest data is needed. This is a manifest RSpec, which is an XML based format, so many XML processing tools can be used. There are also tools that can process RSpecs specifically, such as geni-lib.
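
If you only need a quick lookup from a shell script, you can also combine geni-get with standard text tools (a rough sketch; it relies on interface client_ids in the manifest being prefixed with the node's client_id, as in xenNode:if0 above). For anything more involved, use a proper XML parser, such as in the Python example below:

CLIENT_ID=$(geni-get client_id)
geni-get manifest | grep -o "client_id=\"${CLIENT_ID}:[^\"]*\""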

You can find an example Python script here: (geni-get-info.py). This script uses basic Python XML processing. It extracts interface names and other useful data. It can be used as a starting point for your own scripts.

To try the script, log in on a node, and run the following commands:

wget http://doc.ilabt.iminds.be/ilabt-documentation/_downloads/geni-get-info.py
chmod u+x geni-get-info.py
./geni-get-info.py

Example output:

Name of this machine in the RSpec: "node0"

SSH login info:
      someuser@n172-02.wall1.ilabt.iminds.be:22

Control network interface:
      MAC: 00:30:48:43:5d:c2
      dev: eth0
      ipv4: 10.2.2.22 (netmask /20)
      ipv6: 2001:6a8:1d80:2021:5062:9363:5859:6fab (netmask /64)
      ipv6: 2001:6a8:1d80:2021:230:48ff:fe43:5dc2 (netmask /64)
      ipv6: fe80::230:48ff:fe43:5dc2 (netmask /64)

Experiment network interfaces:
      Iface name: "node0:if0"
            MAC: 00:31:58:43:58:e8
            dev: eth2
            ipv4: 192.168.0.1 (netmask 255.255.225.0)

Requested public IPv4 pool (routable_pool):
      193.190.127.161/255.255.255.192
      193.190.127.162/255.255.255.192

Note that geni-get can give you also other information:

root@node0:/tmp# geni-get commands
{
 "client_id":    "Return the experimenter-specified client_id for this node",
 "commands":     "Show all available commands",
 "control_mac":  "Show the MAC address of the control interface on this node",
 "geni_user":    "Show user accounts and public keys installed on this node",
 "getversion":   "Report the GetVersion output of the aggregate manager that allocated this node",
 "manifest":     "Show the manifest rspec for the local aggregate sliver",
 "slice_email":  "Retrieve the e-mail address from the certificate of the slice containing this node",
 "slice_urn":    "Show the URN of the slice containing this node",
 "sliverstatus": "Give the current status of this sliver (AM API v2)",
 "status":       "Give the current status of this sliver (AM API v3)",
 "user_email":   "Show the e-mail address of this sliver's creator",
 "user_urn":     "Show the URN of this sliver's creator"
}

On Emulab-based sites, there is also an alternative to using geni-get: the /var/emulab/boot/ directory contains various info files. For example, link info can be found in /var/emulab/boot/topomap, the control interface in /var/emulab/boot/controlif and the full machine name in /var/emulab/boot/nickname. Note that this method is not recommended, as there is no guarantee that this information will stay the same in case of Emulab software upgrades.
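
For example, on such a node:

cat /var/emulab/boot/nickname
cat /var/emulab/boot/controlif
cat /var/emulab/boot/topomap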

Change the impairment

If you use node-based impairment (no separate impairment bridge), you can change this impairment by running tc commands. After starting the experiment with impairment, you can check the current impairment by running tc qdisc show:

qdisc htb 130: dev eth5 root refcnt 9 r2q 10 default 1 direct_packets_stat 0 direct_qlen 1000
qdisc netem 120: dev eth5 parent 130:1 limit 1000 delay 200.0ms

This shows the current impairment (bandwidth, latency and packet loss). The commands that the testbed uses can be found on the node in /var/emulab/boot/rc.linkdelay. The above has been installed by issuing:

/sbin/tc qdisc del dev eth5 root
/sbin/tc qdisc del dev eth5 ingress
/sbin/tc qdisc add dev eth5 handle 130 root htb default 1
/sbin/tc class add dev eth5 classid 130:1 parent 130 htb rate 10000000 ceil 10000000
/sbin/tc qdisc add dev eth5 handle 120 parent 130:1 netem drop 0 delay 200000us

This installs a 10Mb/s bandwidth limit and 200ms one-way latency (0% packet loss). The other node can have a similar or different delay.
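
To change the impairment afterwards, the same qdiscs can be modified in place. For example, to set the one-way latency to 50ms with 1% packet loss, and to double the rate limit (a sketch; the handle, class and rate numbers must match what tc qdisc show reports on your node, and the rate values follow the same convention as the commands above):

sudo /sbin/tc qdisc change dev eth5 handle 120 parent 130:1 netem delay 50ms loss 1%
sudo /sbin/tc class change dev eth5 classid 130:1 parent 130 htb rate 20000000 ceil 20000000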

You can find more examples of using tc and netem at http://www.linuxfoundation.org/collaborate/workgroups/networking/netem.

Port forwarding

All nodes have by default a public IPv6 address and a private IPv4 address. If you don’t have IPv6, then jFed works around this by using a proxy for SSH login. However, if you want to access e.g. web interfaces of your software, you can use the following method.

On Virtual Wall 1 and 2, you can use XEN VMs with a public IPv4 address (right click the node in jFed and select Routable control IP). These XEN VMs can then be accessed freely over IPv4. As such you can use them to port forward particular services from IPv6 to IPv4.

Suppose you have a webserver running on a testbed machine, listening on port 80 and reachable over IPv6 only, e.g. n091-07.wall2.ilabt.iminds.be. You also have a XEN VM running with a public IPv4 address (in the same experiment or another one), e.g. 193.190.127.234.

Log in on the XEN VM, and do the following:

sudo apt-get update
sudo apt-get install screen
screen   (this keeps the forward running even if your terminal disconnects; use CTRL-A, D to detach from the screen session while keeping it running)
ssh -L 8080:n091-07.wall2.ilabt.iminds.be:80 -g localhost

Now you can browse to http://193.190.127.234:8080 and you will see what is served at n091-07.wall2.ilabt.iminds.be on port 80.
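
You can first verify the forward from the XEN VM itself (assuming curl is installed):

curl -I http://localhost:8080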

Debugging problems with adding SSH keys

If the “Edit SSH Keys” option fails to add a user, you can check the logfile at /var/emulab/logs/emulab-watchdog.log for more information.
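
For example, to show the last lines of that log on the node:

tail -n 50 /var/emulab/logs/emulab-watchdog.log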