3.6. Networking Configuration Examples

In this section, we describe some common networking configurations and how to use Rocks commands to set up various networking scenarios.

3.6.1. Adding a public IP address to the second ethernet adapter on a compute node

Often, owners want the second ethernet adapter to be on the public network and the default route to go through that public network. Assuming that the public network is 1.2.3.0/255.255.255.0 and the default gateway for that network is 1.2.3.1, the following set of commands defines the second interface of a compute node to have the address 1.2.3.25 with the name mypublic.myuniversity.edu, updates all required configuration files on the frontend, updates all required configuration files on the node compute-0-0, and restarts the network on compute-0-0.

# rocks set host interface ip compute-0-0 iface=eth1 ip=1.2.3.25
# rocks set host interface gateway compute-0-0 iface=eth1 ip=1.2.3.1
# rocks set host interface name compute-0-0 iface=eth1 name=mypublic.myuniversity.edu
# rocks sync config
# rocks sync network compute-0-0
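
As a suggested sanity check (not part of the required procedure), the stored interface configuration can be listed from the frontend, and the live interface can be inspected on the node after the sync. The exact output columns of rocks list vary by Rocks release, and the ifconfig path shown assumes a typical CentOS-based compute node.

# rocks list host interface compute-0-0
# ssh compute-0-0 /sbin/ifconfig eth1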

3.6.2. Adding an IP network for local message passing

Often, users will want to use the second ethernet device for message passing. In this example, we illustrate creating a named subnet and then scripting the IP assignment for a rack of 32 nodes with the IP range 192.168.1.10 ... 192.168.1.41.

rocks add network fast subnet=192.168.1.0 netmask=255.255.255.0
# Assign 192.168.1.10 ... 192.168.1.41 to eth1 on compute-0-0 ... compute-0-31
IP=10
NNODES=32
NODE=0
while [ $NODE -lt $NNODES ]; do
	rocks set host interface ip compute-0-$NODE iface=eth1 ip=192.168.1.$IP
	rocks set host interface subnet compute-0-$NODE iface=eth1 subnet=fast
	rocks set host interface name compute-0-$NODE iface=eth1 name=compute-fast-0-$NODE
	let IP++
	let NODE++
done
rocks sync config
rocks sync network compute

The commands above add the named subnet called "fast", assign IP addresses sequentially, name the eth1 interface on each node, rewrite the DNS configuration (sync config), and finally rewrite and restart the network configuration on each compute appliance (sync network). This additional network configuration is persistent across re-installation of nodes.
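
As a sketch of how the new network might be checked and then used, the commands below list the stored configuration and build a machinefile of the compute-fast-* names, so that MPI connections are carried over eth1 once those names resolve to the 192.168.1.x addresses. The file path, node count, program name, and mpirun options are illustrative assumptions that depend on the locally installed MPI stack; they are not part of the Rocks configuration itself.

rocks list network
rocks list host interface compute-0-0

# Illustrative only: a machinefile of compute-fast-* names steers MPI
# traffic onto the "fast" subnet; adjust the path, node count, and
# launcher options for the local MPI installation.
NODE=0
while [ $NODE -lt 32 ]; do
	echo compute-fast-0-$NODE
	let NODE++
done > /tmp/fast.hosts
mpirun -np 32 -machinefile /tmp/fast.hosts ./a.out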