Share an internet connection to a local network

Required: one PC with two network cards, one connected to the outside world and the other to the internal network. Assuming that the eth0 interface faces out and the eth1 interface faces in, the internal network card can be configured by editing the interfaces file:

sudo nano /etc/network/interfaces
auto eth1
iface eth1 inet static
address 192.168.0.1
#this will be the gateway address for the clients connected to this network
netmask 255.255.255.0
gateway 'real_world_ip_address_of_eth0'
dns-nameservers 'some_dns_servers'

Then, the IP address is assigned and the forwarding and NAT rules are defined from the command line using ip and iptables:

sudo ip addr add 192.168.0.1/24 dev eth1


sudo iptables -A FORWARD -o eth0 -i eth1 -s 192.168.0.0/24 -m conntrack --ctstate NEW -j ACCEPT 
sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -t nat -F POSTROUTING
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

sudo iptables-save | sudo tee /etc/iptables.sav
#add the restore command to /etc/rc.local (before any final "exit 0" line):
sudo sh -c 'echo "iptables-restore < /etc/iptables.sav" >> /etc/rc.local'
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo sed -i "s/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/" /etc/sysctl.conf

Reboot and it’s done.
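As a quick sanity check after the reboot, the forwarding flag and the NAT rule can be inspected; a sketch (these commands need root and the setup above):

```shell
# Should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward

# The POSTROUTING chain should contain the MASQUERADE rule for eth0
sudo iptables -t nat -L POSTROUTING -v -n

# The saved rule file restored by rc.local should exist
ls -l /etc/iptables.sav
```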

On the client, the /etc/network/interfaces file should be edited to allow the client to reach the outer world via the machine configured above:

auto eth0
iface eth0 inet static
 address 192.168.0.101
 netmask 255.255.255.0
 gateway 192.168.0.1
 #this is the address of the internal card defined above
 dns-nameservers 'some_dns_servers'

Reboot and it’s done. The server can also be configured to assign IP addresses on connection via DHCP.
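The DHCP part is not detailed here; one option is dnsmasq on the gateway (an assumption, the post does not name a DHCP server). A minimal /etc/dnsmasq.conf sketch for the eth1/192.168.0.x addressing above:

```
# serve DHCP only on the internal interface
interface=eth1
# address pool for the clients, with 12h leases
dhcp-range=192.168.0.100,192.168.0.200,12h
# advertise the gateway and DNS server addresses
dhcp-option=option:router,192.168.0.1
dhcp-option=option:dns-server,192.168.0.1
```

After sudo apt-get install dnsmasq and a service restart, clients can then use iface eth0 inet dhcp instead of a static stanza.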

Create a network share on Ubuntu using Samba

sudo apt-get update
sudo apt-get install samba

Create a user and a password for samba:

sudo smbpasswd -a <username>

Create a shared directory:

mkdir 'samba_shared_folder'

Backup smb.conf and edit it to include the shared folder:

sudo cp /etc/samba/smb.conf ~

sudo nano /etc/samba/smb.conf

Once “smb.conf” has loaded, add this to the very end of the file; there should be no spaces between the lines, and note also that there should be a single space both before and after each equals sign.
[<folder_name>]
 path = 'full path to folder'
 valid users = <user_name>
 read only = no

Restart the service:

sudo service smbd restart
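To verify the share from another machine, smbclient can be used (the server address is an example; substitute your own):

```shell
# List the shares exported by the server; -U prompts for the smbpasswd password
smbclient -L //192.168.0.1 -U <username>

# Open an interactive session on the shared folder
smbclient //192.168.0.1/<folder_name> -U <username>
```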

[wip] Using a Bramble as a Gramble

The Bramble was set up and is running as described in a previous post.

1- Making sure MPI works fine

In the NFS folder create an mpi_hello.c file with

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
 int myrank, nprocs;

 MPI_Init(&argc, &argv);
 MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
 MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

 printf("Hello from processor %d of %d\n", myrank, nprocs);

 MPI_Finalize();
 return 0;
}

and compile it with mpicc

mpicc mpi_hello.c -o mpi_hello

Run with

mpirun -np <number_of_processes> -host <host1,host2,...> mpi_hello

and it should output something like

Hello from processor 2 of 20
Hello from processor 7 of 20

If you monitor the activity of each node, for example with htop -d 1, you will see a transient increase in CPU activity.
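Instead of listing hosts on the command line, OpenMPI also accepts a hostfile; a sketch assuming the rasp1..rasp5 node names and four usable cores per Pi:

```
rasp1 slots=4
rasp2 slots=4
rasp3 slots=4
rasp4 slots=4
rasp5 slots=4
```

Then run with mpirun -np 20 --hostfile hosts ./mpi_hello (MPICH instead takes -f with host:4 lines).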

2- To use Gromacs, it has to be compiled with the correct options.

We are installing Gromacs in the NFS-shared /apps folder. To work with MPI, the correct MPI flags are required:

cd /apps
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.4.tar.gz
tar xvf gromacs-5.1.4.tar.gz
cd gromacs-5.1.4
mkdir build
cd build
cmake .. -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ -DGMX_MPI=on -DGMX_BUILD_MDRUN_ONLY=on -DCMAKE_INSTALL_PREFIX=/apps/gromacs-5.1.4-mpi -DBUILD_SHARED_LIBS=off -DGMX_BUILD_OWN_FFTW=ON -DGMX_DEFAULT_SUFFIX=mpi
make -j 4
make check
make install

To add the programs to the environment, edit /home/ubuntu/.bashrc and add the line

source /apps/gromacs-5.1.4-mpi/bin/GMXRC

and restart the terminal or source the profile file.

3- Run some Gromacs job.

Jobs can now be easily distributed over the nodes using

mpirun -np <number_of_processes> -host <hostname1,hostname2,...> gmx_mpi mdrun
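A concrete invocation, assuming a prepared run input named topol.tpr sitting in the NFS-shared folder (the path and file name are only an example):

```shell
cd /home/ubuntu/runs
mpirun -np 20 -host rasp1,rasp2,rasp3,rasp4,rasp5 gmx_mpi mdrun -deffnm topol
```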

4- Playing with options to speed up

4.1- Compiling with own FFTW

cd /apps
wget http://www.fftw.org/fftw-3.3.6-pl1.tar.gz
tar xvf fftw-3.3.6-pl1.tar.gz
cd fftw-3.3.6-pl1
./configure CC=mpicc --enable-mpi
make -j 4
make install

4.2- Enabling sub-cycle counters by recompiling with the new FFTW libraries and with GMX_CYCLE_SUBCOUNTERS set:

cd /apps
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.4.tar.gz
tar xvf gromacs-5.1.4.tar.gz
mv gromacs-5.1.4 gromacs-5.1.4-dev
cd gromacs-5.1.4-dev
mkdir build
cd build
cmake .. -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ -DGMX_MPI=on -DGMX_BUILD_MDRUN_ONLY=on -DCMAKE_INSTALL_PREFIX=/apps/gromacs-5.1.4-dev -DBUILD_SHARED_LIBS=off -DGMX_BUILD_OWN_FFTW=ON -DGMX_DEFAULT_SUFFIX=mpi_dev -DGMX_CYCLE_SUBCOUNTERS=on
make -j 4
make check
make install

[wip] Setting up a Bramble

0- Hardware

5 Raspberry Pi 3 boards
5 microSDHC 16 GB cards (class 10, 90 MB/s read speed)
5 basic USB to micro-USB B cables
1 Anker PowerPort 10 (compact 10-port USB charger)

1- Get a Linux image. We used Ubuntu Server Standard 16.04 for Raspberry Pi 3 from Ubuntu Pi Flavour Maker. Other options include the Raspbian Jessie distro (the official Raspberry Pi Debian distribution) and Ubuntu for Raspberry Pi.

2- Write the image to an SD card. We used a Windows machine for this with Win32DiskImager; Linux and Mac instructions are also available. Download, extract the files and run as admin; select the image file; select the drive where the SD card is (make sure it is the correct drive); click Write and wait for the write to finish. This has to be done for all SD cards.

3- For each Pi card, insert the SD card and power it. Update the system and install some required packages:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install openssh-server build-essential cmake

4- Configure the IP addresses in each node:

4.1- edit the /etc/hosts file in all nodes:

127.0.0.1 localhost
192.168.0.1 rasp1
192.168.0.2 rasp2
192.168.0.3 rasp3
192.168.0.4 rasp4
192.168.0.5 rasp5

4.2- in each node, assign the correct hostname in the /etc/hostname file:

rasp1 # for the head node; rasp2~5 for the remaining nodes

4.3- reboot the nodes
4.4- configure passwordless ssh, as the user that will run the cluster; on each node:

ssh-keygen -t rsa
#copy it to each of the other nodes:
ssh-copy-id <user>@<host>
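The key distribution can be scripted; a sketch assuming the ubuntu user and the rasp1..rasp5 names from /etc/hosts (each copy prompts for that node's password once):

```shell
# push this node's public key to every node in the cluster
for host in rasp1 rasp2 rasp3 rasp4 rasp5; do
  ssh-copy-id "ubuntu@${host}"
done
```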

5- Configure NFS

At the head node:

sudo apt-get install nfs-kernel-server
sudo cp /etc/exports /etc/exports.back
#add to /etc/exports on head
/home/ubuntu *(rw,sync,no_subtree_check)
/apps *(rw,sync,no_subtree_check)
sudo exportfs -a
sudo service nfs-kernel-server start
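From any compute node, the exports can be checked before mounting (showmount comes with the nfs-common package):

```shell
# should list /home/ubuntu and /apps, exported to *
showmount -e rasp1
```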

At the compute nodes:

sudo apt-get install nfs-common

On all machines:

sudo mkdir /apps
sudo chown -R ubuntu /apps

#allow NFS traffic from the other nodes (addresses from the /etc/hosts file above)
sudo ufw allow from 192.168.0.0/24

At the compute nodes, mount the shared folders:

sudo mount rasp1:/home/ubuntu /home/ubuntu
sudo mount rasp1:/apps /apps

#insert the following lines at the end of /etc/fstab
sudo nano /etc/fstab
rasp1:/home/ubuntu /home/ubuntu nfs
rasp1:/apps /apps nfs

sudo mount -a

6- Expand the SD card contents in order to use all the SD card space available by using raspi-config:

sudo apt-get install raspi-config
sudo raspi-config
#select 1. Expand Filesystem


NOTES

The locale was wrong for us (the default is en_GB) and had to be changed:

sudo localedef -i pt_PT -f UTF-8 pt_PT.UTF-8

Compiling NWCHEM under Ubuntu with MPICH support

Note: Some code below is written for the csh shell (setenv NAME value) but under bash should be written as export NAME="value".

1. Install required packages:

sudo apt-get install python-dev gfortran libopenblas-dev libopenmpi-dev openmpi-bin tcsh make 

2. Set environment variables in a /home/$USER/.nwchem_login file containing

export USE_MPI=y
export NWCHEM_TARGET=LINUX64
export USE_PYTHONCONFIG=y
export PYTHONVERSION=2.7
export PYTHONHOME=/usr
export BLASOPT="-lopenblas -lpthread -lrt"
export BLAS_SIZE=4
export USE_64TO32=y
export NWCHEM_TOP=/nwchem
export NWCHEM_MODULES=all

The following environment variables need to be set when NWChem is compiled with MPI:

setenv USE_MPI y
setenv USE_MPIF y
setenv USE_MPIF4 y
setenv MPI_LOC /openmpi-1.4.3 (for example, if you are using OpenMPI)
setenv MPI_LIB /openmpi-1.4.3/lib
setenv MPI_INCLUDE /openmpi-1.4.3/include
setenv LIBMPI "-lmpi_f90 -lmpi_f77 -lmpi -lpthread" 

Adding one of the following blocks according to the implemented MPI:
MPICH:

setenv MPI_LOC /usr/local #location of mpich installation
setenv MPI_LIB $MPI_LOC/lib
setenv MPI_INCLUDE $MPI_LOC/include
setenv LIBMPI "-lfmpich -lmpich -lpmpich"

MPICH2:

setenv MPI_LOC /usr/local #location of mpich2 installation
setenv MPI_LIB $MPI_LOC/lib
setenv MPI_INCLUDE $MPI_LOC/include
setenv LIBMPI "-lmpich -lopa -lmpl -lrt -lpthread"

OpenMPI:

setenv MPI_LOC /usr/local #location of openmpi installation
setenv MPI_LIB $MPI_LOC/lib
setenv MPI_INCLUDE $MPI_LOC/include
setenv LIBMPI "-lmpi_f90 -lmpi_f77 -lmpi -ldl -Wl,--export-dynamic -lnsl -lutil"

3. Source the .nwchem_login file at the .bashrc file:

source .nwchem_login

4. Compile the software:

make nwchem_config NWCHEM_MODULES="all python"
make 64_to_32
make

or:

cd $NWCHEM_TOP/src
make nwchem_config
make FC=gfortran >& make.log
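Once built, the binary ends up under $NWCHEM_TOP/bin/$NWCHEM_TARGET and runs under MPI; the input file name here is only an example:

```shell
mpirun -np 4 $NWCHEM_TOP/bin/LINUX64/nwchem input.nw > output.out
```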

Note:
Sometimes a default.nwchemrc file at the user home directory is required to list the directories NWChem should use:

nwchem_basis_library /usr/local/NWChem/data/libraries/
nwchem_nwpw_library /usr/local/NWChem/data/libraryps/
ffield amber
amber_1 /usr/local/NWChem/data/amber_s/
amber_2 /usr/local/NWChem/data/amber_q/
amber_3 /usr/local/NWChem/data/amber_x/
amber_4 /usr/local/NWChem/data/amber_u/
spce    /usr/local/NWChem/data/solvents/spce.rst
charmm_s /usr/local/NWChem/data/charmm_s/
charmm_x /usr/local/NWChem/data/charmm_x/

In this case, NWChem looks for these settings in ~/.nwchemrc, so link the file accordingly:

ln -s $HOME/default.nwchemrc $HOME/.nwchemrc

Setting up Gaussian09 on Ubuntu

This has worked without issues on Ubuntu 15.04 and above to install the Gaussian 09 binaries.

  1. Copy all Gaussian files to a folder under the home folder, for example
/home/user/g09
  2. Create a scratch directory, for example
/home/user/g09scratch
  3. Create a file at the same folder to set environment variables, for example
/home/user/.login

with the following content

alias g09="/home/user/g09/g09"
export g09root="/home/user"
export GAUSS_EXEDIR="/home/user/g09"
export GAUSS_SCRDIR="/home/user/g09scratch"
export g09root GAUSS_SCRDIR
  4. Source the parameters by including the line
source /home/user/.login

at the end of the /home/user/.bashrc file.
  5. Jobs can be run from the terminal at the job file location simply by issuing the command

g09 filename
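Here filename is a Gaussian input file; for reference, a minimal single-point sketch (route section, title and geometry are only an example):

```
%chk=water.chk
%mem=1GB
# HF/6-31G(d)

water single point

0 1
O   0.000000   0.000000   0.119262
H   0.000000   0.763239  -0.477047
H   0.000000  -0.763239  -0.477047

```

Note that Gaussian expects the blank line after the geometry block.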

Notes:
A. On different systems, g09 will complain about different missing libraries. Typically, a sudo apt-get install of the missing libraries will solve this. Some frequently missing libraries and packages are:
build-essential
libc6-dev-i386
gfortran
csh
libstdc++6
libc6

B. Some scripts may not run if you are compiling the executables, and require some modifications:
sed -i 's/gau-machine/.\/gau-machine/' *
sed -i 's/set-mflags/.\/set-mflags/' bldg09
sed -i 's/bsd\/set-mflags/.\/bsd\/set-mflags/' bldg09
sed -i 's/gau-hname/.\/gau-hname/' set-mflags
sed -i 's/cachesize/.\/cachesize/' set-mflags
cp gau-unlimit ..
cp cachesize ..
cp set-mflags ..
cp ../gau-machine .
sed -i 's/getline/get_line/' fsplit.c

C. Gaussian may complain that files are world accessible. The easiest way to solve this issue is to remove read/write permissions for other users.

sudo chown -R root:gauss09 g09
sudo chmod -R o-rwx g09
sudo chmod -R 777 gv

Compiling GROMACS 2016.1 in Ubuntu 16.04 with GPU support

  1. Install Intel Parallel Studio XE Cluster Edition from the binary file with ALL components

This installs the Intel C, C++ and Fortran compilers, the MKL libraries and the Intel MPI version. To run on a single machine, MPI is not required. To run across multiple machines, an MPI implementation is required: Intel MPI, MPICH or OpenMPI.

  2. Set environment variables (optionally add them to .bashrc)

source /opt/intel/bin/compilervars.sh -arch intel64 -platform linux
export PATH=$PATH:"/opt/intel"
export MKLROOT="/opt/intel"
export CC=icc
export CXX=icc
export F77=ifort
export CFLAGS="-O3 -ipo- -static -std=c99 -fPIC -DMKL_LP64 -DM_PI=3.1415926535897932384"
export CPPFLAGS="-I$MKLROOT/include -I$MKLROOT/include/fftw"
export LDFLAGS="-L$MKLROOT/lib/intel64 -L$MKLROOT/../compiler/lib/intel64"
export LD_LIBRARY_PATH="$MKLROOT/lib/intel64:$MKLROOT/../compiler/lib/intel64:$LD_LIBRARY_PATH"

  3. Compile GROMACS
    tar xfz gromacs-2016.1.tar.gz
    cd gromacs-2016.1
    mkdir build
    cd build
    cmake .. -DCMAKE_C_COMPILER=icc -DGMX_MPI=on -DGMX_GPU=on -DGMX_USE_OPENCL=on -DCMAKE_CXX_COMPILER=icc -DGMX_SIMD=AVX2_256 -DCMAKE_INSTALL_PREFIX=/opt/gromacs-2016.1-mod -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_mod -DGMX_LIBS_SUFFIX=_mod -DGMX_FFT_LIBRARY=mkl
    make
    #make -j 6
    make check
    sudo make install
    source /opt/gromacs-2016.1-mod/bin/GMXRC
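A quick check that the suffixed build is on the PATH after sourcing GMXRC (the _mod suffix was set with -DGMX_BINARY_SUFFIX above):

```shell
# prints the GROMACS version banner, including GPU/OpenCL support information
gmx_mod --version
```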

Documentation

INTEL Parallel Studio XE Install Guide for Linux

NVIDIA CUDA Quick Start Guide

NVIDIA CUDA Installation Guide for Linux