This tutorial is a reworked and optimized version of the hands-on session held at GCB 2019 in Heidelberg, based on the latest release of BiBiGrid.
- System based on Linux or macOS (tested) or Windows Subsystem for Linux (untested)
- Required software packages:
  - Python > 3.6
  - git
  - openssh
- OpenStack API access
git clone https://github.com/BiBiServ/bibigrid.git
cd bibigrid
The following steps assume that you are inside the bibigrid folder. It should contain:
$ ls
bibigrid bibigrid_rest.sh bibigrid.sh bibigrid.yml documentation README.md requirements-dev.txt requirements-rest.txt requirements.txt resources tests
The goal of this session is to set up a small HPC cluster consisting of 3 nodes (1 master, 2 workers) using BiBiGrid with Slurm (workload manager), Network File System (allows file sharing between servers) and Theia (Web IDE). This tutorial targets users running BiBiGrid on de.NBI cloud.
Use the prefilled configuration template resources/bibigrid.yml as a basis for your personal BiBiGrid configuration. Later in this tutorial you will use OpenStackClient or access Openstack's dashboard manually to get all necessary configuration information from your project.
Copy the configuration template to ~/.config/bibigrid/:
mkdir -p ~/.config/bibigrid
cp resources/bibigrid.yml ~/.config/bibigrid/bibigrid.yml
In this section you will create an application credential and download the autogenerated clouds.yaml. The clouds.yaml file contains all required authentication information. Follow the images:
Don't use the input field secret:
- Its input is not hidden.
- OpenStack will generate a strong secret for you if you leave it blank.
Pick a sensible expiration date.
Save the downloaded clouds.yaml under both ~/.config/openstack/ and ~/.config/bibigrid/. That allows both OpenStackClient and BiBiGrid to access it.
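For orientation, a clouds.yaml generated for an application credential roughly follows the structure sketched below. This is only a sketch: the endpoint, region and credential values are placeholders; your downloaded file will contain your project's actual data.
clouds:
  openstack:
    auth:
      auth_url: https://cloud.example.org:5000/v3          # placeholder endpoint
      application_credential_id: "<generated-id>"          # placeholder
      application_credential_secret: "<generated-secret>"  # placeholder
    region_name: RegionOne                                 # placeholder
    interface: "public"
    identity_api_version: 3
    auth_type: "v3applicationcredential"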
Why not store BiBiGrid's `clouds.yaml` only in ~/.config/openstack/ and save the extra copy?
In the future BiBiGrid will support more than just one cloud infrastructure. Therefore, relying solely on the ~/.config/openstack folder would become a disadvantage later.
A virtual environment provides an isolated Python setup with everything you need to run specific programs without altering your system installation.
Creating a Virtual Python Environment
python3 -m venv ~/.venv/bibigrid
In order to actually use the virtual environment we need to source that environment:
source ~/.venv/bibigrid/bin/activate
Subsequent pip installations will only affect the virtual environment. The environment is only sourced in the terminal where you executed the source command; other terminals are not affected.
You will now install packages required by BiBiGrid within your newly created virtual environment. If you haven't sourced
your environment yet, please go back. To install all BiBiGrid requirements, we simply install from the given requirements file:
pip install -r requirements.txt
Try executing openstack subnet list --os-cloud=openstack within this environment. If it runs without errors, you are ready to proceed. Otherwise you need to check your clouds.yaml and your virtual environment.
In the following steps you will update the premade template.
Why are some keys in the template already set?
In this hands-on, we want to make things as easy as possible for you. Just check whether the key you've found is the correct one and matches the one we've already written down in the configuration file.
BiBiGrid needs to know which sshUser to use in order to connect to your master. You can set this key in your ~/.config/bibigrid/bibigrid.yml file. The sshUser depends on your server image. Since we run on top of Ubuntu 22.04, the ssh user is ubuntu. Set the template's sshUser key to ubuntu.
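In the configuration file this is a single key; the snippet below only shows that line, not a complete configuration:
sshUser: ubuntu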
We have created a subnet for this workshop for you. Determine your subnet's Name by running:
openstack subnet list --os-cloud=openstack
Set the template's subnet key to the result's Name.
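In the configuration file this could then look like the following line; the subnet name here is a placeholder - use the Name returned by the command above:
subnet: workshop-subnet-placeholder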
BiBiGrid needs to know the type and image for each server. Since those are often identical for the workers, you can simply use the count key to indicate multiple workers with the same type and image.
Images are virtual disks with a bootable operating system. Choosing an image means choosing the operating system of your server.
Since images are often updated, you need to look up the currently active image using:
openstack image list --os-cloud=openstack | grep active
Since we will use Ubuntu 22.04 you might as well use:
openstack image list --os-cloud=openstack | grep active | grep "Ubuntu 22.04"
Set the template's image key of all instances to the ID or Name entry of the Ubuntu 22.04 row in the result. All servers will share the same image.
Do I have to update my configuration file whenever there is a new image version?
If you use the method described above, yes. However, you can also use a regex instead of a specific name to select an image at runtime. This also avoids issues that may arise when an image is deactivated while your cluster is still running. For our Ubuntu 22.04 images you could use ^Ubuntu 22\.04 LTS \(.*\)$, but usually you need to check which image names are available at your location and choose the regex accordingly. For more information on this functionality take a look at BiBiGrid's full documentation.
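For illustration, using such a regex for an instance's image key could look like the line below (only the key itself is shown; check BiBiGrid's documentation for the exact behaviour at your site):
image: ^Ubuntu 22\.04 LTS \(.*\)$   # matched against the active image names when the cluster is started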
Flavors are available hardware configurations.
The following gives you a list of all flavors:
openstack flavor list --os-cloud=openstack
Set the template's flavor keys (provide an ID or Name - we will use Name in the following examples) to flavors of your choice. In this tutorial we will use de.NBI medium for our master and de.NBI small for our two workers. You can use a different flavor for the master and for each worker group.
Example: Multiple worker groups
The key workerInstances expects a list. Each list element is a worker group with an image + type combination and a count. In our tutorial we use a single worker group containing two workers. Since they are in the same worker group, they are identical in flavor and image. We could, however, define two worker groups with one worker each in order to use different flavors for them.
workerInstances:
  - type: de.NBI tiny
    image: ubuntu-22.04-image-name
    count: 1
  - type: de.NBI default
    image: ubuntu-22.04-image-name
    count: 1
Some clouds run one or more post-launch services on every started instance to finish the initialization after the instance is available (e.g. to configure local proxy settings or locally available repositories). That might interfere with BiBiGrid setting up the node (via Ansible). Therefore, BiBiGrid needs to wait for your post-launch service(s) to finish, and for that it needs the services' names. Set the key waitForServices to the list of services you would like to wait for. For Bielefeld this would be de.NBI_Bielefeld_environment.service. In the future you should be able to find post-launch service names on your location's Computer Center Specific site - if post-launch services exist for your location.
waitForServices:
- de.NBI_Bielefeld_environment.service
Run ./bibigrid.sh -i bibigrid.yml -ch -v to check your configuration. The command line argument -v allows for greater verbosity, which will make it easier for you to fix issues.
./bibigrid.sh -i bibigrid.yml -c -v
creates the cluster with a more verbose output. Cluster creation time
depends on the chosen flavor and the overall load of the cloud and will take up to 15 minutes.
Since several clusters can run simultaneously, listing all running clusters can be useful. Execute ./bibigrid.sh -i bibigrid.yml -l. You will receive a general overview of all clusters started in your project.
After a successful setup, BiBiGrid will print some information. For example:
Cluster 6jh83w0n3vsip90 with master 123.45.67.890 up and running!
SSH: ssh -i '~/.config/bibigrid/keys/tempKey_bibi-6jh83w0n3vsip90' ubuntu@123.45.67.890
Terminate cluster: ./bibigrid.sh -i 'bibigrid.yml' -t -cid 6jh83w0n3vsip90
Detailed cluster info: ./bibigrid.sh -i 'bibigrid.yml' -l -cid 6jh83w0n3vsip90
You can now establish an SSH connection to your cluster's master by executing the SSH line of the create command's output:
ssh -i '~/.config/bibigrid/keys/tempKey_bibi-6jh83w0n3vsip90' ubuntu@123.45.67.890
But make sure to use the one generated for you by BiBiGrid, since
- the cluster-id (here 6jh83w0n3vsip90),
- the key name (here ~/.config/bibigrid/keys/tempKey_bibi-6jh83w0n3vsip90) and
- the user@IP (here ubuntu@123.45.67.890)
will differ on every run. Run sinfo after logging in. You should see something like this:
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
openstack up infinite 2 idle~ bibigrid-worker-6jh83w0n3vsip90-[0-1]
openstack up infinite 1 idle bibigrid-master-6jh83w0n3vsip90
All* up infinite 2 idle~ bibigrid-worker-6jh83w0n3vsip90-[0-1]
All* up infinite 1 idle bibigrid-master-6jh83w0n3vsip90
However, doing everything on the running cluster from a terminal can be quite bothersome. That's where Theia comes in.
Why are there two partitions (openstack and all) with the same nodes?
BiBiGrid creates one partition for every cloud (here openstack) and one partition called all containing all nodes from all partitions. Since we are only using one cloud for this tutorial, we only have openstack and all.
Theia Web IDE's many features make it easier to work on your cloud instances. Take a look:
When enabled, Theia Web IDE is configured to listen on localhost port 8181 on the master instance. Since this address is not directly available you have to forward it to your machine using ssh. Execute
./bibigrid.sh -i bibigrid.yml -ide -cid [cluster-id]
to connect to Theia. You may even use ./bibigrid.sh -i bibigrid.yml -ide, since BiBiGrid will attempt to connect to your last created cluster if no cluster-id is given. Theia runs as a systemd service on localhost. A Theia IDE tab will be opened automatically in your browser.
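For reference, the forwarding that -ide sets up corresponds roughly to a manual SSH tunnel like the one below (key path, cluster-id and IP are the placeholders from your own create output); with such a tunnel open, Theia would be reachable at http://localhost:8181 in your browser:
ssh -i '~/.config/bibigrid/keys/tempKey_bibi-6jh83w0n3vsip90' -L 8181:localhost:8181 ubuntu@123.45.67.890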
In this section, you will execute the resFinder
workflow to create a heatmap of antibiotic resistances using your cluster. We will only focus on the workflow language Nextflow within this tutorial. However, you could use any software that comes with a SLURM executor instead or even run the jobs directly through SLURM's CLI.
Digression: Job Scheduling (SLURM)
Slurm is used for job scheduling/workload management. To see all nodes in your cluster, execute sinfo. You will notice that the workers are idle~. That means they are idle and ~ (powered down). Slurm uses many symbols and words to indicate node states. See here for more about that. To see all running jobs, execute squeue. You will notice that no job is currently running.
After successfully connecting to Theia IDE, we will now run our first job on our cluster. Let's start with a "hello world".
- Open a terminal
- Create a new shell script nano /vol/spool/helloworld.sh:
#!/bin/bash
echo Hello from $(hostname) !
sleep 10
- Make helloworld.sh executable using chmod: chmod u+x /vol/spool/helloworld.sh
- Change into the /vol/spool/ directory: cd /vol/spool/
- Submit this script as an array job 50 times: sbatch --array=1-50 --job-name=helloworld helloworld.sh (run the job 50 times). The job helloworld runs now. It will take a while to finish, but you can already inspect some information while it runs.
- The master will now power up worker nodes (as you described in bibigrid.yml) to assist with this job. Execute sinfo after a few seconds to see the current node status.
- View information about all scheduled jobs by executing squeue. You will see your job helloworld there.
- You can see helloworld's output using cat: cat /vol/spool/slurm-*.out
Execute on the master in order to install Java (required by Nextflow) and download Nextflow to /vol/spool:
sudo apt install default-jre
cd /vol/spool
wget -qO- https://get.nextflow.io | bash
Execute locally in this repository's folder in order to copy our test workflow to the master (use your own key path and master IP):
scp -i '~/.config/bibigrid/keys/tempKey_bibi-6jh83w0n3vsip90' resources/Resistance_Nextflow.tar.xz ubuntu@123.45.67.890:/vol/spool/Resistance_Nextflow.tar.xz
Execute on the remote machine within /vol/spool in order to unpack our workflow and run it on the master:
tar -xvf Resistance_Nextflow.tar.xz
./nextflow run resFinder.nf
Using squeue in another terminal will show you that this execution is not running on our Slurm cluster.
In order to run our workflow on our Slurm cluster, we need to set the executor to slurm. We have done that using a profile definition (see nextflow.config).
./nextflow run resFinder.nf -profile slurm
Once our workflow has finished, we can see the generated heatmap in outputs/collected_heatmaps/.
Ansible, an open source community project by Red Hat, enables the idempotent setup of servers - installing software you need and so on. Knowing more about Ansible can be very helpful when handling clusters.
Let's automate our setup using Ansible! First, let us include the role additional. Open ~/playbook/site.yml and add additional to the hosts: master section:
- become: 'yes'
  hosts: master
  roles:
    - role: bibigrid
      tags:
        - bibigrid
        - bibigrid-master
    - role: additional
      tags:
        - additional
      become: False
  vars_files:
    - vars/common_configuration.yml
    - vars/hosts.yml
Next, let us take a look at what the additional role actually does. Currently, it just shows a debug message. We would like to add what we have done on our cluster so far:
- debug:
    msg:
      - "Hello {{ ansible_user }}!"
- name: Unarchive workflow archive from GitHub repository
  unarchive:
    src: "https://github.com/deNBI/bibigrid_clum/raw/main/resources/Resistance_Nextflow.tar.xz"
    dest: "/vol/spool/"
    remote_src: yes
- name: Install Java JRE on Debian/Ubuntu
  become: True
  apt:
    name: default-jre # Package name for the Java JRE on Debian-based systems
    state: present # Ensure that the package is present; "latest" works as well
- name: Get Nextflow
  shell: wget -qO- https://get.nextflow.io | bash
  args:
    chdir: /vol/spool/
- name: Execute Nextflow workflow
  shell: ./nextflow run resFinder.nf -profile slurm
  args:
    chdir: "/vol/spool" # Change to the directory where your workflow resides
And let's execute our role. But first we need to remove everything we have done manually (for simplicity we will not uninstall Java):
sudo rm -r /vol/spool/* # in order to reset
bibiplay -t additional
Taking a look at /vol/spool/, we can see that the outputs folder has been generated once again.
Terminating a running cluster is quite simple. Execute ./bibigrid.sh -i bibigrid.yml -t -cid [cluster-id] -v.
You have probably already guessed it: ./bibigrid.sh -i bibigrid.yml -t also does the trick, since BiBiGrid will fall back on your last created cluster if no cluster-id is specified.
Congratulations! You have finished BiBiGrid's Hands-on.
You may want to take a look at the "real" bibigrid.yml inside BiBiGrid's repository. It has a few more keys. However, everything you learned here stays true.
If you would like to deepen your knowledge maybe give BiBiGrid's Features or the Software used by BiBiGrid a read.
You can learn more about Ansible (and Ansible Galaxy) here:
Issues can be created here.