Our Platform supports users in training their own weights for predefined models and even in creating their own models. In the first case, this can help improve model quality by training on data coming from a particular seismic network (see e.g. Johnson et al. 2021).
To start training, we recommend using our HPC clusters, which provide sufficient computing power and access to GPUs. The prerequisites are as follows:
An environment for training AI models. Please install the epos-ai-train environment by downloading epos-ai-train.yml from our repository and running:
mamba env create -f epos-ai-train.yml
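To confirm that the environment was created, you can list the available environments; epos-ai-train should appear in the output (the exact listing depends on your installation):
conda env list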
Activate the environment; this is necessary for each new shell session:
conda activate epos-ai-train
To check that you have enabled the environment correctly, verify that the name of the environment appears in the shell prompt.
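For example, the prompt should look similar to the following (the username and hostname here are illustrative):
(epos-ai-train) [user@login01 ~]$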
To submit a job on an HPC cluster, use the following job script scheme:
#!/bin/bash
## Set your JOB_NAME to make it easier to see in the job queue
#SBATCH --job-name=JOB_NAME
## Max task execution time (format is HH:MM:SS)
#SBATCH --time=00:15:00
## Name of the grant to which resource usage will be charged
#SBATCH --account=GRANT_ID
## Name of the partition
#SBATCH --partition=plgrid
## Number of allocated nodes
#SBATCH --nodes=1
## Number of tasks per node (by default this corresponds to the number of cores allocated per node)
#SBATCH --cpus-per-task=1

## Change to the sbatch working directory
cd $SLURM_SUBMIT_DIR

## Activate the Python environment
source PATH_TO_MINIFORGE_INSTALLATION/miniforge/bin/activate
conda activate epos-ai-train

## Your usual invocation method, e.g.
python train.py
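Save the script to a file (the name train.slurm below is only an example) and submit it with sbatch; you can then check its state in the queue with squeue:
sbatch train.slurm
squeue -u $USER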
To enable GPU access for the job, please add the following line to the top of the script:
#SBATCH --gres=gpu:1
where :1 states the number of GPU devices and can be changed to a higher number. To find out which partitions provide access to GPUs, please run:
sinfo -o '%P || %N || %G' | column -t
If the last column is not (null), the partition has access to GPU devices. Please set the correct partition in the job script.
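For illustration, the output might look similar to the following (the partition names, node lists, and GPU counts are hypothetical); here the second partition would be the one to use for GPU jobs:
PARTITION   ||  NODELIST       ||  GRES
plgrid      ||  ac[0001-0100]  ||  (null)
plgrid-gpu  ||  ac[0101-0110]  ||  gpu:8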
To find out more about submitting jobs, please see the corresponding documentation for Ares or Athena.
To run an interactive session of Jupyter notebooks using JupyterLab, please follow the official PLGrid documentation.
An exemplary SLURM job script can be as follows:
#!/bin/bash
## Set your JOB_NAME to make it easier to see in the job queue
#SBATCH --job-name=JOB_NAME
## Max task execution time (format is HH:MM:SS)
#SBATCH --time=00:15:00
## Name of the grant to which resource usage will be charged
#SBATCH --account=GRANT_ID
## Name of the partition
#SBATCH --partition=plgrid
## Number of allocated nodes
#SBATCH --nodes=1
## Number of tasks per node (by default this corresponds to the number of cores allocated per node)
#SBATCH --cpus-per-task=1
## Number of GPUs
#SBATCH --gres=gpu:1

## Get tunneling info
XDG_RUNTIME_DIR=""
ipnport=$(shuf -i8000-9999 -n1)
ipnip=$(hostname -i)
user=$USER

## Print tunneling instructions to the job output file
echo -e "
Copy/Paste this in your local terminal to ssh tunnel with remote
-----------------------------------------------------------------
ssh -o ServerAliveInterval=300 -N -L $ipnport:$ipnip:$ipnport ${user}@ares.cyfronet.pl
-----------------------------------------------------------------

Then open a browser on your local machine to the following address
------------------------------------------------------------------
localhost:$ipnport  (prefix w/ https:// if using password)
------------------------------------------------------------------
"

## Activate the Python environment
source PATH_TO_MINIFORGE_INSTALLATION/miniforge/bin/activate
conda activate epos-ai-train

## Change to the sbatch working directory
cd $SLURM_SUBMIT_DIR

jupyter lab --no-browser --port=$ipnport --ip=$ipnip
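After submitting this script with sbatch, the tunneling instructions are written to the job's output file; assuming the default SLURM output naming (slurm-JOBID.out, where JOBID is the number printed by sbatch, and jupyter.slurm is only an example filename), they can be retrieved with:
sbatch jupyter.slurm
cat slurm-JOBID.out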
To access the started JupyterLab instance, please follow the official guide.