...
Most Relion jobs can be run as batch jobs using SLURM.
Log into Prometheus login node
Code Block language bash title Log into Prometheus login node
ssh <login>@pro.cyfronet.pl
Move to Relion project directory
Code Block language bash title Change directories
cd $SCRATCH/<relion-project>
Info title Usage of filesystems
During computations, the Relion project should be stored on the $SCRATCH filesystem on Prometheus. More info: https://kdm.cyfronet.pl/portal/Prometheus:Basics#Disk_storage. For long-term storage, use the $PLG_GROUPS_STORAGE/<team_name> filesystem.
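If results should be kept after the computations finish, the project can be copied from $SCRATCH to the group storage. Below is a minimal sketch using rsync; <relion-project> and <team_name> are placeholders for your own names.
Code Block language bash title Copy project to group storage (sketch)
# Copy the whole project tree; -a preserves permissions and timestamps, -v lists files
rsync -av $SCRATCH/<relion-project> $PLG_GROUPS_STORAGE/<team_name>/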
Submit job
Code Block language bash title Job submission
sbatch script.slurm
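After submission, the job can be monitored with standard SLURM commands, for example:
Code Block language bash title Check job status
# List your pending and running jobs
squeue -u $USER
# Show accounting details of a job (replace <job-id>)
sacct -j <job-id>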
Example CPU-only SLURM script
Code Block language bash title Relion CPU-only SLURM script
#!/bin/bash
# Number of allocated nodes
#SBATCH --nodes=1
# Number of MPI processes per node
#SBATCH --ntasks-per-node=4
# Number of threads per MPI process
#SBATCH --cpus-per-task=6
# Partition
#SBATCH --partition=plgrid
# Requested maximal walltime
#SBATCH --time=0-1
# Requested memory per node
#SBATCH --mem=110GB
# Computational grant
#SBATCH --account=<name-of-grant>

export RELION_SCRATCH_DIR=$SCRATCHDIR

module load plgrid/tools/relion/3.1.2

mpirun <relion-command>
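As an illustration of what <relion-command> might look like, the sketch below runs a hypothetical 2D classification with relion_refine_mpi; the input STAR file, output path, and classification parameters are assumptions, not values from this guide.
Code Block language bash title Hypothetical <relion-command> for CPU-only 2D classification
# Illustrative only: 2D classification into 50 classes over 25 iterations,
# with --j taking the thread count from the SLURM allocation
mpirun relion_refine_mpi \
    --i particles.star --o Class2D/job001/run \
    --K 50 --iter 25 --ctf \
    --j $SLURM_CPUS_PER_TASK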
Example GPU SLURM script
Code Block language bash title Relion GPU SLURM script
#!/bin/bash
# Number of allocated nodes
#SBATCH --nodes=1
# Number of MPI processes per node
#SBATCH --ntasks-per-node=4
# Number of threads per MPI process
#SBATCH --cpus-per-task=6
# Partition
#SBATCH --partition=plgrid-gpu
# Number of GPUs per node
#SBATCH --gres=gpu:2
# Requested maximal walltime
#SBATCH --time=0-1
# Requested memory per node
#SBATCH --mem=110GB
# Computational grant
#SBATCH --account=<name-of-grant>

export RELION_SCRATCH_DIR=$SCRATCHDIR

module load plgrid/tools/relion/3.1.2

mpirun <relion-command> --gpu $CUDA_VISIBLE_DEVICES
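The same idea applies on the GPU partition; a hypothetical 3D auto-refinement is sketched below, where --gpu hands Relion the devices granted by SLURM. File names and options are again only illustrative.
Code Block language bash title Hypothetical GPU <relion-command> for 3D auto-refinement
# Illustrative only: auto-refinement against a reference map, using the GPUs
# listed in $CUDA_VISIBLE_DEVICES by the SLURM GPU allocation
mpirun relion_refine_mpi \
    --i particles.star --o Refine3D/job002/run \
    --ref map.mrc --auto_refine --split_random_halves \
    --j $SLURM_CPUS_PER_TASK --gpu $CUDA_VISIBLE_DEVICES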