Most Relion jobs can be run as batch jobs using SLURM.

  1. Log in to the Prometheus login node

    Code Block
    languagebash
    titleLog into Prometheus login node
    ssh <login>@pro.cyfronet.pl


  2. Move to the Relion project directory

    Code Block
    languagebash
    titleChange directories
    cd $SCRATCH/<relion-project>


    Info
    titleUsage of filesystems

    During computations the Relion project should be kept on the $SCRATCH filesystem on Prometheus. More info: https://kdm.cyfronet.pl/portal/Prometheus:Basics#Disk_storage. For longer-term storage, use the $PLG_GROUPS_STORAGE/<team_name> filesystem.

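    As a rough sketch (the paths are placeholders, substitute your own <relion-project> and <team_name>), a finished project can be copied from $SCRATCH to group storage like this:

    Code Block
    languagebash
    titleCopy a finished project to group storage (example)
    # Copy the Relion project from scratch to long-term group storage
    # <relion-project> and <team_name> are placeholders
    cp -r $SCRATCH/<relion-project> $PLG_GROUPS_STORAGE/<team_name>/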

  3. Submit job

    Code Block
    languagebash
    titleJob submission
    sbatch script.slurm

    1. Example CPU-only SLURM script

      Code Block
      languagebash
      titleRelion CPU-only SLURM script
      #!/bin/bash
      # Number of allocated nodes
      #SBATCH --nodes=1
      # Number of MPI processes per node 
      #SBATCH --ntasks-per-node=4
      # Number of threads per MPI process
      #SBATCH --cpus-per-task=6
      # Partition
      #SBATCH --partition=plgrid
      # Requested maximum walltime (days-hours format; 0-1 = 1 hour)
      #SBATCH --time=0-1
      # Requested memory per node
      #SBATCH --mem=110GB
      # Computational grant
      #SBATCH --account=<name-of-grant>
      
      # Point Relion's temporary/scratch files at the job scratch directory
      export RELION_SCRATCH_DIR=$SCRATCHDIR
      
      module load plgrid/tools/relion/3.1.2
      mpirun <relion-command>
      

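      In the script above, <relion-command> stands for the command line that the Relion GUI prints for the chosen job type; the exact options depend on that job. As a rough sketch only, the per-process thread count requested with --cpus-per-task can be passed to Relion through its --j option:

      Code Block
      languagebash
      titleMatching Relion threads to the allocation (sketch)
      # One MPI process per --ntasks-per-node; --j sets threads per process,
      # here taken from the --cpus-per-task value requested from SLURM
      mpirun <relion-command> --j $SLURM_CPUS_PER_TASK
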
    2. Example GPU SLURM script

      Code Block
      languagebash
      titleRelion GPU SLURM script
      #!/bin/bash
      # Number of allocated nodes
      #SBATCH --nodes=1
      # Number of MPI processes per node 
      #SBATCH --ntasks-per-node=4
      # Number of threads per MPI process
      #SBATCH --cpus-per-task=6
      # Partition
      #SBATCH --partition=plgrid-gpu
      # Number of GPUs per node
      #SBATCH --gres=gpu:2
      # Requested maximum walltime (days-hours format; 0-1 = 1 hour)
      #SBATCH --time=0-1
      # Requested memory per node
      #SBATCH --mem=110GB
      # Computational grant
      #SBATCH --account=<name-of-grant>
      
      # Point Relion's temporary/scratch files at the job scratch directory
      export RELION_SCRATCH_DIR=$SCRATCHDIR
      
      module load plgrid/tools/relion/3.1.2
      mpirun <relion-command> --gpu $CUDA_VISIBLE_DEVICES
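
After submitting with sbatch, the state of the job can be checked with standard SLURM commands (generic SLURM usage, not specific to Relion), for example:

Code Block
languagebash
titleChecking job status
# List your queued and running jobs
squeue -u $USER
# Show accounting information for a finished job (replace <job-id> with the ID printed by sbatch)
sacct -j <job-id>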