...

  • inside a graphical interactive job using the pro-viz service (main documentation, in Polish: Obliczenia w trybie graficznym: pro-viz)
  • in a SLURM batch job, through a SLURM script submitted from the command line
  • in a SLURM batch job submitted from the Relion GUI started via the pro-viz service in a dedicated partition

Interactive Relion job with Relion GUI

To start an interactive Relion job with access to the Relion GUI:

  1. Log into Prometheus login node

    Code Block
    languagebash
    titleLog into Prometheus login node
    ssh <login>@pro.cyfronet.pl


  2. Load the pro-viz module

    Code Block
    languagebash
    titleLoad pro-viz module
    module load tools/pro-viz


  3. Start a pro-viz job
    1. Submit the pro-viz job to the queue


      1. CPU-only job

        Code Block
        languagebash
        titleSubmission of CPU pro-viz job
        pro-viz start -N <number-of-nodes> -P <cores-per-node> -p <partition/queue> -t <maximal-time> -m <memory>


      2. GPU job

        Code Block
        languagebash
        titleSubmission of GPU pro-viz job
        pro-viz start -N <number-of-nodes> -P <cores-per-node> -g <number-of-gpus-per-node> -p <partition/queue> -t <maximal-time> -m <memory>
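
      For example, a 12-hour single-node session could be requested as shown below. All values are illustrative and should be adjusted to your grant and workload; the exact time and memory formats accepted by pro-viz are described in its documentation.

        Code Block
        languagebash
        titleExample pro-viz submissions (illustrative values)
        # CPU-only session: 1 node, 4 cores, 16 GB of memory, 12 h of walltime
        pro-viz start -N 1 -P 4 -p plgrid -t 12:00:00 -m 16G
        # GPU session: 1 node, 24 cores, 2 GPUs (GPU partitions require a suitable grant)
        pro-viz start -N 1 -P 24 -g 2 -p plgrid-gpu -t 12:00:00 -m 110G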


    2. Check the status of the submitted job


      Code Block
      languagebash
      titleStatus of pro-viz job(s)
      pro-viz list


    3. Get the password for the pro-viz session (once the job is running)

      Code Block
      languagebash
      titlePro-viz job password
      pro-viz password <JobID>

      Example output:

      Code Block
      languagebash
      titlePro-viz password example output
      Web Access link:
        https://viz.pro.cyfronet.pl/go?c=<hash>&token=<token> 
      link is valid until: Sun Nov 14 02:04:02 CET 2021
      
      session password (for external client): <password>
      full commandline (for external client): vncviewer -SecurityTypes=VNC,UnixLogin,None -via <username>@pro.cyfronet.pl -password=<password> <worker-node>:<display>


    4. Connect to the graphical pro-viz session
      1. use the web link obtained in the previous step, or
      2. use a VNC client (e.g. TurboVNC); client configuration is described in Obliczenia w trybie graficznym: pro-viz (in Polish)
  4. Set up the Relion environment
    1. When connected to the GUI, open a terminal and load the Relion module

      Code Block
      languagebash
      titleLoad Relion module
      module load plgrid/tools/relion


    2. Start the Relion GUI in the background

      Code Block
      languagebash
      titleStart relion
      relion &
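
      Note that Relion treats the directory it is started from as the project directory, so it is usually best to change into the project directory on $SCRATCH first (<relion-project> is a placeholder for your own directory name; see also the filesystem note in the batch-job section below):

      Code Block
      languagebash
      titleStart Relion from the project directory
      cd $SCRATCH/<relion-project>
      relion &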


  5. Use the Relion GUI for computations.

...

  1. After finishing work, terminate the job

    Code Block
    languagebash
    titleStop pro-viz job
    pro-viz stop <JobID>


Relion in SLURM batch jobs

...

  1. Log into Prometheus login node

    Code Block
    languagebash
    titleLog into Prometheus login node
    ssh <login>@pro.cyfronet.pl


  2. Move to the Relion project directory

    Code Block
    languagebash
    titleChange directories
    cd $SCRATCH/<relion-project>


    Info
    titleUsage of filesystems

    During computations the Relion project should be stored on the $SCRATCH filesystem on Prometheus (more info: https://kdm.cyfronet.pl/portal/Prometheus:Basics#Disk_storage). For long-term storage use the $PLG_GROUPS_STORAGE/<team_name> filesystem.
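
    For example (the paths are placeholders), a project can be staged onto $SCRATCH before computations and copied back afterwards:

    Code Block
    languagebash
    titleStaging a project between filesystems (illustrative)
    # stage the project onto the fast scratch filesystem
    cp -r $PLG_GROUPS_STORAGE/<team_name>/<relion-project> $SCRATCH/
    # ... run the computations ...
    # copy the results back for long-term storage
    cp -r $SCRATCH/<relion-project> $PLG_GROUPS_STORAGE/<team_name>/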


  3. Submit the job

    Code Block
    languagebash
    titleJob submission
    sbatch script.slurm

    1. Example CPU-only SLURM script

      Code Block
      languagebash
      titleRelion CPU-only SLURM script
      #!/bin/bash
      # Number of allocated nodes
      #SBATCH --nodes=1
      # Number of MPI processes per node 
      #SBATCH --ntasks-per-node=4
      # Number of threads per MPI process
      #SBATCH --cpus-per-task=6
      # Partition
      #SBATCH --partition=plgrid
      # Requested maximal walltime
      #SBATCH --time=0-1
      # Requested memory per node
      #SBATCH --mem=110GB
      # Computational grant
      #SBATCH --account=<name-of-grant>
      
      export RELION_SCRATCH_DIR=$SCRATCHDIR
      
      module load plgrid/tools/relion/3.1.2
      mpirun <relion-command>
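
      With the settings above the job uses 4 MPI processes × 6 threads = 24 cores, i.e. one full 24-core Prometheus node. As an illustration only, <relion-command> could be a 3D classification run such as the one below; all file names and numerical parameters are hypothetical, and the actual command is best copied from the Relion GUI (see the note after the GPU script):

      Code Block
      languagebash
      titleExample CPU Relion command (illustrative)
      # --j sets the number of threads per MPI process;
      # matching it to --cpus-per-task avoids oversubscribing the node
      mpirun relion_refine_mpi \
          --o Class3D/job010/run --i particles.star --ref initial_model.mrc \
          --K 4 --iter 25 --ctf --particle_diameter 200 \
          --j $SLURM_CPUS_PER_TASK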
      


    2. Example GPU SLURM script

      Code Block
      languagebash
      titleRelion GPU SLURM script
      #!/bin/bash
      # Number of allocated nodes
      #SBATCH --nodes=1
      # Number of MPI processes per node 
      #SBATCH --ntasks-per-node=4
      # Number of threads per MPI process
      #SBATCH --cpus-per-task=6
      # Partition
      #SBATCH --partition=plgrid-gpu
      # Number of GPUs per node
      #SBATCH --gres=gpu:2
      # Requested maximal walltime
      #SBATCH --time=0-1
      # Requested memory per node
      #SBATCH --mem=110GB
      # Computational grant
      #SBATCH --account=<name-of-grant>
      
      export RELION_SCRATCH_DIR=$SCRATCHDIR
      
      module load plgrid/tools/relion/3.1.2
      mpirun <relion-command> --gpu $CUDA_VISIBLE_DEVICES
      
      


      Info
      titleGPUs usage

      GPUs are available only for selected grants, in the plgrid-gpu and plgrid-gpu-v100 partitions. Always use --gpu $CUDA_VISIBLE_DEVICES so that the job uses only the GPUs allocated to it.
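
      As an illustration only (file names and parameters are hypothetical), a GPU-accelerated auto-refinement command could look as follows:

      Code Block
      languagebash
      titleExample GPU Relion command (illustrative)
      # $CUDA_VISIBLE_DEVICES passes the GPU IDs allocated by SLURM to Relion;
      # note that Relion 3D auto-refine typically expects an odd number of
      # MPI processes (one master plus two equal half-sets of workers)
      mpirun relion_refine_mpi \
          --o Refine3D/job012/run --i particles.star --ref initial_model.mrc \
          --auto_refine --split_random_halves --ctf --particle_diameter 200 \
          --j $SLURM_CPUS_PER_TASK --gpu $CUDA_VISIBLE_DEVICES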


      Info
      titleRelion command

      The Relion command syntax can be checked in the GUI and copied into the script.



  4. Check the job status


    Code Block
    languagebash
    titleJob status
    squeue

    or

    Code Block
    languagebash
    titleJob status
    pro-jobs
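
    Note that squeue without arguments lists all jobs on the cluster; to show only your own jobs, use the standard SLURM filter:

    Code Block
    languagebash
    titleJob status for the current user
    squeue -u $USER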


Submitting SLURM jobs from Relion GUI

  1. Start a pro-viz job as described above, but using the plgrid-services partition/queue.
  2. In the Relion GUI use "Submit to queue" in the "Running" tab
    1. Select the submission script from the directory
  3. Monitor jobs either from the Relion GUI or from the command line using the squeue or pro-jobs commands