
Disclaimer

Ares is still under development, and despite our best efforts, Ares might experience unscheduled outages or even data loss.


Support

Please contact the PLGrid Helpdesk (https://helpdesk.plgrid.pl/) regarding any difficulties in using the cluster.

For important information and announcements, please follow this page and the messages displayed at login.

Access to Ares

Computing resources on Ares are assigned based on PLGrid computing grants (more information can be found here: Obliczenia w PLGrid). To perform computations on Ares, you need to obtain a computing grant and apply for the Ares access service through the PLGrid portal.

If your grant is active and you have applied for the access service, the request should be accepted within about half an hour. Please report any issues through the Helpdesk.

Machine description

Available login nodes:

  • ssh <login>@ares.cyfronet.pl

Note that Ares uses PLGrid accounts and grants. Make sure to request the "Ares access" service in the PLGrid portal.

Ares is built with an InfiniBand EDR interconnect and nodes of the following specification:

Partition | Number of nodes | CPU | RAM | Proportional RAM for one CPU | Proportional RAM for one GPU | Proportional CPUs for one GPU | Accelerator
plgrid (includes plgrid-*) | 532 + 256 (if not used by plgrid-bigmem) | 48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz | 192GB | 3850MB | n/a | n/a | n/a
plgrid-bigmem | 256 | 48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz | 384GB | 7700MB | n/a | n/a | n/a
plgrid-gpu-v100 | 9 | 32 cores, Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz | 384GB | n/a | 46000MB | 4 | 8x Tesla V100-SXM2

Job submission

Ares uses the Slurm resource manager; jobs should be submitted to the following partitions:

Name | Timelimit | Resource type (account suffix) | Access requirements | Description
plgrid | 72h | -cpu | Generally available. | Standard partition.
plgrid-testing | 1h | -cpu | Generally available. | High priority, testing jobs, limited to 1 running job and 2 nodes.
plgrid-now | 12h | -cpu | Generally available. | The highest priority, interactive jobs, limited to 1 running or queued job and 1 node.
plgrid-long | 168h | -cpu | Requires a grant with a maximum job runtime of 168h. | Used for jobs with extended runtime.
plgrid-bigmem | 72h | -cpu-bigmem | Requires a grant with CPU-BIGMEM resources. | Used for jobs requiring an extended amount of memory.
plgrid-gpu-v100 | 48h | -gpu | Requires a grant with GPGPU resources. | GPU partition.

If you are unsure how to properly configure your job on Ares, please consult this guide: Job configuration
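
For reference, a minimal batch script could look like the sketch below. It assumes a hypothetical executable my_program and a placeholder grant name; adjust the partition, account suffix, and resources to your own grant (see the next section for account naming).

Code Block
#!/bin/bash
#SBATCH --job-name=example           # arbitrary job name
#SBATCH --partition=plgrid           # one of the partitions from the table above
#SBATCH --account=grantname-cpu      # your grant name with the -cpu suffix
#SBATCH --time=01:00:00              # walltime, must fit within the partition time limit
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=3850M          # proportional memory per CPU on plgrid nodes

module purge
./my_program                         # placeholder for your own application

The script is submitted with sbatch job.sh and can be monitored with the hpc-jobs command described below.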

Accounts and computing grants

Ares uses a new naming scheme for CPU and GPU computing accounts, which are supplied with the -A parameter of the sbatch command. Currently, accounts are named in the following manner:

Resource | Account name
CPU | grantname-cpu
CPU bigmem nodes | grantname-cpu-bigmem
GPU | grantname-gpu

Please mind that sbatch -A grantname won't work on its own. You need to add the appropriate -cpu, -cpu-bigmem, or -gpu suffix! Available computing grants, with their respective account names (allocations), can be viewed by using the hpc-grants command.
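
As a sketch, with grantname and job.sh as placeholders:

Code Block
# list your grants and the corresponding account (allocation) names
hpc-grants

# CPU job billed to the grant's CPU allocation
sbatch -A grantname-cpu -p plgrid job.sh

# GPU job billed to the grant's GPU allocation (one GPU requested)
sbatch -A grantname-gpu -p plgrid-gpu-v100 --gres=gpu:1 job.sh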

Resources allocated on Ares don't use normalization, which was used on Prometheus and previous clusters. 1 hour of CPU time equals 1 hour spent on a computing core with a proportional amount of memory (consult the table above). The billing system accounts for jobs that use more memory than the proportional amount: if a job uses more memory per allocated CPU than the proportional amount, it is billed as if it had used more CPUs. The billed amount can be calculated by dividing the used memory by the proportional memory per core and rounding the result up to the nearest integer. Jobs on CPU partitions are always billed in CPU-hours.

The same principle applies to GPU resources, where the GPU-hour is the billing unit, and proportional memory per GPU and proportional CPUs per GPU are defined (consult the table above).

The cost can be expressed as a simple algorithm:

Code Block
cost_cpu    = job_cpus_used * job_duration
cost_memory = ceil(job_memory_used/memory_per_cpu) * job_duration
final_cost  = max(cost_cpu, cost_memory)
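
For example, a hypothetical 10-hour job on the plgrid partition allocating 4 CPUs and 30000MB of memory in total (with 3850MB per CPU being the proportional amount) would be billed for the memory rather than the CPUs:

Code Block
cost_cpu    = 4 * 10h                 = 40 CPU-hours
cost_memory = ceil(30000/3850) * 10h  = 8 * 10h = 80 CPU-hours
final_cost  = max(40, 80)             = 80 CPU-hours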

For GPUs, with the proportional memory per GPU and proportional CPUs per GPU from the table above, the cost is:

Code Block
cost_gpu    = job_gpus_used * job_duration
cost_cpu    = ceil(job_cpus_used/cpus_per_gpu) * job_duration
cost_memory = ceil(job_memory_used/memory_per_gpu) * job_duration
final_cost  = max(cost_gpu, cost_cpu, cost_memory)
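
Similarly, a hypothetical 10-hour job on plgrid-gpu-v100 allocating 1 GPU, 8 CPUs, and 50000MB of memory (proportional values: 4 CPUs and 46000MB per GPU) would be billed for the CPUs and memory it ties up rather than the single GPU:

Code Block
cost_gpu    = 1 * 10h                  = 10 GPU-hours
cost_cpu    = ceil(8/4) * 10h          = 2 * 10h = 20 GPU-hours
cost_memory = ceil(50000/46000) * 10h  = 2 * 10h = 20 GPU-hours
final_cost  = max(10, 20, 20)          = 20 GPU-hours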

Storage

Available storage spaces are described in the following table:

Location | Location in the filesystem | Purpose
$HOME | /net/people/plgrid/<login> | Storing own applications and configuration files. Limited to 10GB.
$SCRATCH | /net/ascratch/people/<login> | High-speed storage for short-lived data heavily used in computations. Data older than 30 days can be deleted without notice. It is best to rely on the $SCRATCH environment variable.
$PLG_GROUPS_STORAGE/<group name> | /net/pr2/projects/plgrid/<group name> | Long-term group storage for data kept for the duration of the computing grant. Should be used for storing significant amounts of data.

Current usage, capacity and other storage attributes can be checked by issuing the hpc-fs command.
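
As an illustration only (the paths, file names, and my_program are placeholders), a common pattern is to stage data into $SCRATCH for the duration of a job and move the results to group storage afterwards:

Code Block
# per-job working directory on the fast scratch filesystem
WORKDIR="$SCRATCH/$SLURM_JOB_ID"
mkdir -p "$WORKDIR" && cd "$WORKDIR"

# stage input data from long-term group storage
cp "$PLG_GROUPS_STORAGE/<group name>/input.dat" .

./my_program input.dat

# copy results back to long-term storage and clean up scratch
cp results.dat "$PLG_GROUPS_STORAGE/<group name>/"
cd && rm -rf "$WORKDIR"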

System

...

Utilities

Please use the following commands to interact with the account and storage management system:

  • hpc-grants - shows available grants, resource allocations, and consumed resources
  • hpc-fs - shows available storage
  • hpc-jobs - shows currently pending/running jobs
  • hpc-jobs-history - shows information about past jobs

...

Applications and libraries are available through the modules system. Please note that the module structure was flattened, and module paths have changed compared to Prometheus! The list of available modules can be obtained by issuing the command:

module avail

The list is searchable using the '/' key. A specific module can be loaded with the module add command:

...

and the environment can be purged by:

module purge
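
A typical interactive session might look like the following sketch; <module-name>/<version> stands for any entry from the module avail listing:

Code Block
module avail                          # browse the available modules ('/' to search)
module add <module-name>/<version>    # load a specific module
module list                           # show the currently loaded modules
module purge                          # unload everything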

Sample job scripts

Example job scripts are available on this page: Sample scripts

More information

Ares follows Prometheus' configuration and usage patterns. Prometheus documentation can be found here: https://kdm.cyfronet.pl/portal/Prometheus:Basics