Support
Please contact the PLGrid Helpdesk (https://helpdesk.plgrid.pl/) regarding any difficulties using the cluster.
Machine description
Available login nodes:
- ssh <login>@ares.cyfronet.pl
...
Partition | Number of nodes | CPU | RAM | Accelerator |
---|---|---|---|---|
plgrid and plgrid-* | 532 | 48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz | 192GB | |
plgrid-bigmem | 256 | 48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz | 384GB | |
plgrid-gpu-v100 | 9 | 32 cores, Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz | 384GB | Tesla V100-SXM2 |
Job submission
Ares uses the Slurm resource manager; jobs should be submitted to the following partitions:
Name | Timelimit | Remarks |
---|---|---|
plgrid | 72h | Standard partition. |
plgrid-long | 168h | Used for jobs with extended runtime. |
plgrid-testing | 1h | High priority; intended for test jobs; limited to 3 jobs. |
plgrid-bigmem | 72h | For jobs requiring an extended amount of memory. |
plgrid-now | 12h | Highest priority; intended for interactive jobs; limited to 1 running job. |
plgrid-gpu-v100 | 72h | GPU partition. |
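Based on the partition table above, a minimal Slurm batch script for the standard plgrid partition might look like the sketch below. The grant account name `plgexample` is a placeholder (substitute your own grant ID, as reported by `hpc-grants`), and the resource values are purely illustrative.

```shell
# Hypothetical minimal batch script for the plgrid partition.
# "plgexample" is a placeholder grant account, not a real one.
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=example-job
#SBATCH --partition=plgrid
#SBATCH --account=plgexample
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --mem=4G

srun hostname
EOF

# Submit from a login node with:
#   sbatch job.sh
```

Adjust `--partition`, `--time`, and `--mem` to match the limits of the partition you target; for example, jobs longer than 72h belong in plgrid-long.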
Storage
Available storage spaces are described in the following table:
...
Current usage, capacity, and other storage attributes can be checked by issuing the hpc-fs command.
System utilities
Please use the following commands for interacting with the account and storage management system:
- hpc-grants - shows available grants and resource allocations
- hpc-fs - shows available storage
- hpc-jobs - shows currently pending/running jobs
- hpc-jobs-history - shows information about past jobs
Software
Applications and libraries are available through the modules system; a list of available modules can be obtained by issuing the command:
...
and the environment can be purged by:
module purge
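As a sketch, a typical module workflow on a login node might look like the following; the `gcc` module name is an assumption for illustration, so check the output of `module avail` on Ares for the actual names and versions.

```shell
module avail        # list all available modules
module load gcc     # load a module (name is illustrative)
module list         # show currently loaded modules
module purge        # remove all loaded modules from the environment
```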
More information
Ares follows Prometheus' configuration and usage patterns. The Prometheus documentation can be found here: https://kdm.cyfronet.pl/portal/Prometheus:Basics