Support
Please get in touch with the PLGrid Helpdesk: https://helpdesk.plgrid.pl/ regarding any difficulties using the cluster.
Machine description
Available login nodes:
- ssh <login>@ares.cyfronet.pl
Ares is built with an InfiniBand EDR interconnect and nodes of the following specifications:
Partition | Number of nodes | CPU | RAM | Accelerator |
---|---|---|---|---|
plgrid and plgrid-* | 532 | 48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz | 192GB | |
plgrid-bigmem | 256 | 48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz | 384GB | |
plgrid-gpu-v100 | 9 | 32 cores, Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz | 384GB | Tesla V100-SXM2 |
Job submission
Jobs should be submitted to the following partitions:
Name | Timelimit | Remarks |
---|---|---|
plgrid | 72h | Standard partition. |
plgrid-long | 168h | Used for jobs with extended runtime. |
plgrid-testing | 1h | High priority, testing jobs, limited to 3 jobs. |
plgrid-bigmem | 72h | Jobs using an extended amount of memory. |
plgrid-now | 12h | The highest priority, interactive jobs, limited to 1 running job. |
plgrid-gpu-v100 | 72h | GPU partition. |
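As a sketch, a minimal SLURM batch script targeting the standard partition might look like the following. The resource values and the grant/account name are placeholders, not site defaults; adjust them to your own allocation:

```shell
#!/bin/bash
#SBATCH --job-name=example      # arbitrary job name
#SBATCH --partition=plgrid     # one of the partitions listed above
#SBATCH --time=01:00:00        # must fit within the partition's time limit
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --account=<grant name> # placeholder: your PLGrid grant name

# commands executed on the compute node
hostname
```

The script is typically submitted with `sbatch job.sh`. For interactive work on the high-priority `plgrid-now` partition, the usual SLURM pattern is something like `srun -p plgrid-now --time=00:30:00 --pty /bin/bash -l`.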
Storage
Available storage spaces are described in the following table:
Please note that the storage system was modified: the old $SCRATCH is available only on the login01 node, as a read-only filesystem under /net/ascratch/people/<login>. It will remain available until the 28th of March.
Location | Physical location | Purpose |
---|---|---|
$HOME | /net/people/<login> | Storing own applications and configuration files. |
$SCRATCH | /net/pr2/scratch/people/<login> | High-speed storage for short-lived data heavily used in computations. |
group storage | /net/pr2/projects/plgrid/<group name> | Long-term storage for data living for the period of the computing grant. |
If you are using an account named aresXX, please create a proper PLGrid grant,
as aresXX temporary accounts will be disabled in the near future.
Current usage, capacity, and other storage attributes can be checked by issuing the hpc-fs command.
System utilities
Please use the following commands for interacting with the account and storage management system:
- hpc-grants: shows available grants and resource allocations
- hpc-fs: shows available storage
- hpc-jobs: shows currently pending/running jobs
- hpc-jobs-history: shows information about past jobs
Software
Applications and libraries are available through the modules system. A list of available modules can be obtained by issuing the command:
module avail
a module can be loaded by:
module add openmpi/4.1.1-gcc-11.2.0
and the environment can be purged by:
module purge
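Modules are typically loaded inside the batch script itself, so the job environment is reproducible. A sketch, reusing the openmpi module shown above (the binary name is a placeholder; pick module versions from `module avail`):

```shell
#!/bin/bash
#SBATCH --partition=plgrid
#SBATCH --time=00:30:00
#SBATCH --ntasks=4

module purge                         # start from a clean environment
module add openmpi/4.1.1-gcc-11.2.0  # load the MPI stack shown above

mpiexec ./my_mpi_program             # placeholder: your MPI binary
```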
Jobs should be submitted to the plgrid, plgrid-long, plgrid-bigmem, plgrid-gpu-v100 and
other plgrid-* partitions.
The standard partition is built with nodes containing 48 cores and 192 GB of
RAM, while plgrid-bigmem contains nodes with 48 cores and 384 GB of RAM.
The GPU partition contains nodes with 32 cores, 384 GB of memory, and 8 NVIDIA Tesla
V100 GPUs.
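Assuming the standard SLURM generic-resource syntax for GPUs (check the site documentation for the exact GRES name), a GPU job sketch might be:

```shell
#!/bin/bash
#SBATCH --partition=plgrid-gpu-v100
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --gres=gpu:1   # request one of the node's V100 GPUs (assumed standard GRES syntax)

nvidia-smi             # show the GPU(s) allocated to the job
```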