Ares was recently equipped with a new filesystem, which will take over the role of the $SCRATCH space. The new filesystem is significantly faster and greatly improves computing efficiency. This advantage comes at the cost of a reduced maximum size of the scratch space; don't hesitate to contact the helpdesk if this is a significant issue in your case. To fully utilize the new filesystem, we've also updated the data retention policy; please consult the information below. The new scratch is currently available in early access mode. Both storage systems, new and old, will function for a certain period so that the migration process can go smoothly without interruption of work. Instructions on how to use the new filesystem are included below. Soon, the new scratch filesystem will be the default, and the old scratch will be decommissioned. If you choose to use the new scratch space, please don't use the old one.

Migration timeline:

  • 01.01.2025 - The old $SCRATCH space will be switched to read-only mode. All user accounts will be switched to using the new scratch space by default.
  • 17.01.2025 - The old $SCRATCH space will be disconnected from the cluster nodes.

Naming convention

The filesystems are named as follows (an illustrative example follows the list):

  1. ascratch - the OLD scratch filesystem, located at the path: /net/ascratch/people/<login name>/
  2. afscra - the NEW scratch filesystem, located at the path: /net/afscra/people/<login name>/
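
For illustration only (the login name jsmith is hypothetical), the two locations for a single user would be:

  /net/ascratch/people/jsmith/   (OLD scratch)
  /net/afscra/people/jsmith/     (NEW scratch)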

Space management

You can check the availability of the new scratch space using the "hpc-fs" command:

...

Note the $SCRATCH[afscra] entry - this is the new scratch space, while $SCRATCH[ascratch] is the old one. You can store up to 12TB of data and 1 million files in the new scratch space.
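
If you want a quick look at your own usage independently of "hpc-fs", standard tools work as well; this is a generic sketch, not a documented interface of the cluster:

  du -sh /net/afscra/people/<login>/                  # total data volume in the new scratch (12TB limit)
  find /net/afscra/people/<login>/ -type f | wc -l    # file count (1 million file limit)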

How to use the new scratch space

By default, the system works in the same way as before and doesn't change the environment. The environment variable $SCRATCH points to the /net/ascratch/people/<your login>/ directory. If you want to switch to the new filesystem, please complete the following steps (an example session is shown after the list):

  1. Ensure you don't have any running jobs on the cluster. Queued jobs can be present in the system.
  2. On the login node, use the "hpc-scratch afscra" command to switch to the new scratch space.
  3. Terminate any working processes on the login node (screen, tmux, etc.), and log out from the machine.
  4. Log on to the system, execute the "hpc-scratch" command, and make sure that the output says you are using SCRATCH with a path starting with /net/afscra/.
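
Taken together, the switch boils down to a short session on the login node; the "echo" check at the end is an extra sanity step, not a required part of the procedure:

  hpc-scratch afscra     # switch the $SCRATCH variable to the new filesystem
  exit                   # log out after terminating screen/tmux sessions
  # ... log in again ...
  hpc-scratch            # should report a path starting with /net/afscra/
  echo $SCRATCH          # expected: /net/afscra/people/<login>/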

All done! You are set to use the new scratch space. From now on, the $SCRATCH environment variable points to the new location, and Slurm will create job-specific temporary directories in the new $SCRATCH space.

hpc-scratch is a utility for displaying information about and managing your $SCRATCH environment variable. The old scratch is still available under the full path /net/ascratch/people/<login>/, and its location is set as the $SCRATCH_OLD environment variable.

...

There is no synchronization between the new and old scratch spaces. If the old scratch contains any critical data, please move it to one of the other storage spaces, for example as sketched below.
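
For example, to pull a directory of results from the old scratch into your home directory (the directory name "my_results" and the $HOME destination are placeholders; choose whichever storage space fits your data):

  rsync -a $SCRATCH_OLD/my_results/ $HOME/my_results/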

Reporting issues

If you encounter any issues while using the new $SCRATCH space, please create a ticket in the helpdesk: https://helpdesk.plgrid.pl/

Data storage policy

$SCRATCH space is crucial for performing efficient computations. To fully utilize the filesystem's potential, the policy allows for the automated deletion of old data. Data can be automatically removed from $SCRATCH according to the following rules (a quick way to check what they apply to is sketched after the list):

  1. Files stored in $SCRATCH can be automatically removed after 30 days of not being modified. This rule is identical to the old $SCRATCH policy.
  2. Slurm workdirs, which include $SCRATCHDIR and $TMPDIR for jobs, located in $SCRATCH/slurm_jobdir/<job id>/, can be automatically removed 7 days after the given job finishes.
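
A simple way to see what these rules currently apply to in your case (generic commands, not an official cluster tool):

  find $SCRATCH -type f -mtime +30    # files not modified for more than 30 days (rule 1)
  ls $SCRATCH/slurm_jobdir/           # per-job workdirs subject to the 7-day rule (rule 2)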