GFI&SKD Storage
Alongside the computing system cyclone.hpc.uib.no there are three RAID-based storage units, each 120TB in size, all mounted within the same shared Lustre filesystem. Two of the storage units are funded by GFI, the third is funded by SKD.
The storage areas are available from the following main folder paths on cyclone (local access) and on any other GFI&SKD Linux client (network access via NFS):
- /Data/gfi and /Data/skd
For GFI&SKD Windows and Mac clients inside the UiB network, the storage areas are generally available from the following network shares; you can also mount subfolders of these:
- From Win10, use \\leo.hpc.uib.no\gfi and \\leo.hpc.uib.no\skd
- From macOS, use smb://uib.no;<userid>@leo.hpc.uib.no/gfi (or change the tail gfi to skd). Please include the full domain uib.no rather than just uib, also on Macs.
How-tos on connecting from Windows/Mac are published by UiB ITA (substitute the server and path above):
* Win7/Win10: How to connect to a network share
* Mac OS X: Connecting to your network share
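As a convenience, the macOS server address above can be assembled from your userid and share name with a small shell helper. This is a hypothetical sketch, not an official UiB tool; it only prints the address to paste into Finder's "Connect to Server" dialog (Cmd+K).

```shell
# smb_url USERID SHARE: print the smb:// server address for Finder's
# "Connect to Server" dialog. SHARE is gfi or skd.
# Hypothetical helper, not part of any UiB tooling.
smb_url() {
  printf 'smb://uib.no;%s@leo.hpc.uib.no/%s\n' "$1" "$2"
}

# Example (abc123 is a placeholder userid):
smb_url abc123 gfi   # -> smb://uib.no;abc123@leo.hpc.uib.no/gfi
```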
The storage is organised, in order of priority, into larger shared folders for modelling data, group and project folders with and without backup, and folders for individual use with or without backup. Because backup of large data volumes carries considerable annual cost, users and groups need to consider which data need a daily backup and which are reproducible and reasonably secure on a modern storage system without backup.
As a general rule, we recommend backup of all data files that cannot be reproduced by re-executing an analysis script or re-running a model, and that cannot be downloaded from other locations. Examples of files that belong in backup folders are scripts for starting a model run, Matlab and Python scripts for data analysis, manuscripts, and so on. Examples of files that can remain without backup are results from model runs that can be re-run from available scripts, data files downloaded from other servers, and plots that can be recreated by running a script.
Most folders assigned for group access have general read-write access at group level. The owner of a folder or file may nevertheless have set exclusive individual access on it. If other members of the assigned group find this inconvenient, please contact the formal owner of the file or folder.
Individual quotas (inside /Data/gfi/users and /Data/skd/users) default to 200GB and are adjusted on request according to individual needs - normally < 1TB, with due regard for other users' needs and limited by the maximum size of the parent folder. Larger amounts should be considered for their own project space. Project and group quotas are managed via configured maximum limits on the main project/group folders.
Access to top-level folders is often set read-only to keep the folder conventions and structure strict and tidy. If you need more storage space, adjustments, or customisation, please submit an issue by email to: driftATgfi.uib.no.
Please note that the work folder is meant for short-term storage: any file or folder inside it older than 30 days will be deleted automatically and without further notice.
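To preview which of your files would be affected by the next cleanup pass, you can list everything older than 30 days with find. This is a sketch only; it uses plain modification time, and the actual cleanup job may apply different criteria.

```shell
# list_stale DIR: print entries whose modification time is more than
# 30 days ago, i.e. candidates for the automatic work-folder cleanup.
# Sketch only; the real cleanup may use other criteria.
list_stale() {
  find "$1" -mindepth 1 -mtime +30 -print
}

# Typical use on cyclone (path from the tables below):
# list_stale /Data/gfi/work
```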
Sizes, quotas, purpose etc. for the various folders inside the GFI&SKD storage are given below.
GFI&SKD Storage folders inside /Data/gfi/

Subfolder | Purpose | Projid | Size limit | Group assign | User quota | Backup |
---|---|---|---|---|---|---|
share | shared model data | 48 | 20 TB | GFI & SKD | | |
share/era5 | shared ERA5 data | 48 | 130 TB | GFI & SKD | | |
scratch | individual and group | 79 | 25 TB | GFI | | |
work | shared short-term (30 days), symbolic link -> /Data/skd/work | 50 | 28.75 TB | GFI & SKD | | |
users | individual with backup | 7-44 | 20 TB | GFI | * | * |
projects | various project shares (sum of subfolders) | 71-78 | 4.25 TB | GFI | * | |
projects/metdata | project group share | 72 | 1.5 TB | METDATA | * | |
projects/farlab | project group share | 73 | 1 TB | FARLAB | * | |
projects/isomet | project group share | 74 | 0.25 TB | ISOMET | * | |
met | group share | 45 | 15 TB | MET | | |
spengler | group share | 46 | 15 TB | SPENGLER | | |
exprec | group share | 47 | 4 TB | EXPREC | | |
metno | daily & archived met.no prognoses & obs | 80 | 1.5 TB | GFI | | |
Undisposed | undisposed storage | | 5.25 TB | GFI | | |
Total | size of GFI storage | | 240 TB | GFI & SKD | | |
GFI&SKD Storage folders inside /Data/skd/

Subfolder | Purpose | Projid | Size limit | Group assign | User quota | Backup |
---|---|---|---|---|---|---|
share | symbolic link -> /Data/gfi/share/ (shared model data) | 48 | | GFI & SKD | | |
skd-share | several subfolders /Data/gfi/share/* -> /Data/skd/skd-share/ | 66 | 51.25 TB | GFI & SKD | | |
scratch | individual and group share | 67 | 20 TB | SKD | | |
work | shared short-term (30 days) | 50 | 28.75 TB | GFI & SKD | | |
users | individual with backup | 51-65 | 5 TB | SKD | * | * |
projects | various project shares | 49 | 10 TB | SKD | | |
Total | size of SKD storage | | 120 TB | GFI & SKD | | |
To monitor your quota, run the following command from an SSH login on cyclone:
- lfs quota -hp <projid> /shared/
- example for /Data/gfi/scratch: lfs quota -hp 79 /shared/
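Since the Projid values are easy to mix up, a small wrapper can map a folder name to the matching command. This is a hypothetical convenience sketch, not an official tool; the Projid values are taken from the tables above, and the folder names chosen here are illustrative.

```shell
# quota_cmd FOLDER: print the lfs quota command for a few common folders,
# using Projid values from the storage tables.
# Hypothetical wrapper, not an official UiB tool.
quota_cmd() {
  case "$1" in
    gfi/share)   projid=48 ;;
    gfi/scratch) projid=79 ;;
    skd/scratch) projid=67 ;;
    work)        projid=50 ;;
    *) echo "unknown folder: $1" >&2; return 1 ;;
  esac
  printf 'lfs quota -hp %s /shared/\n' "$projid"
}

# Example:
quota_cmd gfi/scratch   # -> lfs quota -hp 79 /shared/
```

Running the printed command (or piping it to sh) on cyclone then shows usage against the configured limit.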