GFI&SKD Storage

From gfi
Revision as of 11:28, 20 December 2022 by Ngfih (talk | contribs)

Alongside the computing system cyclone.hpc.uib.no come three RAID-based storage units, each 120 TB in size, all mounted within the same shared Lustre filesystem. Two of the storage units are funded by GFI, the third by SKD.

The storage areas are available from these main folder paths on cyclone (local access) and on any other GFI&SKD Linux client (network access via NFS):

- /Data/gfi  and /Data/skd

For GFI&SKD Windows and Mac clients inside the UiB network, the storage areas are accessed via two different servers, depending on whether the folders are included in backup or not. The folders marked for backup in the scheme below are accessible from a different server and path than those without backup. Subfolders of these can also be mounted:

- From Win10, no-backup folders use: \\leo.hpc.uib.no\gfi and \\leo.hpc.uib.no\skd
- From Win10, backup folders use: \\klient.uib.no\FELLES\MATNAT\GFI and \\klient.uib.no\FELLES\MATNAT\SKD
- From macOS, no-backup folders use: smb://uib.no;<userid>@leo.hpc.uib.no/gfi (or change the tail gfi to skd)
- From macOS, backup folders use: smb://uib.no;<userid>@klient.uib.no/FELLES/MATNAT/GFI (or change the tail GFI to SKD)

Please include the entire domain path uib.no, not just uib, also on Macs.
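The macOS share paths above can also be assembled and mounted from a terminal. A minimal sketch for the no-backup case - the user id and area values are placeholders you must replace with your own:

```shell
#!/bin/sh
# Sketch: build the macOS SMB URL for the no-backup share.
# USERID and AREA are hypothetical example values - substitute your own.
USERID="abc123"          # your UiB user id (placeholder)
AREA="gfi"               # or "skd"
URL="smb://uib.no;${USERID}@leo.hpc.uib.no/${AREA}"
echo "$URL"
# On macOS, the share can then be mounted from the terminal with:
#   open "$URL"
```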

How-to guides for connecting from Windows/Mac are published by UiB ITA (just substitute the correct server and path):

* Win7/Win10: How to connect to a network share

* Mac OS X: Connecting to your network share

The storage is organised by priority: larger shared folders for modelling data, group and project folders with and without backup, and folders for individual use with or without backup. Because backup of large data quantities comes with considerable annual cost, users and groups need to consider which data require a daily backup and which are reproducible and reasonably safe within a modern storage system without backup.

As a general rule, we recommend backup of all data files that cannot be reproduced by re-executing an analysis script or re-running a model, and that cannot be downloaded from other locations. Examples of files to place in backup folders are scripts for starting a model run, Matlab and Python scripts for data analysis, manuscripts, and so on. Examples of files that can remain without backup are results from model runs that can be reproduced from available scripts, data files downloaded from other servers, and plots that can be recreated by running a script.

Most folders assigned for group access have general read-write access at group level. Owners of individual folders and files may still have set exclusive individual access on them. If other members of the assigned group find this inconvenient, please contact the formal owner of the file or folder.
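If you own a folder that your group cannot write to, you can open it up yourself. A minimal sketch, using a temporary directory as a stand-in for a real group folder (the real path is up to you):

```shell
#!/bin/sh
# Sketch: as the owner, grant the assigned group read-write access.
# A temporary directory stands in for a real folder such as one under /Data/gfi/.
set -e
DIR="$(mktemp -d)"
mkdir "$DIR/subdir"
touch "$DIR/subdir/file.txt"
# g+rwX adds group read/write everywhere, plus execute (traverse) on
# directories and on files that are already executable
chmod -R g+rwX "$DIR"
stat -c '%a %n' "$DIR"
```

The capital `X` is the important detail: it makes directories group-traversable without marking plain data files executable.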

Individual quotas (inside /Data/gfi/users and /Data/skd/users) default to 200 GB and are adjusted on request according to individual needs - normally < 1 TB, with due respect for other users' needs and limited by the maximum size of the parent folder. Larger amounts should be given their own project space. Project and group quotas are managed by configured maximum limits on the main project/group folders.

Access to top-level folders is often set read-only to keep the folder conventions and structure strict and tidy. When in need of storage space and/or adjustments and customisation, please submit an issue by email to: driftATgfi.uib.no.

Please note that the work folder is meant for short-term storage - any file or folder inside it older than 30 days will be deleted automatically and without further notice.
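You can check in advance which of your files an age-based cleanup would target. A minimal sketch, using a temporary directory in place of the real work folder:

```shell
#!/bin/sh
# Sketch: list files older than 30 days, as a work-folder cleanup would see them.
# WORKDIR is a temporary stand-in for /Data/gfi/work or /Data/skd/work.
set -e
WORKDIR="$(mktemp -d)"
touch "$WORKDIR/fresh.nc"                      # modified now - kept
touch -d '40 days ago' "$WORKDIR/stale.nc"     # modified 40 days ago - candidate
# -mtime +30 matches files last modified more than 30 days ago
find "$WORKDIR" -type f -mtime +30
```

Pointing the same `find` at your own subfolder of the work area shows what is at risk before the automatic deletion runs.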

Sizes, quotas, purpose etc. for the variety of folders inside GFI&SKD storage are given below.

GFI&SKD Storage folders inside /Data/gfi/ - (from leo.hpc.uib.no)

| Subfolder  | Purpose                                                      | Projid | Size limit | Group assign | User quota | Backup |
|------------|--------------------------------------------------------------|--------|------------|--------------|------------|--------|
| share      | shared model data                                            | 48     | 20 TB      | GFI & SKD    |            |        |
| share/era5 | shared era5 data                                             | 48     | 150 TB     | GFI & SKD    |            |        |
| scratch    | individual and group                                         | 79     | 25 TB      | GFI          |            |        |
| work       | shared short time (30 days); symbolic link -> /Data/skd/work | 50     | 28.75 TB   | GFI & SKD    |            |        |
| met        | group share                                                  | 45     | 15 TB      | MET          |            |        |
| spengler   | group share                                                  | 46     | 15 TB      | SPENGLER     |            |        |
| exprec     | group share                                                  | 47     | 5 TB       | EXPREC       |            |        |
| metno      | daily & arch. met.no prognosis & obs                         | 80     | 1.5 TB     | GFI          |            |        |
| Undisposed | undisposed storage                                           |        | 8.5 TB     | GFI          |            |        |

Total size of GFI storage - (from leo.hpc.uib.no): 240 TB (GFI & SKD)
GFI&SKD Storage folders inside /Data/gfi/ - (from UiB SAN - felles.uib.no)

| Subfolder        | Purpose                                    | Projid | Size limit | Group assign | User quota | Backup |
|------------------|--------------------------------------------|--------|------------|--------------|------------|--------|
| users            | individual with backup                     | None   | 20 TB      | GFI          | *          | *      |
| projects         | various project shares (sum of subfolders) | None   | 4.25 TB    | GFI          |            | *      |
| projects/metdata | project group share                        | None   | 1.5 TB     | METDATA      |            | *      |
| projects/farlab  | project group share                        | None   | 1 TB       | FARLAB       |            | *      |
| projects/isomet  | project group share                        | None   | 0.25 TB    | ISOMET       |            | *      |

Total size of GFI storage - (from UiB SAN - felles.uib.no): 24.25 TB (GFI & SKD)


GFI&SKD Storage folders inside /Data/skd/

| Subfolder | Purpose                                                        | Projid | Size limit | Group assign | User quota | Backup |
|-----------|----------------------------------------------------------------|--------|------------|--------------|------------|--------|
| share     | symbolic link -> /Data/gfi/share/ (shared model data)          | 48     |            | GFI & SKD    |            |        |
| skd-share | several subfolders: /Data/gfi/share/* -> /Data/skd/skd-share/  | 66     | 51.25 TB   | GFI & SKD    |            |        |
| scratch   | individual and group share                                     | 67     | 20 TB      | SKD          |            |        |
| work      | shared short time (30 days)                                    | 50     | 28.75 TB   | GFI & SKD    |            |        |
| stormrisk | group share                                                    | 83     | 5 TB       | STORMRISK    |            |        |
| projects  | various project shares                                         | 49     | 15 TB      | SKD          |            |        |
| users     | individual with backup (from UiB SAN - felles.uib.no)          | None   | 5 TB       | SKD          | *          | *      |

Total size of SKD storage: 125 TB (GFI & SKD)


To monitor your quota (on folders from leo.hpc.uib.no), you can run the following command from an ssh login on cyclone:

- lfs quota -hp <projid> /shared/
- example for /Data/gfi/scratch: lfs quota -hp 79 /shared/

To monitor your 30 GB homefolder quota on cyclone, you can run the following command from an ssh login on cyclone:

- lfs quota -hp $(id -u) /shared/

or just

- lfs quota -hp $(id -u) $HOME
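If you regularly check several areas, the commands above can be generated by a small helper. A dry-run sketch that only prints the commands - the project ids (79 = gfi/scratch, 48 = share, 50 = work) come from the tables above, and the printed commands must be run on cyclone itself, where the Lustre client (lfs) is installed:

```shell
#!/bin/sh
# Dry-run sketch: print the lfs quota command for a few project ids of
# interest, rather than executing them (lfs exists only on cyclone).
CMDS=""
for projid in 79 48 50; do
    CMDS="${CMDS}lfs quota -hp ${projid} /shared/
"
done
printf '%s' "$CMDS"
```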