WRF
This page provides some help on getting started with WRF at GFI.
Log in to Cyclone
Create a folder for experimenting with WRF using initjet
In that folder do: git clone https://git.app.uib.no/Clemens.Spensberger/initjet.git
Go into the initjet folder
Command: module load netCDF-Fortran/4.4.4-foss-2018b OpenBLAS/0.3.1-GCC-7.3.0-2.30
Go into the folder src and type: make main.exe
Edit the namelist for your WRF simulation with: vi namelist.initJET
For example: 201 by 101 grid points, 10k by 5k domain, sst_diff = 0, p-pert, P_nx=51, P_ny=51, P_amp=6.d2
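As a sketch, those settings might look like this in namelist.initJET (the group name, the grid and domain variable names, and the units are assumptions; check the namelist shipped with initjet):
 &initjet                      ! group name is a guess
  nx = 201, ny = 101,          ! grid points (variable names assumed)
  lx = 10000., ly = 5000.,     ! domain size (variable names and units assumed)
  sst_diff = 0,
  P_nx = 51, P_ny = 51,        ! extent of the pressure perturbation
  P_amp = 6.d2,                ! amplitude of the pressure perturbation
 /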
If you want to view output from initJET first do: module load ncview
Then you can use ncview to view the initial state
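For example (the file name here is an assumption, based on the met_em files that initjet produces; see the scp step below):
 module load ncview
 ncview met_em.d01.2000-01-01_00:00:00.nc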
Next, to download and compile WRF itself, log in to Fram: ssh uib-username@fram.sigma2.no
wget https://github.com/wrf-model/WRF/releases/download/v4.5.2/v4.5.2.tar.gz
tar -xzf v4.5.2.tar.gz
module load WRF/4.4-foss-2022a-dmpar (loads the compilers and libraries needed to build WRF)
export NETCDF=/cluster/software/netCDF-Fortran/4.6.0-gompi-2022a (environment variable that points the WRF build to the netCDF installation)
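A quick sanity check (a sketch) that NETCDF points at a real netCDF installation:
 echo $NETCDF
 ls $NETCDF/include $NETCDF/lib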
Execute command: ./configure
The correct answer is choice 34 (dmpar with GNU compilers)
For the nesting question, just use basic (default, option 1)
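If you prefer to script those two answers (a sketch; WRF's configure reads its answers from standard input, a pattern often used in automated builds):
 printf '34\n1\n' | ./configure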
You can start a detachable terminal session with: tmux (Ctrl-B then D detaches the session; reattach with: tmux a)
Command to compile: ./compile em_real
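Compiling takes a while, so a common pattern (a sketch; the log file name is arbitrary) is to run the compile inside tmux and capture the output:
 tmux
 ./compile em_real 2>&1 | tee compile.log
 # Ctrl-B then D to detach while it runs; tmux a to reattach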
Make directory for run: mkdir run
Instead of compiling your own WRF, you can use an existing copy: cp /cluster/projects/nn8124k/wrf_intro/* run/.
The Slurm qos setting gives priority in the queue: devel or short gives higher priority
Adjust the namelist to fit the correct grid, domain size, etc.
You also need to copy the met_em files from Cyclone to Fram before running WRF (run the following on Cyclone):
scp ./met_em.d01.2000-01-0*nc username@fram.sigma2.no:~/run/.
In jobscript change: name, account, time, nodes, mail-user, qos
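A minimal jobscript sketch for Fram (the resource numbers, mail address, and task count are placeholders to adapt; the account matches the project directory above):
 #!/bin/bash
 #SBATCH --job-name=wrf_test        # name
 #SBATCH --account=nn8124k          # account
 #SBATCH --time=01:00:00            # time
 #SBATCH --nodes=1                  # nodes
 #SBATCH --ntasks-per-node=32       # placeholder: one MPI task per core
 #SBATCH --qos=devel                # qos (higher queue priority)
 #SBATCH --mail-user=your.name@uib.no
 srun ./wrf.exe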
In namelist.input change: the grid dimensions and the start/end dates, so they match your initjet domain and the met_em files (a sketch follows below)
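A sketch of the matching namelist.input entries, assuming the 201 by 101 initjet grid and the January 2000 met_em dates above (all values are assumptions to adapt, not prescribed settings):
 &time_control
  start_year = 2000, start_month = 01, start_day = 01,
  end_year   = 2000, end_month   = 01, end_day   = 02,
 /
 &domains
  e_we = 201,                  ! grid points west-east
  e_sn = 101,                  ! grid points south-north
  ! set dx, dy and time_step to match the initjet grid spacing
 /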
To run the job: sbatch jobscript.sh
To check the job queue: squeue -u username
You can also check a job directly: scontrol show job JOBID
If the run crashes, you can look at the slurm-<jobid>.out file; alternatively, go to the run directory that you defined and look at the rsl.error files.
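For example, to see the end of the log from MPI rank 0 (WRF writes one rsl.error file per rank):
 tail -n 20 rsl.error.0000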
To cancel a job: scancel <job-id>