HWRF trunk@4391
Installing HWRF from the Repository

This page explains how to install HWRF from the public repository; the checkout URL is given in Step 1 below.

Specifically, it explains how to install from the branch, tag, or trunk that you checked out. This guide is generated from special comments and documentation files inside that repository.

Prerequisites

The bulk of the code is Fortran, and most of the scripts are Python, but you will need a few other languages or utilities as well.

First, you need C and Fortran compilers, with MPI and OpenMP support. Supported compilers are:

Compiler  MPI    Commands                        To obtain
Intel     Intel  icc, ifort, mpiicc, mpiifort    http://www.intel.com/
Intel     MPICH  icc, ifort, mpicc, mpifort      http://mvapich.cse.ohio-state.edu/
Intel     SGI    icc, ifort, mpicc, mpifort      http://www.sgi.com/
IBM XL    IBMPE  xlc, xlf, mpcc, mpfort          http://www.ibm.com/
PGI       MPICH  pgcc, pgfortran, mpicc, mpfort  http://www.pgroup.com/
GNU       MPICH  gcc, gfortran                   http://gcc.gnu.org/
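
A quick way to confirm that a supported compiler set is on your path is to query each command's version. The example below is for the Intel compiler with Intel MPI; adjust the command names to match whichever row of the table applies to your system:

 which icc ifort mpiicc mpiifort
 ifort --version
 mpiifort --version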

Prerequisites: Scripting Languages

Next, you need several scripting languages. If you are running a Linux, macOS, or open-source BSD distribution, these are likely already installed, or can be installed via your OS package manager (apt-get, yum, etc.)

Language  Why           Command  To obtain
POSIX sh  Job setup     /bin/sh  Always present on POSIX-compliant operating systems.
Python 2  Workflow      python   https://www.python.org/downloads/release/python-2710/
GNU make  Build system  gmake    http://www.gnu.org/software/make/
Perl      Build system  perl     http://www.perl.org/

Note that Python must be version 2.x, and at least version 2.6.6. Python 3 is a completely different language from Python 2, and the HWRF scripts are all Python 2 scripts. You can check your Python version with this command:

python --version

If your "python" program is version 3, you may also have a "python2" program:

python2 --version

If your "python" command is version 3, and python2 is version 2, you can still run HWRF. However, you will need to edit the *.py files in ush/, scripts/ and rocoto/, and change:

#! /bin/env python

to:

#! /bin/env python2
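
If you prefer not to edit each file by hand, the change can also be made in bulk. The following is only a sketch, assuming GNU sed (for the -i option) and the exact shebang line shown above; review the modified files afterward:

 cd /path/to/hwrf-trunk
 sed -i 's|^#! */bin/env python$|#! /bin/env python2|' ush/*.py scripts/*.py rocoto/*.py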

Prerequisites: Workflow Automation Programs

If you want to run a large-scale HWRF test of many storms, you will need a workflow automation system. HWRF officially supports only Rocoto, but with some work ecFlow can be used as well:

What    Why                    Command        To obtain
Rocoto  Workflow Automation    rocoto         https://github.com/christopherwharrop/rocoto/releases
ecFlow  Alternative to Rocoto  ecflow-client  https://software.ecmwf.int/wiki/display/ECFLOW/Releases

Using ecFlow with HWRF is only supported for NCEP Central Operations; other users who try it are on their own.


Step 1: HWRF Repository Checkout

The first step is to check out the HWRF from the repository.

svn checkout https://svn-dtc-hwrf.cgd.ucar.edu/trunk

If you have never checked out HWRF before on your machine, the svn command will ask you for your password eight times. This is because the HWRF is stored in several repositories which are connected by Subversion externals.
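
To install from a branch or tag instead of the trunk, adjust the checkout URL. The lines below are only a hypothetical sketch that assumes the repository uses the standard Subversion trunk/branches/tags layout; substitute the real branch or tag name you were given:

 svn checkout https://svn-dtc-hwrf.cgd.ucar.edu/branches/SOME_BRANCH
 svn checkout https://svn-dtc-hwrf.cgd.ucar.edu/tags/SOME_TAG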

Step 1.2: Get GSI If Needed

If you intend to run a configuration that uses data assimilation, you will need the GSI program (exec/hwrf_gsi). You can get it in one of three ways: from one of two repositories, or as a pre-built executable.

The GSI is stored in two different Subversion repositories: an EMC repository and a DTC repository. The EMC repository serves as the master repository while the DTC repository mirrors it. It is easier to get access to the DTC repository, but the DTC repository tends to be a few weeks behind. Unfortunately, at the moment, the DTC repository lacks critical changes that are needed by HWRF. We are waiting for the EMC GSI team to merge the changes.

Todo:
The DTC repository lacks critical changes that the EMC GSI team has not yet merged to the GSI trunk. Until this is fixed, you must install from the EMC HWRF branch, or use a pre-built executable.

GSI Option 1: EMC HWRF Branch

To check out from the EMC HWRF branch:

 cd sorc
 svn co https://svnemc.ncep.noaa.gov/projects/gsi/branches/HWRF/ EMCGSI
Note
Do not check out both GSI repositories or the build scripts will complain.

GSI Option 2: Pre-Built GSI Executable

There are pre-built GSI executables on some NOAA clusters. If you cannot access the GSI source code, you can use a pre-built executable instead by copying or symlinking it:

 cd exec
 ln -s /path/to/hwrf_gsi .

GSI executable locations:

Cluster           Location
NOAA Jet          /mnt/pan2/projects/hwrfv3/hurrun/EMCGSI/hwrf_gsi
NOAA WCOSS        /hwrf/save/emc.hurpara/EMCGSI/hwrf_gsi
NOAA Zeus         /scratch1/portfolios/NCEPDEV/hwrf/save/hurpara/EMCGSI/hwrf_gsi
NOAA Theia        /scratch3/NCEPDEV/hwrf/save/hurpara/trunk/exec/hwrf_gsi
NCAR Yellowstone  Contact the DTC HWRF Helpdesk.
Todo:
Provide a pre-built GSI executable on NCAR Yellowstone
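
For example, on NOAA Jet the symlink commands above become:

 cd exec
 ln -s /mnt/pan2/projects/hwrfv3/hurrun/EMCGSI/hwrf_gsi .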

GSI Option 3: Public DTC Repository

Warning
You cannot currently install this way; see the Todo note above.

To check out from the public (DTC) repository:

cd sorc
svn co https://gsi.fsl.noaa.gov/svn/comgsi/trunk GSI

Step 2: Compilation

The HWRF has compilation scripts that build all components. They can handle the NCAR Yellowstone cluster as well as NOAA Jet, Theia, WCOSS and Zeus. Other machines will require porting.

Step 2.1: Load Modules

The NOAA and NCAR clusters use the module command to set up your environment with specific compilers, libraries and commands. The HWRF provides a modulefile on each cluster that loads everything the build needs. Suppose you installed HWRF here on Jet:

 /path/to/hwrf-trunk

Then the commands to load the modules are:

 module purge
 module use /path/to/hwrf-trunk/modulefiles/jet
 module load HWRF/build
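
After loading, you can confirm that the compilers and libraries you expect are in your environment with the standard module command:

 module list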

There are subdirectories of modulefiles/ for each supported cluster: yellowstone, theia, wcoss, jet and zeus.

Note: On Jet, Intel compiler version 15 is currently used. If the CPU-level optimization options "-axCORE-AVX2,AVX -xSSE4.2" are desired, they can be enabled by adding the environment variable setting

setenv ARCHINTELOPT "-axCORE-AVX2,AVX -xSSE4.2"

to

/path/to/hwrf-trunk/modulefiles/jet/HWRF/build

before loading this module. However, the UPP component may not currently work with these optimization options.

Step 2.2: Build HWRF

The HWRF build system automatically builds all components and installs the executables to the right locations.

To compile:

cd sorc
make dist_clean
make

The make command may fail with a variety of error messages. Complaints about libraries not being loaded usually mean you forgot to load the HWRF/build module. If a particular component fails to compile, read the compile.log file in that component's directory for details.
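
If you want to keep a record of the top-level build output for later troubleshooting, one option is an ordinary shell redirection (nothing HWRF-specific; build.log is just an illustrative name):

 make 2>&1 | tee build.log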

Step 2.3: Install the Executables

If the make command ran correctly (exited with status 0), then this command will install the executables:

make install

It also reports any executables that make failed to build, so make install doubles as a check that the build succeeded.
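
As an additional sanity check, you can list what was installed. This assumes the exec/ directory sits alongside sorc/ at the top of the checkout, as the earlier steps imply:

 ls ../exec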


Step 3: Get the Fix Files

The fix files for the operational HWRF are not stored in Subversion due to their large size. You must obtain them elsewhere (details below) and make one symbolic link. (The nwport/fix directory is no longer used.)

ln -s /path/to/hwrf-20151210-fix/fix/ fix

You can find the fix files on disk or tape:

Who               What  Where
NOAA HPSS         Tape  hsi get /NCEPDEV/hpssuser/g01/wx20st/svn-exec-fix/hwrf-20151210-fix.tar.gz
NOAA Jet          Disk  /lfs3/projects/hwrf-data/fix-files/hwrf-20151210-fix/
NOAA Theia        Disk  /scratch3/NCEPDEV/hwrf/noscrub/fix-files/hwrf-20151210-fix/
NOAA WCOSS        Disk  /hwrf/noscrub/fix-files/hwrf-20151210-fix/
NOAA Zeus         Disk  /scratch1/portfolios/NCEPDEV/hwrf/noscrub/fix-files/hwrf-20151210-fix/
NCAR Yellowstone  Disk  /glade/p/work/strahan/hwrf-20151210-fix/
NCAR/CISL HPSS    Tape  hsi get /home/strahan/svn-fix/hwrf-20151210-fix.tar.gz
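
For the disk options, no download is needed; just point the symbolic link at the on-disk copy. For example, on NOAA Jet (assuming the on-disk copy contains the same fix/ subdirectory as the tarball), the link from the top of your HWRF checkout would be:

 ln -s /lfs3/projects/hwrf-data/fix-files/hwrf-20151210-fix/fix/ fix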

For the tape options, you cannot use htar, hpsstar or AIX tar on the archive. Instead, you must get it using "hsi get" and decompress with GNU tar:

hsi get /NCEPDEV/hpssuser/g01/wx20st/svn-exec-fix/hwrf-20151210-fix.tar.gz
tar -xzpf hwrf-20151210-fix.tar.gz

You can check to see if you have the right file using md5sum:

me@machine:/some/dir> md5sum hwrf-20151210-fix.tar.gz
bcf7681bd740a029a010d2c8a92611fd  hwrf-20151210-fix.tar.gz

If a different hexadecimal number is printed, you have the wrong file.

For the real-time graphics processes, the a/b deck files must be provided separately, since these deck files may contain restricted information from other forecast centers.

Jet: ln -sf /mnt/lfs2/projects/hwrfv3/hurrun/abdeck ./abdeck

Step 4: system.conf

The parm/system.conf file specifies directory paths and data sources for the HWRF system. This file does not exist in a fresh checkout; you must create it by copying and modifying one of the provided defaults. There is one default system.conf for each machine, named system.conf.$machine:

cd parm
cp system.conf.jet system.conf

After you create system.conf, you will want to modify some sections. The principal section is [dir], which sets most directory paths for the HWRF system. You may also want to change where the GFS/GDAS/GEFS/ENKF input data is stored (the [hwrfdata] section) and the archive location.
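
To see which sections your new file actually defines before editing it, one quick check is a plain grep of the INI-style section headers:

 grep -n '^\[' system.conf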

For configuration information on those and other sections, see our conf file guide (see HWRF Configuration Guide).

Step 5: Run HWRF

You have now installed HWRF. Next, you need to run it. For that, see our guide to running HWRF through Rocoto (see HWRF Rocoto Workflow).