HWRF  trunk@4391
HWRF Multistorm Configuration
Note
Last updated: September 29, 2015

The HWRF multistorm is an alternate configuration of HWRF. It uses a large 27-km parent domain, known as basinscale, that covers both the Atlantic and eastern North Pacific basins. Up to five tropical cyclones can be forecast simultaneously using sets of 9- and 3-km telescopic domains nested in the parent domain. The pairs of high-resolution nests move to follow the storms.

The multistorm configuration runs with 61 vertical levels, similar to the HWRF 2015 operational configuration for the Atlantic basin. Initialization procedures include both vortex relocation and hybrid 3DVAR data assimilation. The HWRF regional ensembles (for data assimilation or for forecasting) and coupling with the ocean are not supported in the multistorm configuration.

To determine which storms will be forecast at a given cycle, the HWRF multistorm configuration employs a prioritization algorithm that takes into account each storm's basin, proximity to land, and intensity.

Once the storms have been selected, the initialization is run independently for each storm, following the same procedure used in the default (triple nest, single storm) HWRF configuration. After data assimilation is performed, information from all storms is gathered using a multistorm_input task and a final merge is performed to prepare input files for the forecast job. Post-processing and product generation are also run independently for each storm.

A single source code, build system, and set of scripts are used to run the HWRF in either the multistorm or default configurations. The sections below describe how to check out, build, install, and run the code. In large part, the same procedures are used for building and running the default and multistorm HWRF configurations. Therefore, multistorm users can obtain helpful information from:

What                        Where
HWRF Documentation Page     http://www.dtcenter.org/HurrWRF/users/docs/
Public Homepage             http://www.dtcenter.org/HurrWRF/users/
The rest of this guide      Main Documentation Page

Building and installing the code

The multistorm configuration uses the same source code, build system, and scripts as the default configuration. The same executables can be used to run either configuration. The multistorm capability is available in the trunk of the HWRF repository starting at revision 4066:

https://svn-dtc-hwrf.cgd.ucar.edu/

The process is the same as for the single-storm HWRF and is described in detail in the HWRF repo installation page. In general, the steps are as follows:

  1. Set up environment variables and load modules
  2. cd sorc && make
  3. make install
  4. Link fix files
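A minimal shell sketch of these steps is shown below. It assumes the code has already been checked out under $HOMEhwrf (the top of the HWRF installation, as in the wrapper example later in this page); the module environment and the fix-file locations are site-specific:

cd $HOMEhwrf/sorc
# Step 1: load the compiler/library modules and set the build environment
#         variables (site-specific; see the HWRF repo installation page)
# Step 2: build all components
make
# Step 3: install the executables
make install
# Step 4: link the fix files into the installation (location is site-specific)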

Running a Case

Detailed information on running and monitoring HWRF can be found in the Running HWRF page. This section describes the overall process and the differences between multistorm and single-storm HWRF execution:

Note
Currently the multistorm capability has been tested only on NOAA's Jet supercomputer.
  1. Customize account in rocoto/sites files
  2. Link parm/system.conf.jet to parm/system.conf
  3. Customize account in parm/system.conf.jet
  4. Edit parm/hwrf_basic.conf to set run_multistorm=yes. This change ensures that the file parm/hwrf_multistorm.conf is used automatically. This file is described below.
  5. Edit runhwrf_wrapper for your case. The following example, taken from runhwrf_wrapper, runs storm 18L and also finds all other storms occurring in the EP and AL basins at the specified cycle.

Here is an example of the runhwrf_wrapper contents:

YMDH=2012102806 # Can be a range of dates -- 2012102800-2012102818
STID=00L
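# $force, $CASE_ROOT, $expt, and $HOMEhwrf are assumed to be set earlier in the wrapper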
./run_hwrf.py $force -m 18L -M L,E "$YMDH" "$STID" "$CASE_ROOT" "config.EXPT=$expt" \
"dir.HOMEhwrf=$HOMEhwrf" "config.run_multistorm=yes"
Note
The order of the arguments matters!

The main options regarding multistorm are:

  1. -m: ensures that the specified storm will be selected by the prioritization routine
  2. -M: selects the basins in which the prioritization routine will look for storms

The use of the -m or -M options above pulls in parm/hwrf_multistorm.conf automatically, and the settings contained in this file prevail over any user-specified configuration files.
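Putting steps 1-4 above together, the preparation might look like the following on the command line. This is a minimal sketch: $HOMEhwrf stands for the top of the HWRF installation, and the account entries to edit are site-specific.

cd $HOMEhwrf/parm
# Step 2: point system.conf at the Jet system file
ln -sf system.conf.jet system.conf
# Steps 1 and 3: edit the account/project entries in the rocoto/sites files
#                and in parm/system.conf.jet for your allocation
# Step 4: in hwrf_basic.conf set run_multistorm=yes (this matches the
#         config.run_multistorm=yes override passed in the wrapper)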

Multi-nest parallelism and I/O settings in parm/hwrf_multistorm.conf

When running HWRF, some processors can be used exclusively for I/O, while others are used exclusively for computation. The sum of the I/O and compute processors should equal the number of cores requested from the batch system. One end-to-end HWRF run using the multistorm configuration, just as in the default configuration, involves three types of WRF runs (ghost, analysis, and forecast).

The ghost and analysis runs use one I/O group with six I/O tasks per group (nio_tasks_per_group=6). Since nproc_x=nproc_y=-1, all three domains are decomposed over all compute cores. The number of compute cores is specified in the file multistorm_tasks/init.ent.
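For illustration, and assuming the standard WRF namelist records, the ghost/analysis settings above correspond to entries like the following in the generated namelist.input (a sketch, not a verbatim excerpt):

&domains
...
nproc_x = -1,
nproc_y = -1,
/
&namelist_quilt
nio_groups          = 1,
nio_tasks_per_group = 6,
/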

The forecast run uses a different number of cores for I/O and computation depending on the number of storms being forecast in a given run. Four I/O groups are used regardless of the number of storms, and the number of I/O tasks per group depends on the number of storms. In the array below (configured in parm/hwrf_multistorm.conf), the first position corresponds to the parent domain (domain 1), and subsequent positions refer to the pairs of 9- and 3-km domains associated with up to five storms.

nio_tasks_per_group=4,4,2,4,2,4,2,4,2,4,2
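As a hedged illustration, for a forecast with all five storms active these settings would translate into &namelist_quilt entries along the lines of the following (the HWRF scripts generate namelist.input automatically, so this is not a verbatim excerpt):

&namelist_quilt
nio_groups          = 4,
nio_tasks_per_group = 4, 4, 2, 4, 2, 4, 2, 4, 2, 4, 2,
/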

The multistorm configuration takes advantage of the multi-nest parallelism available in the WRF model. With this capability, each domain can be assigned to a different group of processors, as determined by variables comm_start, nest_pes_x, and nest_pes_y in the dm_task_split WRF namelist block.

In the recommended HWRF multistorm configuration, the number of compute processors depends on the number of storms. The total number of processors (for I/O and computation) is specified in the machine-dependent files in rocoto/sites. The compute cores assigned to each domain, for the case of 1, 2, ..., 5 storms, are specified using variables nest_pes_x and nest_pes_y under the subsections [wrf_1storm], [wrf_2storm], ..., [wrf_5storm] in file parm/hwrf_multistorm.conf.

General Description of Multi-Nest Parallelism in WRF

Multi-nest parallelism is controlled by the variables comm_start, nest_pes_x, and nest_pes_y in the dm_task_split WRF namelist block. These variables, whose length should be no shorter than the value of max_dom in the WRF domains namelist block, are described below.

The variable comm_start specifies the starting MPI task ID for each domain within the set of tasks allocated to the job. The product of nest_pes_x(d) and nest_pes_y(d) specifies the number of tasks that compute domain d. The task sets of different domains may overlap or be disjoint: all domains can share all tasks and execute sequentially (the traditional WRF behavior), some domains may share some tasks, or each domain may have its own exclusive set of tasks. The only other restriction is that successive entries in comm_start must not decrease from left to right; they may stay the same or increase.

Example 1

Three domains running on six MPI tasks; each domain is executed by two tasks, with each successive pair of tasks running one domain exclusively:

&domains
...
max_dom = 3,
/
&dm_task_split
comm_start  = 0, 2, 4,
nest_pes_x  = 1, 1, 1,
nest_pes_y  = 2, 2, 2,
/

Inside the running WRF process, this will be the task decomposition:

Task ID          0  1  2  3  4  5
Running Domain   1  1  2  2  3  3

Example 2

Three domains running on six MPI tasks; the parent domain runs on all six tasks, while the first nest runs on the first two tasks and the second nest runs on the remaining four, so the two nests do not share tasks:

&domains
...
max_dom = 3,
/
&dm_task_split
comm_start  = 0, 0, 2,
nest_pes_x  = 1, 1, 2,
nest_pes_y  = 6, 2, 2,
/

Inside the running WRF process, this will be the task decomposition:

Task ID          0  1  2  3  4  5
Running Domain   1  1  1  1  1  1
Recurse Into     2  2  3  3  3  3

Example 3

Three domains running on six MPI tasks; all domains run on all tasks. Note that this is the traditional way WRF ran. If the user does not specify anything in the dm_task_split block, the default is to run this way:

&domains
...
max_dom = 3,
/
&dm_task_split
comm_start  = 0, 0, 0,
nest_pes_x  = 2, 2, 2,
nest_pes_y  = 3, 3, 3,
/

Inside the running WRF process, this will be the task decomposition:

Task ID          0  1  2  3  4  5
Running Domain   1  1  1  1  1  1
Recurse Into     2  2  2  2  2  2
Recurse Into     3  3  3  3  3  3

In the diagrams above, if more than one domain appears in the column for a given task, those domains run serially on that task. Different domains appearing in the same row run concurrently with each other.
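To connect these diagrams back to the HWRF multistorm layout, consider a purely hypothetical two-storm case (the values below are illustrative only and are not the settings shipped in parm/hwrf_multistorm.conf). With two storms there are five domains: the parent d01 and two 9-/3-km pairs, d02/d03 and d04/d05. One way to split ten MPI compute tasks so that each storm's pair gets its own tasks would be:

&domains
...
max_dom = 5,
/
&dm_task_split
comm_start  = 0, 2, 2, 6, 6,
nest_pes_x  = 1, 2, 2, 2, 2,
nest_pes_y  = 2, 2, 2, 2, 2,
/

With this layout, d01 runs on tasks 0-1, the first storm's d02/d03 pair shares tasks 2-5, and the second storm's d04/d05 pair shares tasks 6-9; the two storms' nests therefore run concurrently with each other, while each 9-km/3-km pair runs serially on its own task set.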


Known Issues and Future Work

Todo:
For a forecast run of a single storm, d01, d02, and d03 must use the same number of compute tasks in x and y; otherwise the wrf.exe forecast hangs.
Todo:
There is no current mechanism for a user to select specific storms to be run in a multistorm forecast.
Todo:
The HWRF multistorm configuration must be run with the Rocoto Workflow Management System. No wrapper scripts are available.
Todo:
When running for a range of dates, the multistorm system will break if one of the dates does not have a storm.
Todo:
A routine that eliminates duplicate storms needs to be developed; duplicate handling is not working correctly in the multistorm configuration. When the TC Vitals file lists a single storm as both an invest and a numbered storm for the same cycle, the system launches them as if they were two distinct storms and later fails. Multiple entries of a storm with the same name are handled appropriately.
Todo:
An option could be added in the future to support running the model using the basinscale parent domain only when no storms are present.
Todo:
Ocean coupling is not supported in the multistorm configuration.
Todo:
The HWRF multistorm configuration has only been tested on jet.