Workstation Eta FAQ


Will the workstation Eta run on such-and-such workstation?
Does my computer have enough memory and speed to run the model?
How does the workstation version differ from the operational Eta?
Which LINUX f90 compilers work with the workstation Eta?
What is in the future for the workstation Eta?
NEW:  Sigma vs. Eta differences
NEW:  How can I reduce the amount of smoothing in my sigma-coordinate runs?
NEW:  new_prep.sh core dumps on LINUX (revised)


Will the workstation Eta run on such-and-such workstation?

The workstation Eta has been ported and tested on all machines that are available to us within EMC/NCEP.  This includes HP, SGI, IBM, and LINUX workstations.  Some limited testing has also been done on a DEC in cooperation with the University of Maryland.
 

Does my computer have enough memory and speed to run the model?

Whenever people ask the "How much machine is required" question, we have to give a pat "It depends" answer.  This response drives people crazy, but changes to model dimensions and resolution have a profound impact on the memory and CPU requirements to complete a model run within a reasonable amount of time.  A machine with 256 MB of RAM is probably sufficient to run a moderately sized (in terms of grid points) domain.
 

How does the workstation version differ from the operational Eta?

The actual model code is nearly identical to what runs operationally.  The main differences are in how the model gets its initial conditions.  The workstation model starts from isobaric GRIB data from Eta or AVN model runs and interpolates those fields to the workstation domain to generate initial conditions.  No data assimilation is done by the workstation model, which differs significantly from the 12-hour Eta Data Assimilation System (EDAS) used by the operational Eta.
 

Which LINUX f90 compilers work with the workstation Eta?

Based on limited personal experience and feedback from users, the Absoft and Portland Group compilers appear best equipped to compile all of the code.  As LINUX grows in popularity and the LINUX port becomes the most heavily used version of the model, more feedback from users on their successes and failures installing the model with different compilers would be useful.
 

What is in the future for the workstation Eta?

With the release of the nonhydrostatic version, which includes the option of Kain-Fritsch convection, no major updates are planned for the near future.  One possible future release would add data assimilation (3DVAR) code, although no target date has been set for this item.

Sigma vs. Eta differences
 

Based on testing at NCEP, certain fields (e.g., 500 hPa geopotential height) look very similar when an identical domain is run in sigma and in eta coordinates.  Precipitation, particularly where orographic influences are important, can show significant differences.  Another field where sigma and eta runs may have locally significant differences is low-level temperature: sigma runs tend to cool off more at night than eta-coordinate runs, particularly over elevated terrain.

Smoothing in sigma-coordinate runs

The amount of smoothing done in conjunction with the spline fitting is controlled by the parameter nsmud.

The smoothing of the sea-level pressure reduction is done within the "quilt" job.  If running on multiple processors such that the model does the quilting function directly, the file to modify is SLPSIGSPLINE.F in etafcst_all.  If quilt is being run as a separate job, the file to modify is SLPSIGSPLINE.f in the quilt directory.  Reducing nsmud reduces the amount of smoothing.

Smoothing of below-ground isobaric fields is done within the postprocessor; nsmud is set within SIG2PSPLINE.f in the post_new directory.
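
As a rough sketch of the procedure (the directory layout shown, i.e., working from the directory that contains etafcst_all, quilt, and post_new, and the exact form of the nsmud line are assumptions; check the actual statement with grep before editing, and recompile the affected executable afterward):

   # Locate the nsmud setting in each relevant source file
   grep -in nsmud etafcst_all/SLPSIGSPLINE.F quilt/SLPSIGSPLINE.f post_new/SIG2PSPLINE.f

   # Illustration only: lower a hypothetical "nsmud=50" to "nsmud=25" in the post file,
   # keeping a backup; adjust the value, spelling, and spacing to what grep reports
   cp post_new/SIG2PSPLINE.f post_new/SIG2PSPLINE.f.orig
   sed 's/nsmud=50/nsmud=25/' post_new/SIG2PSPLINE.f.orig > post_new/SIG2PSPLINE.f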

new_prep.sh core dumps on LINUX

[Disregard what is written below and read the 19 April 2001 entry on the news page.]

A problem noticed in NCEP testing has also been reported by a user:  on certain LINUX machines, new_prep.sh exits with an error and writes out a core file, yet the files it generates run the Eta model with no problems.  This error is not completely understood at the moment, but it is tied to the use of higher-resolution SST data.  Possible solutions:

- Do almost nothing.  In the eta/bin directory, run the following sequence:  rm core; touch core; chmod 000 core.  This creates a core file that cannot be written to, so writing the "core dump" wastes no time.  This option does not fix the underlying error, but it is the simplest workaround (spelled out in the first sketch after this list).

- Revert to using low-resolution SST data.  In the directory worketa_nh/eta/src/prep/initbc/, copy GRIBST.f_LOWRES to GRIBST.f and SSTHIRES.f_LOWRES to SSTHIRES.f, and recompile the code (see the second sketch after this list).  This solution avoids the problem but is considered undesirable because it uses a poorer-quality SST analysis.
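
For the first option, spelled out as commands with brief annotations (note that rm will complain if no core file exists yet):

   cd eta/bin       # directory named in the workaround above
   rm core          # remove any existing core file
   touch core       # create an empty placeholder named core
   chmod 000 core   # remove all permissions so the core dump cannot be written

For the second option, a minimal sketch; the final rebuild step is left as a comment because it depends on your installation's build procedure:

   cd worketa_nh/eta/src/prep/initbc
   cp GRIBST.f_LOWRES GRIBST.f        # swap in the low-resolution SST reader
   cp SSTHIRES.f_LOWRES SSTHIRES.f
   # recompile the prep code here, using your installation's normal build procedure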